Sample records for small approximation errors

  1. Analysis of the Hessian for Inverse Scattering Problems. Part 3. Inverse Medium Scattering of Electromagnetic Waves in Three Dimensions

    DTIC Science & Technology

    2012-08-01

    An implication of the compactness of the Hessian is that, for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix; this in turn enables fast solution of an appropriately… The …probability distribution is given by the inverse of the Hessian of the negative log-likelihood function. For Gaussian data noise and model error, this…

  2. The NLO jet vertex in the small-cone approximation for kt and cone algorithms

    NASA Astrophysics Data System (ADS)

    Colferai, D.; Niccoli, A.

    2015-04-01

    We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculations of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical level, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented, for various observables of phenomenological interest. For values of the jet "radius" R = 0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of the cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measurement of the azimuthal decorrelation of dijets.

  3. Semiclassical Dynamics with Exponentially Small Error Estimates

    NASA Astrophysics Data System (ADS)

    Hagedorn, George A.; Joye, Alain

    We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses and |t| ≤ T, these solutions agree with exact solutions up to errors whose norms are bounded by C exp(−γ/ħ) for some C and γ > 0. Under more restrictive hypotheses, we prove that for sufficiently small T′, |t| ≤ T′ |log(ħ)| implies the norms of the errors are bounded by C′ exp(−γ′/ħ^σ) for some C′, γ′ > 0, and σ > 0.

  4. Quantified Choice of Root-Mean-Square Errors of Approximation for Evaluation and Power Analysis of Small Differences between Structural Equation Models

    ERIC Educational Resources Information Center

    Li, Libo; Bentler, Peter M.

    2011-01-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…

  5. Strength conditions for the elastic structures with a stress error

    NASA Astrophysics Data System (ADS)

    Matveev, A. D.

    2017-10-01

    As is known, constraints (strength conditions) on the safety factor are established for elastic structures and design details of a particular class, e.g. aviation structures: the safety factor values of such structures should lie within a given range. It should be noted that these constraints are set for safety factors corresponding to analytical (exact) solutions of the elasticity problems formulated for the structures. Developing analytical solutions for most structures, especially those of irregular shape, is associated with great difficulties. Approximate approaches to solving elasticity problems, e.g. the technical theories of deformation of homogeneous and composite plates, beams and shells, are widely used for a great number of structures. Technical theories based on hypotheses give rise to approximate (technical) solutions with an irreducible error whose exact value is difficult to determine. In static calculations of structural strength with a specified small range for the safety factors, application of technical (Strength of Materials) solutions is difficult. However, there are numerical methods for developing approximate solutions of elasticity problems with arbitrarily small errors. In the present paper, adjusted reference (specified) strength conditions are proposed for the structural safety factor corresponding to an approximate solution of the elasticity problem. The proposed strength conditions take the stress error estimate into account. It is shown that, to fulfill the specified strength conditions for the safety factor of a given structure corresponding to an exact solution, adjusted strength conditions for the structural safety factor corresponding to an approximate solution are required. The stress error estimate that underlies the adjusted strength conditions has been determined for the specified strength conditions.
The adjusted strength conditions expressed in terms of allowable stresses are also suggested. The adjusted strength conditions make it possible to determine the set of approximate solutions that meet the specified strength conditions. Examples are given of specified strength conditions satisfied using technical (Strength of Materials) solutions and strength conditions, as well as examples of strength conditions satisfied using approximate solutions with a small error.

  6. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u sub epsilon(x,t) is the solution of an approximate viscosity regularization, where epsilon greater than 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u sub epsilon, pointwise values of u and its derivatives can be recovered with an error as close to epsilon as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W(exp 1,infinity) energy estimate for the discontinuous backward transport equation; this, in turn, leads one to an epsilon-uniform estimate on moments of the error u(sub epsilon) - u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  7. Adaptive optics system performance approximations for atmospheric turbulence correction

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1990-10-01

    Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term depends on the deformable mirror influence function shape and actuator geometry. The method of least-squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. By evaluating the fitting error for a number of DM configurations, actuator geometries, and influence functions, the fitting error constants of some earlier investigations are verified.
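
    A minimal sketch of the fitting-error form discussed here, assuming the commonly quoted scaling σ² = κ(d/r₀)^(5/3) with interactuator spacing d, Fried parameter r₀, and a constant κ set by influence-function shape and actuator geometry (the value κ = 0.28 below is an illustrative assumption, not a result from this paper):

```python
def fitting_error_var(d, r0, kappa=0.28):
    """DM fitting-error variance (rad^2) from the standard scaling law.

    kappa depends on influence-function shape and actuator geometry;
    0.28 is an assumed, commonly quoted value, not this paper's result.
    """
    return kappa * (d / r0) ** (5.0 / 3.0)

# Halving the actuator spacing cuts the variance by 2**(5/3) ~ 3.17x,
# independent of the particular value of kappa.
ratio = fitting_error_var(0.10, 0.10) / fitting_error_var(0.05, 0.10)
print(round(ratio, 2))  # → 3.17
```

    The scaling-law view is exactly the kind of a priori estimate the abstract describes: given a target residual variance, it fixes the required actuator spacing before any detailed least-squares analysis.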

  8. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^(−(d^n − 1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
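
    The distinction between a coherent rotation error and its Pauli (stochastic) approximation can be seen in a one-qubit sketch (a toy illustration, not the paper's repetition-code calculation): the Pauli-twirled channel reproduces the bit-flip probability exactly but discards the off-diagonal, coherent part of the state.

```python
import numpy as np

eps = 0.05                                        # small coherent rotation angle
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Coherent error: unitary over-rotation about X by angle eps.
U = np.cos(eps / 2) * I2 - 1j * np.sin(eps / 2) * X

# Pauli approximation: apply X with probability p = sin^2(eps/2).
p = np.sin(eps / 2) ** 2

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # initial state |0><0|
rho_coh = U @ rho @ U.conj().T
rho_pauli = (1 - p) * rho + p * (X @ rho @ X)

# Populations agree exactly; only the coherences (off-diagonals) differ.
print(np.allclose(np.diag(rho_coh), np.diag(rho_pauli)),
      np.allclose(rho_coh, rho_pauli))            # → True False
```

    The residual off-diagonal terms are the "coherent part" of the error that, per the abstract, can accumulate over many correction cycles and eventually invalidate the Pauli model.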

  9. Nucleation theory - Is replacement free energy needed? [Error analysis of the capillary approximation]

    NASA Technical Reports Server (NTRS)

    Doremus, R. H.

    1982-01-01

    It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.

  10. Exchange-Correlation Effects for Noncovalent Interactions in Density Functional Theory.

    PubMed

    Otero-de-la-Roza, A; DiLabio, Gino A; Johnson, Erin R

    2016-07-12

    In this article, we develop an understanding of how errors from exchange-correlation functionals affect the modeling of noncovalent interactions in dispersion-corrected density-functional theory. Computed CCSD(T) reference binding energies for a collection of small-molecule clusters are decomposed via a molecular many-body expansion and are used to benchmark density-functional approximations, including the effect of semilocal approximation, exact-exchange admixture, and range separation. Three sources of error are identified. Repulsion error arises from the choice of semilocal functional approximation. This error affects intermolecular repulsions and is present in all n-body exchange-repulsion energies with a sign that alternates with the order n of the interaction. Delocalization error is independent of the choice of semilocal functional but does depend on the exact exchange fraction. Delocalization error misrepresents the induction energies, leading to overbinding in all induction n-body terms, and underestimates the electrostatic contribution to the 2-body energies. Deformation error affects only monomer relaxation (deformation) energies and behaves similarly to bond-dissociation energy errors. Delocalization and deformation errors affect systems with significant intermolecular orbital interactions (e.g., hydrogen- and halogen-bonded systems), whereas repulsion error is ubiquitous. Many-body errors from the underlying exchange-correlation functional greatly exceed in general the magnitude of the many-body dispersion energy term. A functional built to accurately model noncovalent interactions must contain a dispersion correction, semilocal exchange, and correlation components that minimize the repulsion error independently and must also incorporate exact exchange in such a way that delocalization error is absent.

  11. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  12. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yγ, where Y is an interpolation matrix and γ is a stochastic vector of parameters. Vector γ has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yγ) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) − f(Yγ), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate γ and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yγ) are small, then most of the biases are small and the correction factors are reduced in magnitude.
Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large, to test the robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  13. Small Atomic Orbital Basis Set First‐Principles Quantum Chemical Methods for Large Molecular and Periodic Systems: A Critical Analysis of Error Sources

    PubMed Central

    Sure, Rebecca; Brandenburg, Jan Gerit

    2015-01-01

    Abstract In quantum chemical computations the combination of Hartree–Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double‐zeta quality is still widely used, for example, in the popular B3LYP/6‐31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean‐field methods. PMID:27308221

  14. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing the OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential, relative to that produced by the original charge distribution, at a distance equal to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a three-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order.
PMID:23861790
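
    The flavor of the approach can be conveyed with a simplified sketch (fixed charge positions and a plain linear least-squares fit; the genuine OPCA also optimizes positions, and all geometry and sampling choices below are assumptions): fit two point charges to reproduce, on a mid-field shell, the potential of a reference two-charge dipole.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference distribution: a tight dipole (charges +-1 separated by 0.1).
src_pos = np.array([[0.0, 0.0, 0.05], [0.0, 0.0, -0.05]])
src_q = np.array([1.0, -1.0])

# Approximating charges at assumed fixed trial positions (wider spacing).
fit_pos = np.array([[0.0, 0.0, 0.3], [0.0, 0.0, -0.3]])

# Sample the potential on a mid-field shell of radius 2 (units with k = 1).
pts = rng.standard_normal((400, 3))
pts = 2.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

def potential(points, positions, charges):
    d = np.linalg.norm(points[:, None, :] - positions[None, :, :], axis=2)
    return (charges / d).sum(axis=1)

phi_ref = potential(pts, src_pos, src_q)

# Linear least squares for the charge values at the fixed positions.
A = 1.0 / np.linalg.norm(pts[:, None, :] - fit_pos[None, :, :], axis=2)
q_fit, *_ = np.linalg.lstsq(A, phi_ref, rcond=None)

rms_rel = np.sqrt(np.mean((A @ q_fit - phi_ref) ** 2) / np.mean(phi_ref ** 2))
print(rms_rel < 0.05)   # two fitted charges capture the mid-field potential
```

    The residual comes from the mismatch in higher multipoles (octupole and beyond); optimizing the charge positions as well, as OPCA does, shrinks it further.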

  15. The effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1973-01-01

    The analysis of ion data from retarding potential analyzers (RPA's) is generally done under the planar approximation, which assumes that the grid transparency is constant with angle of incidence and that all ions reaching the plane of the collectors are collected. These approximations are not valid for situations in which the ion thermal velocity is comparable to the vehicle velocity, causing ions to enter the RPA with high average transverse velocity. To investigate these effects, the current-voltage curves for H+ at 4000 K were calculated, taking into account the finite collector size and the variation of grid transparency with angle. These curves are then analyzed under the planar approximation. The results show that only small errors in temperature and density are introduced for an RPA with typical dimensions; and that even when the density error is substantial for non-typical dimensions, the temperature error remains minimal.

  16. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo

    DOE PAGES

    Kent, Paul R.; Krogel, Jaron T.

    2017-06-22

    Growth in computational resources has led to the application of real-space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed-node energy and estimate the localization error in both the locality approximation and the T-moves scheme for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.

  17. Mars approach navigation using Doppler and range measurements to surface beacons and orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Thurman, Sam W.; Estefan, Jeffrey A.

    1991-01-01

    Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies which might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory to the Martian equator for surface beacons and, for orbiters, of the inclination relative to the orbital plane. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.

  18. Quantified choice of root-mean-square errors of approximation for evaluation and power analysis of small differences between structural equation models.

    PubMed

    Li, Libo; Bentler, Peter M

    2011-06-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of approximation (RMSEA) pairs. In this article, we develop a new method that quantifies those chosen RMSEA pairs and allows a quantitative comparison of them. Our method proposes the use of single RMSEA values to replace the choice of RMSEA pairs for model comparison and power analysis, thus avoiding the differential meaning of the chosen RMSEA pairs inherent in the approach of MacCallum et al. (2006). With this choice, the conventional cutoff values in model overall evaluation can directly be transferred and applied to the evaluation and power analysis of model differences. © 2011 American Psychological Association
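
    For concreteness, an RMSEA value for a chi-square difference test can be computed from the standard sample-based definition (the statistic values and sample size below are made-up illustrations, not figures from this article):

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Standard RMSEA point estimate from chi-square, df, and sample size n."""
    return sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

# Hypothetical nested SEMs fit to the same data (n = 500).
chi2_a, df_a = 180.0, 80      # less restricted model
chi2_b, df_b = 210.0, 84      # more restricted model

# RMSEA associated with the chi-square *difference* between the models.
rmsea_diff = rmsea(chi2_b - chi2_a, df_b - df_a, 500)
print(round(rmsea_diff, 4))   # → 0.1141
```

    Quantifying the model difference with a single RMSEA value of this kind is what allows conventional cutoff values from overall model evaluation to carry over to model comparison.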

  19. Approximate symmetries of Hamiltonians

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

    We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have sufficiently small norms. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.

  20. An adaptive H∞ controller design for bank-to-turn missiles using ridge Gaussian neural networks.

    PubMed

    Lin, Chuan-Kai; Wang, Sheng-De

    2004-11-01

    A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of the autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals the expansion of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing H∞ control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H∞ stabilization.
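
    A ridge Gaussian unit, as described, applies a one-dimensional Gaussian to a scalar projection of the input, i.e. a Gaussian rotated and scaled along a direction w. The sketch below uses random directions and a plain least-squares output layer (everything here is an illustrative stand-in for the paper's adaptive scheme) to show such units acting as a function approximator:

```python
import numpy as np

rng = np.random.default_rng(2)

def ridge_gaussian(x, w, c, s):
    # 1-D Gaussian applied to the projection w.x: a rotated/scaled Gaussian.
    return np.exp(-((x @ w - c) ** 2) / (2 * s ** 2))

# Target function on [-1, 1]^2 and random ridge directions/centers.
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]

n_units = 40
W = rng.standard_normal((n_units, 2))
C = rng.uniform(-2, 2, n_units)
Phi = np.column_stack([ridge_gaussian(X, W[i], C[i], 0.5)
                       for i in range(n_units)])

# Least-squares output weights (a stand-in for the adaptive update law).
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rmse = np.sqrt(np.mean((Phi @ coef - y) ** 2))
print(rmse < np.std(y))   # a small residual approximation error remains
```

    The nonzero residual is exactly the "small approximation error" the abstract worries about; the H∞ design then bounds its effect on tracking performance.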

  2. Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.

    PubMed

    Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai

    2014-12-18

    A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
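
    The bootstrap-resampling idea can be sketched in the simpler two-arm setting (the margin, counts, and boundary-null construction below are illustrative assumptions, not the authors' three-arm procedure): resample both arms under the non-inferiority boundary null and compare the observed statistic to the resampled distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

x_new, n_new = 78, 100      # successes / size, experimental arm (hypothetical)
x_ref, n_ref = 82, 100      # successes / size, reference arm (hypothetical)
delta = 0.10                # assumed non-inferiority margin

def stat(pn, pr):
    return pn - pr + delta  # large values support non-inferiority

obs = stat(x_new / n_new, x_ref / n_ref)

# Resample under the boundary null p_new = p_ref - delta (simple restricted
# estimate centered on the pooled rate; a crude stand-in for an MLE).
p_pool = (x_new + x_ref) / (n_new + n_ref)
p_ref0 = min(max(p_pool + delta / 2, 0.0), 1.0)
p_new0 = p_ref0 - delta

B = 5000
boot = stat(rng.binomial(n_new, p_new0, B) / n_new,
            rng.binomial(n_ref, p_ref0, B) / n_ref)
p_value = float(np.mean(boot >= obs))
print(0.0 < p_value < 1.0)
```

    Because the reference distribution is generated at the observed sample sizes rather than taken from a large-sample limit, this style of test keeps its type I error rate closer to the nominal level in small samples, which is the behavior the abstract reports for the score-based bootstrap procedure.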

  3. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    PubMed Central

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated using small-angle attitude errors. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by introducing the error of the DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF, and the PF with the NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130

  4. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    PubMed

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated using small-angle attitude errors. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by introducing the error of the DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF, and the PF with the NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
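
    The resampling-based estimator behind a PF can be illustrated with a minimal bootstrap particle filter. The toy model below (a 1-D random walk observed through a monotone cubic measurement) is a generic sketch of the technique, not the paper's NNEM/MGWD model; all signals and parameters are invented for illustration.

```python
import numpy as np

# Minimal bootstrap particle filter on a toy model: a 1-D random walk
# observed through a monotone nonlinear measurement z = x**3 / 20 + noise.
# Generic illustration only -- not the paper's NNEM/MGWD error model.
rng = np.random.default_rng(0)
T, N = 50, 2000          # time steps, particles
q_std, r_std = 1.0, 1.0  # process and measurement noise std

x_true = np.cumsum(rng.normal(0.0, q_std, T))      # simulated truth
z = x_true**3 / 20.0 + rng.normal(0.0, r_std, T)   # measurements

particles = rng.normal(0.0, 1.0, N)
estimates = np.empty(T)
for k in range(T):
    particles = particles + rng.normal(0.0, q_std, N)   # propagate
    resid = z[k] - particles**3 / 20.0
    w = np.exp(-0.5 * (resid / r_std) ** 2) + 1e-300    # likelihood weights
    w /= w.sum()
    estimates[k] = np.dot(w, particles)                 # posterior mean
    particles = rng.choice(particles, size=N, p=w)      # resample

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
```

    Because the measurement model stays nonlinear throughout, no small-angle (or here, small-state) linearization is needed, which is the property the NNEM exploits for large attitude errors.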

  5. Combined Henyey-Greenstein and Rayleigh phase function.

    PubMed

    Liu, Quanhua; Weng, Fuzhong

    2006-10-01

    The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. For the Henyey-Greenstein (HG) approximation, the phase function preserves only the correct asymmetry factor (i.e., the first moment), which is essential for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function applies only to intensity (scalar) radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function acts as a modulator, extending the application of the Rayleigh phase function to small-asymmetry scattering. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for polarized radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications for both intensity and polarimetric radiative transfers. For microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weak asymmetry scattering are generally below 0.02 K by using the HG-Rayleigh phase function. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
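
    The two building blocks can be checked numerically. The sketch below verifies the normalization and asymmetry factor of the HG and Rayleigh phase functions; the modulated product at the end is only an illustrative combination (renormalized numerically), and the paper's exact HG-Rayleigh form may differ.

```python
import numpy as np

# Henyey-Greenstein (HG) and Rayleigh phase functions (per steradian),
# with numerical checks of normalization and the asymmetry factor.
def hg(mu, g):
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * mu) ** 1.5)

def rayleigh(mu):
    return 3.0 * (1.0 + mu**2) / (16.0 * np.pi)

def integrate(f, x):                      # composite trapezoid rule
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

mu = np.linspace(-1.0, 1.0, 200_001)      # mu = cos(scattering angle)
g = 0.85

norm_hg = 2.0 * np.pi * integrate(hg(mu, g), mu)        # ~1 (normalized)
asym_hg = 2.0 * np.pi * integrate(mu * hg(mu, g), mu)   # ~g (first moment)
norm_r = 2.0 * np.pi * integrate(rayleigh(mu), mu)      # ~1

# HG envelope modulating the Rayleigh angular shape (illustrative only)
combo = hg(mu, 0.2) * rayleigh(mu)
combo /= 2.0 * np.pi * integrate(combo, mu)             # renormalize
```

    The first moment of the HG function equals g by construction, which is exactly the "only the correct asymmetry factor" property noted in the abstract.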

  6. Hamiltonian lattice field theory: Computer calculations using variational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zako, Robert L.

    1991-12-03

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.

  7. A study of hypersonic small-disturbance theory

    NASA Technical Reports Server (NTRS)

    Van Dyke, Milton D

    1954-01-01

    A systematic study is made of the approximate inviscid theory of thin bodies moving at such high supersonic speeds that nonlinearity is an essential feature of the equations of flow. The first-order small-disturbance equations are derived for three-dimensional motions involving shock waves, and estimates are obtained for the order of error involved in the approximation. The hypersonic similarity rule of Tsien and Hayes, and Hayes' unsteady analogy appear in the course of the development. It is shown that the hypersonic theory can be interpreted so that it applies also in the range of linearized supersonic flow theory. Several examples are solved according to the small-disturbance theory, and compared with the full solutions when available.

  8. Determination of the optical properties of semi-infinite turbid media from frequency-domain reflectance close to the source.

    PubMed

    Kienle, A; Patterson, M S

    1997-09-01

    We investigate theoretically the errors in determining the reduced scattering and absorption coefficients of semi-infinite turbid media from frequency-domain reflectance measurements made at small distances between the source and the detector(s). The errors are due to the uncertainties in the measurement of the phase, the modulation and the steady-state reflectance as well as to the diffusion approximation which is used as a theoretical model to describe light propagation in tissue. Configurations using one and two detectors are examined for the measurement of the phase and the modulation and for the measurement of the phase and the steady-state reflectance. Three solutions of the diffusion equation are investigated. We show that measurements of the phase and the steady-state reflectance at two different distances are best suited for the determination of the optical properties close to the source. For this arrangement the errors in the absorption coefficient due to typical uncertainties in the measurement are greater than those resulting from the application of the diffusion approximation at a modulation frequency of 200 MHz. A Monte Carlo approach is also examined; this avoids the errors due to the diffusion approximation.

  9. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
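
    The mechanics of both variants reduce to an SVD of a snapshot matrix. The sketch below builds M1 (solution snapshots) and M2 (solution plus time-derivative snapshots) bases on a toy two-mode field; it illustrates the construction only, not the paper's error bounds, and the test field is invented. Because the field is exactly rank 2, both bases capture it to machine precision; M2's advantage appears for between-snapshot reconstruction, which this sketch does not measure.

```python
import numpy as np

# POD by SVD of a snapshot matrix: M1 uses solution snapshots only,
# M2 augments them with time-derivative snapshots. Toy two-mode field
# u(x,t) = exp(-t) sin(x) + exp(-4t) sin(2x).
x = np.linspace(0.0, np.pi, 64)
t = np.linspace(0.0, 2.0, 9)              # a few widely spaced snapshots

U = np.array([np.exp(-tk) * np.sin(x) + np.exp(-4 * tk) * np.sin(2 * x)
              for tk in t]).T
dU = np.array([-np.exp(-tk) * np.sin(x) - 4 * np.exp(-4 * tk) * np.sin(2 * x)
               for tk in t]).T

def pod_basis(S, r):
    """Leading r left singular vectors of the snapshot matrix S."""
    Q, _, _ = np.linalg.svd(S, full_matrices=False)
    return Q[:, :r]

def proj_error(S, V):
    """Relative error of projecting the snapshots onto span(V)."""
    return np.linalg.norm(S - V @ (V.T @ S)) / np.linalg.norm(S)

V1 = pod_basis(U, 2)                      # M1: solution snapshots
V2 = pod_basis(np.hstack([U, dU]), 2)     # M2: + derivative snapshots
e1, e2 = proj_error(U, V1), proj_error(U, V2)
```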

  10. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  11. Improving the analysis of composite endpoints in rare disease trials.

    PubMed

    McMenamin, Martina; Berglind, Anna; Wason, James M S

    2018-05-22

    Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints.
We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.

  12. Evaluation and attribution of OCO-2 XCO2 uncertainties

    NASA Astrophysics Data System (ADS)

    Worden, John R.; Doran, Gary; Kulawik, Susan; Eldering, Annmarie; Crisp, David; Frankenberg, Christian; O'Dell, Chris; Bowman, Kevin

    2017-07-01

    Evaluating and attributing uncertainties in total column atmospheric CO2 measurements (XCO2) from the OCO-2 instrument is critical for testing hypotheses related to the underlying processes controlling XCO2 and for developing quality flags needed to choose those measurements that are usable for carbon cycle science. Here we test the reported uncertainties of version 7 OCO-2 XCO2 measurements by examining variations of the XCO2 measurements and their calculated uncertainties within small regions (˜ 100 km × 10.5 km) in which natural CO2 variability is expected to be small relative to variations imparted by noise or interferences. Over 39 000 of these small neighborhoods comprised of approximately 190 observations per neighborhood are used for this analysis. We find that a typical ocean measurement has a precision and accuracy of 0.35 and 0.24 ppm respectively for calculated precisions larger than ˜ 0.25 ppm. These values are approximately consistent with the calculated errors of 0.33 and 0.14 ppm for the noise and interference error, assuming that the accuracy is bounded by the calculated interference error. The actual precision for ocean data becomes worse as the signal-to-noise increases or the calculated precision decreases below 0.25 ppm for reasons that are not well understood. A typical land measurement, both nadir and glint, is found to have a precision and accuracy of approximately 0.75 and 0.65 ppm respectively as compared to the calculated precision and accuracy of approximately 0.36 and 0.2 ppm. The differences in accuracy between ocean and land suggest that the accuracy of XCO2 data is likely related to interferences such as aerosols or surface albedo as they vary less over ocean than land. The accuracy as derived here is also likely a lower bound as it does not account for possible systematic biases between the regions used in this analysis.

  13. Talar dome detection and its geometric approximation in CT: Sphere, cylinder or bi-truncated cone?

    PubMed

    Huang, Junbin; Liu, He; Wang, Defeng; Griffith, James F; Shi, Lin

    2017-04-01

    The purpose of our study is to give a relatively objective definition of the talar dome and its shape approximations to sphere (SPH), cylinder (CLD) and bi-truncated cone (BTC). The "talar dome" is well-defined with the improved Dijkstra's algorithm, considering the Euclidean distance and surface curvature. The geometric similarity between the talar dome and ideal shapes, namely SPH, CLD and BTC, is quantified. 50 unilateral CT datasets from 50 subjects with no pathological morphometry of tali were included in the experiments and statistical analyses were carried out based on the approximation error. The similarity between the talar dome and BTC was more prominent, with smaller mean, standard deviation, maximum and median of the approximation error (0.36±0.07mm, 0.32±0.06mm, 2.24±0.47mm and 0.28±0.06mm) compared with fitting to SPH and CLD. In addition, there were significant differences between the fitting error of each pair of models in terms of the 4 measurements (p-values<0.05). The linear regression analyses demonstrated high correlation between CLD and BTC approximations (R² = 0.55 for the median, R² > 0.7 for the others). Color maps representing fitting error indicated that fitting error mainly occurred on the marginal regions of the talar dome for SPH and CLD fittings, while that of BTC was small for the whole talar dome. The successful restoration of ankle functions in displacement surgery highly depends on the comprehensive understanding of the talus. The talar dome surface could be well-defined in a computational way and, compared to SPH and CLD, the talar dome shows outstanding similarity with BTC. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Brorfelde Schmidt CCD Catalog (BSCC)

    DTIC Science & Technology

    2010-06-23

    reference stars. Errors of individual positions are about 20 to 200 mas for stars in the R = 10 to 18 mag range. External comparisons with 2MASS and SDSS...reveal possible small systematic errors in the BSCC of up to about 30 mas. The catalog is supplemented with J, H, and Ks magnitudes from the 2MASS ...Survey (2MASS) near-infrared photometry added to the catalog (2). The filters used at the Brorfelde Schmidt for this project are approximating the

  15. The Mars Exploration Rover (MER) Transverse Impulse Rocket System (TIRS)

    NASA Technical Reports Server (NTRS)

    SanMartin, Alejandro Miguel; Bailey, Erik

    2005-01-01

    In a very short period of time the MER project successfully developed and tested a system, TIRS/DIMES, to improve the probability of success in the presence of large Martian winds. The successful development of TIRS/DIMES played a big role in the landing site selection process by enabling the landing of Spirit on Gusev crater, a site of very high scientific interest but with known high wind conditions. The performance of TIRS by Spirit at Gusev Crater was excellent. The velocity prediction error was small and Big TIRS was fired, reducing the impact horizontal velocity from approximately 23 meters per second to approximately 11 meters per second, well within the airbag capabilities. The performance of TIRS by Opportunity at Meridiani was good. The velocity prediction error was rather large (approximately 6 meters per second, a less than 2 sigma value), but TIRS did not fire, which was the correct action.

  16. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
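
    For reference, the standard (non-approximated) affine projection update can be sketched in a few lines. The code below identifies an unknown FIR path with the textbook AP recursion using a direct K × K solve; it is a generic illustration under synthetic signals, not the paper's Gauss-Seidel variant or its prediction-based learning-rate control.

```python
import numpy as np

# Textbook affine projection (AP) adaptive filter identifying an unknown
# FIR path. All signals are synthetic; parameters are illustrative.
rng = np.random.default_rng(0)
L, K, mu, delta = 8, 4, 0.5, 1e-4   # taps, projection order, step, regularizer

h = rng.normal(size=L)               # unknown path to identify
u = rng.normal(size=4000)            # excitation
d = np.convolve(u, h)[:len(u)]       # desired signal (path output)

w = np.zeros(L)
for n in range(L + K, len(u)):
    # X: the last K regressor vectors (newest first), shape (L, K)
    X = np.column_stack([u[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
    e = d[n:n - K:-1] - X.T @ w                       # a-priori errors
    w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)

misalignment = np.linalg.norm(w - h) / np.linalg.norm(h)
```

    The Gauss-Seidel approach in the paper replaces the direct solve with cheap iterative sweeps; the residue-based error vector and learning-rate control it adds are what stabilize convergence for small step sizes.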

  17. Analytic approximations to the modon dispersion relation. [in oceanography

    NASA Technical Reports Server (NTRS)

    Boyd, J. P.

    1981-01-01

    Three explicit analytic approximations are given to the modon dispersion relation developed by Flierl et al. (1980) to describe Gulf Stream rings and related phenomena in the oceans and atmosphere. The solutions are in the form of k(q), and are developed in the form of a power series in q for small q, an inverse power series in 1/q for large q, and a two-point Pade approximant. The low order Pade approximant is shown to yield a solution for the dispersion relation with a maximum relative error for the lowest branch of the function equal to one in 700 in the q interval zero to infinity.
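
    The idea behind the Padé step can be illustrated with a one-point [m/n] approximant built from Taylor coefficients (the paper's approximant is two-point, matching both the q → 0 and q → ∞ expansions; that refinement is not reproduced here, and exp(x) stands in for the dispersion relation).

```python
import numpy as np
from math import factorial, e

# One-point [m/n] Pade approximant from Taylor coefficients: a generic
# illustration of why Pade fits can outperform truncated power series.
def pade(c, m, n):
    """Numerator a and denominator b (b[0] = 1) of the [m/n] Pade
    approximant of the power series sum_i c[i] x**i."""
    c = np.asarray(c, dtype=float)
    A = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

c = [1.0 / factorial(k) for k in range(5)]       # exp(x) Taylor coefficients
a, b = pade(c, 2, 2)                             # [2/2] approximant

x0 = 1.0
pade_val = np.polyval(a[::-1], x0) / np.polyval(b[::-1], x0)
taylor_val = np.polyval(np.array(c)[::-1], x0)   # 4th-order Taylor sum
```

    At x = 1 the [2/2] approximant (19/7 ≈ 2.7143) is closer to e than the fourth-order Taylor sum built from the same coefficients, the same economy the two-point approximant exploits over the whole q interval.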

  18. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.

    PubMed

    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong

    2011-12-01

    In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on a Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

  19. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    NASA Astrophysics Data System (ADS)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulations and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) and multiple scales Lindstedt-Poincare (MSLP) methods do not give the desired results for strongly nonlinear forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solution (considered to be exact) and are better than other existing results. For weak nonlinearities with weak damping effect, the absolute relative error (of the first-order approximate external frequency) in this paper is only 0.07% when amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both weak and strong damping effects.

  20. Finite element modeling of light propagation in fruit under illumination of continuous-wave beam

    USDA-ARS?s Scientific Manuscript database

    Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...

  1. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.

  2. Errors induced by the neglect of polarization in radiance calculations for Rayleigh-scattering atmospheres

    NASA Technical Reports Server (NTRS)

    Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.

    1994-01-01

    Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no physical background, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximum at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend avoiding geometries involving phase angles equal to or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (except first-order) light scattering paths involving right scattering angles and right-angle rotations of the scattering plane.

  3. Analysis of the PLL phase error in presence of simulated ionospheric scintillation events

    NASA Astrophysics Data System (ADS)

    Forte, B.

    2012-01-01

    The functioning of standard phase locked loops (PLL), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in the presence of small phase errors. Such an approximation represents a reasonable assumption in most propagation channels. However, in the presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play, they will experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded into a commercial multiconstellation GNSS receiver were analyzed in the presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations, although cycle slips only occurred during the signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in the presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL and accuracy of the observables (i.e. the error propagation onto the observables stage).
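
    The breakdown of the linear regime is easy to quantify: the linear PLL model replaces sin(φ) by φ in the phase detector, and the relative error of that substitution grows rapidly with the phase error. A quick numerical check (generic, not tied to any particular loop design):

```python
import numpy as np

# Relative error of the small-angle substitution sin(phi) ~= phi,
# evaluated at increasing phase errors (radians).
phi = np.array([0.05, 0.1, 0.5, 1.0, 1.5])
rel_err = np.abs(np.sin(phi) - phi) / np.sin(phi)
```

    At 0.1 rad the linearization error is a fraction of a percent, but near 1.5 rad it exceeds 50%, which is why scintillation-induced fades that drive the phase error large push the loop into its non-linear regime.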

  4. Comments on the Voigt function implementation in the Astropy and SpectraPlot.com packages

    NASA Astrophysics Data System (ADS)

    Schreier, Franz

    2018-07-01

    The Voigt profile is important for spectroscopy, astrophysics, and many other fields of physics, but is notoriously difficult to compute. McLean et al. [1] [J. Electron Spectrosc. & Relat. Phenom., 1994] have proposed an approximation using a sum of Lorentzians. Our assessment indicates that this algorithm has significant errors for small arguments. After a brief survey of the requirements for spectroscopy we give a short list of both efficient and accurate codes and recommend implementations based on rational approximations.
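
    One of the standard accurate routes is the Faddeeva-function identity K(x, y) = Re[w(x + iy)], available in SciPy (assumed installed here); this sketch checks two known limits rather than prescribing any particular implementation from the paper's recommended list.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

# Voigt function K(x, y) = Re[w(x + iy)] via the Faddeeva function.
def voigt_K(x, y):
    return wofz(x + 1j * y).real

# y -> 0 limit reduces to the pure Gaussian exp(-x**2)
gauss_check = abs(voigt_K(1.0, 0.0) - np.exp(-1.0))

# the area in x equals sqrt(pi) for any y > 0; check numerically at y = 1
xs = np.linspace(-2000.0, 2000.0, 400_001)
Kv = voigt_K(xs, 1.0)
area = float(np.sum(0.5 * (Kv[1:] + Kv[:-1]) * (xs[1] - xs[0])))
```

    Unlike the sum-of-Lorentzians fit criticized in the text, this route stays accurate for small arguments, at the cost of a complex-valued special function call.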

  5. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

    Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic, distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0), and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvement in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.

  6. Finite element modeling of light propagation in turbid media under illumination of a continuous-wave beam

    USDA-ARS?s Scientific Manuscript database

    Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to the diffusion approximation for semi-infinite media under the normal illumination of an infinitely small light beam. The method is, however, prone to error in m...

  7. Dichrometer errors resulting from large signals or improper modulator phasing.

    PubMed

    Sutherland, John C

    2012-09-01

    A single-beam spectrometer equipped with a photoelastic modulator can be configured to measure a number of different parameters useful in characterizing chemical and biochemical materials including natural and magnetic circular dichroism, linear dichroism, natural and magnetic fluorescence-detected circular dichroism, and fluorescence polarization anisotropy as well as total absorption and fluorescence. The derivations of the mathematical expressions used to extract these parameters from ultraviolet, visible, and near-infrared light-induced electronic signals in a dichrometer assume that the dichroic signals are sufficiently small that certain mathematical approximations will not introduce significant errors. This article quantifies errors resulting from these assumptions as a function of the magnitude of the dichroic signals. In the case of linear dichroism, improper modulator programming can result in errors greater than those resulting from the assumption of small signal size, whereas for fluorescence polarization anisotropy, improper modulator phase alone gives incorrect results. Modulator phase can also impact the values of total absorbance recorded simultaneously with linear dichroism and total fluorescence. Copyright © 2012 Wiley Periodicals, Inc., A Wiley Company.

  8. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  9. Double power series method for approximating cosmological perturbations

    NASA Astrophysics Data System (ADS)

    Wren, Andrew J.; Malik, Karim A.

    2017-04-01

    We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.

  10. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  11. Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner.

    PubMed

    Gordon, H R; Brown, J W; Evans, R H

    1988-03-01

    For improved analysis of Coastal Zone Color Scanner (CZCS) imagery, the radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols (Rayleigh radiance) has been computed with an exact multiple scattering code, i.e., including polarization. The results indicate that the single scattering approximation normally used to compute this radiance can cause errors of up to 5% for small and moderate solar zenith angles. At large solar zenith angles, such as encountered in the analysis of high-latitude imagery, the errors can become much larger, e.g., >10% in the blue band. The single scattering error also varies along individual scan lines. Comparisons with multiple scattering computations using scalar transfer theory, i.e., ignoring polarization, show that scalar theory can yield errors of approximately the same magnitude as single scattering when compared with exact computations at small to moderate values of the solar zenith angle. The exact computations can be easily incorporated into CZCS processing algorithms, and, for application to future instruments with higher radiometric sensitivity, a scheme is developed with which the effect of variations in the surface pressure could be easily and accurately included in the exact computation of the Rayleigh radiance. Direct application of these computations to CZCS imagery indicates that accurate atmospheric corrections can be made with solar zenith angles at least as large as 65 degrees and probably up to at least 70 degrees with a more sensitive instrument. This suggests that the new Rayleigh radiance algorithm should produce more consistent pigment retrievals, particularly at high latitudes.

  12. Flight assessment of an atmospheric turbulence measurement system with emphasis on long wavelengths

    NASA Technical Reports Server (NTRS)

    Rhyne, R. H.

    1976-01-01

    A flight assessment has been made of a system for measuring the three components of atmospheric turbulence in the frequency range associated with airplane motions (0 to approximately 0.5 Hz). Results of the assessment indicate acceptable accuracy of the resulting time histories and power spectra. Small residual errors at the airplane short period and Dutch roll frequencies (0.5 and 0.25 Hz, respectively), as determined from in-flight maneuvers in smooth air, would not be detectable on the power spectra. However, errors at approximately 0.25 Hz can be present in the time history of the lateral turbulence component, particularly at the higher altitudes where airplane yawing motions are large. An assessment of the quantities comprising the vertical turbulence component leads to the conclusion that the vertical component is essentially accurate to zero frequency.

  13. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  14. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

    2008-01-01

    Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

  15. Variability of rainfall over small areas

    NASA Technical Reports Server (NTRS)

    Runnels, R. C.

    1983-01-01

    A preliminary investigation was made to determine estimates of the number of rain gauges needed to measure the variability of rainfall in time and space over small areas (approximately 40 sq miles). The literature on rainfall variability was examined and the types of empirical relationships used to relate rainfall variations to meteorological and catchment-area characteristics were considered. Relations between the coefficient of variation and areal-mean rainfall and area have been used by several investigators, and these parameters seemed reasonable ones to use in any future study of rainfall variations. From a knowledge of an appropriate coefficient of variation (determined by the above-mentioned relations), the number of rain gauges needed for the precise determination of areal-mean rainfall may be calculated by statistical estimation theory. The number of gauges needed to measure the coefficient of variation over a 40 sq mile area, with varying degrees of error, was found to range from 264 (10% error, mean precipitation = 0.1 in) to about 2 (100% error, mean precipitation = 0.1 in).

  16. Error Propagation in the four terrestrial reference frames of the 2022 Modernized National Spatial Reference System

    NASA Astrophysics Data System (ADS)

    Roman, D. R.; Smith, D. A.

    2017-12-01

    In 2022, the National Geodetic Survey will replace all three NAD 83 reference frames with four new terrestrial reference frames. Each frame will be named after a tectonic plate (North American, Pacific, Caribbean and Mariana) and each will be related to the IGS frame through three Euler pole parameters (EPPs). This talk will focus on three main areas of error propagation when defining coordinates in these four frames: (1) the use of the small-angle approximation to relate true rotation about an Euler pole to small rotations about three Cartesian axes; (2) the current state of the art in determining the Euler poles of these four plates; and (3) the combination of both IGS Cartesian coordinate uncertainties and EPP uncertainties into coordinate uncertainties in the four new frames. Discussion will also include recent efforts at improving the Euler poles for these frames and the expected dates when errors in the EPPs will cause an unacceptable level of uncertainty in the four new terrestrial reference frames.
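    The first of these error sources can be made concrete with a short numerical sketch (an illustration, not NGS code): compare an exact rotation about an Euler pole, via the Rodrigues formula, with the small-angle approximation R ≈ I + θK that underlies the EPP formulation. The discrepancy grows as θ²/2, which is negligible for the sub-microradian rotations accumulated by tectonic plates over a few decades. The pole axis below is hypothetical.

```python
import numpy as np

def skew(k):
    """Cross-product (skew-symmetric) matrix K of a unit axis k."""
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rotation_exact(axis, theta):
    """Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K @ K."""
    k = np.asarray(axis, float) / np.linalg.norm(axis)
    K = skew(k)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rotation_small_angle(axis, theta):
    """Linearized rotation used with Euler pole parameters: R ≈ I + theta K."""
    k = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.eye(3) + theta * skew(k)

axis = np.array([0.2, -0.5, 0.8])   # hypothetical Euler pole direction
theta = 1.0e-6                       # rad; plate-scale rotation over ~decades
x = np.array([6.378e6, 0.0, 0.0])    # position on the Earth's surface, metres

err = np.linalg.norm(rotation_exact(axis, theta) @ x
                     - rotation_small_angle(axis, theta) @ x)
print(err)  # on the order of theta**2 * |x| / 2, i.e. micrometres
```

The exact matrix is orthogonal while the linearized one is not, so the approximation error in position is second order in θ and only becomes a concern as accumulated rotations grow.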

  17. A one-dimensional model of flow in a junction of thin channels, including arterial trees

    NASA Astrophysics Data System (ADS)

    Kozlov, V. A.; Nazarov, S. A.

    2017-08-01

    We study a Stokes flow in a junction of thin channels (of diameter O(h)) for fixed flows of the fluid at the inlet cross-sections and fixed peripheral pressure at the outlet cross-sections. On the basis of the idea of the pressure drop matrix, apart from Neumann conditions (fixed flow) and Dirichlet conditions (fixed pressure) at the outer vertices, the ordinary one-dimensional Reynolds equations on the edges of the graph are equipped with transmission conditions containing a small parameter h at the inner vertices, which are transformed into the classical Kirchhoff conditions as h → +0. We establish that the pre-limit transmission conditions ensure an exponentially small error O(e^{-ρ/h}), ρ > 0, in the calculation of the three-dimensional solution, whereas the Kirchhoff conditions only give a polynomially small error. For the arterial tree, under the assumption that the walls of the blood vessels are rigid, a (2×2) pressure drop matrix appears at every bifurcation node, and its influence on the transmission conditions is taken into account by means of small variations of the lengths of the graph and by introducing effective lengths in the one-dimensional description of blood vessels, whilst keeping the Kirchhoff conditions and exponentially small approximation errors. We discuss concrete forms of arterial bifurcation and available generalizations of the results, in particular to the Navier-Stokes system of equations. Bibliography: 59 titles.

  18. Cosmological applications of Padé approximant

    NASA Astrophysics Data System (ADS)

    Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan

    2014-01-01

    As is well known, in mathematics any function can be approximated by a Padé approximant, the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these applications, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
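    The mechanics can be illustrated with a generic function (a sketch of the Padé idea, not the paper's XCDM luminosity-distance fit): build a [3/2] Padé approximant of ln(1 + x) from its Taylor coefficients and compare it with the truncated Taylor series at the edge of the convergence disc.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of ln(1 + x): 0, 1, -1/2, 1/3, -1/4, 1/5
an = [0.0, 1.0, -1.0 / 2, 1.0 / 3, -1.0 / 4, 1.0 / 5]
p, q = pade(an, 2)           # [3/2] Padé approximant; p, q are np.poly1d

x = 1.0                       # edge of the Taylor series' convergence disc
exact = np.log(1.0 + x)
taylor = np.polyval(an[::-1], x)   # degree-5 Taylor truncation
rational = p(x) / q(x)

print(abs(taylor - exact))    # ≈ 9e-2
print(abs(rational - exact))  # ≈ 2e-4: far better from the same six coefficients
```

Both approximations use exactly the same Taylor information; the rational form simply resums it, which is why Padé expansions remain predictive where a truncated Taylor series degrades.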

  19. Rational approximations of f(R) cosmography through Padé polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    We consider high-redshift f(R) cosmography adopting the technique of polynomial reconstruction. In lieu of Taylor treatments, which turn out to be non-predictive as soon as z > 1, we take into account Padé rational approximations, which consist of expansions converging in high-redshift domains. In particular, our strategy is to reconstruct f(z) functions first, assuming the Ricci scalar to be invertible with respect to the redshift z. Having the so-obtained f(z) functions, we invert them and easily obtain the corresponding f(R) terms. We minimize error propagation, assuming no errors upon redshift data. The treatment we follow naturally leads to evaluating curvature pressure, density and equation of state, characterizing the universe's evolution at redshifts much higher than standard cosmographic approaches. We therefore match these outcomes with small-redshift constraints obtained by framing the f(R) cosmology through Taylor series around z ≃ 0. This gives rise to a calibration procedure at small redshift that enables the definition of polynomial approximations up to z ≃ 10. Last but not least, we show discrepancies with the standard cosmological model which point towards an extension of the ΛCDM paradigm, indicating an effective dark energy term evolving in time. We finally describe the evolution of our effective dark energy term by means of basic techniques of data mining.

  20. New approach to CT pixel-based photon dose calculations in heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, J.W.; Henkelman, R.M.

    The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing, with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.

  1. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. For the first time, approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  2. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
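    The attenuation effect described above is easy to reproduce (a minimal simulation sketch, not the authors' Bayesian AR+WN estimator): adding white measurement noise with the same variance as the latent AR(1) process, so that about 50% of the total variance is error, pulls the naive lag-1 autocorrelation estimate down to roughly half the true autoregressive parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.5, 50_000

# latent AR(1) process with unit innovation variance
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# white measurement noise with the same variance as the latent process
var_x = 1.0 / (1.0 - phi**2)
y = x + rng.normal(scale=np.sqrt(var_x), size=n)

def lag1_corr(z):
    """Naive lag-1 autocorrelation estimate."""
    z = z - z.mean()
    return (z[1:] @ z[:-1]) / (z @ z)

print(lag1_corr(x))  # ≈ 0.50: close to phi when there is no measurement error
print(lag1_corr(y))  # ≈ 0.25: attenuated by the reliability var_x / (2 * var_x)
```

Models such as AR+WN or ARMA recover the underlying dynamics by estimating the measurement-error variance jointly with the autoregressive parameter instead of ignoring it.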

  3. Highly turbulent solutions of the Lagrangian-averaged Navier-Stokes alpha model and their large-eddy-simulation potential.

    PubMed

    Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick

    2007-11-01

    We compute solutions of the Lagrangian-averaged Navier-Stokes alpha (LANS-α) model for significantly higher Reynolds numbers (up to Re ≈ 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-α model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of approximately 1300. Analysis of the third-order structure function scaling supports the predicted l^3 scaling; it corresponds to a k^-1 scaling of the energy spectrum for scales smaller than α. The energy spectrum itself shows a different scaling, which goes as k^+1. This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-α model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l^3 [E(k) ~ k^-1] scaling is subdominant to k^+1 in the energy spectrum, but the l^3 scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We demonstrate verification of the prediction for the size of the LANS-α attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-α model, or for obtaining a formulation of the large eddy simulation optimal in the context of the alpha models. The fully converged grid-independent LANS-α model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using the LANS-α instead of the primitive equations. Furthermore, the small-scale behavior of the LANS-α model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large α. These small-scale features, however, do not preclude the LANS-α model from reproducing correctly the intermittency properties of the high-Reynolds-number flow.

  4. Bayesian Integration of Information in Hippocampal Place Cells

    PubMed Central

    Madl, Tamas; Franklin, Stan; Chen, Ke; Montaldi, Daniela; Trappl, Robert

    2014-01-01

    Accurate spatial localization requires a mechanism that corrects for errors, which might arise from inaccurate sensory information or neuronal noise. In this paper, we propose that hippocampal place cells might implement such an error correction mechanism by integrating different sources of information in an approximately Bayes-optimal fashion. We compare the predictions of our model with physiological data from rats. Our results suggest that useful predictions regarding the firing fields of place cells can be made based on a single underlying principle, Bayesian cue integration, and that such predictions are possible using a remarkably small number of model parameters. PMID:24603429
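    The underlying principle is ordinary precision-weighted cue fusion. A minimal sketch (illustrative only, not the paper's place-cell model): two independent Gaussian position cues, e.g. a noisy path-integration estimate and a more reliable visual landmark, combine into a posterior whose mean is the precision-weighted average and whose variance is smaller than either cue's.

```python
def fuse(mu1, var1, mu2, var2):
    """Bayes-optimal fusion of two independent 1-D Gaussian cues.

    Posterior precision is the sum of cue precisions; the posterior mean
    weights each cue by its relative precision.
    """
    prec1, prec2 = 1.0 / var1, 1.0 / var2
    mu = (prec1 * mu1 + prec2 * mu2) / (prec1 + prec2)
    var = 1.0 / (prec1 + prec2)
    return mu, var

# path-integration cue: mean 0.0, variance 4.0; visual cue: mean 1.0, variance 1.0
mu, var = fuse(0.0, 4.0, 1.0, 1.0)
print(mu, var)  # → 0.8 0.8: pulled toward the reliable cue, variance below both
```

Fusing cues this way always reduces uncertainty, which is why a population implementing it acts as an error-correction mechanism.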

  5. First-order approximation error analysis of Risley-prism-based beam directing system.

    PubMed

    Zhao, Yanyan; Yuan, Yan

    2014-12-01

    To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Moreover, previous pointing error analyses of the Risley-prism system have provided results only for the case in which the component errors, prism orientation errors, and assembly errors are known. In this work, a prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived, and in both cases were proved to be the sum of the errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate the beam directing errors of any Risley-prism beam directing system with a similar configuration.

  6. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in a sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
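    The idea can be sketched with plain kernel ridge regression in NumPy (an illustration of the sum-space construction, not the paper's algorithm or its learning-rate analysis): a sum of a wide and a narrow Gaussian kernel fits a target with both low- and high-frequency components, whereas the wide kernel alone smooths the high-frequency part away. The scales and regularization below are arbitrary choices for the demonstration.

```python
import numpy as np

def rbf(x1, x2, scale):
    """Gaussian (RBF) kernel matrix for 1-D inputs."""
    return np.exp(-(x1[:, None] - x2[None, :])**2 / (2.0 * scale**2))

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0.0, 1.0, n)
# target with a low- and a high-frequency component, plus a little noise
y = np.sin(2*np.pi*x) + 0.3*np.sin(20*np.pi*x) + 0.05*rng.standard_normal(n)

lam = 1e-3  # ridge regularization parameter

def krr_fit_mse(K, y, lam):
    """Training MSE of kernel ridge regression with kernel matrix K."""
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return np.mean((K @ alpha - y)**2)

mse_wide = krr_fit_mse(rbf(x, x, 0.3), y, lam)                    # single large-scale kernel
mse_sum = krr_fit_mse(rbf(x, x, 0.3) + rbf(x, x, 0.02), y, lam)   # sum-space kernel

print(mse_wide, mse_sum)  # the sum kernel also captures the high-frequency part
```

The kernel of a sum of RKHSs is the sum of the individual kernels, so the sum-space fit costs one extra kernel evaluation yet represents both scales simultaneously.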

  7. Comparison of universal approximators incorporating partial monotonicity by structure.

    PubMed

    Minin, Alexey; Velikova, Marina; Lang, Bernhard; Daniels, Hennie

    2010-05-01

    Neural networks applied in control loops and safety-critical domains have to meet more requirements than just the overall best function approximation. On the one hand, a small approximation error is required; on the other hand, the smoothness and the monotonicity of selected input-output relations have to be guaranteed. Otherwise, the stability of most of the control laws is lost. In this article we compare two neural network-based approaches incorporating partial monotonicity by structure, namely the Monotonic Multi-Layer Perceptron (MONMLP) network and the Monotonic MIN-MAX (MONMM) network. We show the universal approximation capabilities of both types of network for partially monotone functions. On a number of datasets, we investigate the advantages and disadvantages of these approaches related to approximation performance, training of the model and convergence. 2009 Elsevier Ltd. All rights reserved.

  8. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.

  9. Product-State Approximations to Quantum States

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.

    2016-02-01

    We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the way classical correlation can. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters, then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results restrict the Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast, the classical PCP constructions are often based on constraint graphs with high degree. Likewise, we show that the parallel repetition that is possible for classical constraint satisfaction problems is not possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

  10. Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.

    2018-03-01

    Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end inhalation and the end exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper presents a sensitivity analysis of the relative Jacobian error to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random smooth biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors averaging 0.53 mm may lead to a 10% relative change in Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system for all subjects are positively correlated (close to +1), i.e., regions with high sensitivity have, on average, more error in the Jacobian determinant.
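    The quantity under study can be sketched numerically. The following is a minimal illustration using a synthetic displacement field (not the paper's 4DCT data): the Jacobian determinant of a transform x ↦ x + u(x) is det(I + ∇u), estimated here with central differences, and a small smooth perturbation of u changes it by a computable relative amount.

```python
import numpy as np

# Regular 2-D grid and a smooth synthetic displacement field u(x, y)
n = 64
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
h = 1.0 / (n - 1)
ux = 0.02 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
uy = 0.02 * np.cos(2 * np.pi * x) * np.sin(2 * np.pi * y)

def jacobian_det(ux, uy, h):
    """det(I + grad u) estimated with central differences."""
    dux_dx, dux_dy = np.gradient(ux, h)
    duy_dx, duy_dy = np.gradient(uy, h)
    return (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx

J = jacobian_det(ux, uy, h)

# Perturb the field with a small smooth error and look at the relative change
pert = 0.005 * np.sin(4 * np.pi * x) * np.sin(4 * np.pi * y)
J_pert = jacobian_det(ux + pert, uy, h)
rel_err = np.abs(J_pert - J) / np.abs(J)
print(rel_err.max())   # relative Jacobian error induced by the perturbation
```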

  11. Accounting for measurement error in log regression models with applications to accelerated testing.

    PubMed

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  12. Simple solutions for relativistic generalizations of the Child-Langmuir law and the Langmuir-Blodgett law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yongpeng; Northwest Institute of Nuclear Technology, P.O. Box 69-13, Xi'an 710024; Liu Guozhi

    In this paper, the Child-Langmuir law and Langmuir-Blodgett law are generalized to the relativistic regime by a simple method. Two classical laws suitable for the nonrelativistic regime are modified to simple approximate expressions applicable for calculating the space-charge-limited currents of one-dimensional steady-state planar diodes and coaxial diodes under the relativistic regime. The simple approximate expressions, extending the Child-Langmuir law and Langmuir-Blodgett law to fit the full range of voltage, have small relative errors less than 1% for one-dimensional planar diodes and less than 5% for coaxial diodes.
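    For reference, the classical (nonrelativistic) Child-Langmuir current density that the paper generalizes can be evaluated directly; the relativistic approximate expressions themselves are specific to the paper and are not reproduced here.

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # electron charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_j(V, d):
    """Classical space-charge-limited current density of a planar diode:
    J = (4 eps0 / 9) * sqrt(2 e / m) * V^{3/2} / d^2 (SI units)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) * V**1.5 / d**2

# Example: 100 kV across a 1 cm gap
print(child_langmuir_j(1.0e5, 1.0e-2), "A/m^2")
```

    The characteristic scalings, J ∝ V^{3/2} and J ∝ 1/d², are what the relativistic generalization modifies at high voltage.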

  13. Self-organizing radial basis function networks for adaptive flight control and aircraft engine state estimation

    NASA Astrophysics Data System (ADS)

    Shankar, Praveen

    The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators which utilize a parameterization structure that is adapted online reduces the effect of this error between the design model and actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parameterization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high performance flight vehicle such as the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error which may occur due to imperfect modeling, approximate inversion, or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations including control surface failures, modeling errors, and external disturbances with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. Excellent tracking error minimization to a pre-specified level using the adaptive approximation based controller was achieved, while the baseline dynamic inversion controller failed to meet this performance specification. The performance of the SORBFN based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller, which is tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN achieves good tracking convergence under all error conditions.

  14. The Evolution and Discharge of Electric Fields within a Thunderstorm

    NASA Astrophysics Data System (ADS)

    Hager, William W.; Nisbet, John S.; Kasha, John R.

    1989-05-01

    A 3-dimensional electrical model for a thunderstorm is developed and finite difference approximations to the model are analyzed. If the spatial derivatives are approximated by a method akin to the box scheme and if the temporal derivative is approximated by either a backward difference or the Crank-Nicolson scheme, we show that the resulting discretization is unconditionally stable. The forward difference approximation to the time derivative is stable when the time step is sufficiently small relative to the ratio between the permittivity and the conductivity. Max-norm error estimates for the discrete approximations are established. To handle the propagation of lightning, special numerical techniques are devised based on the Inverse Matrix Modification Formula and Cholesky updates. Numerical comparisons between the model and theoretical results of Wilson and Holzer-Saxon are presented. We also apply our model to a storm observed at the Kennedy Space Center on July 11, 1978.
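    The stability restriction on the forward difference can be seen in a scalar caricature of the charge-relaxation dynamics: for dE/dt = -E/τ with τ = ε/σ (the permittivity-to-conductivity ratio), forward Euler is stable only when the time step satisfies Δt < 2τ. The value of τ below is illustrative, not taken from the storm model.

```python
def forward_euler_decay(dt, tau, steps=200, E0=1.0):
    """Forward-Euler integration of dE/dt = -E/tau, where tau = eps/sigma
    is the dielectric relaxation time of the medium."""
    E = E0
    for _ in range(steps):
        E = E + dt * (-E / tau)
    return E

tau = 1.0e-2                                       # eps/sigma, illustrative
print(abs(forward_euler_decay(0.5 * tau, tau)))    # dt < 2*tau: decays
print(abs(forward_euler_decay(2.5 * tau, tau)))    # dt > 2*tau: blows up
```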

  15. Photoplus: auxiliary information for printed images based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Samadani, Ramin; Mukherjee, Debargha

    2008-01-01

    A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.

  16. Incorporating measurement error in n = 1 psychological autoregressive modeling

    PubMed Central

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
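    The bias from disregarding measurement error is easy to reproduce in a small simulation (not the paper's mood data): fitting a naive AR(1) model to noisy observations attenuates the estimated autoregressive parameter toward zero.

```python
import numpy as np

rng = np.random.default_rng(42)

phi = 0.6        # true autoregressive parameter
n = 20000
innov_sd = 1.0
meas_sd = 1.0    # measurement-error SD comparable to the process SD

# Latent AR(1) process
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0.0, innov_sd)

y = x + rng.normal(0.0, meas_sd, size=n)   # observed with measurement error

def lag1_ar_estimate(z):
    """Naive AR(1) estimate: lag-1 autocorrelation of the series."""
    zc = z - z.mean()
    return np.dot(zc[1:], zc[:-1]) / np.dot(zc, zc)

print(lag1_ar_estimate(x))   # close to the true 0.6
print(lag1_ar_estimate(y))   # attenuated toward zero
```

    The attenuation factor is the reliability var(x)/(var(x)+var(noise)), which is why a substantial measurement-error share translates into a substantial underestimation of the autoregressive parameter.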

  17. Efficient time-dependent density functional theory approximations for hybrid density functionals: analytical gradients and parallelization.

    PubMed

    Petrenko, Taras; Kossmann, Simone; Neese, Frank

    2011-02-07

    In this paper, we present the implementation of efficient approximations to time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation (TDA) for hybrid density functionals. For the calculation of the TDDFT/TDA excitation energies and analytical gradients, we combine the resolution of identity (RI-J) algorithm for the computation of the Coulomb terms and the recently introduced "chain of spheres exchange" (COSX) algorithm for the calculation of the exchange terms. It is shown that for extended basis sets, the RIJCOSX approximation leads to speedups of up to 2 orders of magnitude compared to traditional methods, as demonstrated for hydrocarbon chains. The accuracy of the adiabatic transition energies, excited state structures, and vibrational frequencies is assessed on a set of 27 excited states for 25 molecules with the configuration interaction singles and hybrid TDDFT/TDA methods using various basis sets. Compared to the canonical values, the typical error in transition energies is of the order of 0.01 eV. Similar to the ground-state results, excited state equilibrium geometries differ by less than 0.3 pm in the bond distances and 0.5° in the bond angles from the canonical values. The typical error in the calculated excited state normal coordinate displacements is of the order of 0.01, and relative error in the calculated excited state vibrational frequencies is less than 1%. The errors introduced by the RIJCOSX approximation are, thus, insignificant compared to the errors related to the approximate nature of the TDDFT methods and basis set truncation. For TDDFT/TDA energy and gradient calculations on Ag-TB2-helicate (156 atoms, 2732 basis functions), it is demonstrated that the COSX algorithm parallelizes almost perfectly (speedup ~26-29 for 30 processors). The exchange-correlation terms also parallelize well (speedup ~27-29 for 30 processors). The solution of the Z-vector equations shows a speedup of ~24 on 30 processors. 
The parallelization efficiency for the Coulomb terms can be somewhat smaller (speedup ~15-25 for 30 processors), but their contribution to the total calculation time is small. Thus, the parallel program completes a Becke3-Lee-Yang-Parr energy and gradient calculation on the Ag-TB2-helicate in less than 4 h on 30 processors. We also present the necessary extension of the Lagrangian formalism, which enables the calculation of the TDDFT excited state properties in the frozen-core approximation. The algorithms described in this work are implemented into the ORCA electronic structure system.

  18. Properties of Augmented Kohn-Sham Potential for Energy as Simple Sum of Orbital Energies.

    PubMed

    Zahariev, Federico; Levy, Mel

    2017-01-12

    A recent modification to the traditional Kohn-Sham method (Levy, M.; Zahariev, F. Phys. Rev. Lett. 2014, 113, 113002; Levy, M.; Zahariev, F. Mol. Phys. 2016, 114, 1162-1164), which gives the ground-state energy as a direct sum of the occupied orbital energies, is discussed and its properties are numerically illustrated on representative atoms and ions. It is observed that current approximate density functionals tend to give surprisingly small errors for the highest occupied orbital energies that are obtained with the augmented potential. The appropriately shifted Kohn-Sham potential is the basic object within this direct-energy Kohn-Sham method and needs to be approximated. To facilitate approximations, several constraints to the augmented Kohn-Sham potential are presented.

  19. X-ray microanalytical surveys of minor element concentrations in unsectioned biological samples

    NASA Astrophysics Data System (ADS)

    Schofield, R. M. S.; Lefevre, H. W.; Overley, J. C.; Macdonald, J. D.

    1988-03-01

    Approximate concentration maps of small unsectioned biological samples are made using the pixel by pixel ratio of PIXE images to areal density images. Areal density images are derived from scanning transmission ion microscopy (STIM) proton energy-loss images. Corrections for X-ray production cross section variations, X-ray attenuation, and depth averaging are approximated or ignored. Estimates of the magnitude of the resulting error are made. Approximate calcium concentrations within the head of a fruit fly are reported. Concentrations in the retinula cell region of the eye average about 1 mg/g dry weight. Concentrations of zinc in the mandible of several ant species average about 40 mg/g. Zinc concentrations in the stomachs of these ants are at least 1 mg/g.

  20. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
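    The a priori rate that drives such h-refinement, error = O(h^{p+1}) for smooth solutions, can be checked with a minimal least-squares analogue: elementwise polynomial fits on a uniform mesh rather than a DG solver. This is a generic convergence sketch, not the paper's method.

```python
import numpy as np

def piecewise_poly_error(f, n_elems, p, samples=50):
    """Max-norm error of an elementwise degree-p least-squares fit of f
    on a uniform mesh of n_elems elements over [0, 1]."""
    err = 0.0
    edges = np.linspace(0.0, 1.0, n_elems + 1)
    for a, b in zip(edges[:-1], edges[1:]):
        t = np.linspace(a, b, samples)
        c = np.polyfit(t, f(t), p)
        err = max(err, np.max(np.abs(np.polyval(c, t) - f(t))))
    return err

f = np.sin
p = 2
e_h = piecewise_poly_error(f, 4, p)    # mesh size h
e_h2 = piecewise_poly_error(f, 8, p)   # mesh size h/2
rate = np.log2(e_h / e_h2)
print(rate)   # observed convergence rate, close to p + 1 = 3
```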

  1. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.

  2. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
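    A minimal numerical sketch of the projection step follows, with a hypothetical low-dimensional model error standing in for the solver discrepancy; the functions, dimensions, and parameter values below are invented for illustration and are not the paper's GPR setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_data, n_dict, K = 30, 200, 8

# Hypothetical stand-in for the approximate-vs-detailed solver discrepancy:
# three fixed modes whose weights vary smoothly with the model parameters m.
B = rng.standard_normal((n_data, 3))

def weights(m):
    return np.array([np.sin(m[0] + m[1]), np.cos(m[1]), m[2] + m[0] * m[1]])

def model_error(m):
    return B @ weights(m)

params = rng.uniform(0.0, 1.0, size=(n_dict, 3))
dictionary = np.array([model_error(m) for m in params])  # stored error pairs

def remove_model_error(residual, m):
    """Project the residual off the local basis spanned by the model errors
    of the K dictionary entries whose parameters are closest to m."""
    idx = np.argsort(np.linalg.norm(params - m, axis=1))[:K]
    Q, _ = np.linalg.qr(dictionary[idx].T)     # orthonormal local error basis
    return residual - Q @ (Q.T @ residual)

m_test = rng.uniform(0.0, 1.0, size=3)
residual = model_error(m_test)          # here the residual is pure model error
cleaned = remove_model_error(residual, m_test)
print(np.linalg.norm(cleaned) / np.linalg.norm(residual))  # far below 1
```

    Because the model error lives on a low-dimensional manifold that varies smoothly with the parameters, the K-nearest-neighbour basis captures it locally and the projection removes almost all of it before the likelihood is evaluated.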

  3. Improved correlation corrections to the local-spin-density approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Painter, G.S.

    1981-10-15

    The accurate correlation energies for the para- and ferromagnetic states of the electron liquid calculated by Ceperley and Alder were recently used by Vosko, Wilk, and Nusair to produce a new correlation-energy density of increased accuracy and proper limiting behavior in the metallic density regime (r_s ≤ 6). In the present work, the correlation potential in the local-spin-density approximation (LSDA) is derived from the correlation-energy-density representation of Vosko et al. Characteristics of the new exchange-correlation model are compared with those of the LSDA model of Gunnarsson and Lundqvist. Specific comparison is made between these models and exact results in the treatment of atomic and molecular hydrogen. Since the new treatment of correlation primarily affects the region of small r_s, which is exchange dominated, correlation corrections are small compared with errors in the exchange energy. Thus, in light atoms the improved correlation model leads to a reduced cancellation of error between exchange and correlation energies, emphasizing the necessity for improved exchange treatment. For more homogeneous systems, the model should offer real improvement. The present results obtained with precise treatment of correlation within the prescription of Vosko et al. serve to define the present limitations of the LSDA and indicate the importance of nonlocal corrections, particularly for atoms.

  4. A Reduced Dimension Static, Linearized Kalman Filter and Smoother

    NASA Technical Reports Server (NTRS)

    Fukumori, I.

    1995-01-01

    An approximate Kalman filter and smoother, based on approximations of the state estimation error covariance matrix, is described. Approximations include a reduction of the effective state dimension, use of a static asymptotic error limit, and a time-invariant linearization of the dynamic model for error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. Examples of use come from TOPEX/POSEIDON.
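    The static-gain idea can be sketched on a scalar random walk: iterate the Riccati recursion once, offline, to its asymptotic fixed point, then run the filter with the frozen gain so no covariance update is needed per step. The noise values are illustrative, not those of the TOPEX/POSEIDON application.

```python
import numpy as np

# Scalar random-walk example: x_k = x_{k-1} + w_k,  y_k = x_k + v_k
q, r = 0.01, 1.0      # process / measurement noise variances

# Iterate the Riccati recursion to its fixed point to get the asymptotic
# error variance p, then freeze the corresponding (static) Kalman gain.
p = 1.0
for _ in range(1000):
    p_pred = p + q
    k = p_pred / (p_pred + r)
    p = (1.0 - k) * p_pred
K_static = k

rng = np.random.default_rng(7)
x, xhat = 0.0, 0.0
errs = []
for _ in range(5000):
    x += rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, np.sqrt(r))
    xhat = xhat + K_static * (y - xhat)   # fixed-gain update, no Riccati step
    errs.append((x - xhat) ** 2)

print(np.mean(errs), p)   # empirical MSE close to the asymptotic variance p
```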

  5. The isotropic-nematic phase transition of tangent hard-sphere chain fluids—Pure components

    NASA Astrophysics Data System (ADS)

    van Westen, Thijs; Oyarzún, Bernardo; Vlugt, Thijs J. H.; Gross, Joachim

    2013-07-01

    An extension of Onsager's second virial theory is developed to describe the isotropic-nematic phase transition of tangent hard-sphere chain fluids. Flexibility is introduced by the rod-coil model. The effect of chain-flexibility on the second virial coefficient is described using an accurate, analytical approximation for the orientation-dependent pair-excluded volume. The use of this approximation allows for an analytical treatment of intramolecular flexibility by using a single pure-component parameter. Two approaches to approximate the effect of the higher virial coefficients are considered, i.e., the Vega-Lago rescaling and Scaled Particle Theory (SPT). The Onsager trial function is employed to describe the orientational distribution function. Theoretical predictions for the equation of state and orientational order parameter are tested against the results from Monte Carlo (MC) simulations. For linear chains of length 9 and longer, theoretical results are in excellent agreement with MC data. For smaller chain lengths, small errors introduced by the approximation of the higher virial coefficients become apparent, leading to a small under- and overestimation of the pressure and density difference at the phase transition, respectively. For rod-coil fluids of reasonable rigidity, a quantitative comparison between theory and MC simulations is obtained. For more flexible chains, however, both the Vega-Lago rescaling and SPT lead to a small underestimation of the location of the phase transition.

  6. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modeling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  7. Simplified mathematics for customized refractive surgery.

    PubMed

    Preussner, Paul Rolf; Wahl, Jochen

    2003-03-01

    To describe a simple mathematical approach to customized corneal refractive surgery or customized intraocular lens (IOL) design that allows "hypervision" and to investigate the accuracy limits. University eye hospital, Mainz, Germany. Corneal shape and at least 1 IOL surface are approximated by the well-known Cartesian conic section curves (ellipsoid, paraboloid, or hyperboloid). They are characterized by only 2 parameters, the vertex radius and the numerical eccentricity. Residual refraction errors for this approximation are calculated by numerical ray tracing. These errors can be displayed as a 2-dimensional refraction map across the pupil or by blurring the image of a Landolt ring superimposed on the retinal receptor grid, giving an overall impression of the visual outcome. If the eye is made emmetropic for paraxial rays and if the numerical eccentricities of the cornea and lens are appropriately fitted to each other, the residual refractive errors are small enough to allow hypervision. Visual acuity of at least 2.0 (20/10) appears to be possible, particularly for mesopic pupil diameters. However, customized optics may have limited application due to their sensitivity to misalignment errors such as decentrations or rotations. The mathematical approach described by Descartes 350 years ago is adequate to calculate hypervision optics for the human eye. The availability of suitable mathematical tools should, however, not be viewed with too much optimism as long as the accuracy of the implementation in surgical procedures is limited.
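    The two-parameter conic description can be written down directly: the sag of a conic surface with vertex radius R and numerical eccentricity e follows the standard Cartesian formula below. The corneal values used are typical illustrative numbers, not the paper's measurements.

```python
import numpy as np

def conic_sag(r, R, e):
    """Height z(r) of a conic surface with vertex radius R and numerical
    eccentricity e (conic constant Q = -e^2):
    z = r^2 / (R + sqrt(R^2 - (1 - e^2) r^2))."""
    return r**2 / (R + np.sqrt(R**2 - (1.0 - e**2) * r**2))

R = 7.8e-3                          # typical corneal vertex radius, metres
r = np.linspace(0.0, 3.0e-3, 7)     # semi-chord out to a 6 mm optical zone

sphere = conic_sag(r, R, 0.0)       # e = 0: sphere
prolate = conic_sag(r, R, 0.5)      # mildly prolate ellipsoid, cornea-like
print((sphere - prolate) * 1e6)     # height difference in micrometres
```

    Near the vertex both surfaces agree with the paraxial parabola z ≈ r²/(2R); the eccentricity only changes the periphery, which is exactly the degree of freedom used to balance the residual refractive errors across the pupil.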

  8. Effects of various assumptions on the calculated liquid fraction in isentropic saturated equilibrium expansions

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1980-01-01

    The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant, two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high pressure case.

  9. Analyzing the errors of DFT approximations for compressed water systems

    NASA Astrophysics Data System (ADS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³ where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.
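    The 1- and 2-body correction scheme described in this abstract can be sketched as a many-body-expansion energy correction. The function and variable names below are illustrative, not from the paper; the benchmark ("ref") energies stand in for QMC or coupled-cluster values:

```python
def corrected_energy(e_dft_total, e1_dft, e1_ref, e2_dft, e2_ref):
    """Correct a total DFT energy using benchmark 1- and 2-body energies.

    e1_*: lists of per-monomer energies (DFT vs. benchmark).
    e2_*: dicts mapping monomer-index pairs to 2-body interaction
    energies (DFT vs. benchmark).  Beyond-2-body errors remain
    uncorrected, which is the residual error the study quantifies.
    """
    one_body = sum(ref - dft for dft, ref in zip(e1_dft, e1_ref))
    two_body = sum(e2_ref[pair] - e2_dft[pair] for pair in e2_dft)
    return e_dft_total + one_body + two_body
```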

  10. Analyzing the errors of DFT approximations for compressed water systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfè, D. (London Centre for Nanotechnology and Thomas Young Centre, UCL, London WC1H 0AH)

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³ where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.

  11. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  12. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  13. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
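    The residual-based stopping rule described in these abstracts can be sketched with a simple Picard-type correction sweep standing in for a full SDC sweep. This is a toy illustration, not the authors' implementation; the names `sweep`, `rel_tol`, and `stall_ratio` are assumptions:

```python
import numpy as np

def iterate_corrections(sweep, y0, rel_tol=1e-12, stall_ratio=0.1, max_iter=50):
    """Continue correction sweeps until the residual is small relative to
    the first sweep's residual AND no longer decreasing quickly (a sketch
    of the resilience stopping rule described in the abstract)."""
    y = y0.copy()
    r_first = r_prev = None
    k = 0
    for k in range(1, max_iter + 1):
        y, r = sweep(y)          # one correction sweep; r = residual norm
        if r_first is None:
            r_first = r
        elif r == 0.0 or (r <= rel_tol * r_first and r >= stall_ratio * r_prev):
            break
        r_prev = r
    return y, k

# Toy "sweep": Picard iteration for y' = -y, y(0) = 1, on trapezoidal nodes.
t = np.linspace(0.0, 1.0, 9)

def picard_sweep(y):
    f = -y
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))
    y_new = 1.0 + integral
    return y_new, float(np.max(np.abs(y_new - y)))
```

    Running `iterate_corrections(picard_sweep, np.ones_like(t))` converges to the trapezoidal approximation of exp(-t); in the resilience setting, a soft fault simply raises the residual and triggers extra sweeps.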

  14. Problems with small area surveys: lensing covariance of supernova distance measurements.

    PubMed

    Cooray, Asantha; Huterer, Dragan; Holz, Daniel E

    2006-01-20

    While luminosity distances from type Ia supernovae (SNe) are a powerful probe of cosmology, the accuracy with which these distances can be measured is limited by cosmic magnification due to gravitational lensing by the intervening large-scale structure. Spatial clustering of foreground mass leads to correlated errors in SNe distances. By including the full covariance matrix of SNe, we show that future wide-field surveys will remain largely unaffected by lensing correlations. However, "pencil beam" surveys, and those with narrow (but possibly long) fields of view, can be strongly affected. For a survey with 30 arcmin mean separation between SNe, lensing covariance leads to an approximately 45% increase in the expected errors in dark energy parameters.

  15. Stroke maximizing and high efficient hysteresis hybrid modeling for a rhombic piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Shao, Shubao; Xu, Minglong; Zhang, Shuwen; Xie, Shilin

    2016-06-01

    The rhombic piezoelectric actuator (RPA), which employs a rhombic mechanism to amplify the small stroke of a PZT stack, has been widely used in many micro-positioning machines owing to its remarkable properties such as high displacement resolution and compact structure. In order to achieve a large actuation range along with high accuracy, maximizing the stroke and compensating for hysteresis are two concerns in the use of an RPA. However, existing maximization methods based on theoretical models can hardly predict the maximum stroke of an RPA accurately because of approximation errors caused by the simplifications that must be made in the analysis. Moreover, despite the high hysteresis modeling accuracy of the Preisach model, its modeling procedure is tedious and time-consuming since a large set of experimental data is required to determine the model parameters. In our research, to improve the accuracy of the theoretical model of the RPA, approximation theory is employed, in which the approximation errors can be compensated by two dimensionless coefficients. To simplify the hysteresis modeling procedure, a hybrid modeling method is proposed in which the parameters of the Preisach model can be identified from only a small set of experimental data by combining a discrete Preisach model (DPM) with a particle swarm optimization (PSO) algorithm. The proposed hybrid modeling method not only models the hysteresis with considerable accuracy but also significantly simplifies the modeling procedure. Finally, the inversion of the hysteresis is introduced to compensate for the hysteresis nonlinearity of the RPA, and consequently a pseudo-linear system can be obtained.
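    The discrete Preisach model mentioned above is a weighted superposition of two-state relay operators (hysterons). A minimal sketch follows; the thresholds and weights are illustrative, and in the paper's scheme PSO would be used to fit the weights to measured data:

```python
import numpy as np

def preisach_output(u_history, alphas, betas, weights):
    """Discrete Preisach model: each relay switches up at u >= alpha and
    down at u <= beta (beta < alpha), and otherwise retains its previous
    state -- the state retention is what produces hysteresis memory."""
    state = -np.ones(len(weights))      # all relays start in the 'down' state
    for u in u_history:
        state = np.where(u >= alphas, 1.0,
                         np.where(u <= betas, -1.0, state))
    return float(weights @ state)
```

    Feeding the same final input through different input histories yields different outputs, which is exactly the path dependence the inverse model must compensate.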

  16. Maximizing return on socioeconomic investment in phase II proof-of-concept trials.

    PubMed

    Chen, Cong; Beckman, Robert A

    2014-04-01

    Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
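    The trade-off in this abstract can be explored with the standard normal-approximation sample-size formula for a two-arm trial. This is a textbook formula under assumed normality, not the authors' exact benefit-cost computation:

```python
from statistics import NormalDist

def per_arm_sample_size(effect, alpha=0.05, power=0.80):
    """Approximate per-arm sample size to detect a standardized effect
    size `effect` with one-sided type I error `alpha` and power `power`."""
    z = NormalDist().inv_cdf
    return 2.0 * ((z(1.0 - alpha) + z(power)) / effect) ** 2
```

    Powering for an effect of 1.5δ instead of δ cuts the phase II sample size by a factor of 1.5² = 2.25, which is why small, high-bar POC trials stretch a fixed budget across more hypotheses.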

  17. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10⁻¹² is reasonable.
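    The two procedures described (a zeroing tolerance and reinversion of an approximate inverse) can be sketched as follows. The Newton-Schulz step shown is one standard reinversion technique and may differ from the report's exact numerical-analysis procedure:

```python
import numpy as np

def round_to_zero(M, tol=1e-13):
    """Tolerance factor: entries smaller in magnitude than `tol` are
    treated as accumulated round-off and set to exactly zero."""
    M = M.copy()
    M[np.abs(M) < tol] = 0.0
    return M

def reinvert(A, X):
    """One Newton-Schulz refinement step: improves an approximate inverse
    X of A quadratically when the residual I - A @ X is already small."""
    n = A.shape[0]
    return X @ (2.0 * np.eye(n) - A @ X)
```

    One refinement pass per computed inverse, as the report recommends, roughly squares the residual of the approximate inverse.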

  18. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  19. Perceptions and Efficacy of Flight Operational Quality Assurance (FOQA) Programs Among Small-scale Operators

    DTIC Science & Technology

    2012-01-01

    regressive Integrated Moving Average (ARIMA) model for the data, eliminating the need to identify an appropriate model through trial and error alone... Based on the asymptotic chi-square approximation... In general, ARIMA models address three... performance standards and measurement processes and a prevailing climate of organizational trust were important factors. Unfortunately, uneven

  20. Quantum-Theoretical Methods and Studies Relating to Properties of Materials

    DTIC Science & Technology

    1989-12-19

    particularly sensitive to the behavior of the electron distribution close to the nuclei, which contributes only to E(l). Although the above results were...other condensed phases. So it was a useful test case to test the behavior of the theoretical computations for the gas phase relative to that in the...increasingly complicated and time-consuming electron-correlation approximations should assure a small error in the theoretically computed enthalpy for a

  1. Rotational excitation of hydrogen molecules by collisions with hydrogen atoms. [interstellar gas energetics

    NASA Technical Reports Server (NTRS)

    Green, S.; Truhlar, D. G.

    1979-01-01

    Rate constants for rotational excitation of hydrogen molecules by collisions with hydrogen atoms have been obtained from quantum-mechanical calculations for kinetic temperatures between 100 and 5000 K. These calculations involve the rigid-rotator approximation, but other possible sources of error should be small. The calculations indicate that the early values of Nishimura are larger than accurate rigid-rotator values by about a factor of 20 or more.

  2. A code for optically thick and hot photoionized media

    NASA Astrophysics Data System (ADS)

    Dumont, A.-M.; Abrassart, A.; Collin, S.

    2000-05-01

    We describe a code designed for hot media (T >= a few 10⁴ K), optically thick to Compton scattering. It computes the structure of a plane-parallel slab of gas in thermal and ionization equilibrium, illuminated on one or on both sides by a given spectrum. Contrary to other photoionization codes, it solves the transfer of the continuum and of the lines in a two-stream approximation, without using the local escape probability formalism to approximate the line transfer. We stress the importance of taking into account the returning flux even for small column densities (10²² cm⁻²), and we show that the escape probability approximation can lead to strong errors in the thermal and ionization structure, as well as in the emitted spectrum, for a Thomson thickness larger than a few tenths. The transfer code is coupled with a Monte Carlo code which allows one to take into account Compton and inverse Compton scattering, and to compute the spectrum emitted up to MeV energies, in any geometry. Comparisons with CLOUDY show that it gives similar results for small column densities.

  3. Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics

    NASA Astrophysics Data System (ADS)

    Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu

    2016-01-01

    An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.

  4. Incorporating approximation error in surrogate based Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.; Li, W.; Wu, L.

    2015-12-01

    There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner, but the computational cost is still high since a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate for its convenience in approximation error evaluation. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is well incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is directly used, and no further original model simulations are required.
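    The core idea (add the GP surrogate's predictive variance to the observation-error variance so that approximation error enters the Bayesian update) can be sketched as below; the names are illustrative, not from the paper:

```python
import numpy as np

def gaussian_loglike(pred_mean, pred_var, obs, obs_var):
    """Gaussian log-likelihood in which the GP surrogate's predictive
    variance `pred_var` is added to the observation-error variance
    `obs_var`, so regions the surrogate emulates poorly are
    automatically down-weighted in the posterior."""
    var = obs_var + pred_var                      # total variance per observation
    resid = np.asarray(obs) - np.asarray(pred_mean)
    return -0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))
```

    A confident but wrong surrogate (small `pred_var`, large residual) is penalized far more than an honestly uncertain one, which is what removes the bias of naive surrogate-based MCMC.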

  5. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Kunwar Pal (Department of Physics, Shri Venkateshwara University, Gajraula, Amroha, Uttar Pradesh 244236; e-mail: k-psingh@yahoo.com); Arya, Rashmi

    2015-09-14

    We have investigated the effect of initial phase on the error in electron energy obtained using the paraxial approximation to study electron acceleration by a focused laser pulse in vacuum, using a three-dimensional test-particle simulation code. The error is obtained by comparing the energy of the electron for the paraxial approximation and the seventh-order-correction description of the fields of a Gaussian laser. The paraxial approximation predicts wrong laser divergence and wrong electron escape time from the pulse, which leads to prediction of higher energy. The error shows strong phase dependence for electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of initial phase even at moderate values of laser spot size. The error does not show initial phase dependence for a circularly polarized laser pulse.

  6. Rotational degree-of-freedom synthesis: An optimised finite difference method for non-exact data

    NASA Astrophysics Data System (ADS)

    Gibbons, T. J.; Öztürk, E.; Sims, N. D.

    2018-01-01

    Measuring the rotational dynamic behaviour of a structure is important for many areas of dynamics such as passive vibration control, acoustics, and model updating. Specialist and dedicated equipment is often needed, unless the rotational degree-of-freedom is synthesised based upon translational data. However, this involves numerically differentiating the translational mode shapes to approximate the rotational modes, for example using a finite difference algorithm. A key challenge with this approach is choosing the measurement spacing between the data points, an issue which has often been overlooked in the published literature. The present contribution will for the first time prove that the use of a finite difference approach can be unstable when using non-exact measured data and a small measurement spacing, for beam-like structures. Then, a generalised analytical error analysis is used to propose an optimised measurement spacing, which balances the numerical error of the finite difference equation with the propagation error from the perturbed data. The approach is demonstrated using both numerical and experimental investigations. It is shown that by obtaining a small number of test measurements it is possible to optimise the measurement accuracy, without any further assumptions on the boundary conditions of the structure.
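    The balance described above, where shrinking the spacing reduces finite-difference truncation error but amplifies propagated measurement noise, has a classic closed form for a central difference. The sketch below assumes an additive noise bound eps and a bound on the third derivative; it illustrates the idea rather than the paper's generalised analysis:

```python
def central_difference(f, x, h):
    """Central difference approximation to f'(x); truncation error is
    approximately (h**2 / 6) * f'''(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def optimal_spacing(eps, third_deriv_bound):
    """Spacing minimizing the total error bound
    (h**2 / 6) * |f'''| + eps / h, i.e. truncation vs. noise
    propagation: h* = (3 * eps / |f'''|) ** (1/3)."""
    return (3.0 * eps / abs(third_deriv_bound)) ** (1.0 / 3.0)
```

    Below h*, noise of size eps in each sample is divided by an ever-smaller 2h and dominates; above h*, the quadratic truncation term takes over.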

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witte, Jonathon (Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720); Neaton, Jeffrey B.

    With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.

  8. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
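    The forecast/analysis cycle described above can be carried to steady state with a straightforward Kalman iteration, after which the leading eigenvectors of the analysis error covariance reveal the dominant error modes. This is a minimal sketch; the matrices in the usage below are illustrative stand-ins, not the paper's advection or baroclinic-wave models:

```python
import numpy as np

def steady_state_analysis_cov(A, H, Q, R, n_iter=500):
    """Iterate the Kalman forecast/analysis cycle to (near) steady state
    and return the analysis error covariance.  Its leading eigenvectors
    identify the modes carrying most of the analysis error."""
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(n_iter):
        Pf = A @ P @ A.T + Q                               # forecast step
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
        P = (np.eye(n) - K @ H) @ Pf                       # analysis step
    return P
```

    With a weakly damped mode left unobserved, the steady-state analysis variance concentrates in that mode, mirroring the paper's finding that failing to observe growing modes inflates the analysis error.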

  9. Global Erratum for Kepler Q0-Q17 and K2 C0-C5 Short Cadence Data

    NASA Technical Reports Server (NTRS)

    Caldwell, Douglas; Van Cleve, Jeffrey E.

    2016-01-01

    An accounting error has scrambled much of the short-cadence collateral smear data used to correct for the effects of Kepler's shutterless readout. This error has been present since launch and affects approximately half of all short-cadence targets observed by Kepler and K2 to date. The resulting calibration errors are present in both the short-cadence target pixel files and the short-cadence light curves for Kepler Data Releases 1-24 and K2 Data Releases 1-7. This error does not affect long-cadence data. Since it will take some time to correct this error and reprocess all Kepler and K2 data, a list of affected targets is provided. Even though the affected targets are readily identified, the science impact for any particular target may be difficult to assess. Since the smear signal is often small compared to the target signal, the effect is negligible for many targets. However, the smear signal is scene-dependent, so time-varying signals can be introduced into any target by the other stars falling on the same CCD column. Some tips on how to assess the severity of the calibration error are provided in this document.

  10. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    PubMed

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

    Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that pixel ratios of low to high energy (R values) are a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
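    The two-compartment soft-tissue model behind these estimates treats the measured R value as a mass-weighted mix of fat and lean R values. A sketch follows; the two reference R values here are illustrative defaults, not the clinical calibration constants:

```python
def percent_fat(r_measured, r_fat=1.21, r_lean=1.40):
    """Invert the two-compartment mix R = f*R_fat + (1 - f)*R_lean for
    the fat fraction f, in percent.  Higher lean-tissue hydration shifts
    the effective R_lean, which biases the estimate -- the pediatric
    error mechanism analyzed in the abstract."""
    return 100.0 * (r_lean - r_measured) / (r_lean - r_fat)
```

    Because the two reference values differ by only a small amount, a small hydration-driven shift in R_lean translates into a noticeable shift in the computed %fat.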

  11. Density-matrix simulation of small surface codes under current and projected experimental noise

    NASA Astrophysics Data System (ADS)

    O'Brien, T. E.; Tarasinski, B.; DiCarlo, L.

    2017-09-01

    We present a density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. However, Surface-49 is required to surpass the break-even point of computation at state-of-the-art qubit relaxation times and readout speeds.

  12. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
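    The benefit of an adaptive step size (large early for fast convergence, smaller later for low steady-state error) can be illustrated with a normalized LMS filter and a geometrically decaying step. This is a toy single-channel sketch, not the paper's MIMO MMSE TDE/FDE:

```python
import numpy as np

def nlms(x, d, taps=4, mu0=1.0, decay=0.999):
    """Normalized LMS adaptation with a decaying (adaptive) step size:
    mu starts at mu0 for fast initial convergence and shrinks each
    update to lower the steady-state misadjustment."""
    w = np.zeros(taps)
    mu = mu0
    for n in range(taps, len(d)):
        u = x[n - taps:n][::-1]               # most recent samples first
        e = d[n] - w @ u                      # a priori output error
        w += mu * e * u / (u @ u + 1e-12)     # normalized gradient step
        mu *= decay
    return w
```

    In a noiseless system-identification setting the weights converge to the true channel taps; the decay schedule is a simple stand-in for the lookup-table step control mentioned in the abstract.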

  13. A similarity retrieval approach for weighted track and ambient field of tropical cyclones

    NASA Astrophysics Data System (ADS)

    Li, Ying; Xu, Luan; Hu, Bo; Li, Yuejun

    2018-03-01

    Retrieving historical tropical cyclones (TCs) whose position and hazard intensity are similar to those of a target TC is an important tool in TC track forecasting and TC disaster assessment. A new similarity retrieval scheme is put forward based on historical TC track data and ambient field data, including the ERA-Interim reanalysis and the GFS and EC-fine forecasts. It takes account of both TC track similarity and ambient field similarity, and the optimal weight combination is explored subsequently. Results show that both the distance and direction errors of the 24-hour TC track forecast follow an approximately U-shaped distribution: they tend to be large when the weight assigned to track similarity is close to 0 or 1.0, and relatively small when the track similarity weight is 0.2-0.7 for the distance error and 0.3-0.6 for the direction error.
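
The weighted blend of the two similarities can be sketched minimally as follows (illustrative only; the paper's actual track and ambient-field similarity measures are more elaborate, and the field names below are hypothetical):

```python
def combined_similarity(track_sim, field_sim, w):
    """Blend track similarity and ambient-field similarity with weight w.

    w = 1.0 uses track similarity only; w = 0.0 uses the ambient field only.
    Both similarities are assumed normalized to [0, 1].
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return w * track_sim + (1.0 - w) * field_sim

def best_analog(candidates, w):
    """Return the historical TC whose blended similarity is highest."""
    return max(candidates,
               key=lambda tc: combined_similarity(tc["track"], tc["field"], w))
```

Sweeping w and scoring the resulting forecasts against verification data is then how an intermediate optimal weight range, as reported in the abstract, would be identified.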

  14. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  15. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  16. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  17. A cubic extended interior penalty function for structural optimization

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Haftka, R. T.

    1979-01-01

    This paper describes an optimization procedure for the minimum weight design of complex structures. The procedure is based on a new cubic extended interior penalty function (CEIPF) used with the sequence of unconstrained minimization technique (SUMT) and Newton's method. The Hessian matrix of the penalty function is approximated using only constraints and their derivatives. The CEIPF is designed to minimize the error in the approximation of the Hessian matrix, and as a result the number of structural analyses required is small and independent of the number of design variables. Three example problems are reported. The number of structural analyses is reduced by as much as 50 per cent below previously reported results.

  18. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes that have better performance than those presently in use, yet does not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near-optimum decoding algorithms leads naturally to the one that embodies the best features of all of them.
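
The candidate-selection idea (using per-symbol reliability to shrink the ML search) can be sketched on the small (7,4) Hamming code; this is a simplified illustration under assumed BPSK soft values, not the report's algorithm:

```python
import numpy as np
from itertools import product

# Generator matrix of the (7,4) Hamming code (systematic form).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
CODEBOOK = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])

def approx_ml_decode(soft, n_reliable=4):
    """Approximate ML decoding: keep only codewords that agree with the
    hard decision on the n_reliable most reliable positions, then pick
    the candidate best correlated with the soft values.

    soft: received values under BPSK mapping 0 -> +1, 1 -> -1 (plus noise);
    |soft[i]| serves as the reliability of symbol i.
    """
    hard = (soft < 0).astype(int)
    reliable = np.argsort(-np.abs(soft))[:n_reliable]
    mask = (CODEBOOK[:, reliable] == hard[reliable]).all(axis=1)
    candidates = CODEBOOK[mask] if mask.any() else CODEBOOK
    corr = (1 - 2 * candidates) @ soft      # map bits to +/-1 and correlate
    return candidates[np.argmax(corr)]
```

Restricting the correlation test to candidates consistent with the reliable symbols is what keeps the computation far below the full 2^k search while staying near-optimum.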

  19. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

    Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results on both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
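
The truncated nuclear norm itself is simple to state: it is the sum of all singular values except the r largest, so the dominant r values go unpenalized. A minimal numpy sketch (the TNNR-WRE optimization machinery is not reproduced here):

```python
import numpy as np

def nuclear_norm(X):
    """Sum of all singular values of X."""
    return np.linalg.svd(X, compute_uv=False).sum()

def truncated_nuclear_norm(X, r):
    """Sum of all but the r largest singular values of X.

    Leaving the dominant r singular values unpenalized makes this a
    tighter surrogate for rank(X) than the full nuclear norm when the
    target rank is at most r.
    """
    s = np.linalg.svd(X, compute_uv=False)
    return s[r:].sum()
```

For a matrix of exact rank r the truncated norm vanishes while the nuclear norm does not, which is why minimizing it drives the solution toward the target rank rather than shrinking all singular values uniformly.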

  20. Neural Network Control of a Parallel Hybrid-Electric Propulsion System for a Small Unmanned Aerial Vehicle

    DTIC Science & Technology

    2005-01-01

    (Abstract not available. The record text consists of extraction fragments from the report's reference list and list of figures: citations [147, 148] to Eppler and Selig airfoil data, a mention of the low Reynolds number NASA LRN-I-1010 airfoil, and figure captions for CMAC approximation RMS error.)

  1. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    PubMed

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is TML, a slight modification to the likelihood ratio statistic. Under the normality assumption, TML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that TML rejects the correct model too often when p is not too small. Various corrections to TML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct TML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to TML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of TML as reported in the literature, and they perform well.
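
The mean-matching principle behind such Bartlett-style corrections can be sketched generically (a simulation-based rescaling for illustration; the paper's actual correction predicts the factor from N and p rather than by simulation):

```python
import numpy as np

def empirical_correction(t_observed, t_simulated, df):
    """Rescale a test statistic so its empirical mean equals df.

    t_simulated: statistics computed on data generated under the fitted
    model; their mean estimates E[T]. The corrected statistic
    T * df / mean(T_sim) then has mean approximately df, matching the
    first moment of the nominal chi-square reference distribution.
    """
    factor = df / np.mean(t_simulated)
    return t_observed * factor
```

If the uncorrected statistic is systematically inflated (mean above df), the factor is below one and the corrected statistic's rejection rate moves back toward the nominal level.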

  2. The calculating hemispheres: studies of a split-brain patient.

    PubMed

    Funnell, Margaret G; Colvin, Mary K; Gazzaniga, Michael S

    2007-06-11

    The purpose of the study was to investigate simple calculation in the two cerebral hemispheres of a split-brain patient. In a series of four experiments, the left hemisphere was superior to the right in simple calculation, confirming the previously reported left hemisphere specialization for calculation. In two different recognition paradigms, right hemisphere performance was at chance for all arithmetic operations, with the exception of subtraction in a two-alternative forced choice paradigm (performance was at chance when the lure differed from the correct answer by a magnitude of 1 but above chance when the magnitude difference was 4). In a recall paradigm, the right hemisphere performed above chance for both addition and subtraction, but performed at chance levels for multiplication and division. The error patterns in that experiment suggested that for subtraction and addition, the right hemisphere does have some capacity for approximating the solution even when it is unable to generate the exact solution. Furthermore, right hemisphere accuracy in addition and subtraction was higher for problems with small operands than with large operands. An additional experiment assessed approximate and exact addition in the two hemispheres for problems with small and large operands. The left hemisphere was equally accurate in both tasks but the right hemisphere was more accurate in approximate addition than in exact addition. In exact addition, right hemisphere accuracy was higher for problems with small operands than large, but the opposite pattern was found for approximate addition.

  3. Approximation-Based Discrete-Time Adaptive Position Tracking Control for Interior Permanent Magnet Synchronous Motors.

    PubMed

    Yu, Jinpeng; Shi, Peng; Yu, Haisheng; Chen, Bing; Lin, Chong

    2015-07-01

    This paper considers the problem of discrete-time adaptive position tracking control for an interior permanent magnet synchronous motor (IPMSM) based on fuzzy approximation. Fuzzy logic systems are used to approximate the nonlinearities of the discrete-time IPMSM drive system, which is derived by direct discretization using the Euler method, and a discrete-time fuzzy position tracking controller is designed via the backstepping approach. In contrast to existing results, the advantage of the scheme is that the number of adjustable parameters is reduced to only two and the problem of coupling nonlinearity can be overcome. It is shown that the proposed discrete-time fuzzy controller can guarantee that the tracking error converges to a small neighborhood of the origin and that all the signals are bounded. Simulation results illustrate the effectiveness and the potential of the theoretical results obtained.

  4. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. III. Cylindrical approximations for heat waves traveling inwards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berkel, M. van (FOM Institute DIFFER - Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM, Trilateral Euregio Cluster, P.O. Box 1207, 3430 BE Nieuwegein; Fellow of the Japan Society for the Promotion of Science)

    In this paper, a number of new explicit approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on the heat equation in cylindrical geometry using the symmetry (Neumann) boundary condition at the plasma center. This means that the approximations derived here should be used only to estimate transport coefficients between the plasma center and the off-axis perturbative source. If the effect of cylindrical geometry is small, it is also possible to use the semi-infinite domain approximations presented in Part I and Part II of this series. A number of new approximations are derived in this part, Part III, based upon continued fractions of the modified Bessel function of the first kind and the confluent hypergeometric function of the first kind. These approximations, together with the approximations based on semi-infinite domains, are compared for heat waves traveling towards the center. The relative error for the different derived approximations is presented for different values of the frequency, transport coefficients, and dimensionless radius. Moreover, it is shown how combinations of different explicit formulas can be used to estimate the transport coefficients over a large parameter range for cases without convection and damping, cases with damping only, and cases with convection and damping. The relative error between the approximation and its underlying model is below 2% for the case where only diffusivity and damping are considered. If convectivity is also considered, the diffusivity can be estimated well in a large region, but there is also a large region in which no suitable approximation is found. This paper is the third part (Part III) of a series of three papers. In Part I, the semi-infinite slab approximations have been treated. In Part II, cylindrical approximations are treated for heat waves traveling towards the plasma edge assuming a semi-infinite domain.

  6. XCO2 Retrieval Errors from a PCA-based Approach to Fast Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Somkuti, Peter; Boesch, Hartmut; Natraj, Vijay; Kopparla, Pushkar

    2017-04-01

    Multiple-scattering radiative transfer (RT) calculations are an integral part of forward models used to infer greenhouse gas concentrations in the shortwave-infrared spectral range from satellite missions such as GOSAT or OCO-2. Such calculations are, however, computationally expensive and, combined with the recent growth in data volume, necessitate the use of acceleration methods in order to make retrievals feasible on an operational level. The principal component analysis (PCA)-based approach to fast radiative transfer introduced by Natraj et al. 2005 is a spectral binning method, in which the many line-by-line monochromatic calculations are replaced by a small set of representative ones. From the PCA performed on the optical layer properties for a scene-dependent atmosphere, the results of the representative calculations are mapped onto all spectral points in the given band. Since this RT scheme is an approximation, the computed top-of-atmosphere radiances exhibit errors compared to the "full" line-by-line calculation. These errors ultimately propagate into the final retrieved greenhouse gas concentrations, and their magnitude depends on scene-dependent parameters such as aerosol loadings or viewing geometry. An advantage of this method is the ability to choose the degree of accuracy by increasing or decreasing the number of empirical orthogonal functions used for the reconstruction of the radiances. We have performed a large set of global simulations based on real GOSAT scenes and assess the retrieval errors induced by the fast RT approximation through linear error analysis. We find that across a wide range of geophysical parameters, the errors are for the most part smaller than ± 0.2 ppm and ± 0.06 ppm (out of roughly 400 ppm) for ocean and land scenes, respectively. 
A fast RT scheme that produces low errors is important, since regional biases in XCO2 even in the low sub-ppm range can cause significant changes in carbon fluxes obtained from inversions (Chevallier et al. 2007).
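
The spirit of the PCA scheme can be sketched with a toy first-order version (names and structure here are illustrative; the operational method of Natraj et al. uses spectral binning and higher-order mappings): perform PCA on the optical-property anomalies, run the expensive model only at the mean state and at small displacements along the leading EOFs, and map those few results to every spectral point.

```python
import numpy as np

def pca_fast_model(props, expensive, n_eofs=2, eps=1e-3):
    """Toy PCA-accelerated forward model.

    props     : (n_spectral, n_layers) optical properties per spectral point
    expensive : maps one property vector to a scalar (stands in for a full
                multiple-scattering RT call)
    Only 1 + 2*n_eofs expensive calls are made instead of n_spectral.
    """
    mean = props.mean(axis=0)
    anom = props - mean
    _, _, vt = np.linalg.svd(anom, full_matrices=False)
    eofs = vt[:n_eofs]                     # leading principal directions
    scores = anom @ eofs.T                 # PC scores of each spectral point
    f0 = expensive(mean)                   # one representative calculation
    grads = np.array([                     # finite-difference sensitivities
        (expensive(mean + eps * e) - expensive(mean - eps * e)) / (2 * eps)
        for e in eofs
    ])
    return f0 + scores @ grads             # first-order reconstruction
```

For spectra whose variability truly lies in a few directions, the reconstruction error stays small, and adding EOFs trades computation for accuracy exactly as the abstract describes.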

  7. Are Planetary Regolith Particles Back Scattering? Response to a Paper by M. Mishchenko

    NASA Technical Reports Server (NTRS)

    Hapke, Bruce

    1996-01-01

    In a recent paper Mishchenko asserts that soil particles are strongly forward scattering, whereas particles on the surfaces of objects in the solar system have been inferred to be back scattering. Mishchenko suggests that this apparent discrepancy is an artifact caused by using an approximate light scattering model to analyse the data, and that planetary regolith particles are actually strong forward scatterers. The purpose of the present paper is to point out the errors in Mishchenko's paper and to show from both theoretical arguments and experimental data that inhomogeneous composite particles which are large compared to the wavelength of visible light, such as rock fragments and agglutinates, can be strongly back scattering and are the fundamental scatterers in media composed of them. Such particles appear to be abundant in planetary regoliths and can account for the back scattering character of the surfaces of many bodies in the solar system. If the range of phase angles covered by a data set is insufficient, serious errors in retrieving the particle scattering properties can result whether an exact or approximate scattering model is used. However, if the data set includes both large and small phase angles, approximate regolith scattering models can correctly retrieve the sign of the particle scattering asymmetry.

  8. Assessment of Biases in MODIS Surface Reflectance Due to Lambertian Approximation

    NASA Technical Reports Server (NTRS)

    Wang, Yujie; Lyapustin, Alexei I.; Privette, Jeffrey L.; Cook, Robert B.; SanthanaVannan, Suresh K.; Vermote, Eric F.; Schaaf, Crystal

    2010-01-01

    Using MODIS data and the AERONET-based Surface Reflectance Validation Network (ASRVN), this work studies errors of MODIS atmospheric correction caused by the Lambertian approximation. On one hand, this approximation greatly simplifies the radiative transfer model, reduces the size of the look-up tables, and makes the operational algorithm faster. On the other hand, uncompensated atmospheric scattering caused by the Lambertian model systematically biases the results. For example, for a typical bowl-shaped bidirectional reflectance distribution function (BRDF), the derived reflectance is underestimated at high solar or view zenith angles, where BRDF is high, and is overestimated at low zenith angles where BRDF is low. The magnitude of biases grows with the amount of scattering in the atmosphere, i.e., at shorter wavelengths and at higher aerosol concentration. The slope of regression of Lambertian surface reflectance vs. ASRVN bidirectional reflectance factor (BRF) is about 0.85 in the red and 0.6 in the green bands. This error propagates into the MODIS BRDF/albedo algorithm, slightly reducing the magnitude of overall reflectance and anisotropy of BRDF. This results in a small negative bias of spectral surface albedo. An assessment for the GSFC (Greenbelt, USA) validation site shows the albedo reduction by 0.004 in the near infrared, 0.005 in the red, and 0.008 in the green MODIS bands.

  9. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE PAGES

    Sosa Vazquez, Xochitl A.; Isborn, Christine M.

    2015-12-22

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. As a result, in vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  10. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sosa Vazquez, Xochitl A.; Isborn, Christine M., E-mail: cisborn@ucmerced.edu

    2015-12-28

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. In vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  11. Novel approximation of misalignment fading modeled by Beckmann distribution on free-space optical links.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2016-10-03

    A novel accurate and useful approximation of the well-known Beckmann distribution is presented here, which is used to model generalized pointing errors in the context of free-space optical (FSO) communication systems. We derive an approximate closed-form probability density function (PDF) for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links. This approximation takes into account the effect of the beam width, different jitters for the elevation and the horizontal displacement, and the simultaneous effect of nonzero boresight errors for each axis at the receiver plane. Additionally, the proposed approximation allows us to delimit two different FSO scenarios: the first, in which atmospheric turbulence is the dominant effect relative to generalized pointing errors, and the second, in which the generalized pointing error is the dominant effect relative to atmospheric turbulence. The second FSO scenario has not been studied in depth by the research community. Moreover, the accuracy of the method is measured both visually and quantitatively using curve-fitting metrics. Simulation results are further included to confirm the analytical results.

  12. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest the use of kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.

  13. Residue frequencies and pairing preferences at protein-protein interfaces.

    PubMed

    Glaser, F; Steinberg, D M; Vakser, I A; Ben-Tal, N

    2001-05-01

    We used a nonredundant set of 621 protein-protein interfaces of known high-resolution structure to derive residue composition and residue-residue contact preferences. The residue composition at the interfaces, in entire proteins and in whole genomes correlates well, indicating the statistical strength of the data set. Differences between amino acid distributions were observed for interfaces with buried surface area of less than 1,000 Å² versus interfaces with area of more than 5,000 Å². Hydrophobic residues were abundant in large interfaces while polar residues were more abundant in small interfaces. The largest residue-residue preferences at the interface were recorded for interactions between pairs of large hydrophobic residues, such as Trp and Leu, and the smallest preferences for pairs of small residues, such as Gly and Ala. On average, contacts between pairs of hydrophobic and polar residues were unfavorable, and the charged residues tended to pair subject to charge complementarity, in agreement with previous reports. A bootstrap procedure, lacking from previous studies, was used for error estimation. It showed that the statistical errors in the set of pairing preferences are generally small; the average standard error is approximately 0.2, i.e., about 8% of the average value of the pairwise index (2.9). However, for a few pairs (e.g., Ser-Ser and Glu-Asp) the standard error is larger in magnitude than the pairing index, which makes it impossible to tell whether contact formation is favorable or unfavorable. The results are interpreted using physicochemical factors and their implications for the energetics of complex formation and for protein docking are discussed. Proteins 2001;43:89-102. Copyright 2001 Wiley-Liss, Inc.
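
The bootstrap error estimate used here is the standard resampling recipe; a minimal sketch (generic, not the authors' exact pipeline):

```python
import numpy as np

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    """Standard error of `statistic` by resampling observations with
    replacement, as used to attach error bars to pairing indices."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    stats = np.array([
        statistic(data[rng.integers(0, len(data), len(data))])
        for _ in range(n_boot)
    ])
    return stats.std(ddof=1)
```

Comparing each pairing index against its bootstrap standard error is exactly the test that flags pairs like Ser-Ser, where the error exceeds the index itself, as statistically inconclusive.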

  14. Subglacial drainage effects on surface motion on a small surge type alpine glacier on the St. Elias range, Yukon Territory, Canada.

    NASA Astrophysics Data System (ADS)

    Rada, C.; Schoof, C.; King, M. A.; Flowers, G. E.; Haber, E.

    2017-12-01

    Subglacial drainage is known to play an important role in glacier dynamics through its influence on basal sliding. However, drainage is also one of the most poorly understood processes in glacier flow due to the difficulties of observing, identifying and modeling the physics involved. In an effort to improve understanding of subglacial processes, we have monitored a small, approximately 100 m thick surge-type alpine glacier for nine years. Over 300 boreholes were instrumented with pressure transducers over a 0.5 km² area in its upper ablation area, in addition to a weather station and a permanent GPS array consisting of 16 dual-frequency receivers within the study area. We study the influence of the subglacial drainage system on the glacier surface velocity. Pressure variations in the drainage system during the melt season are dominated by diurnal oscillations; therefore, GPS solutions have to be computed at sub-diurnal time intervals in order to explore the effects of transient diurnal pressure variations. Due to the small displacements of the glacier surface over such periods (4-10 cm/day), sub-diurnal solutions are dominated by errors, making it impossible to observe the diurnal variations in glacier motion. We have found that the main source of error is GPS multipath. This error source does largely cancel out when solutions are computed over 24-hour periods (or, more precisely, over a sidereal day), but solution precision decreases quickly when solutions are computed over shorter periods of time. Here we present an inverse-problem approach to removing GPS multipath errors on glaciers, and use the reconstructed glacier motion to explore how subglacial drainage morphology and effective pressure influence glacier dynamics at multiple time scales.
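
The sidereal-day cancellation mentioned above can be demonstrated with a toy periodic multipath error (an illustrative model only; the authors' actual approach solves an inverse problem rather than block-averaging):

```python
import numpy as np

SIDEREAL_DAY = 86164.0  # s; GPS satellite geometry, and hence the multipath
                        # error pattern, approximately repeats with this period

def multipath_residual(window, dt=30.0, days=10):
    """RMS position error left after block-averaging a sidereal-periodic
    multipath error (modeled as a unit-amplitude sinusoid) over windows
    of `window` seconds."""
    t = np.arange(0.0, days * SIDEREAL_DAY, dt)
    err = np.sin(2 * np.pi * t / SIDEREAL_DAY)    # toy multipath error
    n = max(1, int(round(window / dt)))
    m = (len(err) // n) * n
    blocks = err[:m].reshape(-1, n).mean(axis=1)  # one solution per window
    return float(np.sqrt((blocks ** 2).mean()))
```

Averaging over a full sidereal day nearly cancels the toy error, while two-hour solutions retain most of it, which is why sub-diurnal solutions are error-dominated.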

  15. Insight into organic reactions from the direct random phase approximation and its corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruzsinszky, Adrienn; Zhang, Igor Ying; Scheffler, Matthias

    2015-10-14

    The performance of the random phase approximation (RPA) and beyond-RPA approximations for the treatment of electron correlation is benchmarked on three different molecular test sets. The test sets are chosen to represent three typical sources of error which can contribute to the failure of most density functional approximations in chemical reactions. The first test set (atomization and n-homodesmotic reactions) offers a gradually increasing balance of error from the chemical environment. The second test set (Diels-Alder reaction cycloaddition = DARC) chiefly reflects the effect of weak dispersion interactions in chemical reactions. Finally, the third test set (self-interaction error 11 = SIE11) represents reactions which are exposed to noticeable self-interaction errors. This work seeks to answer whether any one of the many-body approximations considered here successfully addresses all these challenges.

  16. First-principles energetics of water clusters and ice: A many-body analysis

    NASA Astrophysics Data System (ADS)

    Gillan, M. J.; Alfè, D.; Bartók, A. P.; Csányi, G.

    2013-12-01

    Standard forms of density-functional theory (DFT) have good predictive power for many materials, but are not yet fully satisfactory for cluster, solid, and liquid forms of water. Recent work has stressed the importance of DFT errors in describing dispersion, but we note that errors in other parts of the energy may also contribute. We obtain information about the nature of DFT errors by using a many-body separation of the total energy into its 1-body, 2-body, and beyond-2-body components to analyze the deficiencies of the popular PBE and BLYP approximations for the energetics of water clusters and ice structures. The errors of these approximations are computed by using accurate benchmark energies from the coupled-cluster technique of molecular quantum chemistry and from quantum Monte Carlo calculations. The systems studied are isomers of the water hexamer cluster, the crystal structures Ih, II, XV, and VIII of ice, and two clusters extracted from ice VIII. For the binding energies of these systems, we use the machine-learning technique of Gaussian Approximation Potentials to correct successively for 1-body and 2-body errors of the DFT approximations. We find that even after correction for these errors, substantial beyond-2-body errors remain. The characteristics of the 2-body and beyond-2-body errors of PBE are completely different from those of BLYP, but the errors of both approximations disfavor the close approach of non-hydrogen-bonded monomers. We note the possible relevance of our findings to the understanding of liquid water.

  17. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    DOE PAGES

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-10

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality-error reduction. The locality approximation and the T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single-determinant description and a full-valence multideterminant complete-active-space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, though not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an estimated 40% on average when the locality approximation is used. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error dominates the locality error when a single determinant is used.

  18. Error analysis of finite difference schemes applied to hyperbolic initial boundary value problems

    NASA Technical Reports Server (NTRS)

    Skollermo, G.

    1979-01-01

    Finite difference methods for the numerical solution of mixed initial boundary value problems for hyperbolic equations are studied. The reported investigation has the objective to develop a technique for the total error analysis of a finite difference scheme, taking into account initial approximations, boundary conditions, and interior approximation. Attention is given to the Cauchy problem and the initial approximation, the homogeneous problem in an infinite strip with inhomogeneous boundary data, the reflection of errors in the boundaries, and two different boundary approximations for the leapfrog scheme with a fourth order accurate difference operator in space.

  19. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation. This method is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty methods to approximate errors.
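    Richardson extrapolation from three systematically refined grids can be sketched as follows. The manufactured solution f(h) = 1 + 0.5 h² stands in for a second-order-accurate CFD result; it is an illustrative example, not data from the paper.

    ```python
    import math

    def richardson(f_coarse, f_medium, f_fine, r):
        """Estimate the observed order of accuracy p and the extrapolated
        (grid-converged) value from three solutions on grids refined by a
        constant ratio r (f_fine is the finest-grid result)."""
        p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
        f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
        return p, f_exact

    # Manufactured example: a second-order scheme with exact answer 1.0.
    r = 2.0
    h = [0.4, 0.2, 0.1]
    f = [1.0 + 0.5 * hi ** 2 for hi in h]
    p, f_ext = richardson(f[0], f[1], f[2], r)
    ```

    For this manufactured case the observed order comes out as 2 and the extrapolated value recovers the exact answer 1.0; with real CFD data the spread between f_fine and f_ext serves as the discretization-error estimate.
    
    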

  20. Push it to the limit: Characterizing the convergence of common sequences of basis sets for intermolecular interactions as described by density functional theory

    NASA Astrophysics Data System (ADS)

    Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin

    2016-05-01

    With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.

  1. Spin Contamination Error in Optimized Geometry of Singlet Carbene (1A1) by Broken-Symmetry Method

    NASA Astrophysics Data System (ADS)

    Kitagawa, Yasutaka; Saito, Toru; Nakanishi, Yasuyuki; Kataoka, Yusuke; Matsui, Toru; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi

    2009-10-01

    Spin contamination errors of a broken-symmetry (BS) method in optimized structural parameters of the singlet methylene (1A1) molecule are quantitatively estimated for the Hartree-Fock (HF) method, post-HF methods (CID, CCD, MP2, MP3, MP4(SDQ)), and a hybrid DFT (B3LYP) method. For this purpose, the geometry optimized by the BS method is compared with that of an approximate spin projection (AP) method. The difference between the BS and AP methods is about 10-20° in the HCH angle. In order to examine the basis-set dependency of the spin contamination error, results calculated with STO-3G, 6-31G*, and 6-311++G** are compared. The error depends on the basis set, but the tendencies of each method are classified into two types. Calculated energy splittings between the triplet and singlet states (ST gap) indicate that contamination from the stable triplet state stabilizes the BS singlet solution, so the ST gap becomes small. The spin contamination error in the ST gap is estimated to be on the order of 10^-1 eV.

  2. COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.

    USGS Publications Warehouse

    Hromadka, T.V.; Yen, C.C.; Guymon, G.L.

    1985-01-01

    The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.

  3. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Stâhl, Fredrik; Gulliksson, Mârten

    2008-09-01

    Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulty of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing simulation errors is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which would indicate that muscles are pushing rather than pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a start toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
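    The rank-improving effect of an activation-norm regularization term can be illustrated with a toy quadratic objective: two redundant muscle activations acting on one joint give a singular Hessian, and adding a Tikhonov-style term lam * ||a||² restores full rank. The 2x2 Hessian and weight below are hypothetical, not taken from the musculoskeletal model.

    ```python
    def det2(m):
        """Determinant of a 2x2 matrix given as nested lists."""
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    # Two redundant activations: any split of the required force is optimal,
    # so the Hessian of the unregularized objective is rank-1 (singular).
    H = [[1.0, 1.0], [1.0, 1.0]]
    lam = 0.1  # hypothetical regularization weight on ||a||^2
    H_reg = [[H[i][j] + (lam if i == j else 0.0) for j in range(2)]
             for i in range(2)]

    singular = det2(H)      # 0.0 -> ill-posed problem, no unique optimum
    regular = det2(H_reg)   # positive -> unique optimum exists
    ```

    The regularized determinant is lam * (2 + lam) = 0.21, so the quadratic subproblem becomes uniquely solvable, which is the conditioning improvement the paper's rank analysis captures.
    
    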

  4. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    PubMed

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

    In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte-Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
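    The modified t-test of Crawford and colleagues referenced above compares a single case to a small control sample; a minimal sketch (with made-up scores) is:

    ```python
    import math
    import statistics

    def modified_t(case_score, controls):
        """Crawford-Howell modified t-test comparing one case to a small
        control sample. Returns the t statistic; df = n - 1."""
        n = len(controls)
        m = statistics.mean(controls)
        s = statistics.stdev(controls)  # sample SD (n - 1 denominator)
        return (case_score - m) / (s * math.sqrt((n + 1) / n))

    # Hypothetical scores: 8 controls and one case.
    controls = [10.0, 12.0, 11.0, 13.0, 9.0, 11.0, 12.0, 10.0]
    t = modified_t(16.0, controls)
    ```

    The (n + 1)/n inflation of the denominator accounts for the uncertainty of the control mean with small n, which is what distinguishes this from treating the control statistics as population values; the resulting t is referred to a t distribution with n - 1 degrees of freedom.
    
    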

  5. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In this method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames", which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.
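    A cycloidal blending profile of the kind mentioned above can be sketched as follows; the velocity vectors are illustrative values, not from the paper.

    ```python
    import math

    def cycloidal_blend(tau):
        """Cycloidal blending profile s(tau) on [0, 1]: s(0) = 0, s(1) = 1,
        with zero slope at both ends for a smooth start and stop."""
        return tau - math.sin(2 * math.pi * tau) / (2 * math.pi)

    def blend_velocity(v_prev, v_next, tau):
        """Blend two velocity vectors component-wise using the cycloidal
        profile (toy 3-vector velocities between two segments)."""
        s = cycloidal_blend(tau)
        return [a + s * (b - a) for a, b in zip(v_prev, v_next)]

    v1 = [0.1, 0.0, 0.0]  # m/s along the incoming segment
    v2 = [0.0, 0.2, 0.0]  # m/s along the outgoing segment
    mid = blend_velocity(v1, v2, 0.5)
    ```

    Because the profile's derivative vanishes at tau = 0 and tau = 1, acceleration is continuous at the junctions with the straight-line segments, which is the motivation for offering the cycloidal option alongside first-order and cubic blends.
    
    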

  6. Improved Apparatus for Measuring Distance Between Axles

    NASA Technical Reports Server (NTRS)

    Willard, Douglas E.; Townsend, Ivan I., III

    2003-01-01

    An improved version of an optoelectronic apparatus for measuring distances of the order of tens of feet with an error no larger than a small fraction of an inch (a few millimeters) has been built. Like the previous version, the present improved version of the apparatus is designed to measure the distance approximately equal to 66 ft (approximately equal to 20 m) between the axes of rotation of the front and rear tires of the space shuttle orbiter as it rests in a ground-based processing facility. Like the previous version, the present version could also be adapted for similar purposes in other settings: Examples include measuring perpendicular distance from a wall in a building, placement of architectural foundations, and general alignment and measurement operations.

  7. Evaluation of lens distortion errors in video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo

    1993-01-01

    In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the difference in distance from the known coordinates of the points to the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoidance of the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.
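    The error metric described (distance from known to calculated coordinates, expressed as a fraction of the field of view) can be sketched with a one-parameter radial distortion model; the distortion coefficient and grid coordinates below are hypothetical, not the study's calibration values.

    ```python
    import math

    def radial_distort(x, y, k1):
        """Simple one-parameter radial distortion model (hypothetical k1):
        for k1 < 0, barrel distortion pulls points toward the center."""
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2
        return x * scale, y * scale

    def percent_error(true_pt, seen_pt, field_of_view):
        """Point displacement as a percentage of the field-of-view size."""
        dx = seen_pt[0] - true_pt[0]
        dy = seen_pt[1] - true_pt[1]
        return 100.0 * math.hypot(dx, dy) / field_of_view

    # Grid corner in normalized image coordinates, wide-angle-like distortion.
    corner = (0.8, 0.6)
    seen = radial_distort(*corner, k1=-0.10)
    err = percent_error(corner, seen, field_of_view=2.0)
    ```

    With these toy numbers the corner point is displaced by 5% of the field of view, illustrating why the outermost regions of a wide-angle image contribute the largest distortion errors and are best avoided when placing calibration points.
    
    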

  8. A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering

    PubMed Central

    Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani

    2012-01-01

    Debye summation, which involves the summation of sinc functions of distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide-angle X-ray scattering (SAXS/WAXS), and small-angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small-angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations unless an error bound derived in this paper is used. Our theoretical and computational results show orders-of-magnitude improvement in computational complexity over existing methods, while maintaining prescribed accuracy. PMID:22707386
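    The direct quadratic-complexity Debye sum that the paper's hierarchical algorithm accelerates can be written down in a few lines; the two-atom system below is a toy case, not the paper's benchmark.

    ```python
    import math

    def debye_intensity(coords, f, q):
        """Direct O(N^2) Debye sum: I(q) = sum_ij f_i f_j sinc(q * r_ij),
        with sinc(0) = 1 handling the i == j (and coincident-atom) terms."""
        n = len(coords)
        total = 0.0
        for i in range(n):
            for j in range(n):
                dx = coords[i][0] - coords[j][0]
                dy = coords[i][1] - coords[j][1]
                dz = coords[i][2] - coords[j][2]
                r = math.sqrt(dx * dx + dy * dy + dz * dz)
                x = q * r
                total += f[i] * f[j] * (1.0 if x == 0.0 else math.sin(x) / x)
        return total

    # Two unit scatterers 1 Angstrom apart (toy example).
    coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
    f = [1.0, 1.0]
    i0 = debye_intensity(coords, f, q=1e-12)  # q -> 0 limit: (sum f)^2
    iq = debye_intensity(coords, f, q=2.0)    # 2 + 2 * sin(2)/2
    ```

    The double loop makes the quadratic cost explicit: doubling the atom count quadruples the work, which is exactly the bottleneck the hierarchical FMM-like decomposition removes.
    
    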

  9. Generalized Fisher matrices

    NASA Astrophysics Data System (ADS)

    Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.

    2014-12-01

    The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the `ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
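    The standard Fisher matrix that this work generalizes can be sketched for a straight-line model with Gaussian errors in the ordinate only; the paper's extension to errors in X is not implemented here, and the abscissa values and error bar are illustrative.

    ```python
    def fisher_line_fit(xs, sigma_y):
        """Fisher matrix for y = a + b * x with independent Gaussian
        y-errors sigma_y. F_pq = sum_k (dmu/dp)(dmu/dq) / sigma_y^2,
        with parameters (a, b) and model derivatives (1, x_k)."""
        F = [[0.0, 0.0], [0.0, 0.0]]
        w = 1.0 / sigma_y ** 2
        for x in xs:
            F[0][0] += w
            F[0][1] += w * x
            F[1][0] += w * x
            F[1][1] += w * x * x
        return F

    xs = [0.0, 1.0, 2.0, 3.0]  # hypothetical design points
    F = fisher_line_fit(xs, sigma_y=0.5)

    # Forecast variance of the slope b is the (b, b) element of F^{-1}.
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    var_b = F[0][0] / det
    ```

    Inverting F gives the forecast parameter covariance under the Gaussian approximation; the generalization in the paper replaces this fixed-abscissa construction with a marginalization over latent X values.
    
    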

  10. A Micromechanical INS/GPS System for Small Satellites

    NASA Technical Reports Server (NTRS)

    Barbour, N.; Brand, T.; Haley, R.; Socha, M.; Stoll, J.; Ward, P.; Weinberg, M.

    1995-01-01

    The cost and complexity of large satellite space missions continue to escalate. To reduce costs, more attention is being directed toward small lightweight satellites, for which future demand is expected to grow dramatically. Specifically, micromechanical inertial systems and microstrip global positioning system (GPS) antennas incorporating flip-chip bonding, application-specific integrated circuits (ASICs), and MCM technologies will be required. Traditional microsatellite pointing systems do not employ active control. Many systems allow the satellite to point coarsely using gravity gradient, then attempt to maintain the image on the focal plane with fast-steering mirrors. Draper's approach is to actively control the line-of-sight pointing by utilizing on-board attitude determination with micromechanical inertial sensors and reaction-wheel control actuators. Draper has developed commercial and tactical-grade micromechanical inertial sensors. The small size, low weight, and low cost of these gyroscopes and accelerometers enable systems previously impractical because of size and cost. Evolving micromechanical inertial sensors can be applied to closed-loop, active control of small satellites for micro-radian precision-pointing missions. An inertial-reference feedback control loop can be used to determine attitude and line-of-sight jitter, providing error information to the controller for correction. At low frequencies, the error signal is provided by GPS. At higher frequencies, feedback is provided by the micromechanical gyros. This blending of sensors provides wide-band sensing from dc to operational frequencies. First-order simulation has shown that the performance of existing micromechanical gyros, with integrated GPS, is feasible for a pointing mission of 10 micro-radians of jitter stability and approximately 1 milli-radian absolute error, for a satellite with 1 meter antenna separation. Improved-performance micromechanical sensors currently under development will be suitable for a range of micro- and nano-satellite applications.

  11. Global optimization method based on ray tracing to achieve optimum figure error compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin

    2017-02-01

    Figure error degrades the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation errors, and building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. This method is based on precise ray tracing, rather than approximate calculation, to compute the wavefront error for a given combination of element rotation angles. The composite wavefront-error root mean square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of the optical elements. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.

  12. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
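    The kind of interpolation error the article measures can be reproduced with a quadratic interpolant of e^x; the node placement below is an illustrative choice, not necessarily the article's.

    ```python
    import math

    def lagrange(xs, ys, x):
        """Evaluate the Lagrange interpolating polynomial through the
        points (xs, ys) at x."""
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    # Interpolate e^x at three equally spaced nodes on [0, 1].
    xs = [0.0, 0.5, 1.0]
    ys = [math.exp(x) for x in xs]

    # Sample the error |e^x - p(x)| on a fine grid of [0, 1].
    max_err = max(abs(math.exp(k / 100) - lagrange(xs, ys, k / 100))
                  for k in range(101))
    ```

    The error vanishes at the three nodes and peaks between them (here at roughly 0.02 near x = 0.8), matching the behaviour of the error function e^x - p(x) that the article studies.
    
    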

  13. The use of compressive sensing and peak detection in the reconstruction of microtubules length time series in the process of dynamic instability.

    PubMed

    Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A

    2015-10-01

    Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to the sampled data of the microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal with relatively low sampling rates, we have applied CS to experimental MT filament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with omitting the wavelet transform, especially at low and medium sampling rates. At sampling rates ranging from 0.2 to 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately a factor of 3, and between 0.5 and 1 the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage phases of MTs, in order to compute the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates. The results show that using compressed sensing along with the peak detection technique and the wavelet transform at these sampling rates reduces the recovery errors for these parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    USGS Publications Warehouse

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
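    The core of the described inversion, searching candidate source positions for the maximum of a Gaussian likelihood of the slowness mismatch, can be sketched in one dimension (depth). The velocity, horizontal offset, and depth grid below are toy values, not those of the Stromboli velocity model.

    ```python
    import math

    V = 2000.0   # m/s, assumed uniform near-surface wave speed (toy model)
    X0 = 200.0   # m, hypothetical horizontal offset of the array from the vents

    def predicted_p(depth):
        """Horizontal slowness of a straight ray from a source at `depth` (m):
        p = sin(takeoff angle) / V, with sin = X0 / sqrt(X0^2 + depth^2)."""
        return (X0 / math.hypot(X0, depth)) / V

    def best_depth(p_obs, sigma_p, depths):
        """Maximum-likelihood depth under Gaussian slowness-vector errors."""
        def log_like(d):
            return -0.5 * ((p_obs - predicted_p(d)) / sigma_p) ** 2
        return max(depths, key=log_like)

    depths = [50.0 + 10.0 * k for k in range(31)]   # 50-350 m candidate grid
    p_obs = predicted_p(150.0)                      # synthetic noise-free observation
    d_hat = best_depth(p_obs, sigma_p=1e-5, depths=depths)
    ```

    With a synthetic noise-free observation the grid search recovers the true depth exactly; in the paper's Bayesian setting the same Gaussian misfit defines a full conditional probability density over source locations rather than a single maximum.
    
    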

  15. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
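    The 1/√N behaviour of the averaged-sweep error noted above can be checked with a small Monte Carlo experiment (synthetic Gaussian noise, not ECAP data):

    ```python
    import random
    import statistics

    random.seed(1)

    def averaged_sweep_error(n_sweeps, noise_sd, n_trials=2000):
        """Empirical SD of the mean of n_sweeps noisy single-sweep samples,
        estimated over n_trials repeated averaging runs."""
        means = [statistics.mean(random.gauss(0.0, noise_sd)
                                 for _ in range(n_sweeps))
                 for _ in range(n_trials)]
        return statistics.stdev(means)

    err_25 = averaged_sweep_error(25, noise_sd=10.0)
    err_100 = averaged_sweep_error(100, noise_sd=10.0)
    # Quadrupling the sweep count should roughly halve the residual error.
    ratio = err_25 / err_100
    ```

    The ratio comes out near 2, consistent with the 1/√N law against which the single-point error estimate is compared in the study.
    
    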

  16. Textbook Multigrid Efficiency for Leading Edge Stagnation

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Mineck, Raymond E.

    2004-01-01

    A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.

  18. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verma, Prakash; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu

    Core excitation energies are computed with time-dependent density functional theory (TD-DFT) using the ionization energy corrected exchange and correlation potential QTP(0,0). QTP(0,0) provides C, N, and O K-edge spectra to about an electron volt. A mean absolute error (MAE) of 0.77 eV and a maximum error of 2.6 eV are observed for QTP(0,0) for many small molecules. TD-DFT based on QTP(0,0) is then used to describe the core-excitation spectra of the 22 amino acids. TD-DFT with conventional functionals greatly underestimates core excitation energies, largely due to the significant error in the Kohn-Sham occupied eigenvalues. In contrast, the ionization energy corrected potential, QTP(0,0), provides excellent approximations (MAE of 0.53 eV) for core ionization energies as eigenvalues of the Kohn-Sham equations. As a consequence, core excitation energies are accurately described with QTP(0,0), as are the core ionization energies important in X-ray photoionization spectra or electron spectroscopy for chemical analysis.

  20. Adaptive Fuzzy Bounded Control for Consensus of Multiple Strict-Feedback Nonlinear Systems.

    PubMed

    Wang, Wei; Tong, Shaocheng

    2018-02-01

    This paper studies the adaptive fuzzy bounded control problem for leader-follower multiagent systems, where each follower is modeled by an uncertain nonlinear strict-feedback system. Combining the fuzzy approximation with the dynamic surface control, an adaptive fuzzy control scheme is developed to guarantee the output consensus of all agents under directed communication topologies. Different from the existing results, the bounds of the control inputs are known a priori, and they can be determined by the feedback control gains. To realize smooth and fast learning, a predictor is introduced to estimate each error surface, and the corresponding predictor error is employed to learn the optimal fuzzy parameter vector. It is proved that the developed adaptive fuzzy control scheme guarantees the uniform ultimate boundedness of the closed-loop systems, and the tracking error converges to a small neighborhood of the origin. The simulation results and comparisons are provided to show the validity of the control strategy presented in this paper.

  1. On the implementation of an accurate and efficient solver for convection-diffusion equations

    NASA Astrophysics Data System (ADS)

    Wu, Chin-Tien

    In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singular-perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfürth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show that the refined mesh obtained from the iterative solution which satisfies the second criterion is similar to the refined mesh obtained from the finite element solution.

  2. Summation rules for a fully nonlocal energy-based quasicontinuum method

    NASA Astrophysics Data System (ADS)

    Amelang, J. S.; Venturini, G. N.; Kochmann, D. M.

    2015-09-01

    The quasicontinuum (QC) method coarse-grains crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. A crucial cornerstone of all QC techniques, summation or quadrature rules efficiently approximate the thermodynamic quantities of interest. Here, we investigate summation rules for a fully nonlocal, energy-based QC method to approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of all atoms in the crystal lattice. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. We review traditional summation rules and discuss their strengths and weaknesses with a focus on energy approximation errors and spurious force artifacts. Moreover, we introduce summation rules which produce no residual or spurious force artifacts in centrosymmetric crystals in the large-element limit under arbitrary affine deformations in two dimensions (and marginal force artifacts in three dimensions), while allowing us to seamlessly bridge to full atomistics. Through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions, we compare the accuracy of the new scheme to various previous ones. Our results confirm that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors. Our numerical benchmark examples include the calculation of elastic constants from completely random QC meshes and the inhomogeneous deformation of aggressively coarse-grained crystals containing nano-voids. In the elastic regime, we directly compare QC results to those of full atomistics to assess global and local errors in complex QC simulations. 
Going beyond elasticity, we illustrate the performance of the energy-based QC method with the new second-order summation rule with the help of nanoindentation examples with automatic mesh adaptation. Overall, our findings provide guidelines for the selection of summation rules for the fully nonlocal energy-based QC method.

  3. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  4. Modified expression for bulb-tracer depletion—Effect on argon dating standards

    USGS Publications Warehouse

    Fleck, Robert J.; Calvert, Andrew T.

    2014-01-01

    40Ar/39Ar geochronology depends critically on well-calibrated standards, often traceable to first-principles K-Ar age calibrations using bulb-tracer systems. Tracer systems also provide precise standards for noble-gas studies and interlaboratory calibration. The exponential expression long used for calculating isotope tracer concentrations in K-Ar age dating and calibration of 40Ar/39Ar age standards may provide a close approximation of those values, but is not correct. Appropriate equations are derived that accurately describe the depletion of tracer reservoirs and concentrations of sequential tracers. In the modified expression the depletion constant is not in the exponent; the exponent varies only as the integer tracer number. Evaluation of the expressions demonstrates that systematic error introduced through use of the original expression may be substantial where reservoir volumes are small and resulting depletion constants are large. Traditional use of large reservoir to tracer volumes and the resulting small depletion constants have kept errors well less than experimental uncertainties in most previous K-Ar and calibration studies. Use of the proper expression, however, permits use of volumes appropriate to the problems addressed.
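    The contrast the abstract describes can be illustrated with a short numerical sketch. The notation here (depletion constant d, initial concentration c0, tracer number n) is mine, not the paper's: the corrected expression raises (1 - d) to the integer tracer number, while the traditional expression puts d in an exponential.

```python
import math

def tracer_exact(c0, d, n):
    """Concentration after n sequential tracer draws when each draw
    removes a fixed fraction d of the reservoir: the depletion constant
    appears in the base, raised to the integer tracer number."""
    return c0 * (1.0 - d) ** n

def tracer_approx(c0, d, n):
    """The traditional exponential expression, with the depletion
    constant in the exponent; close only when d is small."""
    return c0 * math.exp(-d * n)
```

    For a small depletion constant (d = 0.001) the two expressions agree to a few parts per million after ten draws, while for d = 0.1 they differ by several percent, consistent with the systematic error the abstract attributes to small reservoir volumes.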

  5. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson extrapolation, based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or other uncertainty methods to approximate errors.
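    The standard three-grid Richardson procedure the abstract refers to can be sketched as follows; the variable names and argument order are my choices, with f1 the finest-grid value:

```python
import math

def richardson(f1, f2, f3, r):
    """Three-grid Richardson extrapolation.  f1, f2, f3 are computed
    values on the fine, medium, and coarse grids; r is the constant grid
    refinement ratio.  Returns (observed order p, extrapolated estimate)."""
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    f_exact = f1 + (f1 - f2) / (r ** p - 1.0)
    return p, f_exact
```

    For example, values 1.1, 1.4, 2.6 on grids of spacing h, 2h, 4h (a pure second-order error of 0.1*h^2 about an exact value of 1.0) recover p = 2 and the exact value 1.0.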

  6. Viète's Formula and an Error Bound without Taylor's Theorem

    ERIC Educational Resources Information Center

    Boucher, Chris

    2018-01-01

    This note presents a derivation of Viète's classic product approximation of pi that relies on only the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
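    Viète's product itself is easy to evaluate numerically; the following minimal sketch (which does not reproduce the note's error-bound derivation) makes the exponential convergence visible, with each extra factor cutting the error by roughly a factor of four:

```python
import math

def viete_pi(n_terms):
    """Approximate pi with the first n_terms factors of Viete's product:
    2/pi = (sqrt(2)/2) * (sqrt(2 + sqrt(2))/2) * ..."""
    a = 0.0
    product = 1.0
    for _ in range(n_terms):
        a = math.sqrt(2.0 + a)      # nested-radical recurrence a -> sqrt(2 + a)
        product *= a / 2.0
    return 2.0 / product
```

    With a single factor the approximation is 2*sqrt(2) ≈ 2.828; twenty factors already agree with pi to better than nine decimal places.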

  7. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
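    Under the textbook normal-theory assumption (which may differ in detail from the authors' method), the probability in question is P(|x̄ − μ| ≤ kσ) = 2Φ(k√n) − 1 = erf(k√(n/2)), computable with the standard library alone:

```python
import math

def prob_within(k, n):
    """P(|sample mean - true mean| <= k * sigma) for n i.i.d. normal
    observations with known sigma: 2*Phi(k*sqrt(n)) - 1 = erf(k*sqrt(n/2)).
    Illustrative normal-theory sketch, not the paper's exact procedure."""
    return math.erf(k * math.sqrt(n / 2.0))
```

    For a very small sample of n = 5, the probability of landing within half a standard deviation of the true mean is already about 0.74, and it increases monotonically with n.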

  8. Errors from approximation of ODE systems with reduced order models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevska, Tanya

    2016-12-30

    This is a code to calculate the error from approximation of systems of ordinary differential equations (ODEs) by using Proper Orthogonal Decomposition (POD) Reduced Order Models (ROM) methods and to compare and analyze the errors for two POD ROM variants. The first variant is the standard POD ROM, the second variant is a modification of the method using the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.

  9. Error analysis for reducing noisy wide-gap concentric cylinder rheometric data for nonlinear fluids - Theory and applications

    NASA Technical Reports Server (NTRS)

    Borgia, Andrea; Spera, Frank J.

    1990-01-01

    This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.

  10. Effects of tropospheric and ionospheric refraction errors in the utilization of GEOS-C altimeter data

    NASA Technical Reports Server (NTRS)

    Goad, C. C.

    1977-01-01

    The effects of tropospheric and ionospheric refraction errors are analyzed for the GEOS-C altimeter project in terms of their resultant effects on C-band orbits and the altimeter measurement itself. Operational procedures using surface meteorological measurements at ground stations and monthly means for ocean surface conditions are assumed, with no corrections made for ionospheric effects. Effects on the orbit height due to tropospheric errors are approximately 15 cm for single pass short arcs (such as for calibration) and 10 cm for global orbits of one revolution. Orbit height errors due to neglect of the ionosphere have an amplitude of approximately 40 cm when the orbits are determined from C-band range data with predominantly daylight tracking. Altimeter measurement errors are approximately 10 cm due to residual tropospheric refraction correction errors. Ionospheric effects on the altimeter range measurement are also on the order of 10 cm during the GEOS-C launch and early operation period.

  11. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Technical Reports Server (NTRS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-01-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximation of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements are examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.

  12. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Astrophysics Data System (ADS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-04-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements are examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.

  13. Fitting Photometry of Blended Microlensing Events

    NASA Astrophysics Data System (ADS)

    Thomas, Christian L.; Griest, Kim

    2006-03-01

    We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction and study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate microlensing optical depth.

  14. Connectivity-Preserving Approach for Distributed Adaptive Synchronized Tracking of Networked Uncertain Nonholonomic Mobile Robots.

    PubMed

    Yoo, Sung Jin; Park, Bong Seok

    2017-09-06

    This paper addresses a distributed connectivity-preserving synchronized tracking problem of multiple uncertain nonholonomic mobile robots with limited communication ranges. The information of the time-varying leader robot is assumed to be accessible to only a small fraction of follower robots. The main contribution of this paper is to introduce a new distributed nonlinear error surface for dealing with both the synchronized tracking and the preservation of the initial connectivity patterns among nonholonomic robots. Based on this nonlinear error surface, the recursive design methodology is presented to construct the approximation-based local adaptive tracking scheme at the robot dynamic level. Furthermore, a technical lemma is established to analyze the stability and the connectivity preservation of the total closed-loop control system in the Lyapunov sense. An example is provided to illustrate the effectiveness of the proposed methodology.

  15. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    PubMed

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result validation procedure.
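    The filtration step of such a strategy can be sketched in a few lines. Everything here is an illustrative assumption rather than the paper's implementation: the data layout, the function names, and the choice of a mean ± 4·SD window derived from high-confidence identifications.

```python
import statistics

def met_filter(results, high_conf_errors, k=4.0):
    """Keep search results whose ppm mass error lies within mean +/- k*SD
    of the ppm errors observed for high-confidence identifications.
    `results` is a list of (identifier, ppm_error) pairs; the names and
    the default k = 4 are illustrative assumptions, not from the paper."""
    mu = statistics.mean(high_conf_errors)
    sd = statistics.stdev(high_conf_errors)
    lo, hi = mu - k * sd, mu + k * sd
    return [r for r in results if lo <= r[1] <= hi]
```

    The point of the two-stage design is that the wide initial search loses no true hits, while the statistically estimated window then removes identifications whose precursor mass error is implausible for a well-calibrated instrument.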

  16. Spectroscopic planetary detection

    NASA Technical Reports Server (NTRS)

    Deming, Drake

    1988-01-01

    One of the most promising methods for the detection of extra-solar planets is the spectroscopic method, where a small Doppler shift (approximately 10 meters/sec) in the spectrum of the parent star reveals the presence of planetary companions. However, solar-type stars may show spurious Doppler shifts due to surface activity. If these effects are periodic, as is the solar activity cycle, then they may masquerade as planetary companions. The goal of this investigation is to determine whether the solar cycle affects the Doppler stability of integrated sunlight. Observations of integrated sunlight are made in the near infrared (approximately 2 micrometers), using the Kitt Peak McMath Fourier transform spectrometer, with an N2O gas absorption cell for calibration. Researchers currently achieve an accuracy of approximately 5 meters/sec. Solar rotation velocities vary by plus or minus 2000 meters/sec across the solar disk, and imperfect optical integration of these velocities is the principal source of error. Researchers have been monitoring the apparent velocity of integrated sunlight since 1983. They initially saw a decrease of approximately 30 meters/sec in the integrated light velocity from 1983 through 1985, but in 1987 to 1988 the integrated light velocity returned to its 1983 level. It is too early to say whether these changes are solar-cycle related. Although the FTS, unlike a slit spectrograph, has a large field of view, researchers are always looking for ways to improve the optical integration of the solar disk. They recently made an improvement in the method used to optically collimate the FTS, and this has reduced the error level, eliminating some systematic effects seen earlier.

  17. A Multi-Band Analytical Algorithm for Deriving Absorption and Backscattering Coefficients from Remote-Sensing Reflectance of Optically Deep Waters

    NASA Technical Reports Server (NTRS)

    Lee, Zhong-Ping; Carder, Kendall L.

    2001-01-01

    A multi-band analytical (MBA) algorithm is developed to retrieve absorption and backscattering coefficients for optically deep waters, which can be applied to data from past and current satellite sensors, as well as data from hyperspectral sensors. This MBA algorithm applies a remote-sensing reflectance model derived from the Radiative Transfer Equation, and values of absorption and backscattering coefficients are analytically calculated from values of remote-sensing reflectance. There are only limited empirical relationships involved in the algorithm, which implies that this MBA algorithm could be applied to a wide dynamic range of waters. Applying the algorithm to a simulated non-"Case 1" data set, which has no relation to the development of the algorithm, the percentage error for the total absorption coefficient at 440 nm a (sub 440) is approximately 12% for a range of 0.012 - 2.1 per meter (approximately 6% for a (sub 440) less than approximately 0.3 per meter), while a traditional band-ratio approach returns a percentage error of approximately 30%. Applying it to a field data set ranging from 0.025 to 2.0 per meter, the result for a (sub 440) is very close to that using a full spectrum optimization technique (9.6% difference). Compared to the optimization approach, the MBA algorithm cuts the computation time dramatically with only a small sacrifice in accuracy, making it suitable for processing large data sets such as satellite images. Significant improvements over empirical algorithms have also been achieved in retrieving the optical properties of optically deep waters.

  18. Assessing stand water use in four coastal wetland forests using sapflow techniques: annual estimates, errors and associated uncertainties

    USGS Publications Warehouse

    Krauss, Ken W.; Duberstein, Jamie A.; Conner, William H.

    2015-01-01

    Forests comprise approximately 37% of the terrestrial land surface and influence global water cycling. However, very little attention has been directed towards understanding environmental impacts on stand water use (S) or in identifying rates of S from specific forested wetlands. Here, we use sapflow techniques to address two separate but linked objectives: (1) determine S in four, hydrologically distinctive South Carolina (USA) wetland forests from 2009–2010 and (2) describe potential error, uncertainty and stand-level variation associated with these assessments. Sapflow measurements were made from a number of tree species for approximately 2–8 months over 2 years to initiate the model, which was applied to canopy trees (DBH > 10–20 cm). We determined that S in three healthy forested wetlands varied from 1.97–3.97 mm/day or 355–687 mm/year when scaled. In contrast, saltwater intrusion impacted individual tree physiology and size class distributions on a fourth site, which decreased S to 0.61–1.13 mm/day or 110–196 mm/year. The primary sources of error in estimations using sapflow probes would relate to calibration of probes and standardization relative to no flow periods and accounting for accurate sapflow attenuation with radial depth into the sapwood by species and site. Such inherent variation in water use among wetland forest stands makes small differences in S (<200 mm/year) difficult to detect statistically through modelling, even though small differences may be important to local water cycling. These data also represent some of the first assessments of S from temperate, coastal forested wetlands along the Atlantic coast of the USA.

  19. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    PubMed

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
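    The abstract does not spell out the authors' algorithm, but the class it belongs to can be illustrated. The following is a slope-envelope (SwingFilter-style) sketch, not the paper's method: it processes each sample in O(1) time with O(1) state (a slope interval, the segment origin, and the last point) while guaranteeing a maximum error of eps per segment.

```python
def swing_pla(points, eps):
    """Streaming piecewise linear approximation with a max-error guarantee
    of eps per segment.  Assumes at least two (x, y) points with strictly
    increasing x.  Returns segments as (x_start, y_start, slope, x_end)."""
    segments = []
    x0, y0 = points[0]
    lo, hi = float("-inf"), float("inf")   # admissible-slope envelope
    prev = points[0]
    for x, y in points[1:]:
        s_lo = (y - eps - y0) / (x - x0)   # slopes keeping this point within eps
        s_hi = (y + eps - y0) / (x - x0)
        if max(lo, s_lo) > min(hi, s_hi):
            # envelope became empty: close the segment at the previous point
            segments.append((x0, y0, (lo + hi) / 2.0, prev[0]))
            x0, y0 = prev                  # restart the segment at that point
            lo = (y - eps - y0) / (x - x0)
            hi = (y + eps - y0) / (x - x0)
        else:
            lo, hi = max(lo, s_lo), min(hi, s_hi)
        prev = (x, y)
    segments.append((x0, y0, (lo + hi) / 2.0, prev[0]))
    return segments
```

    Because only the envelope and two points are retained, memory use is independent of segment length, which is the property the abstract emphasizes for microcontroller deployment.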

  20. Mixed effects versus fixed effects modelling of binary data with inter-subject variability.

    PubMed

    Murphy, Valda; Dunne, Adrian

    2005-04-01

    The question of whether a mixed effects model is required when modelling binary data with inter-subject variability and within-subject correlation was examined in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, a fixed effects model produced more accurate estimates of the fixed effect parameters than a mixed effects model did. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers; this was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error combines the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator, with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased, and this bias acts as a lower bound for their root mean squared error. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model, but not vice versa.
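
    The quadrature at issue can be illustrated with a minimal sketch: the marginal likelihood of one subject's binary responses under a random-intercept logistic model, integrated over the random effect with (non-adaptive) Gauss-Hermite quadrature. The model, function name, and node count are illustrative assumptions; the adaptive variant additionally recenters and rescales the nodes at each subject's conditional mode.

```python
import numpy as np

def marginal_lik(y, beta, sigma, n_nodes=20):
    """Marginal likelihood of one subject's binary responses y under a
    random-intercept logistic model: logit P(y=1|b) = beta + b, with
    b ~ N(0, sigma^2).  The integral over b is approximated by
    Gauss-Hermite quadrature after the change of variables b = sqrt(2)*sigma*x,
    which turns N(0, sigma^2) into the Hermite weight exp(-x^2)/sqrt(pi)."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)   # nodes, weights
    b = np.sqrt(2.0) * sigma * x
    eta = beta + b[:, None]                           # shape (nodes, obs)
    p = 1.0 / (1.0 + np.exp(-eta))
    lik_given_b = np.prod(np.where(np.asarray(y) == 1, p, 1 - p), axis=1)
    return float(np.sum(w * lik_given_b) / np.sqrt(np.pi))
```

With sigma = 0 the integral collapses and the result equals the ordinary (fixed effects) Bernoulli likelihood, which makes the function easy to sanity-check.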

  1. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
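
    As a concrete instance of the comparison the article describes (the interval, degree, and node placement below are illustrative assumptions), one can measure the maximum error of a degree-3 Taylor polynomial of e^x about 0 against a degree-3 interpolating polynomial through equally spaced nodes on [0, 1]; the interpolant spreads its error across the interval rather than concentrating accuracy at the expansion point.

```python
import numpy as np

def max_err(p, f, a, b, n=1001):
    """Maximum absolute error of polynomial p (numpy coefficient order,
    highest power first) against f on [a, b], sampled at n points."""
    t = np.linspace(a, b, n)
    return float(np.max(np.abs(np.polyval(p, t) - f(t))))

# Degree-3 Taylor polynomial of e^x about x = 0: 1 + x + x^2/2 + x^3/6
taylor = [1.0 / 6.0, 0.5, 1.0, 1.0]

# Degree-3 interpolating polynomial through 4 equally spaced nodes on [0, 1]
nodes = np.linspace(0.0, 1.0, 4)
interp = np.polyfit(nodes, np.exp(nodes), 3)   # exact fit through 4 points

e_taylor = max_err(taylor, np.exp, 0.0, 1.0)
e_interp = max_err(interp, np.exp, 0.0, 1.0)
```

On this interval the Taylor error grows toward x = 1 while the interpolation error oscillates between the nodes, so the interpolant's maximum error is markedly smaller.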

  2. Entanglement renormalization, quantum error correction, and bulk causality

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.; Kastoryano, Michael J.

    2017-04-01

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively better protected against erasure errors at larger length scales. In particular, an approximate variant of the holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are widely separated in scale behave as if they were spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  3. Intelligent robust tracking control for a class of uncertain strict-feedback nonlinear systems.

    PubMed

    Chang, Yeong-Chan

    2009-02-01

    This paper addresses the problem of designing robust tracking controls for a large class of strict-feedback nonlinear systems involving plant uncertainties and external disturbances. The input and virtual input weighting matrices are perturbed by bounded time-varying uncertainties. An adaptive fuzzy-based (or neural-network-based) dynamic feedback tracking controller is developed such that all the states and signals of the closed-loop system are bounded and the trajectory tracking error is as small as possible. First, adaptive approximators with linearly parameterized models are designed, and a partitioned procedure with respect to the developed adaptive approximators is proposed such that the implementation of the fuzzy (or neural network) basis functions depends only on the state variables and not on the tuning approximation parameters. Furthermore, the design is extended to nonlinearly parameterized adaptive approximators. Consequently, the intelligent robust tracking control schemes developed in this paper possess the properties of computational simplicity and easy implementation. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed control algorithms.

  4. Vacuum Stress in Schwarzschild Spacetime

    NASA Astrophysics Data System (ADS)

    Howard, Kenneth Webster

    Vacuum stress in the conformally invariant scalar field in the region exterior to the horizon of a Schwarzschild black hole is examined. In the Hartle-Hawking vacuum state, ⟨φ²⟩ and ⟨T_μν⟩ are calculated. Covariant point-splitting renormalization is used, as is a mode sum expression for the Hartle-Hawking propagator. It is found that ⟨φ²⟩ separates naturally into two parts: a part that has a simple analytic form coinciding with the approximate expression of Whiting and Page, and a small remainder. The results of our numerical evaluation of the remainder agree with, but are more accurate than, those previously given by Fawcett. We find that ⟨T_μν⟩ also separates into two terms. The first coincides with the approximate expression obtained by Page with a Gaussian approximation to the proper time Green function. The second term, composed of sums over mode functions, is evaluated numerically. It is found that the total expression is in good qualitative agreement with Page's approximation. Our results disagree with previous numerical results given by Fawcett. The error in Fawcett's calculation is explained.

  5. Approximation of a radial diffusion model with a multiple-rate model for hetero-disperse particle mixtures

    PubMed Central

    Ju, Daeyoung; Young, Thomas M.; Ginn, Timothy R.

    2012-01-01

    An innovative method is proposed for approximation of the set of radial diffusion equations governing mass exchange between the aqueous bulk phase and the intra-particle phase for a hetero-disperse mixture of particles such as occur in suspension in surface water, in riverine/estuarine sediment beds, in soils, and in aquifer materials. For this purpose the temporal variation of concentration at several uniformly distributed points within a normalized representative particle with spherical, cylindrical or planar shape is fitted with a 2-domain linear reversible mass exchange model. The approximation method is then superposed in order to generalize the model to a hetero-disperse mixture of particles. The method can significantly reduce the computational effort needed in solving the intra-particle mass exchange of a hetero-disperse mixture of particles, and the error due to the approximation is shown to be relatively small. The method is applied to describe desorption batch experiments of 1,2-dichlorobenzene from four different soils with known particle size distributions, producing good agreement with experimental data. PMID:18304692

  6. ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT METHOD APPLIED TO SOIL FREEZING.

    USGS Publications Warehouse

    Hromadka, T.V.; Guymon, G.L.

    1985-01-01

    An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.

  7. Statistics of the radiated field of a space-to-earth microwave power transfer system

    NASA Technical Reports Server (NTRS)

    Stevens, G. H.; Leininger, G.

    1976-01-01

    Statistics such as the average power density pattern, the variance of the power density pattern, and the variance of the beam pointing error are related to hardware parameters such as transmitter rms phase error and rms amplitude error. A limitation on the spectral width of the phase reference used for phase control is also established. A 1 km diameter transmitter appears feasible provided the total rms insertion phase errors of the phase control modules do not exceed 10 deg, amplitude errors do not exceed 10% rms, and the phase reference spectral width does not exceed approximately 3 kHz. With these conditions the expected radiation pattern is virtually the same as the error-free pattern, and the rms beam pointing error would be insignificant (approximately 10 meters).
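
    The claim that 10 deg rms phase error and 10% rms amplitude error leave the pattern virtually unchanged can be sanity-checked with a standard rule of thumb for large arrays with independent element errors (general antenna theory, not taken from this report): the mean boresight gain falls to roughly exp(-σ_φ²)/(1 + σ_a²) of its error-free value.

```python
import math

def gain_factor(phase_rms_deg, amp_rms_frac):
    """Rule-of-thumb mean gain reduction for a large array with independent
    per-element errors: G/G0 ~ exp(-sigma_phi^2) / (1 + sigma_a^2), where
    sigma_phi is the rms phase error in radians and sigma_a the fractional
    rms amplitude error.  (Standard approximation, not from the report.)"""
    sigma_phi = math.radians(phase_rms_deg)
    return math.exp(-sigma_phi ** 2) / (1.0 + amp_rms_frac ** 2)

# With the report's error budget the mean gain loss is only a few percent:
loss = 1.0 - gain_factor(10.0, 0.10)
```

For 10 deg and 10% the factor is about 0.96, i.e. roughly a 4% mean gain loss, consistent with the abstract's conclusion that the pattern is essentially the error-free one.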

  8. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  9. Performance of cellular frequency-hopped spread-spectrum radio networks

    NASA Astrophysics Data System (ADS)

    Gluck, Jeffrey W.; Geraniotis, Evaggelos

    1989-10-01

    Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.

  10. Precise Distances for Main-belt Asteroids in Only Two Nights

    NASA Astrophysics Data System (ADS)

    Heinze, Aren N.; Metchev, Stanimir

    2015-10-01

    We present a method for calculating precise distances to asteroids using only two nights of data from a single location—far too little for an orbit—by exploiting the angular reflex motion of the asteroids due to Earth’s axial rotation. We refer to this as the rotational reflex velocity method. While the concept is simple and well-known, it has not been previously exploited for surveys of main belt asteroids (MBAs). We offer a mathematical development, estimates of the errors of the approximation, and a demonstration using a sample of 197 asteroids observed for two nights with a small, 0.9-m telescope. This demonstration used digital tracking to enhance detection sensitivity for faint asteroids, but our distance determination works with any detection method. Forty-eight asteroids in our sample had known orbits prior to our observations, and for these we demonstrate a mean fractional error of only 1.6% between the distances we calculate and those given in ephemerides from the Minor Planet Center. In contrast to our two-night results, distance determination by fitting approximate orbits requires observations spanning 7-10 nights. Once an asteroid’s distance is known, its absolute magnitude and size (given a statistically estimated albedo) may immediately be calculated. Our method will therefore greatly enhance the efficiency with which 4m and larger telescopes can probe the size distribution of small (e.g., 100 m) MBAs. This distribution remains poorly known, yet encodes information about the collisional evolution of the asteroid belt—and hence the history of the Solar System.
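
    The geometric idea behind the rotational reflex velocity method can be caricatured in a few lines: an observer at latitude λ is carried at v = ωR⊕cos λ by Earth's rotation, and for a body at distance d this motion contributes an apparent angular rate of order v/d, so measuring the reflex rate yields the distance. The sketch below ignores all geometry factors (the body's direction, parallax orientation, Earth's orbital motion), and the function name is an assumption; the paper's full treatment is more careful.

```python
import math

OMEGA_E = 2.0 * math.pi / 86164.0905    # Earth's sidereal rotation rate, rad/s
R_E = 6378.137e3                        # Earth's equatorial radius, m

def reflex_distance(mu_reflex_arcsec_per_hr, latitude_deg):
    """Idealized order-of-magnitude distance from the measured rotational
    reflex angular rate: d ~ v / mu, with v the observer's rotational speed.
    All projection/geometry factors are ignored in this sketch."""
    v = OMEGA_E * R_E * math.cos(math.radians(latitude_deg))       # m/s
    mu = math.radians(mu_reflex_arcsec_per_hr / 3600.0) / 3600.0   # rad/s
    return v / mu                                                  # metres
```

For example, an equatorial observer (v ≈ 465 m/s) watching a body at 1 AU would see a reflex rate near 2.3 arcsec/hr, and inverting that rate recovers the 1 AU distance.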

  11. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms L_p (0 < p < 1).
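
    A minimal sketch of the piece-wise quadratic idea (the exact construction used by the authors may differ): approximate a potential f(|x|) on a grid of thresholds by, within each interval, a quadratic a_k·x² + b_k that matches f at the interval ends, so that optimization can proceed by solving ordinary quadratic subproblems. The function names and interval-matching rule are illustrative assumptions.

```python
import numpy as np

def pqsq_coeffs(f, thresholds):
    """Piecewise-quadratic approximation of a potential f(|x|): on each
    interval [r_{k-1}, r_k] use a_k*x^2 + b_k chosen to match f exactly at
    both interval ends (a sketch of the piece-wise quadratic idea)."""
    r = np.asarray(thresholds, dtype=float)
    a, b = [], []
    for lo, hi in zip(r[:-1], r[1:]):
        ak = (f(hi) - f(lo)) / (hi ** 2 - lo ** 2)
        a.append(ak)
        b.append(f(lo) - ak * lo ** 2)
    return np.array(a), np.array(b), r

def pqsq_eval(x, a, b, r):
    """Evaluate the piecewise-quadratic approximation at x (extrapolating
    with the last interval's quadratic beyond the final threshold)."""
    x = np.abs(np.asarray(x, dtype=float))
    k = np.clip(np.searchsorted(r, x, side="right") - 1, 0, len(a) - 1)
    return a[k] * x ** 2 + b[k]
```

By construction the approximation is exact at every threshold; between thresholds it replaces, e.g., the non-smooth |x| by smooth quadratic pieces that a quadratic solver can handle.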

  12. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  13. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    PubMed

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. 
Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this hypothesis.

  14. Tracking control of a closed-chain five-bar robot with two degrees of freedom by integration of an approximation-based approach and mechanical design.

    PubMed

    Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J

    2012-10-01

    The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the transient and steady-state phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate for the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.

  15. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors

    PubMed Central

    Kwon, Heon-Ju; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-01-01

    Background/Aims Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP–VR|/W∙100. Results Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane. PMID:28759989
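
    The error definitions in this study reduce to simple arithmetic; the sketch below encodes them with hypothetical donor values (the numbers are illustrative, not from the study's data). Note that the study compares volumes (mL) directly against graft weight (g), implicitly assuming a density near 1 g/mL.

```python
def volumetry_errors(vp, vr, w):
    """Error definitions from the study: vp (prospective volume, mL),
    vr (retrospective volume, mL), w (measured graft weight, g).
    Returns (% error in VP, % error in VR, % plane-dependent error),
    with % plane-dependent error = |VP - VR| / W * 100."""
    pct_err_vp = abs(vp - w) / w * 100.0
    pct_err_vr = abs(vr - w) / w * 100.0
    pct_plane = abs(vp - vr) / w * 100.0
    return pct_err_vp, pct_err_vr, pct_plane

# Hypothetical donor: VP = 760 mL, VR = 730 mL, W = 700 g
e_vp, e_vr, e_plane = volumetry_errors(760.0, 730.0, 700.0)
```

In this hypothetical case the prospective volumetry is off by about 8.6% of graft weight, of which about 4.3 percentage points are attributable to the choice of resection plane.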

  16. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. I. Semi-infinite slab approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER-Dutch Institute for Fundamental Energy Research, Association EURATOM- FOM, Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein

    2014-11-15

    In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics, making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed, showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on a semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.

  17. Light diffusion in N-layered turbid media: steady-state domain.

    PubMed

    Liemert, André; Kienle, Alwin

    2010-01-01

    We deal with light diffusion in N-layered turbid media. The steady-state diffusion equation is solved for N-layered turbid media having a finite or an infinitely thick Nth layer. Different refractive indices are considered in the layers. The Fourier transform formalism is applied to derive analytical solutions of the fluence rate in Fourier space. The inverse Fourier transform is calculated using four different methods to test their performance and accuracy. Further, to avoid numerical errors, approximate formulas in Fourier space are derived. Fast solutions for calculation of the spatially resolved reflectance and transmittance from the N-layered turbid media (approximately 10 ms) with small relative differences (<10^-7) are found. Additionally, the solutions of the diffusion equation are compared to Monte Carlo simulations for turbid media having up to 20 layers.

  18. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.

  19. Hydraulic head estimation at unobserved locations: Approximating the distribution of the absolute error based on geologic interpretations

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini

    2017-04-01

    Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow based on a properly selected set of alternative but physically plausible geologic structures. In this work, we use: 1) dimensional analysis, and 2) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.

  20. Design and simulation of a 800 Mbit/s data link for magnetic resonance imaging wearables.

    PubMed

    Vogt, Christian; Buthe, Lars; Petti, Luisa; Cantarella, Giuseppe; Munzenrieder, Niko; Daus, Alwin; Troster, Gerhard

    2015-08-01

    This paper presents the optimization of electronic circuitry for operation in the harsh electromagnetic (EM) environment during a magnetic resonance imaging (MRI) scan. As demonstrator, a device small enough to be worn during the scan is optimized. Based on finite element method (FEM) simulations, the induced current densities due to magnetic field changes of 200 T s^-1 were reduced from 1 × 10^10 A m^-2 by one order of magnitude, predicting error-free operation of the 1.8 V logic employed. The simulations were validated using a bit error rate test, which showed no bit errors during an MRI scan sequence. Therefore, neither the logic nor the utilized 800 Mbit s^-1 low voltage differential swing (LVDS) data link of the optimized wearable device was significantly influenced by the EM interference. Next, the influence of ferromagnetic components on the static magnetic field, and consequently the image quality, was simulated, showing an MRI image loss with approximately 2 cm radius around a commercial integrated circuit of 1×1 cm^2. This was successfully validated by a conventional MRI scan.

  1. Increasing the applicability of density functional theory. V. X-ray absorption spectra with ionization potential corrected exchange and correlation potentials.

    PubMed

    Verma, Prakash; Bartlett, Rodney J

    2016-07-21

    Core excitation energies are computed with time-dependent density functional theory (TD-DFT) using the ionization energy corrected exchange and correlation potential QTP(0,0). QTP(0,0) provides C, N, and O K-edge spectra to about an electron volt. A mean absolute error (MAE) of 0.77 eV and a maximum error of 2.6 eV are observed for QTP(0,0) for many small molecules. TD-DFT based on QTP(0,0) is then used to describe the core-excitation spectra of the 22 amino acids. TD-DFT with conventional functionals greatly underestimates core excitation energies, largely due to the significant error in the Kohn-Sham occupied eigenvalues. In contrast, the ionization energy corrected potential, QTP(0,0), provides excellent approximations (MAE of 0.53 eV) for core ionization energies as eigenvalues of the Kohn-Sham equations. As a consequence, core excitation energies are accurately described with QTP(0,0), as are the core ionization energies important in X-ray photoionization spectra or electron spectroscopy for chemical analysis.

  2. Effect of twist on single-mode fiber-optic 3 × 3 couplers

    NASA Astrophysics Data System (ADS)

    Chen, Dandan; Ji, Minning; Peng, Lei

    2018-01-01

    In the fabrication process of a 3 × 3 fused tapered coupler, the three fibers are usually twisted to keep them in close contact. The effect of twist on 3 × 3 fused tapered couplers is investigated in this paper. It is found that although a linear 3 × 3 coupler may realize an equal power splitting ratio theoretically by twisting through a special angle, it is difficult to fabricate in practice because the twist angle and the coupler's length must be determined in advance. An equilateral 3 × 3 coupler, by contrast, can not only realize an approximately equal power splitting ratio theoretically but can also be fabricated simply by controlling the elongation length. The effect of twist on the equilateral 3 × 3 coupler lies in the relationship between the equal-ratio error and the twist angle: the larger the twist angle, the larger the equal-ratio error may be. The twist angle usually should be no larger than 90° over one coupling-period length in order to keep the equal-ratio error small enough. The simulation results agree well with the experimental data.

  3. Predictive error detection in pianists: a combined ERP and motion capture study

    PubMed Central

    Maidhof, Clemens; Pitkäniemi, Anni; Tervaniemi, Mari

    2013-01-01

    Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported a pre-error negativity already occurring approximately 70–100 ms before the error was committed and audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error-monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an upcoming error. 
PMID:24133428

  4. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  5. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE PAGES

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    2017-02-05

    Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
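
    The implicit–explicit splitting underlying such schemes can be illustrated with a minimal sketch (an assumed stiff test problem, not the adjoint-based error estimator developed in the record above): a first-order IMEX Euler step treats the stiff linear part implicitly and the remaining forcing explicitly.

    ```python
    import math

    # Stiff test problem y' = lam*(y - cos t) - sin t with lam = -1000 and
    # y(0) = 1, whose exact solution is y(t) = cos t.  The stiff linear part
    # is treated implicitly, the forcing term explicitly (IMEX Euler).
    lam, dt, steps = -1000.0, 0.01, 100

    def imex_euler():
        y, t = 1.0, 0.0
        for _ in range(steps):
            # y_{n+1} = y_n + dt*(-sin t_n) + dt*lam*(y_{n+1} - cos t_{n+1})
            y = (y - dt * math.sin(t) - dt * lam * math.cos(t + dt)) / (1.0 - dt * lam)
            t += dt
        return y

    def explicit_euler():
        y, t = 1.0, 0.0
        for _ in range(steps):
            y = y + dt * (lam * (y - math.cos(t)) - math.sin(t))
            t += dt
        return y

    print(abs(imex_euler() - math.cos(1.0)))      # small: implicit treatment keeps the step stable
    print(abs(explicit_euler() - math.cos(1.0)))  # huge: dt*|lam| = 10 violates the explicit stability limit
    ```

    The implicit treatment only requires solving a scalar linear equation per step here, while a fully explicit step with the same dt diverges; this is the usual motivation for IMEX splittings.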

  6. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with randomly generated generator matrices, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods that require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
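
    The approximation P_b ≈ (d_H/N)·P_s can be checked with a quick Monte Carlo sketch (the systematic (7,4) Hamming code and the channel parameters below are assumptions for illustration, not taken from the paper); on a binary symmetric channel, maximum likelihood decoding reduces to minimum Hamming distance decoding.

    ```python
    import itertools, random

    # Systematic generator matrix of the (7,4) Hamming code (d_H = 3).
    G = [[1, 0, 0, 0, 1, 1, 0],
         [0, 1, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 0, 1, 1],
         [0, 0, 0, 1, 1, 1, 1]]

    def encode(msg):
        # Codeword c_j = sum_i msg_i * G[i][j] (mod 2).
        return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

    CODEBOOK = {tuple(encode(m)): m for m in itertools.product((0, 1), repeat=4)}

    def ml_decode(received):
        # On a BSC, ML decoding = minimum Hamming distance decoding.
        best = min(CODEBOOK, key=lambda c: sum(a != b for a, b in zip(c, received)))
        return CODEBOOK[best]

    random.seed(0)
    p, trials = 0.03, 50000
    block_errors = bit_errors = 0
    for _ in range(trials):
        msg = tuple(random.randint(0, 1) for _ in range(4))
        word = encode(msg)
        received = [b ^ (random.random() < p) for b in word]  # BSC bit flips
        decoded = ml_decode(received)
        if decoded != msg:
            block_errors += 1
            bit_errors += sum(a != b for a, b in zip(decoded, msg))

    P_s = block_errors / trials
    P_b = bit_errors / (4 * trials)
    print(P_s, P_b, P_b / P_s)  # ratio should sit near d_H/N = 3/7 at small p
    ```

    For this systematic code the simulated ratio P_b/P_s comes out in the vicinity of d_H/N, consistent with the trend the paper describes for systematic encoding.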

  7. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    NASA Astrophysics Data System (ADS)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all of the above measurement cases. The advantages of CWLS include performance optimality and the capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators are approximately unbiased and attain the Cramér–Rao lower bound when the measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
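
    The underlying estimation problem can be sketched with plain (unweighted) nonlinear least squares solved by Gauss–Newton; this is not the paper's CWLS estimator, and the anchor layout and terminal position below are invented for illustration.

    ```python
    import math

    # Four fixed anchors (e.g. base stations) and noiseless TOA ranges to an
    # unknown terminal at (3, 4); all values are illustrative only.
    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
    truth = (3.0, 4.0)
    ranges = [math.hypot(truth[0] - ax, truth[1] - ay) for ax, ay in anchors]

    def gauss_newton(anchors, ranges, x0, iters=25):
        """Nonlinear least squares fit of a 2-D position to range measurements."""
        x, y = x0
        for _ in range(iters):
            # Accumulate the 2x2 normal equations J'J d = -J'r by hand.
            a11 = a12 = a22 = g1 = g2 = 0.0
            for (ax, ay), d in zip(anchors, ranges):
                est = math.hypot(x - ax, y - ay)
                jx, jy = (x - ax) / est, (y - ay) / est   # Jacobian row
                r = est - d                                # range residual
                a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
                g1 += jx * r;   g2 += jy * r
            det = a11 * a22 - a12 * a12
            x -= (a22 * g1 - a12 * g2) / det
            y -= (a11 * g2 - a12 * g1) / det
        return x, y

    print(gauss_newton(anchors, ranges, (5.0, 5.0)))
    ```

    With noiseless ranges the iteration recovers the terminal position essentially exactly; a weighted variant would scale each residual by its inverse error variance, which is the direction the CWLS formulation takes.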

  8. Shape and energy consistent pseudopotentials for correlated electron systems

    PubMed Central

    Needs, R. J.

    2017-01-01

    A method is developed for generating pseudopotentials for use in correlated-electron calculations. The paradigms of shape and energy consistency are combined and defined in terms of correlated-electron wave functions. The resulting energy-consistent correlated-electron pseudopotentials (eCEPPs) are constructed for H, Li–F, Sc–Fe, and Cu. Their accuracy is quantified by comparing the relaxed molecular geometries and dissociation energies that they provide with all-electron results, with all quantities evaluated using coupled cluster singles, doubles, and triples calculations. Errors inherent in the pseudopotentials are also compared with those arising from a number of approximations commonly used with pseudopotentials. The eCEPPs provide a significant improvement in optimised geometries and dissociation energies for small molecules, with errors for the latter being an order of magnitude smaller than for Hartree–Fock-based pseudopotentials available in the literature. Gaussian basis sets are optimised for use with these pseudopotentials. PMID:28571391

  9. Some Surprising Errors in Numerical Differentiation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2012-01-01

    Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
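
    The first-order error pattern for the Newton difference quotient can be sketched as follows (a generic illustration in the spirit of the abstract, not the article's own examples): the Taylor expansion gives an error of approximately -(h/2)·sin(x) for the derivative of sin, so halving h halves the error.

    ```python
    import math

    def forward_diff(f, x, h):
        """Newton difference quotient (f(x+h) - f(x)) / h."""
        return (f(x + h) - f(x)) / h

    x = 1.0
    exact = math.cos(x)  # d/dx sin(x)
    for h in (1e-2, 5e-3, 2.5e-3):
        err = forward_diff(math.sin, x, h) - exact
        # Taylor expansion: err ≈ -(h/2)*sin(x) - (h^2/6)*cos(x).
        print(h, err)
    ```

    The printed errors shrink by a factor of two with each halving of h, the signature of a first-order approximation.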

  10. Simulation of eye-tracker latency, spot size, and ablation pulse depth on the correction of higher order wavefront aberrations with scanning spot laser systems.

    PubMed

    Bueeler, Michael; Mrochen, Michael

    2005-01-01

    The aim of this theoretical work was to investigate the robustness of scanning spot laser treatments with different laser spot diameters and peak ablation depths in case of incomplete compensation of eye movements due to eye-tracker latency. Scanning spot corrections of 3rd to 5th Zernike order wavefront errors were numerically simulated. Measured eye-movement data were used to calculate the positioning error of each laser shot assuming eye-tracker latencies of 0, 5, 30, and 100 ms, and for the case of no eye tracking. The single spot ablation depth ranged from 0.25 to 1.0 microm and the spot diameter from 250 to 1000 microm. The quality of the ablation was rated by the postoperative surface variance and the Strehl intensity ratio, which was calculated after a low-pass filter was applied to simulate epithelial surface smoothing. Treatments performed with nearly ideal eye tracking (latency approximately 0) provide the best results with a small laser spot (0.25 mm) and a small ablation depth per pulse (0.25 microm). However, combinations of a large spot diameter (1000 microm) and a small ablation depth per pulse (0.25 microm) yield better results for latencies above a certain threshold, to be determined specifically. Treatments performed with tracker latencies on the order of 100 ms yield similar results to treatments done completely without eye-movement compensation. CONCLUSIONS: Reducing the spot diameter was shown to make the correction more susceptible to eye-movement-induced error. A smaller spot size is only beneficial when eye movement is neutralized by a tracking system with a latency <5 ms.

  11. The effect of withdrawal of visual presentation of errors upon the frequency spectrum of tremor in a manual task

    PubMed Central

    Sutton, G. G.; Sykes, K.

    1967-01-01

    1. When a subject attempts to exert a steady pressure on a joystick he makes small unavoidable errors which, irrespective of their origin or frequency, may be called tremor. 2. Frequency analysis shows that low frequencies always contribute much more to the total error than high frequencies. If the subject is not allowed to check his performance visually, but has to rely on sensations of pressure in the finger tips, etc., the error power spectrum plotted on logarithmic co-ordinates approximates to a straight line falling at 6 db/octave from 0·4 to 9 c/s. In other words the amplitude of the tremor component at each frequency is inversely proportional to frequency. 3. When the subject is given a visual indication of his errors on an oscilloscope the shape of the tremor spectrum alters. The most striking change is the appearance of a tremor peak at about 9 c/s, but there is also a significant increase of error in the range 1-4 c/s. The extent of these changes varies from subject to subject. 4. If the 9 c/s peak represents oscillation of a muscle length-servo it would appear that greater use is made of this servo when positional information is available from the eyes than when proprioceptive impulses from the limbs have to be relied on. PMID:6048997

  12. On the connection between multigrid and cyclic reduction

    NASA Technical Reports Server (NTRS)

    Merriam, M. L.

    1984-01-01

    A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known in the literature as Multigrid Reduction methods. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error. These are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
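
    As the abstract notes, the one-dimensional method reduces to cyclic reduction, which is a direct tridiagonal solver. The sketch below is standard textbook cyclic reduction for n = 2^k - 1 unknowns (not code from the report), applied to a 1-D Poisson problem.

    ```python
    import math

    def cyclic_reduction(sub, diag, sup, rhs):
        """Direct solve of a tridiagonal system with n = 2**k - 1 unknowns
        by cyclic reduction (sub[0] and sup[-1] must be 0)."""
        a, b, c, d = list(sub), list(diag), list(sup), list(rhs)
        n = len(b)
        k = (n + 1).bit_length() - 1
        # Forward phase: repeatedly eliminate every other unknown.
        for level in range(k - 1):
            stride = 2 ** (level + 1)
            half = stride // 2
            for i in range(stride - 1, n, stride):
                im, ip = i - half, i + half
                alpha = a[i] / b[im]
                gamma = c[i] / b[ip] if ip < n else 0.0
                b[i] -= alpha * c[im] + (gamma * a[ip] if ip < n else 0.0)
                d[i] -= alpha * d[im] + (gamma * d[ip] if ip < n else 0.0)
                a[i] = -alpha * a[im]
                c[i] = -gamma * c[ip] if ip < n else 0.0
        # Back substitution, from the middle unknown outward.
        x = [0.0] * n
        for level in range(k - 1, -1, -1):
            stride = 2 ** (level + 1)
            half = stride // 2
            for i in range(half - 1, n, stride):
                left = x[i - half] if i >= half else 0.0
                right = x[i + half] if i + half < n else 0.0
                x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
        return x

    # 1-D Poisson problem -u'' = pi^2 sin(pi x), u(0) = u(1) = 0.
    n = 63
    h = 1.0 / (n + 1)
    sub = [0.0] + [-1.0] * (n - 1)
    sup = [-1.0] * (n - 1) + [0.0]
    diag = [2.0] * n
    rhs = [h * h * math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
    x = cyclic_reduction(sub, diag, sup, rhs)
    print(max(abs(x[i] - math.sin(math.pi * (i + 1) * h)) for i in range(n)))
    ```

    The solver is exact up to round-off (the residual of the linear system is at machine-precision level), so the only error left is the O(h^2) discretization error of the finite-difference stencil.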

  13. Simulation of water-table aquifers using specified saturated thickness

    USGS Publications Warehouse

    Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.

    2014-01-01

    Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.
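
    A toy 1-D illustration of the specified-thickness idea (invented numbers, not one of the article's test problems): compare the Dupuit unconfined head profile with the linear profile obtained when the transmissivity is fixed a priori, as in the "confined" option.

    ```python
    import math

    # Steady 1-D flow between fixed heads h0 and hL a distance L apart.
    # Unconfined (Dupuit): h(x) = sqrt(h0^2 - (h0^2 - hL^2) * x / L).
    # Specified-thickness approximation: constant transmissivity, so the
    # head profile is linear between the boundary heads.
    h0, hL, L = 50.0, 40.0, 1000.0

    def h_unconfined(x):
        return math.sqrt(h0 ** 2 - (h0 ** 2 - hL ** 2) * x / L)

    def h_specified(x):
        return h0 - (h0 - hL) * x / L

    xs = [L * i / 1000 for i in range(1001)]
    max_err = max(abs(h_unconfined(x) - h_specified(x)) for x in xs)
    print(max_err, 100 * max_err / h0)  # metres, and % of initial saturated thickness
    ```

    Even with a 20% drawdown at the outflow boundary, the maximum head error of the linearized profile stays well under 1% of the initial saturated thickness in this toy setting, consistent with the article's point that the approximation is often acceptable.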

  14. A practical method of estimating standard error of age in the fission track dating method

    USGS Publications Warehouse

    Johnson, N.M.; McGee, V.E.; Naeser, C.W.

    1979-01-01

    A first-order approximation formula for the propagation of error in the fission-track age equation is given by P_A = C[P_s² + P_i² + P_Φ² − 2rP_sP_i]^(1/2), where P_A, P_s, P_i, and P_Φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, generally acting to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
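
    The propagation formula is straightforward to evaluate; the snippet below takes C = 1 and illustrative percentage errors (both assumptions, not values from the paper) to show how a positive correlation r between the two track densities reduces P_A.

    ```python
    import math

    def age_percent_error(Ps, Pi, Pphi, r, C=1.0):
        """First-order propagation: P_A = C*sqrt(Ps^2 + Pi^2 + Pphi^2 - 2*r*Ps*Pi)."""
        return C * math.sqrt(Ps ** 2 + Pi ** 2 + Pphi ** 2 - 2.0 * r * Ps * Pi)

    # A positive correlation between spontaneous and induced track densities
    # improves the standard error of age; compare r = 0 with r = 0.8.
    print(age_percent_error(10.0, 8.0, 3.0, 0.0))
    print(age_percent_error(10.0, 8.0, 3.0, 0.8))
    ```

    With these illustrative inputs the correlated case roughly halves the uncorrelated percentage error, which is the effect the abstract attributes to r.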

  15. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  16. On the convergence of local approximations to pseudodifferential operators with applications

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1994-01-01

    We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L¹ error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist–Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.

  17. A Generalization of the Karush-Kuhn-Tucker Theorem for Approximate Solutions of Mathematical Programming Problems Based on Quadratic Approximation

    NASA Astrophysics Data System (ADS)

    Voloshinov, V. V.

    2018-03-01

    In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
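
    The flavor of such δ-optimality (approximate KKT) conditions can be sketched on a toy inequality-constrained problem. The example problem and tolerances are invented, and the multiplier is recovered here by a simple least-squares fit rather than by the paper's approximating quadratic program.

    ```python
    import math

    # Toy problem: minimize f(x) = (x1-2)^2 + (x2-1)^2  s.t.  g(x) = x1 + x2 - 2 <= 0.
    # Exact solution: x* = (1.5, 0.5) with multiplier lambda = 1.
    def grad_f(x):
        return (2 * (x[0] - 2), 2 * (x[1] - 1))

    grad_g = (1.0, 1.0)

    def kkt_residuals(x):
        """Stationarity and complementarity residuals at an approximate solution."""
        gf = grad_f(x)
        g = x[0] + x[1] - 2
        # Least-squares multiplier: argmin over lam >= 0 of ||grad_f + lam*grad_g||.
        lam = max(0.0, -(gf[0] * grad_g[0] + gf[1] * grad_g[1])
                  / (grad_g[0] ** 2 + grad_g[1] ** 2))
        stat = math.hypot(gf[0] + lam * grad_g[0], gf[1] + lam * grad_g[1])
        comp = abs(lam * g)
        return stat, comp

    print(kkt_residuals((1.5, 0.5)))    # exact solution: both residuals 0
    print(kkt_residuals((1.49, 0.50)))  # approximate solution: small nonzero residuals
    ```

    An approximate solution satisfies the KKT conditions only up to small residuals; a δ-optimality condition formalizes how large those residuals may be for a point to count as δ-optimal.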

  18. Numerical calculation of thermo-mechanical problems at large strains based on complex step derivative approximation of tangent stiffness matrices

    NASA Astrophysics Data System (ADS)

    Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg

    2015-05-01

    In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
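
    The complex-step derivative itself is easy to demonstrate on a scalar function (a generic textbook example, not the paper's finite element setting): perturbing along the imaginary axis involves no subtraction, so there is no round-off cancellation and the step can be made arbitrarily small.

    ```python
    import cmath, math

    def f(z):
        return cmath.exp(z) * cmath.sin(z)

    def df_exact(x):
        return math.exp(x) * (math.sin(x) + math.cos(x))

    x = 0.7
    h = 1e-30
    # Complex-step derivative: f'(x) ≈ Im(f(x + i*h)) / h, no subtraction involved.
    d_cs = f(x + 1j * h).imag / h
    # Classical forward difference: accuracy limited by truncation vs. cancellation.
    d_fd = (f(x + 1e-6).real - f(x).real) / 1e-6
    print(abs(d_cs - df_exact(x)), abs(d_fd - df_exact(x)))
    ```

    The complex-step result is accurate to machine precision even with h = 1e-30, while the forward difference is stuck around its truncation error; this is exactly why the perturbation size can be chosen freely in the paper's scheme.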

  19. First-row diatomics: Calculation of the geometry and energetics using self-consistent gradient-functional approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutzler, F.W.; Painter, G.S.

    1992-02-15

    A fully self-consistent series of nonlocal (gradient) density-functional calculations has been carried out using the augmented-Gaussian-orbital method to determine the magnitude of gradient corrections to the potential-energy curves of the first-row diatomics, Li{sub 2} through F{sub 2}. Both the Langreth-Mehl-Hu and the Perdew-Wang gradient-density functionals were used in calculations of the binding energy, bond length, and vibrational frequency for each dimer. Comparison with results obtained in the local-spin-density approximation (LSDA) using the Vosko-Wilk-Nusair functional, and with experiment, reveals that bond lengths and vibrational frequencies are rather insensitive to details of the gradient functionals, including self-consistency effects, but the gradient corrections reduce the overbinding commonly observed in the LSDA calculations of first-row diatomics (with the exception of Li{sub 2}, the gradient-functional binding-energy error is only 50–12% of the LSDA error). The improved binding energies result from a large differential energy lowering, which occurs in open-shell atoms relative to the diatomics. The stabilization of the atom arises from the use of nonspherical charge and spin densities in the gradient-functional calculations. This stabilization is negligibly small in LSDA calculations performed with nonspherical densities.

  20. Simplification of the kinematic model of human movement

    NASA Astrophysics Data System (ADS)

    Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; del Prado Martinez, David

    2013-10-01

    The paper presents a method of simplifying the human gait model. The experimental data were obtained in the laboratory of the SATI group in the Electronics Engineering Department of the University of Valencia. As a result of the Mean Double Step (MDS) procedure, the human motion was described by a matrix containing the Cartesian coordinates of 26 markers placed on the human body, recorded at 100 time points. With these data it has been possible to develop a software application which performs a wide variety of tasks, such as array simplification, mask calculation for the simplification, and error calculation, as well as tools for signal comparison and movement animation of the markers. Simplifications were made by spectral analysis of the signals and by calculating the standard deviation of the differences between each signal and its approximation. Using this method, the displacement signals can be written as time series limited to a small number of harmonic components. This approach allows a high degree of data compression. The model presented in this work can be applied in the context of medical diagnostics or rehabilitation because, for a given approximation error, a larger number of required harmonics may indicate abnormalities (orthopaedic symptoms) in the gait cycle.
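
    The harmonic-truncation step can be sketched as follows (a synthetic signal stands in for a marker trajectory; the MDS data themselves are not reproduced here): keep the mean and the first K DFT harmonics and measure the standard deviation of the residual.

    ```python
    import cmath, math

    N = 100
    t = [n / N for n in range(N)]
    # Synthetic periodic "marker" trajectory built from a few harmonics (assumed data).
    signal = [math.sin(2 * math.pi * u) + 0.3 * math.sin(4 * math.pi * u + 0.5)
              + 0.05 * math.sin(12 * math.pi * u) for u in t]

    def truncated_series(signal, K):
        """Reconstruct the signal from its mean and first K DFT harmonics."""
        N = len(signal)
        coeff = [sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                     for n in range(N)) / N for k in range(K + 1)]
        return [coeff[0].real + sum(2 * (coeff[k] * cmath.exp(2j * math.pi * k * n / N)).real
                                    for k in range(1, K + 1)) for n in range(N)]

    def residual_std(signal, K):
        approx = truncated_series(signal, K)
        res = [s - a for s, a in zip(signal, approx)]
        mean = sum(res) / len(res)
        return math.sqrt(sum((r - mean) ** 2 for r in res) / len(res))

    for K in (1, 2, 6):
        print(K, residual_std(signal, K))
    ```

    The residual standard deviation drops sharply as harmonics are added and vanishes once all components of the synthetic signal are covered; in the paper's setting this trade-off between K and the approximation error drives the compression.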

  1. An approach to the analysis of performance of quasi-optimum digital phase-locked loops.

    NASA Technical Reports Server (NTRS)

    Polk, D. R.; Gupta, S. C.

    1973-01-01

    An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.

  2. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  3. A theory for modeling ground-water flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, Richard L.

    2004-01-01

    Construction of a ground-water model for a field area is not a straightforward process. Data are virtually never complete or detailed enough to allow substitution into the model equations and direct computation of the results of interest. Formal model calibration through optimization, statistical, and geostatistical methods is being applied to an increasing extent to deal with this problem and provide for quantitative evaluation and uncertainty analysis of the model. However, these approaches are hampered by two pervasive problems: 1) nonlinearity of the solution of the model equations with respect to some of the model (or hydrogeologic) input variables (termed in this report system characteristics) and 2) detailed and generally unknown spatial variability (heterogeneity) of some of the system characteristics such as log hydraulic conductivity, specific storage, recharge and discharge, and boundary conditions. A theory is developed in this report to address these problems. The theory allows construction and analysis of a ground-water model of flow (and, by extension, transport) in heterogeneous media using a small number of lumped or smoothed system characteristics (termed parameters). The theory fully addresses both nonlinearity and heterogeneity in such a way that the parameters are not assumed to be effective values. The ground-water flow system is assumed to be adequately characterized by a set of spatially and temporally distributed discrete values, β, of the system characteristics. This set contains both small-scale variability that cannot be described in a model and large-scale variability that can. The spatial and temporal variability in β are accounted for by imagining β to be generated by a stochastic process wherein β is normally distributed, although normality is not essential. Because β has too large a dimension to be estimated using the data normally available, for modeling purposes β is replaced by a smoothed or lumped approximation yβ* (where y is a spatial and temporal interpolation matrix). The set yβ* has the same form as the expected value of β, yβ̄, where β̄ is the set of drift parameters of the stochastic process; β* is a best-fit vector to β. A model function f(β), such as a computed hydraulic head or flux, is assumed to accurately represent an actual field quantity, but the same function written using yβ*, f(yβ*), contains error from lumping or smoothing of β using yβ*. Thus, the replacement of β by yβ* yields nonzero-mean model errors of the form E(f(β) − f(yβ*)) throughout the model and covariances between model errors at points throughout the model. These nonzero means and covariances are evaluated through third- and fifth-order accuracy, respectively, using Taylor series expansions. They can have a significant effect on construction and interpretation of a model that is calibrated by estimating β*. Vector β* is estimated as β̂ using weighted nonlinear least squares techniques to fit a set of model functions f(yβ̂) to a corresponding set of observations of f(β), Y. These observations are assumed to be corrupted by zero-mean, normally distributed observation errors, although, as for β, normality is not essential. An analytical approximation of the nonlinear least squares solution is obtained using Taylor series expansions and perturbation techniques that assume model and observation errors to be small. This solution is used to evaluate biases and other results to second-order accuracy in the errors. The correct weight matrix to use in the analysis is shown to be the inverse of the second-moment matrix E[(Y − f(yβ*))(Y − f(yβ*))′], but the weight matrix is assumed to be arbitrary in most developments. The best diagonal approximation is the inverse of the matrix of diagonal elements of E[(Y − f(yβ*))(Y − f(yβ*))′], and a method of estimating this diagonal matrix when it is unknown is developed using a special objective function to compute β̂.
When considered to be an estimate of f

  4. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190

  5. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    PubMed

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.

  6. Analysis of the Accuracy of Ballistic Descent from a Circular Circumterrestrial Orbit

    NASA Astrophysics Data System (ADS)

    Sikharulidze, Yu. G.; Korchagin, A. N.

    2002-01-01

    The problem of transporting the results of experiments and observations back to Earth arises every so often in space research. Its simplest and lowest-cost solution is the use of a small ballistic reentry spacecraft. Such a spacecraft has no system for controlling the descent trajectory in the atmosphere. This can result in a large spread of landing points, which makes it difficult to search for the spacecraft and very often jeopardizes a safe landing. In this work, the choice of a compromise flight scheme is considered, which includes the optimum braking maneuver, adequate conditions of entry into the atmosphere with limited heating and overload, and also the possibility of landing within the limits of a circle with a radius of 12.5 km. The following disturbing factors were taken into account in the analysis of landing accuracy: the errors of the braking impulse execution, variations in atmospheric density and wind, the error in the specification of the ballistic coefficient of the reentry spacecraft, and a displacement of its center of mass from the symmetry axis. It is demonstrated that the optimum maneuver assures the maximum absolute value of the reentry angle and the insensitivity of the descent trajectory to small errors in the orientation of the braking engine in the plane of the orbit. It is also demonstrated that the possible error of the landing point due to the error in the specification of the ballistic coefficient does not depend (in the linear approximation) upon the coefficient's value, but only upon the reentry angle and the accuracy of its specification. A guided parachute with an aerodynamic efficiency of about two should be used on the last leg of the reentry trajectory. This allows landing within a prescribed area and produces adequate conditions for the interception of the reentry spacecraft by a helicopter in order to prevent a rough landing.

  7. Two schemes for quantitative photoacoustic tomography based on Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yubin; Yuan, Zhen, E-mail: zhenyuan@umac.mo

    Purpose: The aim of this study was to develop novel methods for photoacoustically determining the optical absorption coefficient of biological tissues using Monte Carlo (MC) simulation. Methods: In this study, the authors propose two quantitative photoacoustic tomography (PAT) methods for mapping the optical absorption coefficient. The reconstruction methods combine conventional PAT with MC simulation in a novel way to determine the optical absorption coefficient of biological tissues or organs. Specifically, the authors' two schemes were theoretically and experimentally examined using simulations, tissue-mimicking phantoms, and ex vivo and in vivo tests. In particular, the authors explored these methods using several objects with different absorption contrasts embedded in turbid media and by using high-absorption media when the diffusion approximation was not effective at describing the photon transport. Results: The simulations and experimental tests showed that the reconstructions were quantitatively accurate in terms of the locations, sizes, and optical properties of the targets. The positions of the recovered targets were assessed by the property profiles, where the authors discovered that the off-center error was less than 0.1 mm for the circular target. Meanwhile, the sizes and quantitative optical properties of the targets were quantified by estimating the full width at half maximum of the optical absorption property. Interestingly, for the reconstructed sizes, the authors discovered that the errors ranged from 0% for relatively small targets to 26% for relatively large targets, whereas for the recovered optical properties, the errors ranged from 0% to 12.5% for different cases. Conclusions: The authors found that their methods can quantitatively reconstruct absorbing objects of different sizes and optical contrasts even when the diffusion approximation is unable to accurately describe the photon propagation in biological tissues. In particular, their methods are able to resolve the intrinsic difficulties that occur when quantitative PAT is conducted by combining conventional PAT with the diffusion approximation or with radiation transport modeling.

  8. Evaluating the sensitivity of an ice sheet model to changes in bed elevation and inclusion of membrane stresses

    NASA Astrophysics Data System (ADS)

    Aschwanden, Andy; Bueler, Ed; Khroulev, Constantine

    2010-05-01

    To predict Greenland's contribution to global sea level rise in the next few centuries with some confidence, an accurate representation of its current state is crucial. Simulations of the present state of Greenland using the "Parallel Ice Sheet Model" (PISM) capture the essential flow features but overestimate the current volume by about 30%. Possible sources of error include (1) limited understanding of the physical processes involved, (2) the choice of approximations made by the numerical model, (3) the values of tunable parameters, and (4) uncertainties in boundary conditions. The response of an ice sheet model to given forcing contains the above-mentioned error sources, with unknown weights. In this work we focus on a small subset, namely errors arising from uncertainties in bed elevation and from whether or not membrane stresses are included in the stress balance. CReSIS has recently provided updated bedrock maps for Greenland that include high-resolution data for Jacobshavn Isbræ and Petermann Glacier. We present a four-way comparison between the original BEDMAP data, the new CReSIS bedrock data, a non-sliding shallow ice model, and a hybrid model which includes the shallow shelf approximation as a sliding law. Large gradients, possibly present in high-resolution bedrock elevation data, are expected to make a hybrid model the more appropriate choice. To elucidate this question, runs are performed at an unprecedentedly high spatial resolution of 2 km for the whole ice sheet. Finally, model predictions are evaluated against observed quantities such as surface velocities, ice thickness, and temperature profiles in boreholes using different metrics.

  9. Effects of the Temporal Variability of Evapotranspiration on Hydrologic Simulation in Central Florida

    USGS Publications Warehouse

    O'Reilly, Andrew M.

    2007-01-01

    The transient response of a hydrologic system can be of concern to water-resource managers, because it is often extreme, relatively short-lived events, such as floods or droughts, that profoundly influence the management of the resource. The water available to a hydrologic system for streamflow and aquifer recharge is determined by the difference between precipitation and evapotranspiration (ET). As such, temporal variations in precipitation and ET determine the degree of influence each has on the transient response of the hydrologic system. Meteorological, ET, and hydrologic data collected from 1993 to 2003 and spanning periods of 1 to 3 2/3 years were used to develop a hydrologic model for each of five sites in central Florida. The sensitivities of simulated water levels and flows to simple approximations of ET were quantified and the adequacy of each ET approximation was assessed. ET was approximated by computing potential ET, using the Hargreaves and Priestley-Taylor equations, and applying vegetation coefficients to adjust the potential ET values to actual ET. The Hargreaves and Priestley-Taylor ET approximations were used in the calibrated hydrologic models while leaving all other model characteristics and parameter values unchanged. Two primary factors that influence how the temporal variability of ET affects hydrologic simulation in central Florida were identified: (1) the stochastic character of precipitation and ET and (2) the ability of the local hydrologic system to attenuate variability in input stresses. Differences in the stochastic character of precipitation and ET, in both the central location and the spread of the data, result in a substantial influence of precipitation on the quantity and timing of water available to the hydrologic system and a relatively small influence of ET. The temporal variability of ET was considerably less than that of precipitation at each site over a wide range of time scales (from daily to annual).
    However, when precipitation and ET are of similar magnitude, small errors in ET can produce relatively large errors in available water, and accurate estimates of actual ET become more important. Local hydrologic conditions can also be an important factor influencing the hydrologic response to ET variability. Various points along a flow path in a hydrologic system respond differently to temporal variations in ET. For example, soil moisture contents in the root zone are sensitive to daily variations in ET, whereas spring flow responds only to longer-term variations in ET. Both the Hargreaves and Priestley-Taylor equations for potential ET, when applied with an annually invariant monthly vegetation coefficient derived from comparison of measured ET with computed potential ET values, can be used with a hydrologic model to produce reasonable predictions of water levels and flows. Baseline-adjusted modified coefficients of efficiency for simulated water levels ranged from 0.0, indicating that water levels were simulated equally well with approximated ET as with actual ET values, to -0.6, indicating that water levels were simulated better with actual ET values. Simulations using the Hargreaves approximation consistently yielded larger absolute and relative errors than the Priestley-Taylor approximation. However, the differences between the Hargreaves and Priestley-Taylor simulations generally were much smaller than the differences between these simulations and the simulations using actual ET. This suggests that the simpler Hargreaves equation may be an adequate substitute for the more complex Priestley-Taylor equation, depending on the level of accuracy required to satisfy the particular modeling objectives.
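
    The Hargreaves potential-ET equation mentioned above is simple enough to sketch directly. This is the standard Hargreaves (1985) form, with a vegetation coefficient applied as the study describes; the input values are assumed for illustration, not taken from the report:

```python
import math

def hargreaves_pet(tmean_c: float, tmax_c: float, tmin_c: float,
                   ra_mm_day: float) -> float:
    """Hargreaves (1985) potential ET in mm/day.
    ra_mm_day: extraterrestrial radiation expressed as equivalent evaporation."""
    return 0.0023 * ra_mm_day * (tmean_c + 17.8) * math.sqrt(tmax_c - tmin_c)

def actual_et(pet_mm_day: float, vegetation_coeff: float) -> float:
    """Scale potential ET to actual ET with a monthly vegetation coefficient."""
    return vegetation_coeff * pet_mm_day

# Illustrative mid-summer central-Florida day (assumed values)
pet = hargreaves_pet(tmean_c=27.0, tmax_c=33.0, tmin_c=22.0, ra_mm_day=16.0)
print(round(actual_et(pet, 0.85), 2))   # → 4.65
```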

  10. Changes in Astigmatism, Densitometry, and Aberrations After SMILE for Low to High Myopic Astigmatism: A 12-Month Prospective Study.

    PubMed

    Pedersen, Iben Bach; Ivarsen, Anders; Hjortdal, Jesper

    2017-01-01

    To evaluate 12-month changes in refraction, visual outcome, corneal densitometry, and postoperative aberrations after small incision lenticule extraction (SMILE) for myopic astigmatism. This 12-month prospective clinical trial comprised 101 eyes (101 patients) treated with SMILE for myopic astigmatism with a cylinder of 0.75 to 4.00 diopters (D). The preoperative, 1-week, and 1-, 3-, 6-, 9-, and 12-month examinations included measurement of manifest refraction and of uncorrected (UDVA) and corrected (CDVA) distance visual acuity. Astigmatic error vector analysis was performed using the Alpins method. Densitometry and aberrations were evaluated with the Pentacam HR (Oculus Optikgeräte, Wetzlar, Germany). Preoperative spherical equivalent averaged -6.78 ± 1.90 D with 1.81 ± 1.00 D of cylinder correction. After 12 months, 74% and 93% of the eyes were within ±0.50 and ±1.00 D of the attempted refraction, respectively. The logMAR UDVA and CDVA averaged 0.03 ± 0.16 and -0.08 ± 0.09, respectively. Vector analysis showed a with-the-rule undercorrection at 12 months with a mean difference vector of 0.31 D @ 91°. There was a minor counterclockwise rotation of the axis, with an arithmetic angle of error of 0.34° ± 14°. An undercorrection of approximately 11% per diopter of attempted correction was seen at 12 months. Spherical aberrations, coma, and higher-order aberrations remained stable during the postoperative period (P < .09). After 12 months, no increase in densitometry could be identified. Treatment of astigmatism with SMILE seems to be predictable and effective, but with an astigmatic undercorrection of approximately 11% and a small counterclockwise rotation of the axis. [J Refract Surg. 2017;33(1):11-17.]. Copyright 2017, SLACK Incorporated.

  11. Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamiya, Muneaki; Hirata, So; Valiev, Marat

    2008-02-19

    Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly interacting molecular clusters [Hirata et al. Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine–water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole–dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron–Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling, n², with respect to the number of subunits n when n is small, and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods that are not corrected for BSSE.

  12. Self-reported and observed punitive parenting prospectively predicts increased error-related brain activity in six-year-old children

    PubMed Central

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J.; Kujawa, Autumn J.; Laptook, Rebecca S.; Torpey, Dana C.; Klein, Daniel N.

    2017-01-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission—although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children’s ERN approximately three years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately three years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. 
Results suggest that parenting may shape children’s error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this hypothesis. PMID:25092483

  13. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)].

    PubMed

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-07

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases, and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  14. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-01

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases, and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  15. The Relationship between Root Mean Square Error of Approximation and Model Misspecification in Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2012-01-01

    The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…
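
    RMSEA itself has a simple closed form. A minimal sketch of the standard point estimate (Steiger's formula, with negative estimates truncated to zero), using illustrative inputs:

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """Point estimate of the root mean square error of approximation from the
    model chi-square, its degrees of freedom, and the sample size."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Illustrative values (not from the article)
print(round(rmsea(chi_sq=85.4, df=40, n=200), 3))   # → 0.076
print(rmsea(chi_sq=30.0, df=40, n=200))             # better-than-expected fit → 0.0
```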

  16. Les systemes approximatifs et l'enseignement des langues secondes (Approximative Systems and the Teaching of Second Languages).

    ERIC Educational Resources Information Center

    High, Virginia Lacastro

    Errors can be considered concrete representations of stages through which one must go in order to acquire one's native language and a second language. It has been discovered that certain errors appear systematically, revealing an approximate system, or "interlanguage," behind the erroneous utterances. Present research in second language…

  17. Channel correlation and BER performance analysis of coherent optical communication systems with receive diversity over moderate-to-strong non-Kolmogorov turbulence.

    PubMed

    Fu, Yulong; Ma, Jing; Tan, Liying; Yu, Siyuan; Lu, Gaoyuan

    2018-04-10

    In this paper, new expressions of the channel-correlation coefficient and its components (the large- and small-scale channel-correlation coefficients) for a plane wave are derived for a horizontal link in moderate-to-strong non-Kolmogorov turbulence using a generalized effective atmospheric spectrum which includes finite-turbulence inner and outer scales and high-wave-number "bump". The closed-form expression of the average bit error rate (BER) of the coherent free-space optical communication system is derived using the derived channel-correlation coefficients and an α-μ distribution to approximate the sum of the square root of arbitrarily correlated Gamma-Gamma random variables. Analytical results are provided to investigate the channel correlation and evaluate the average BER performance. The validity of the proposed approximation is illustrated by Monte Carlo simulations. This work will help with further investigation of the fading correlation in spatial diversity systems.
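
    The Gamma-Gamma fading that underlies the BER analysis can be sampled as the product of two independent unit-mean Gamma variates. The sketch below is a plain Monte Carlo estimate of the average BER of coherent BPSK under such fading; it assumes independent samples and a simple SNR-scaling model, so it does not reproduce the paper's correlated-branch diversity analysis or its α-μ approximation, and the turbulence parameters are assumed for illustration:

```python
import math
import random

random.seed(1)

def gamma_gamma_sample(alpha: float, beta: float) -> float:
    """One draw of normalized Gamma-Gamma irradiance (unit mean):
    the product of two independent unit-mean Gamma variates."""
    x = random.gammavariate(alpha, 1.0 / alpha)
    y = random.gammavariate(beta, 1.0 / beta)
    return x * y

def avg_ber_bpsk(mean_snr: float, alpha: float, beta: float,
                 trials: int = 50_000) -> float:
    """Monte Carlo average BER of coherent BPSK when the instantaneous SNR is
    mean_snr scaled by Gamma-Gamma fading (independent samples only)."""
    total = 0.0
    for _ in range(trials):
        snr = mean_snr * gamma_gamma_sample(alpha, beta)
        total += 0.5 * math.erfc(math.sqrt(snr))
    return total / trials

# alpha, beta chosen to mimic moderate turbulence (assumed values)
print(avg_ber_bpsk(mean_snr=10.0, alpha=4.0, beta=2.0))
```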

  18. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
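
    The study's setup (fit a polynomial to contaminated samples of a transcendental function, then track the residual error as the sample size grows) can be sketched with ordinary least squares via the normal equations. The model function, noise level, and sample sizes below are assumptions for illustration only:

```python
import math
import random

random.seed(0)

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (Gaussian elimination with partial pivoting)."""
    m = degree + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                      # forward elimination
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        coef[r] = (b[r] - sum(a[r][c] * coef[c] for c in range(r + 1, m))) / a[r][r]
    return coef

def mean_sq_residual(n, noise_sd=0.1, trials=200):
    """Average residual MSE of a quadratic fit to noisy samples of sin(x)."""
    total = 0.0
    for _ in range(trials):
        xs = [random.uniform(0.0, 1.5) for _ in range(n)]
        ys = [math.sin(x) + random.gauss(0.0, noise_sd) for x in xs]
        c = fit_poly(xs, ys, 2)
        total += sum((y - (c[0] + c[1] * x + c[2] * x * x)) ** 2
                     for x, y in zip(xs, ys)) / n
    return total / trials

# Residual MSE settles quickly as the sample grows, echoing the paper's finding
for n in (10, 30, 100):
    print(n, round(mean_sq_residual(n), 4))
```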

  19. Photoproduction of dileptons and photons in p -p collisions at the Large Hadron Collider energies

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Lei; Zhu, Jia-Qing

    2018-03-01

    The production of large-pT dileptons and photons originating from photoproduction processes in p-p collisions at Large Hadron Collider energies is calculated. The comparisons between the exact treatment results and those of the equivalent photon approximation approach are expressed as the Q2 (the virtuality of the photon) and pT distributions. The method developed by Martin and Ryskin is used to avoid double counting when the coherent and incoherent contributions are considered simultaneously. The numerical results indicate that the equivalent photon approximation is only effective in the small-Q2 region and can be used for coherent photoproduction processes with a proper choice of Q2max (the choices Q2max ~ ŝ or ∞ will cause obvious errors), but cannot be used for incoherent photoproduction processes. The exact treatment is needed to deal accurately with the photoproduction of large-pT dileptons and photons.

  20. Small Sample Performance of Bias-corrected Sandwich Estimators for Cluster-Randomized Trials with Binary Outcomes

    PubMed Central

    Li, Peng; Redden, David T.

    2014-01-01

    The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analyses of CRTs with few clusters even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and the minimum total number of clusters needed using the t test and KC correction for CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended in CRTs with binary outcomes because of its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738

  1. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors.

    PubMed

    Kwon, Heon-Ju; Kim, Kyoung Won; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-03-01

    Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Errors in VP and VR were evaluated as percentages of the intraoperatively measured weight (W). Plane-dependent error in VP was defined as the absolute difference between VP and VR. Percent plane-dependent error was defined as |VP - VR|/W × 100. Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g, respectively. Mean error and % error in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the LDLT donors in our study. There was approximately 5% plane-dependent error in VP on CT volumetry. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.
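
    The error definitions in the abstract are plain arithmetic and can be checked directly. Note that the abstract's mean errors were averaged per donor, so plugging in the group means below does not reproduce them exactly; this sketch only illustrates the definitions:

```python
def volumetry_errors(vp: float, vr: float, w: float) -> dict:
    """Percent errors in prospective (vp) and retrospective (vr) CT volumetry
    relative to intraoperative graft weight w, plus the plane-dependent error
    |vp - vr| and its percentage of w, as defined in the abstract."""
    return {
        "%error_VP": abs(vp - w) / w * 100,
        "%error_VR": abs(vr - w) / w * 100,
        "plane_dependent_mL": abs(vp - vr),
        "%plane_dependent": abs(vp - vr) / w * 100,
    }

# Group mean values from the abstract (1 mL of liver treated as ~1 g)
res = volumetry_errors(vp=761.9, vr=755.0, w=696.9)
print({k: round(v, 1) for k, v in res.items()})
# → {'%error_VP': 9.3, '%error_VR': 8.3, 'plane_dependent_mL': 6.9, '%plane_dependent': 1.0}
```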

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Lin, Guang

    In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
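
    The two-stage idea (screen every sample with the cheap surrogate, then re-evaluate only the samples near the failure threshold with the expensive model) can be sketched on a toy one-dimensional problem. The models, threshold, and band width below are assumptions for illustration; the paper's Karhunen-Loève/sliced-inverse-regression dimension reduction and polynomial chaos surrogate are not reproduced:

```python
import math
import random

random.seed(2)

def expensive_model(x: float) -> float:
    """Stand-in for a CPU-demanding transport model (toy 1-D response)."""
    return x + 0.1 * math.sin(5.0 * x)

def surrogate(x: float) -> float:
    """Cheap approximation of the model; here its worst-case error is 0.1."""
    return x

def two_stage_failure_prob(threshold=3.0, band=0.15, n=50_000):
    """Stage 1: screen every Monte Carlo sample with the surrogate.
    Stage 2: re-evaluate with the expensive model only those samples whose
    surrogate output falls within `band` of the failure threshold. Because
    band exceeds the surrogate's worst-case error, decisions made outside
    the band are provably correct."""
    failures = 0
    expensive_calls = 0
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        g = surrogate(x)
        if abs(g - threshold) <= band:       # ambiguous: ask the true model
            expensive_calls += 1
            g = expensive_model(x)
        if g > threshold:
            failures += 1
    return failures / n, expensive_calls

p, calls = two_stage_failure_prob()
print(f"failure probability ~ {p:.5f} with only {calls} expensive-model calls")
```

The small failure probability (about 0.001 here) is estimated while the expensive model is invoked for only a tiny fraction of the samples.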

  3. Adjusting for multiple prognostic factors in the analysis of randomised trials

    PubMed Central

    2013-01-01

    Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata formed by all combinations of the prognostic factors (stratified analysis) when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) what the best method of adjustment is in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation to (1) determine whether a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on the outcome. Results Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios.
Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size. PMID:23898993
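The simulation approach described in this record can be sketched in a few lines. The toy example below (my own construction, not the authors' code) randomises treatment within strata formed by two binary prognostic factors, fits a covariate-adjusted regression under a null treatment effect, and checks that the empirical type I error rate stays near the nominal 5%:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n=200):
    """One null trial: two binary prognostic factors, stratified randomisation."""
    f1 = rng.integers(0, 2, n)
    f2 = rng.integers(0, 2, n)
    stratum = 2 * f1 + f2
    treat = np.zeros(n, dtype=float)
    for s in range(4):                        # balance arms within each stratum
        idx = np.flatnonzero(stratum == s)
        arm = np.tile([0.0, 1.0], len(idx) // 2 + 1)[: len(idx)]
        treat[idx] = rng.permutation(arm)
    # outcome depends on the prognostic factors only (no treatment effect)
    y = 0.5 * f1 + 0.5 * f2 + rng.normal(size=n)
    return treat, f1, f2, y

def adjusted_t(treat, f1, f2, y):
    """t statistic for treatment in an OLS model adjusting for main effects."""
    X = np.column_stack([np.ones_like(y), treat, f1, f2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

nsim, reject = 500, 0
for _ in range(nsim):
    reject += abs(adjusted_t(*one_trial())) > 1.96   # approximate 5% test
rate = reject / nsim
print(f"empirical type I error rate: {rate:.3f}")
```

With main effects only (no interaction between the factors), adjusting for the covariates alone is enough to hold the type I error rate at its nominal level, consistent with the record's conclusion.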

  4. Self-Interaction Error in Density Functional Theory: An Appraisal.

    PubMed

    Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G

    2018-05-03

    Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.

  5. Creating Weather System Ensembles Through Synergistic Process Modeling and Machine Learning

    NASA Astrophysics Data System (ADS)

    Chen, B.; Posselt, D. J.; Nguyen, H.; Wu, L.; Su, H.; Braverman, A. J.

    2017-12-01

Earth's weather and climate are sensitive to a variety of control factors (e.g., initial state, forcing functions). Characterizing the response of the atmosphere to a change in initial conditions or model forcing is critical for weather forecasting (ensemble prediction) and climate change assessment. Input-response relationships can be quantified by generating an ensemble of multiple (100s to 1000s) realistic realizations of weather and climate states. Atmospheric numerical models generate simulated data through discretized numerical approximation of the partial differential equations (PDEs) governing the underlying physics. However, the computational expense of running high-resolution atmospheric state models makes generation of more than a few simulations infeasible. Here, we discuss an experiment wherein we approximate the numerical PDE solver within the Weather Research and Forecasting (WRF) Model using neural networks trained on a subset of model run outputs. Once trained, these neural nets can produce a large number of realizations of weather states from a small number of deterministic simulations, at speeds that are orders of magnitude faster than the underlying PDE solver. Our neural network architecture is inspired by the governing partial differential equations. These equations are location-invariant and involve first and second spatial derivatives. As such, we use a 3x3 lon-lat grid of atmospheric profiles as the predictor in the neural net to provide the network the information necessary to compute the first and second derivatives. Results indicate that the neural network algorithm can approximate the PDE outputs with a high degree of accuracy (less than 1% error), and that this error increases as a function of the prediction time lag.
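As a toy analogue of this emulation idea (my own 1-D illustration, not the authors' WRF setup), the sketch below fits a stencil-based linear predictor to one-step outputs of an explicit heat-equation solver. Because the solver's update is location-invariant and built from second differences, a learner fed each point's local neighbourhood, mirroring the 3x3 lon-lat predictor in the abstract, recovers the solver's stencil exactly and emulates it over many autoregressive steps:

```python
import numpy as np

rng = np.random.default_rng(1)

# one explicit step of the 1-D heat equation u_t = u_xx on a periodic domain
r = 0.25                                     # dt / dx**2 (stable for r <= 0.5)
def pde_step(u):
    return u + r * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

# training set: random fields and the solver's one-step outputs
nx, nfields = 64, 50
U = rng.normal(size=(nfields, nx))
V = np.array([pde_step(u) for u in U])

# stencil predictor: each point's update learned from its 3-point neighbourhood
X = np.stack([np.roll(U, 1, axis=1), U, np.roll(U, -1, axis=1)], axis=-1)
w, *_ = np.linalg.lstsq(X.reshape(-1, 3), V.reshape(-1), rcond=None)
print("learned stencil:", w)                 # recovers (r, 1 - 2r, r)

# emulate 100 steps autoregressively and compare against the true solver
u_true = u_emul = rng.normal(size=nx)
for _ in range(100):
    u_true = pde_step(u_true)
    stencils = np.stack([np.roll(u_emul, 1), u_emul, np.roll(u_emul, -1)], axis=-1)
    u_emul = stencils @ w
err = np.max(np.abs(u_true - u_emul))
```

Here a linear least-squares fit suffices because the toy solver is linear; the WRF case in the record needs a nonlinear network, but the stencil-as-predictor design principle is the same.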

  6. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  7. Approximating natural connectivity of scale-free networks based on largest eigenvalue

    NASA Astrophysics Data System (ADS)

    Tan, S.-Y.; Wu, J.; Li, M.-J.; Lu, X.

    2016-06-01

It has been recently proposed that natural connectivity can be used to efficiently characterize the robustness of complex networks. The natural connectivity has an intuitive physical meaning and a simple mathematical formulation, corresponding to an average eigenvalue calculated from the graph spectrum. However, for scale-free networks, a model class close to many real-world systems, the spectrum is difficult to obtain analytically. In this article, we investigate the approximation of natural connectivity based on the largest eigenvalue in both random and correlated scale-free networks. It is demonstrated that the natural connectivity of scale-free networks is dominated by the largest eigenvalue, which can be expressed asymptotically and analytically to approximate natural connectivity with small errors. We then show that the natural connectivity of random scale-free networks increases linearly with the average degree for a given scaling exponent and decreases monotonically with the scaling exponent for a given average degree. Moreover, it is found that, for a given degree distribution, the more assortative a scale-free network is, the more robust it is. Experiments on real networks validate our methods and results.
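The approximation can be checked numerically. The sketch below (an illustration, not the authors' code) builds a preferential-attachment graph, computes the natural connectivity ln((1/N) Σ_i exp(λ_i)) exactly from the adjacency spectrum, and compares it with the largest-eigenvalue estimate λ_1 − ln N, which is always a lower bound since every term exp(λ_i) is positive:

```python
import numpy as np

rng = np.random.default_rng(2)

def barabasi_albert(n, m):
    """Adjacency matrix of a preferential-attachment (scale-free) graph."""
    A = np.zeros((n, n))
    repeated = []                        # node list weighted by degree
    targets = list(range(m))
    for v in range(m, n):
        for t in targets:
            A[v, t] = A[t, v] = 1.0
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = set()
        while len(targets) < m:          # m distinct preferential targets
            targets.add(repeated[rng.integers(len(repeated))])
        targets = list(targets)
    return A

n = 400
lam = np.linalg.eigvalsh(barabasi_albert(n, m=10))

# natural connectivity: log of the average exponentiated eigenvalue
exact = np.log(np.mean(np.exp(lam)))
# largest-eigenvalue approximation (a lower bound on the exact value)
approx = lam.max() - np.log(n)
print(exact, approx)
```

For this dense scale-free instance the largest eigenvalue clearly dominates the spectral sum, so the approximation error is a small fraction of the natural connectivity, in line with the record's claim.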

  8. Local models of astrophysical discs

    NASA Astrophysics Data System (ADS)

    Latter, Henrik N.; Papaloizou, John

    2017-12-01

Local models of gaseous accretion discs have been successfully employed for decades to describe an assortment of small-scale phenomena, from instabilities and turbulence, to dust dynamics and planet formation. For the most part, they have been derived in a physically motivated but essentially ad hoc fashion, with some of the mathematical assumptions never made explicit nor checked for consistency. This approach is susceptible to error, and it is easy to derive local models that support spurious instabilities or fail to conserve key quantities. In this paper we present rigorous derivations, based on an asymptotic ordering, and formulate a hierarchy of local models (incompressible, Boussinesq and compressible), making clear which is best suited for a particular flow or phenomenon, while spelling out explicitly the assumptions and approximations of each. We also discuss the merits of the anelastic approximation, emphasizing that anelastic systems struggle to conserve energy unless strong restrictions are imposed on the flow. The problems encountered by the anelastic approximation are exacerbated by the disc's differential rotation, but also attend non-rotating systems such as stellar interiors. We conclude with a defence of local models and their continued utility in astrophysical research.

  9. Resolution of identity approximation for the Coulomb term in molecular and periodic systems.

    PubMed

    Burow, Asbjörn M; Sierka, Marek; Mohamed, Fawzi

    2009-12-07

A new formulation of resolution of identity approximation for the Coulomb term is presented, which uses atom-centered basis and auxiliary basis functions and treats molecular and periodic systems of any dimensionality on an equal footing. It relies on the decomposition of an auxiliary charge density into charged and chargeless components. Applying the Coulomb metric under periodic boundary conditions constrains the explicit form of the charged part. The chargeless component is determined variationally and converged Coulomb lattice sums needed for its determination are obtained using chargeless linear combinations of auxiliary basis functions. The lattice sums are partitioned in near- and far-field portions which are treated through an analytical integration scheme employing two- and three-center electron repulsion integrals and multipole expansions, respectively, operating exclusively in real space. Our preliminary implementation within the TURBOMOLE program package demonstrates consistent accuracy of the method across molecular and periodic systems. Using common auxiliary basis sets the errors of the approximation are small, on average about 20 μhartree per atom, for both molecular and periodic systems.

  10. Resolution of identity approximation for the Coulomb term in molecular and periodic systems

    NASA Astrophysics Data System (ADS)

    Burow, Asbjörn M.; Sierka, Marek; Mohamed, Fawzi

    2009-12-01

A new formulation of resolution of identity approximation for the Coulomb term is presented, which uses atom-centered basis and auxiliary basis functions and treats molecular and periodic systems of any dimensionality on an equal footing. It relies on the decomposition of an auxiliary charge density into charged and chargeless components. Applying the Coulomb metric under periodic boundary conditions constrains the explicit form of the charged part. The chargeless component is determined variationally and converged Coulomb lattice sums needed for its determination are obtained using chargeless linear combinations of auxiliary basis functions. The lattice sums are partitioned in near- and far-field portions which are treated through an analytical integration scheme employing two- and three-center electron repulsion integrals and multipole expansions, respectively, operating exclusively in real space. Our preliminary implementation within the TURBOMOLE program package demonstrates consistent accuracy of the method across molecular and periodic systems. Using common auxiliary basis sets the errors of the approximation are small, on average about 20 μhartree per atom, for both molecular and periodic systems.

  11. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions (NSE, Generalized Error Distribution with BC [BC-GED], and Skew Generalized Error Distribution with BC [BC-SGED]) are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of calibrated models are compared using the observed river discharges and groundwater levels. The result shows that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated results of high and low flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the Gaussian error assumption, under which large errors have low probability while small errors near zero are approximately equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
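The NSE-likelihood equivalence can be verified directly: with the residual variance profiled at its maximum-likelihood value, the Gaussian iid log likelihood is a strictly decreasing function of the mean squared error, exactly as NSE is, so the two criteria rank candidate parameter sets identically. A small sketch (illustrative synthetic data, not the authors' SWAT setup):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 6, 200)
obs = np.sin(x) + 0.1 * rng.normal(size=200)     # synthetic "observations"

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - MSE / variance of observations."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gauss_loglik(sim, obs):
    """Gaussian iid log likelihood with sigma^2 profiled at its MLE (the MSE)."""
    n = len(obs)
    mse = np.mean((sim - obs) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * mse) + 1)

# candidate "model runs": amplitude-perturbed versions of the signal
cands = [a * np.sin(x) for a in np.linspace(0.5, 1.5, 11)]
nses = np.array([nse(c, obs) for c in cands])
lls = np.array([gauss_loglik(c, obs) for c in cands])
print(nses.argmax(), lls.argmax())               # same best candidate
```

Both criteria are monotone transforms of the MSE, so any calibration algorithm driven only by the ranking of objective values behaves identically under either one.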

  12. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
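The smallest member of the binomial code family illustrates the construction. The sketch below (standard textbook material, not taken from the paper) builds the code words |W_up> = (|0> + |4>)/sqrt(2) and |W_dn> = |2> in a truncated Fock basis and checks orthonormality, equal mean boson number, and that a single boson loss maps the code words to orthogonal odd-photon-number states, so the error is detectable by a generalized number-parity measurement:

```python
import numpy as np

dim = 6                                          # Fock space truncated at |5>
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation operator
num = a.T @ a                                    # number operator

def fock(k):
    v = np.zeros(dim)
    v[k] = 1.0
    return v

# smallest binomial code, protecting against a single boson loss:
# Fock amplitudes are square roots of binomial coefficients
W_up = (fock(0) + fock(4)) / np.sqrt(2)
W_dn = fock(2)

# equal mean boson number on both code words (a Knill-Laflamme condition)
n_up = W_up @ num @ W_up
n_dn = W_dn @ num @ W_dn

# one boson loss sends the code words to orthogonal, odd-parity error states
E_up = a @ W_up                                  # proportional to |3>
E_dn = a @ W_dn                                  # proportional to |1>
print(n_up, n_dn, E_up @ E_dn)
```

Both code words live on even Fock states, so a loss event flips the photon-number parity without revealing (and hence without disturbing) the encoded logical information.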

  13. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2004-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).

  14. Scattering of point particles by black holes: Gravitational radiation

    NASA Astrophysics Data System (ADS)

    Hopper, Seth; Cardoso, Vitor

    2018-02-01

    Gravitational waves can teach us not only about sources and the environment where they were generated, but also about the gravitational interaction itself. Here we study the features of gravitational radiation produced during the scattering of a pointlike mass by a black hole. Our results are exact (to numerical error) at any order in a velocity expansion, and are compared against various approximations. At large impact parameter and relatively small velocities our results agree to within percent level with various post-Newtonian and weak-field results. Further, we find good agreement with scaling predictions in the weak-field/high-energy regime. Lastly, we achieve striking agreement with zero-frequency estimates.

  15. Micro-tensile testing system

    DOEpatents

    Wenski, Edward G [Lenexa, KS

    2007-08-21

    A micro-tensile testing system providing a stand-alone test platform for testing and reporting physical or engineering properties of test samples of materials having thicknesses of approximately between 0.002 inch and 0.030 inch, including, for example, LiGA engineered materials. The testing system is able to perform a variety of static, dynamic, and cyclic tests. The testing system includes a rigid frame and adjustable gripping supports to minimize measurement errors due to deflection or bending under load; serrated grips for securing the extremely small test sample; high-speed laser scan micrometers for obtaining accurate results; and test software for controlling the testing procedure and reporting results.

  16. Micro-tensile testing system

    DOEpatents

    Wenski, Edward G.

    2006-01-10

    A micro-tensile testing system providing a stand-alone test platform for testing and reporting physical or engineering properties of test samples of materials having thicknesses of approximately between 0.002 inch and 0.030 inch, including, for example, LiGA engineered materials. The testing system is able to perform a variety of static, dynamic, and cyclic tests. The testing system includes a rigid frame and adjustable gripping supports to minimize measurement errors due to deflection or bending under load; serrated grips for securing the extremely small test sample; high-speed laser scan micrometers for obtaining accurate results; and test software for controlling the testing procedure and reporting results.

  17. Micro-tensile testing system

    DOEpatents

    Wenski, Edward G [Lenexa, KS

    2007-07-17

    A micro-tensile testing system providing a stand-alone test platform for testing and reporting physical or engineering properties of test samples of materials having thicknesses of approximately between 0.002 inch and 0.030 inch, including, for example, LiGA engineered materials. The testing system is able to perform a variety of static, dynamic, and cyclic tests. The testing system includes a rigid frame and adjustable gripping supports to minimize measurement errors due to deflection or bending under load; serrated grips for securing the extremely small test sample; high-speed laser scan micrometers for obtaining accurate results; and test software for controlling the testing procedure and reporting results.

  18. Extrapolation of rotating sound fields.

    PubMed

    Carley, Michael

    2018-03-01

    A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.

  19. A numerical procedure for recovering true scattering coefficients from measurements with wide-beam antennas

    NASA Technical Reports Server (NTRS)

    Wang, Qinglin; Gogineni, S. P.

    1991-01-01

A numerical procedure is presented for estimating the true scattering coefficient, σ0, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ0 if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure was proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of σ0. An exponential model was assumed to take into account the variation of σ0 with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ0 obtained with wide-beam antennas. The proposed procedure is also shown to be insensitive to the assumed σ0 model.
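The correction idea can be illustrated with a one-dimensional sketch (hypothetical beam width and fall-off values, not the authors' antenna data): convolve an assumed exponential σ0(θ) with a Gaussian beam pattern to produce biased "measurements", fit the exponential model to those measurements, and subtract the narrow-beam error that the fitted model predicts:

```python
import numpy as np

# assumed truth: sigma0 falls off exponentially with incidence angle (degrees)
sigma0 = lambda th: np.exp(-0.1 * th)

def apparent(th0, s0, width=5.0):
    """Antenna-weighted average a wide (Gaussian) beam measures at angle th0."""
    th = np.linspace(th0 - 4 * width, th0 + 4 * width, 2001)
    w = np.exp(-0.5 * ((th - th0) / width) ** 2)
    return np.sum(w * s0(th)) / np.sum(w)

angles = np.array([20.0, 40.0])
meas = np.array([apparent(t, sigma0) for t in angles])

# fit the assumed exponential model to the (biased) measurements
B = np.log(meas[0] / meas[1]) / (angles[1] - angles[0])
A = meas[0] * np.exp(B * angles[0])
model = lambda th: A * np.exp(-B * th)

# subtract the narrow-beam error predicted by the fitted model
th0 = 20.0
corrected = meas[0] - (apparent(th0, model) - model(th0))

rel_raw = abs(meas[0] - sigma0(th0)) / sigma0(th0)
rel_cor = abs(corrected - sigma0(th0)) / sigma0(th0)
print(rel_raw, rel_cor)
```

With these illustrative numbers the raw narrow-beam estimate is biased by over ten percent, while a single correction pass reduces the error severalfold; the record's procedure iterates this model-based correction on real antenna patterns.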

  20. Multilevel Sequential Monte Carlo Samplers for Normalizing Constants

    DOE PAGES

    Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...

    2017-08-24

This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.
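The multilevel idea, in its simplest Monte Carlo form (a generic MLMC sketch for a scalar SDE expectation, not the SMC normalizing-constant estimators of the paper), replaces one expensive fine-level estimator by a telescoping sum of cheap, low-variance level differences:

```python
import numpy as np

rng = np.random.default_rng(4)

# Geometric Brownian motion dX = mu*X dt + sig*X dW, Euler-Maruyama with
# step T / 2**l at level l; E[X_T] = X0 * exp(mu*T) is known exactly.
mu, sig, X0, T = 0.05, 0.2, 1.0, 1.0

def level_pair(l, n):
    """n coupled (fine, coarse) terminal values at level l."""
    nf = 2 ** l
    hf = T / nf
    dW = rng.normal(scale=np.sqrt(hf), size=(n, nf))
    Xf = np.full(n, X0)
    for i in range(nf):
        Xf = Xf + mu * Xf * hf + sig * Xf * dW[:, i]
    if l == 0:
        return Xf, np.zeros(n)
    hc = 2 * hf                          # coarse path uses summed increments
    dWc = dW[:, 0::2] + dW[:, 1::2]
    Xc = np.full(n, X0)
    for i in range(nf // 2):
        Xc = Xc + mu * Xc * hc + sig * Xc * dWc[:, i]
    return Xf, Xc

# telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with sample sizes shrinking as the level-difference variance shrinks
est = 0.0
for l, n in enumerate([40000, 20000, 10000, 5000]):
    Xf, Xc = level_pair(l, n)
    est += np.mean(Xf - Xc)

exact = X0 * np.exp(mu * T)
print(est, exact)
```

Because the coupled fine and coarse paths share the same Brownian increments, their difference has small variance, so most samples can be spent on the cheap coarse levels; this is the cost-balancing principle the article transfers to SMC.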

  1. Alignment error envelopes for single particle analysis.

    PubMed

    Jensen, G J

    2001-01-01

To determine the structure of a biological particle to high resolution by electron microscopy, image averaging is required to combine information from different views and to increase the signal-to-noise ratio. Starting from the number of noiseless views necessary to resolve features of a given size, four general factors are considered that increase the number of images actually needed: (1) the physics of electron scattering introduces shot noise, (2) thermal motion and particle inhomogeneity cause the scattered electrons to describe a mixture of structures, (3) the microscope system fails to usefully record all the information carried by the scattered electrons, and (4) image misalignment leads to information loss through incoherent averaging. The compound effect of factors 2-4 is approximated by the product of envelope functions. The problem of incoherent image averaging is developed in detail through derivation of five envelope functions that account for small errors in 11 "alignment" parameters describing particle location, orientation, defocus, magnification, and beam tilt. The analysis provides target error tolerances for single particle analysis to near-atomic (3.5 Å) resolution, and this prospect is shown to depend critically on image quality, defocus determination, and microscope alignment. Copyright 2001 Academic Press.

  2. On High-Order Radiation Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1995-01-01

In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Padé approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.

  3. A dispersion minimizing scheme for the 3-D Helmholtz equation based on ray theory

    NASA Astrophysics Data System (ADS)

    Stolk, Christiaan C.

    2016-06-01

We develop a new dispersion minimizing compact finite difference scheme for the Helmholtz equation in 2 and 3 dimensions. The scheme is based on a newly developed ray theory for difference equations. A discrete Helmholtz operator and a discrete operator to be applied to the source and the wavefields are constructed. Their coefficients are piecewise polynomial functions of hk, chosen such that phase and amplitude errors are minimal. The phase errors of the scheme are very small, approximately as small as those of the 2-D quasi-stabilized FEM method and substantially smaller than those of alternatives in 3-D, assuming the same number of gridpoints per wavelength is used. In numerical experiments, accurate solutions are obtained in constant and smoothly varying media using meshes with only five to six points per wavelength and wave propagation over hundreds of wavelengths. When used as a coarse level discretization in a multigrid method the scheme can even be used with down to three points per wavelength. Tests on 3-D examples with up to 10^8 degrees of freedom show that with a recently developed hybrid solver, the use of coarser meshes can lead to corresponding savings in computation time, resulting in good simulation times compared to the literature.
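For contrast with such dispersion-minimizing coefficients, the phase error of the standard second-order 3-point stencil follows from its discrete dispersion relation (a textbook 1-D calculation, not the scheme of this record): a grid wave exp(i*kt*x) satisfies (2 - 2*cos(kt*h))/h^2 = k^2, so the numerical wavenumber kt deviates from k by roughly (kh)^2/24:

```python
import numpy as np

# relative phase error of the standard 3-point stencil for u'' + k^2 u = 0,
# as a function of the number of grid points per wavelength
def phase_error(ppw, k=1.0):
    h = 2 * np.pi / (k * ppw)              # grid spacing for ppw points/wavelength
    kt = np.arccos(1 - (k * h) ** 2 / 2) / h   # numerical wavenumber
    return abs(kt - k) / k

for ppw in (5, 6, 10, 20):
    print(ppw, phase_error(ppw))
```

At five to six points per wavelength the standard stencil already misplaces phases by several percent per wavelength, which compounds over propagation across hundreds of wavelengths; this is why optimized coefficients are essential at the coarse resolutions the record targets.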

  4. Preliminary Comparisons of the Information Content and Utility of TM Versus MSS Data

    NASA Technical Reports Server (NTRS)

    Markham, B. L.

    1984-01-01

Comparisons were made between subscenes from the first TM scene acquired of the Washington, D.C. area and an MSS scene acquired approximately one year earlier. Three types of analyses were conducted to compare TM and MSS data: a water body analysis, a principal components analysis and a spectral clustering analysis. The water body analysis compared the capability of the TM to the MSS for detecting small uniform targets. Of the 59 ponds located on aerial photographs, 34 (58%) were detected by the TM with six commission errors (15%) and 13 (22%) were detected by the MSS with three commission errors (19%). The smallest water body detected by the TM was 16 meters; the smallest detected by the MSS was 40 meters. For the principal components analysis, means and covariance matrices were calculated for each subscene, and principal components images generated and characterized. In the spectral clustering comparison each scene was independently clustered and the clusters were assigned to informational classes. The preliminary comparison indicated that TM data provides enhancements over MSS in terms of (1) small target detection and (2) data dimensionality (even with 4-band data). The extra dimension, partly resulting from TM band 1, appears useful for built-up/non-built-up area separation.

  5. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.

  6. Stability of iterative procedures with errors for approximating common fixed points of a couple of q-contractive-like mappings in Banach spaces

    NASA Astrophysics Data System (ADS)

    Zeng, Lu-Chuan; Yao, Jen-Chih

    2006-09-01

    Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.

  7. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    PubMed

    Cassidy, Nicola; Duggan, Edel; Williams, David J P; Tracey, Joseph A

    2011-07-01

    Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. 
Medication errors with prescription medication predominated for adults (n = 866), and the major medication classes involved were anti-pyretics and non-opioid analgesics, psychoanaleptics, and psycholeptic agents. Approximately 97% (n = 2279) of medication errors resulted from drug administration errors (comprising a double dose [n = 1040], wrong dose [n = 395], wrong medication [n = 597], wrong route [n = 133], and wrong time [n = 110]). Prescribing and dispensing errors accounted for 0.68% (n = 16) and 2.26% (n = 53) of errors, respectively. Empirical data from poisons information centres facilitate the characterisation of medication errors occurring in the community and across the healthcare spectrum. Poison centre data facilitate the detection of subtle trends in medication errors and can contribute to pharmacovigilance. Collaboration between pharmaceutical manufacturers, consumers, and the medical and regulatory communities is needed to advance patient safety and reduce medication errors.

  8. Measurement uncertainty relations: characterising optimal error bounds for qubits

    NASA Astrophysics Data System (ADS)

    Bullock, T.; Busch, P.

    2018-07-01

In standard formulations of the uncertainty principle, two fundamental features are typically cast as impossibility statements: two noncommuting observables cannot in general both be sharply defined (for the same state), nor can they be measured jointly. The pioneers of quantum mechanics were acutely aware of and puzzled by this fact, and it motivated Heisenberg to seek a mitigation, which he formulated in his seminal paper of 1927. He provided intuitive arguments to show that the values of, say, the position and momentum of a particle can at least be unsharply defined, and that they can be measured together provided some approximation errors are allowed. Only now, nine decades later, is a working theory of approximate joint measurements taking shape, leading to rigorous and experimentally testable formulations of the associated error tradeoff relations. Here we briefly review this new development, explaining the concepts and steps taken in the construction of optimal joint approximations of pairs of incompatible observables. As a case study, we deduce measurement uncertainty relations for qubit observables using two distinct error measures. We provide an operational interpretation of the error bounds and discuss some of the first experimental tests of such relations.

  9. Error analysis and correction of discrete solutions from finite element codes

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.

    1984-01-01

    Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.

  10. Benchmarking the pseudopotential and fixed-node approximations in diffusion Monte Carlo calculations of molecules and solids

    DOE PAGES

    Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.; ...

    2016-03-28

We performed diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. We suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.

  11. Benchmarking the pseudopotential and fixed-node approximations in diffusion Monte Carlo calculations of molecules and solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.

We performed diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. We suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.

  12. A method of determining attitude from magnetometer data only

    NASA Technical Reports Server (NTRS)

    Natanson, G. A.; Mclaughlin, S. F.; Nicklas, R. C.

    1990-01-01

Presented here is a new algorithm to determine attitude using only magnetometer data under the following conditions: (1) internal torques are known and (2) external torques are negligible. Torque-free rotation of a spacecraft in the thruster-firing acquisition phase and its magnetic despin in the B-dot mode are typical examples of such situations. A simple analytical formula has been derived in the limiting case of a spacecraft rotating with constant angular velocity. The formula has been tested using low-frequency telemetry data for the Earth Radiation Budget Satellite (ERBS) under normal conditions. Observed small oscillations of the body-fixed components of the angular velocity vector near their mean values result in relatively minor errors of approximately 5 degrees. More significant errors come from processing digital magnetometer data. Higher resolution of digitized magnetometer measurements would significantly improve the accuracy of this deterministic scheme. Tests of the general version of the developed algorithm for a freely rotating spacecraft and for the B-dot mode are in progress.

  13. A Comparison of seismic instrument noise coherence analysis techniques

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Evans, J.R.; Sandoval, L.D.

    2011-01-01

    The self-noise of a seismic instrument is a fundamental characteristic used to evaluate the quality of the instrument. It is important to be able to measure this self-noise robustly, to understand how differences among test configurations affect the tests, and to understand how different processing techniques and isolation methods (from nonseismic sources) can contribute to differences in results. We compare two popular coherence methods used for calculating incoherent noise, which is widely used as an estimate of instrument self-noise (incoherent noise and self-noise are not strictly identical but in observatory practice are approximately equivalent; Holcomb, 1989; Sleeman et al., 2006). Beyond directly comparing these two coherence methods on similar models of seismometers, we compare how small changes in test conditions can contribute to incoherent-noise estimates. These conditions include timing errors, signal-to-noise ratio changes (ratios between background noise and instrument incoherent noise), relative sensor locations, misalignment errors, processing techniques, and different configurations of sensor types.

  14. Non-Born-Oppenheimer self-consistent field calculations with cubic scaling

    NASA Astrophysics Data System (ADS)

    Moncada, Félix; Posada, Edwin; Flores-Moreno, Roberto; Reyes, Andrés

    2012-05-01

An efficient nuclear molecular orbital methodology is presented. This approach combines an auxiliary density functional theory for electrons (ADFT) and a localized Hartree product (LHP) representation for the nuclear wave function. A series of test calculations conducted on small molecules showed that the energy and geometry errors introduced by the use of the ADFT and LHP approximations are small and comparable to those obtained by the use of electronic ADFT. In addition, sample calculations performed on (HF)n chains showed that the combined ADFT/LHP approach scales cubically with system size (n), as opposed to the quartic scaling of Hartree-Fock/LHP or DFT/LHP methods. Even for medium size molecules the improved scaling of the ADFT/LHP approach resulted in speedups of at least 5x with respect to Hartree-Fock/LHP calculations. The ADFT/LHP method opens up the possibility of studying nuclear quantum effects on large systems that otherwise would be impractical.

  15. On the accuracy of density-functional theory exchange-correlation functionals for H bonds in small water clusters: Benchmarks approaching the complete basis set limit

    NASA Astrophysics Data System (ADS)

    Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias

    2007-11-01

    The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Møller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.

  16. On the accuracy of density-functional theory exchange-correlation functionals for H bonds in small water clusters: benchmarks approaching the complete basis set limit.

    PubMed

    Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias

    2007-11-14

    The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Moller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.

  17. First-principles binary diffusion coefficients for H, H2 and four normal alkanes + N2

    DOE PAGES

    Jasper, Ahren W.; Kamarchik, Eugene; Miller, James A.; ...

    2014-09-30

    Collision integrals related to binary (dilute gas) diffusion are calculated classically for six species colliding with N2. The most detailed calculations make no assumptions regarding the complexity of the potential energy surface, and the resulting classical collision integrals are in excellent agreement with previous semiclassical results for H + N2 and H2 + N2 and with recent experimental results for CnH2n+2 + N2, n = 2–4. The detailed classical results are used to test the accuracy of three simplifying assumptions typically made when calculating collision integrals: (1) approximating the intermolecular potential as isotropic, (2) neglecting the internal structure of the colliders (i.e., neglecting inelasticity), and (3) employing unphysical R^-12 repulsive interactions. The effect of anisotropy is found to be negligible for H + N2 and H2 + N2 (in agreement with previous quantum mechanical and semiclassical results for systems involving atomic and diatomic species) but is more significant for larger species at low temperatures. For example, the neglect of anisotropy decreases the diffusion coefficient for butane + N2 by 15% at 300 K. The neglect of inelasticity, in contrast, introduces only very small errors. Approximating the repulsive wall as an unphysical R^-12 interaction is a significant source of error at all temperatures for the weakly interacting systems H + N2 and H2 + N2, with errors as large as 40%. For the normal alkanes in N2, which feature stronger interactions, the 12/6 Lennard-Jones approximation is found to be accurate, particularly at temperatures above ~700 K where it predicts the full-dimensional result to within 5% (although with somewhat different temperature dependence). Overall, the typical practical approach of assuming isotropic 12/6 Lennard-Jones interactions is confirmed to be suitable for combustion applications except for weakly interacting systems, such as H + N2.
For these systems, anisotropy and inelasticity can safely be neglected but a more detailed description of the repulsive wall is required for quantitative predictions. Moreover, a straightforward approach for calculating effective isotropic potentials with realistic repulsive walls is described. An analytic expression for the calculated diffusion coefficient for H + N2 is presented and is estimated to have a 2-sigma error bar of only 0.7%.
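For context, the isotropic 12/6 Lennard-Jones baseline the abstract tests against is the standard textbook route to a binary diffusion coefficient: a fitted reduced collision integral (the Neufeld-type correlation) plugged into the Chapman-Enskog first approximation. The sketch below shows that generic recipe, not the paper's full-dimensional method; the H2/N2 Lennard-Jones parameters and mixing rules are commonly tabulated illustrative values, assumed here.

```python
import math

def omega_d(t_star):
    """Neufeld-type fit to the reduced collision integral Omega(1,1)* for the
    12/6 Lennard-Jones potential (reduced temperature T* = kT/eps)."""
    return (1.06036 / t_star**0.15610
            + 0.19300 * math.exp(-0.47635 * t_star)
            + 1.03587 * math.exp(-1.52996 * t_star)
            + 1.76474 * math.exp(-3.89411 * t_star))

def binary_diffusion(T, P_bar, M1, M2, sigma12, eps12_over_k):
    """Chapman-Enskog first-approximation binary diffusion coefficient in cm^2/s.
    T in K, P in bar, molar masses in g/mol, sigma12 in Angstrom."""
    M12 = 2.0 / (1.0 / M1 + 1.0 / M2)
    return 0.00266 * T**1.5 / (P_bar * math.sqrt(M12) * sigma12**2
                               * omega_d(T / eps12_over_k))

# Illustrative H2 + N2 estimate (assumed LJ parameters, usual mixing rules)
sigma12 = 0.5 * (2.827 + 3.798)      # Angstrom
eps12 = math.sqrt(59.7 * 71.4)       # K
D = binary_diffusion(300.0, 1.0, 2.016, 28.014, sigma12, eps12)  # ~0.7-0.8 cm^2/s
```

For weakly bound pairs such as H + N2, the abstract's point is precisely that this R^-12 repulsive wall is too crude, and a more realistic effective isotropic potential is needed.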

  18. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.

    2017-11-01

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
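The switchover idea can be illustrated with the simplest 1-D case, the classical plane-sheet uptake series (Crank's solutions): the exponential series converges rapidly at late times, while the error-function branch (leading term 2√(td/π)) is accurate at early times, and the two agree closely near td ≈ 0.1. This is a sketch of the generic mechanism only; the switchover value used here is illustrative, not the paper's optimized td0.

```python
import math

def uptake_late(td, n_terms=50):
    """Late-time exponential series for fractional uptake of a plane sheet
    (thickness 2L, both faces held at the surface concentration); td = D*t/L^2."""
    s = 0.0
    for n in range(n_terms):
        k = 2 * n + 1
        s += (8.0 / (k * k * math.pi ** 2)) * math.exp(-k * k * math.pi ** 2 * td / 4.0)
    return 1.0 - s

def uptake_early(td):
    """Leading term of the early-time (error-function-type) solution."""
    return 2.0 * math.sqrt(td / math.pi)

# Near an illustrative switchover time td ~ 0.1, the two branches agree closely,
# so a combined solution can switch from one truncated series to the other.
gap = abs(uptake_early(0.1) - uptake_late(0.1))
```

At late times the early-time branch breaks down badly (it even exceeds the physical bound of 1), which is exactly why time partitioning between the two series is needed.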

  19. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  20. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; ...

    2017-10-24

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  1. Exploring Reactions to Pilot Reliability Certification and Changing Attitudes on the Reduction of Errors

    ERIC Educational Resources Information Center

    Boedigheimer, Dan

    2010-01-01

    Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…

  2. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5°-resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day^-1) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate).
Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day^-1, with proportionate reductions in latent heating sampling errors.

  3. A comparison of zero-order, first-order, and monod biotransformation models

    USGS Publications Warehouse

    Bekins, B.A.; Warren, E.; Godsy, E.M.

    1998-01-01

    Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when the substrate concentration, S, is much less than the half-saturation constant, Ks, this assumption is often made without verification of this condition. We present a formal error analysis showing that the relative error in the first-order approximation is S/Ks and in the zero-order approximation the error is Ks/S. We then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than Ks, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of Ks for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid.
Finally, we apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
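The error expressions quoted in the abstract can be verified directly: for the Monod rate r = k_max S/(Ks + S), the relative error of the first-order approximation (k_max/Ks) S is exactly S/Ks, and that of the zero-order approximation k_max is exactly Ks/S. A short sketch (the rate constants are arbitrary illustrative values):

```python
def monod(S, k_max, Ks):
    """Monod rate: r = k_max * S / (Ks + S)."""
    return k_max * S / (Ks + S)

def first_order(S, k_max, Ks):
    """First-order approximation, valid for S << Ks."""
    return (k_max / Ks) * S

def zero_order(S, k_max, Ks):
    """Zero-order approximation, valid for S >> Ks."""
    return k_max

k_max, Ks = 1.0, 2.0  # illustrative values
for S in (0.02, 0.2, 2.0, 20.0):
    r = monod(S, k_max, Ks)
    rel_first = (first_order(S, k_max, Ks) - r) / r   # equals S/Ks exactly
    rel_zero = (zero_order(S, k_max, Ks) - r) / r     # equals Ks/S exactly
```

The algebra behind the comments: first_order/r = (Ks + S)/Ks = 1 + S/Ks, so the relative error is S/Ks; symmetrically, zero_order/r = 1 + Ks/S. This is why a field concentration above Ks (as for toluene at part-per-million levels) makes the first-order model badly wrong while the zero-order model becomes reasonable.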

  4. Uncertainty analysis for fluorescence tomography with Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Reinbacher-Köstinger, Alice; Freiberger, Manuel; Scharfetter, Hermann

    2011-07-01

    Fluorescence tomography seeks to image an inaccessible fluorophore distribution inside an object like a small animal by injecting light at the boundary and measuring the light emitted by the fluorophore. Optical parameters (e.g. the conversion efficiency or the fluorescence life-time) of certain fluorophores depend on physiologically interesting quantities like the pH value or the oxygen concentration in the tissue, which allows functional rather than just anatomical imaging. To reconstruct the concentration and the life-time from the boundary measurements, a nonlinear inverse problem has to be solved. It is, however, difficult to estimate the uncertainty of the reconstructed parameters in case of iterative algorithms and a large number of degrees of freedom. Uncertainties in fluorescence tomography applications arise from model inaccuracies, discretization errors, data noise and a priori errors. Thus, a Markov chain Monte Carlo (MCMC) method was used to account for all these uncertainty factors, exploiting the Bayesian formulation of conditional probabilities. A 2-D simulation experiment was carried out for a circular object with two inclusions. Both inclusions had a 2-D Gaussian distribution of the concentration and a constant life-time inside a representative area of the inclusion. Forward calculations were done with the diffusion approximation of Boltzmann's transport equation. The reconstruction results show that the percent estimation error of the lifetime parameter is lower by a factor of approximately 10 than that of the concentration. This finding suggests that lifetime imaging may provide more accurate information than concentration imaging alone. The results must be interpreted with caution, however, because the chosen simulation setup represents a special case, and a more detailed analysis remains to be done in the future to clarify whether the findings can be generalized.
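The MCMC machinery the abstract relies on can be sketched in its simplest form: a random-walk Metropolis sampler draws samples from a posterior, and the spread of those samples quantifies the parameter uncertainty. The toy one-dimensional Gaussian log-posterior and tuning constants below are assumptions for illustration, not the paper's fluorescence forward model.

```python
import numpy as np

def metropolis(logpost, x0, n_samples, step, rng):
    """Random-walk Metropolis sampler; returns the chain of parameter samples."""
    x = x0
    lp = logpost(x)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.normal()          # symmetric Gaussian proposal
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy "posterior": Gaussian with known mean 2.0 and std 0.5 (an assumption)
logpost = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2
rng = np.random.default_rng(1)
chain = metropolis(logpost, 0.0, 20000, 0.8, rng)
burned = chain[2000:]  # discard burn-in
# np.mean(burned) estimates the parameter; np.std(burned) its uncertainty
```

In a real inversion the log-posterior would call the (expensive) diffusion-approximation forward solver, which is why quantifying how many samples are needed for stable uncertainty estimates matters.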

  5. Early-time solution of the horizontal unconfined aquifer in the build-up phase

    NASA Astrophysics Data System (ADS)

    Gravanis, Elias; Akylas, Evangelos

    2017-04-01

    The Boussinesq equation is a dynamical equation for the free surface of saturated subsurface flows over an impervious bed. The Boussinesq equation is non-linear. The non-linearity comes from the reduction of the dimensionality of the problem: the flow is assumed to be vertically homogeneous, so the flow rate through a cross section of the flow is proportional to the free surface height times the hydraulic gradient, which is assumed to be equal to the slope of the free surface (Dupuit approximation). In general, 'vertically' means normally to the bed; combining the Dupuit approximation with the continuity equation leads to the Boussinesq equation. There are very few transient exact solutions. Self-similar solutions have been constructed in the past by various authors. A power-series type of solution was derived for a self-similar Boussinesq equation by Barenblatt in 1990, and that type of solution has generated a certain amount of literature. For the unconfined-flow case with zero recharge rate, Boussinesq derived an exact solution for the horizontal aquifer by assuming separation of variables. This is actually an exact asymptotic solution of the horizontal-aquifer recession phase for late times. The kinematic wave is an interesting solution obtained by dropping the non-linear term in the Boussinesq equation. Although it is an approximate solution, and holds well only for small values of the Henderson and Wooding λ parameter (that is, for steep slopes, high conductivity or small recharge rate), it becomes increasingly accurate as the parameter decreases, that is, it is asymptotically exact with respect to that parameter. In the present work we consider the case of unconfined subsurface flow over a horizontal bed in the build-up phase under a constant recharge rate.
This is a case with an infinite Henderson and Wooding parameter, that is, it is the limiting case where the non-linear term is present in the Boussinesq while the linear spatial derivative term goes away. Nonetheless, no analogue of the kinematic wave or the Boussinesq separable solution exists in this case. The late time state of the build-up phase under constant recharge rate is very simply the steady state solution. Our aim is to construct the early time asymptotic solution of this problem. The solution is expressed as a power series of a suitable similarity variable, which is constructed so that to satisfy the boundary conditions at both ends of the aquifer, that is, it is a polynomial approximation of the exact solution. The series turn out to be asymptotic and it is regularized by re-summation techniques which are used to define divergent series. The outflow rate in this regime is linear in time, and the (dimensionless) coefficient is calculated to eight significant figures. The local error of the series is quantified by its deviation from satisfying the self-similar Boussinesq equation at every point. The local error turns out to be everywhere positive, hence, so is the integrated error, which in turn quantifies the degree of convergence of the series to the exact solution.

  6. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.

  7. Electrostatically Embedded Many-Body Approximation for Systems of Water, Ammonia, and Sulfuric Acid and the Dependence of Its Performance on Embedding Charges.

    PubMed

    Leverentz, Hannah R; Truhlar, Donald G

    2009-06-09

    This work tests the capability of the electrostatically embedded many-body (EE-MB) method to calculate accurate (relative to conventional calculations carried out at the same level of electronic structure theory and with the same basis set) binding energies of mixed clusters (as large as 9-mers) consisting of water, ammonia, sulfuric acid, and ammonium and bisulfate ions. This work also investigates the dependence of the accuracy of the EE-MB approximation on the type and origin of the charges used for electrostatically embedding these clusters. The conclusions reached are that for all of the clusters and sets of embedding charges studied in this work, the electrostatically embedded three-body (EE-3B) approximation is capable of consistently yielding relative errors of less than 1% and an average relative absolute error of only 0.3%, and that the performance of the EE-MB approximation does not depend strongly on the specific set of embedding charges used. The electrostatically embedded pairwise approximation has errors about an order of magnitude larger than EE-3B. This study also explores the question of why the accuracy of the EE-MB approximation shows such little dependence on the types of embedding charges employed.

  8. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.; ,

    1985-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase-change latent-heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions through the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

  9. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
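The low-rank covariance representation discussed above can be sketched in a few lines: build a covariance as a linear transformation of a forcing covariance, truncate its eigendecomposition, and measure the operator-norm error of the truncation. The operator L and forcing covariance Q below are arbitrary illustrative choices, not the dynamical system of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 60, 5

# Error covariance as a linear transformation of a forcing covariance: P = L Q L^T.
# Exponentially decaying column scales make the map ill-conditioned, so the
# spectrum of P decays and a low-rank representation is appropriate.
L = rng.standard_normal((n, n)) * np.exp(-0.2 * np.arange(n))
Q = np.eye(n)
P = L @ Q @ L.T

# Rank-k approximation from the k leading eigenpairs of P
w, V = np.linalg.eigh(P)
order = np.argsort(w)[::-1]          # eigh returns ascending order
w, V = w[order], V[:, order]
P_k = (V[:, :k] * w[:k]) @ V[:, :k].T

# Relative operator-norm error of the truncation; for a symmetric PSD matrix
# this equals the ratio of the (k+1)-th eigenvalue to the largest one.
rel_err = np.linalg.norm(P - P_k, 2) / np.linalg.norm(P, 2)
```

The quality of the rank-k representation is thus governed entirely by how fast the spectrum of the full covariance decays, which is the point of the spectral analysis in the paper.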

  10. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as to the finite-dimensional problems normally seen in the trust region literature. The conditions on the allowable errors are remarkably relaxed: in particular, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating the gradient error and improving the approximation is also presented.

  11. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.

  12. A Small-Scale Comparison of Iceland Scallop Size Distributions Obtained from a Camera Based Autonomous Underwater Vehicle and Dredge Survey

    PubMed Central

    Singh, Warsha; Örnólfsdóttir, Erla B.; Stefansson, Gunnar

    2014-01-01

    An approach is developed to estimate the size of Iceland scallop shells from AUV photos. A small-scale camera-based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll were 1.3 and 2.3 deg, which resulted in <2% error in ground distance, rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8 to 9.3 cm. A comparison with the height distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low-resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in recent years due to the known scallop parasite outbreak in the region. PMID:25303243

  13. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    PubMed

    Singh, Warsha; Örnólfsdóttir, Erla B; Stefansson, Gunnar

    2014-01-01

    An approach is developed to estimate the size of Iceland scallop shells from AUV photos. A small-scale camera-based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll were 1.3 and 2.3 deg, which resulted in <2% error in ground distance, rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8 to 9.3 cm. A comparison with the height distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low-resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in recent years due to the known scallop parasite outbreak in the region.

  14. New class of photonic quantum error correction codes

    NASA Astrophysics Data System (ADS)

    Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.

    We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic ''cat codes'' but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.

  15. Automated estimation of abdominal effective diameter for body size normalization of CT dose.

    PubMed

    Cheng, Phillip M

    2013-06-01

    Most CT dose data aggregation methods do not currently adjust dose values for patient size. This work proposes a simple heuristic for reliably computing an effective diameter of a patient from an abdominal CT image. Evaluation of this method on 106 patients scanned on Philips Brilliance 64 and Brilliance Big Bore scanners demonstrates close correspondence between computed and manually measured patient effective diameters, with a mean absolute error of 1.0 cm (error range +2.2 to -0.4 cm). This level of correspondence was also demonstrated for 60 patients on Siemens, General Electric, and Toshiba scanners. A calculated effective diameter in the middle slice of an abdominal CT study was found to be a close approximation of the mean calculated effective diameter for the study, with a mean absolute error of approximately 1.0 cm (error range +3.5 to -2.2 cm). Furthermore, the mean absolute error for an adjusted mean volume computed tomography dose index (CTDIvol) using a mid-study calculated effective diameter, versus a mean per-slice adjusted CTDIvol based on the calculated effective diameter of each slice, was 0.59 mGy (error range +1.64 to -3.12 mGy). These results are used to calculate approximate normalized dose length product values in an abdominal CT dose database of 12,506 studies.
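The effective diameter referred to above is, in the usual definition, the diameter of the circle whose area equals the patient's cross-sectional area in a slice. A minimal sketch, assuming a simple air threshold for segmentation (the paper's actual heuristic is more involved, and the -250 HU threshold is an illustrative choice):

```python
import numpy as np

def effective_diameter(ct_slice_hu, pixel_spacing_cm, air_threshold_hu=-250):
    # Diameter (cm) of a circle with the same area as the thresholded
    # patient cross-section: d = 2 * sqrt(A / pi).
    patient_mask = ct_slice_hu > air_threshold_hu
    area_cm2 = patient_mask.sum() * pixel_spacing_cm[0] * pixel_spacing_cm[1]
    return 2.0 * np.sqrt(area_cm2 / np.pi)

# Synthetic slice: a disc of soft tissue (~40 HU) surrounded by air (-1000 HU)
yy, xx = np.mgrid[:512, :512]
disc = (xx - 256) ** 2 + (yy - 256) ** 2 <= 200 ** 2
slice_hu = np.where(disc, 40, -1000)

# Known geometry: 200-pixel radius at 0.07 cm/pixel -> diameter of 28 cm
d = effective_diameter(slice_hu, pixel_spacing_cm=(0.07, 0.07))
```

Per the abstract, evaluating this on the mid-study slice is a reasonable stand-in for the per-slice mean over the whole study.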

  16. Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel

    2010-09-01

    A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.

  17. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    and Technology Organisation DSTO-TN-0761 ABSTRACT This report investigates the estimation of bit error rates in digital communications, motivated by recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase

  18. Error-Related Brain Activity in Young Children: Associations with Parental Anxiety and Child Temperamental Negative Emotionality

    ERIC Educational Resources Information Center

    Torpey, Dana C.; Hajcak, Greg; Kim, Jiyon; Kujawa, Autumn J.; Dyson, Margaret W.; Olino, Thomas M.; Klein, Daniel N.

    2013-01-01

    Background: There is increasing interest in error-related brain activity in anxiety disorders. The error-related negativity (ERN) is a negative deflection in the event-related potential approximately 50 [milliseconds] after errors compared to correct responses. Recent studies suggest that the ERN may be a biomarker for anxiety, as it is positively…

  19. Quadriphase DS-CDMA wireless communication systems employing the generalized detector

    NASA Astrophysics Data System (ADS)

    Tuzlukov, Vyacheslav

    2012-05-01

    The bit-error probability (Pe) performance of asynchronous direct-sequence code-division multiple-access (DS-CDMA) wireless communication systems employing the generalized detector (GD), constructed based on the generalized approach to signal processing in noise, is analyzed. The effects of pulse shaping, quadriphase or direct-sequence quadriphase-shift-keying (DS-QPSK) spreading, and aperiodic spreading sequences are considered in DS-CDMA based on the GD and compared with the coherent Neyman-Pearson receiver. An exact Pe expression and several approximations are derived: one using the characteristic function method, a simplified expression for the improved Gaussian approximation (IGA), and the simplified improved Gaussian approximation. Under conditions typically satisfied in practice, and even with a small number of interferers, the standard Gaussian approximation (SGA) for the multiple-access interference component of the GD statistic and the resulting Pe performance is shown to be accurate. Moreover, the IGA is shown to reduce to the SGA for pulses with zero excess bandwidth. Second, the GD Pe performance of quadriphase DS-CDMA is shown to be superior to that of bi-phase DS-CDMA. Numerical examples by Monte Carlo simulation are presented to illustrate the GD Pe performance for square-root raised-cosine pulses and spreading factors of moderate to large values. A superiority of GD employment in CDMA systems over the Neyman-Pearson receiver is also demonstrated.
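The SGA mentioned above treats multiple-access interference as an extra Gaussian noise term. A common textbook form of it (Pursley-style, for rectangular chips and a conventional correlation receiver, not the GD statistic of this paper) is sketched below:

```python
import math

def q_function(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_sga(k_users, spreading_n, ebn0_db):
    # Standard Gaussian approximation for asynchronous DS-CDMA:
    # effective SNR combines thermal noise with MAI variance (K-1)/(3N).
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    snr_eff = 1.0 / ((k_users - 1) / (3.0 * spreading_n) + 1.0 / (2.0 * ebn0))
    return q_function(math.sqrt(snr_eff))

single_user = ber_sga(1, 31, 7.0)   # K = 1 reduces to Q(sqrt(2 Eb/N0))
loaded = ber_sga(10, 31, 7.0)       # MAI from 9 interferers raises the error rate
```

The abstract's point is that even this simple SGA tracks the GD performance well with few interferers, while the IGA refines it for pulses with excess bandwidth.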

  20. An Incompressible, Depth-Averaged Lattice Boltzmann Method for Liquid Flow in Microfluidic Devices with Variable Aperture

    DOE PAGES

    Laleian, Artin; Valocchi, Albert J.; Werth, Charles J.

    2015-11-24

    Two-dimensional (2D) pore-scale models have successfully simulated microfluidic experiments of aqueous-phase flow with mixing-controlled reactions in devices with small aperture. A standard 2D model is not generally appropriate when the presence of mineral precipitate or biomass creates complex and irregular three-dimensional (3D) pore geometries. We modify the 2D lattice Boltzmann method (LBM) to incorporate viscous drag from the top and bottom microfluidic device (micromodel) surfaces, typically excluded in a 2D model. Viscous drag from these surfaces can be approximated by uniformly scaling a steady-state 2D velocity field at low Reynolds number. We demonstrate increased accuracy by approximating the viscous drag with an analytically derived body force which assumes a local parabolic velocity profile across the micromodel depth. The accuracy of the generated 2D velocity field and of the simulated permeability had not previously been evaluated in geometries with variable aperture. We obtain permeabilities within approximately 10% error and accurate streamlines from the proposed 2D method relative to results obtained from 3D simulations. Additionally, the proposed method requires a CPU run time approximately 40 times less than a standard 3D method, representing a significant computational benefit for permeability calculations.
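The analytically derived body force mentioned above follows from assuming a local parabolic (Poiseuille) profile across the depth h, which gives a wall drag of 12·μ·ū/h² per unit volume. A quick numeric check, with illustrative parameter values not taken from the paper:

```python
import numpy as np

# Parabolic (Poiseuille) profile across a gap of depth h with mean velocity u_mean:
#   u(z) = 6 * u_mean * (z/h) * (1 - z/h)
mu, h, u_mean = 1.0e-3, 50.0e-6, 1.0e-3    # Pa*s, m, m/s (illustrative values)

z = np.linspace(0.0, h, 2001)
dz = z[1] - z[0]
u = 6.0 * u_mean * (z / h) * (1.0 - z / h)

# Depth-averaging the profile recovers u_mean (trapezoidal rule)
u_avg = np.sum(0.5 * (u[:-1] + u[1:])) * dz / h

# Wall shear stress at each plate is mu * du/dz at the wall; summing both
# walls and dividing by the depth gives the drag force per unit volume.
dudz_wall = (u[1] - u[0]) / dz             # finite-difference estimate at z = 0
drag_per_volume = 2.0 * mu * dudz_wall / h
expected = 12.0 * mu * u_mean / h ** 2     # the analytic body-force coefficient
```

In the depth-averaged LBM this term enters the 2D momentum equation as a local body force proportional to the depth-averaged velocity, which is why it can replace the explicit third dimension.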

  1. Land Surface Reflectance Retrieval from Hyperspectral Data Collected by an Unmanned Aerial Vehicle over the Baotou Test Site

    PubMed Central

    Duan, Si-Bo; Li, Zhao-Liang; Tang, Bo-Hui; Wu, Hua; Ma, Lingling; Zhao, Enyu; Li, Chuanrong

    2013-01-01

    To evaluate the in-flight performance of a new hyperspectral sensor onboard an unmanned aerial vehicle (UAV-HYPER), a comprehensive field campaign was conducted over the Baotou test site in China on 3 September 2011. Several portable reference reflectance targets were deployed across the test site. The radiometric performance of the UAV-HYPER sensor was assessed in terms of signal-to-noise ratio (SNR) and the calibration accuracy. The SNR of the different bands of the UAV-HYPER sensor was estimated to be between approximately 5 and 120 over the homogeneous targets, and the linear response of the apparent reflectance ranged from approximately 0.05 to 0.45. The uniform and non-uniform Lambertian land surface reflectance was retrieved and validated using in situ measurements, with root mean square error (RMSE) of approximately 0.01–0.07 and relative RMSE of approximately 5%–12%. There were small discrepancies between the retrieved uniform and non-uniform Lambertian land surface reflectance over the homogeneous targets and under low aerosol optical depth (AOD) conditions (AOD = 0.18). However, these discrepancies must be taken into account when adjacent pixels had large land surface reflectance contrast and under high AOD conditions (e.g. AOD = 1.0). PMID:23785513

  2. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have approximately 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to obtain a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to infer this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature, and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the error ec on the global temperature trend. In one path the entire error ec is placed in the am data, while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 +/- 0.04 K/decade during 1980 to 1998.

  3. Estimation of spin contamination error in dissociative adsorption of Au2 onto MgO(0 0 1) surface: First application of approximate spin projection (AP) method to plane wave basis

    NASA Astrophysics Data System (ADS)

    Tada, Kohei; Koga, Hiroaki; Okumura, Mitsutaka; Tanaka, Shingo

    2018-06-01

    Spin contamination error in the total energy of the Au2/MgO system was estimated using the density functional theory/plane-wave scheme and approximate spin projection methods. This is the first investigation in which the errors in chemical phenomena on a periodic surface are estimated. The spin contamination error of the system was 0.06 eV. This value is smaller than that of the dissociation of Au2 in the gas phase (0.10 eV). This is because of the destabilization of the singlet spin state due to the weakening of the Au-Au interaction caused by the Au-MgO interaction.
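For reference, the approximate spin projection correction used above removes the high-spin (HS) contaminant from the broken-symmetry (BS) low-spin energy using the computed spin expectation values. A commonly quoted Yamaguchi-type form for a singlet target state (stated here from the general AP literature, not extracted from this abstract) is:

```latex
E_{\mathrm{AP}} = E_{\mathrm{BS}} + \alpha \left( E_{\mathrm{BS}} - E_{\mathrm{HS}} \right),
\qquad
\alpha = \frac{\langle \hat{S}^{2} \rangle_{\mathrm{BS}}}
              {\langle \hat{S}^{2} \rangle_{\mathrm{HS}} - \langle \hat{S}^{2} \rangle_{\mathrm{BS}}}
```

On this reading, the spin contamination error quoted in the abstract (0.06 eV for Au2/MgO versus 0.10 eV for gas-phase Au2 dissociation) corresponds to the magnitude of this correction.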

  4. Geodesy by radio interferometry - Effects of atmospheric modeling errors on estimates of baseline length

    NASA Technical Reports Server (NTRS)

    Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.

    1985-01-01

    Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm, at all elevations down to 5 deg elevation, and introduces errors into the estimates of baseline length of less than about 1 cm, for the multistation intercontinental experiment analyzed here.

  5. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method for evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) of space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10−3 (error/particle/cm2), while the MTTF is approximately 110.7 h. PMID:27583533

  6. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method for evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) of space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10-3 (error/particle/cm2), while the MTTF is approximately 110.7 h.

  7. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    PubMed Central

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by incorporating a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimate and hence accelerates convergence and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both convergence speed and steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
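The zero-attractor idea described above can be sketched with a simpler LMS-type update carrying the same style of smoothed-l0 penalty gradient. This is an illustrative reduction, not the APA of the paper; the channel, step sizes, and the smoothing width beta are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse channel: 64 taps, 4 of them nonzero
n_taps = 64
h = np.zeros(n_taps)
h[[5, 20, 33, 50]] = [1.0, -0.7, 0.4, 0.3]

x = rng.standard_normal(8000)                         # white input sequence
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))

mu, rho, beta = 0.01, 5e-5, 0.05                      # step, attractor gain, SL0 width
w = np.zeros(n_taps)
for i in range(n_taps, len(x)):
    u = x[i - n_taps + 1:i + 1][::-1]                 # regressor, most recent sample first
    e = d[i] - w @ u
    # LMS update plus the zero attractor from a smoothed-l0 penalty
    # sum(1 - exp(-|w|/beta)): it pulls small taps toward zero while
    # leaving large taps essentially untouched.
    w += mu * e * u - rho * np.sign(w) * np.exp(-np.abs(w) / beta)

mse = np.mean((w - h) ** 2)                            # per-tap weight error
```

The exponential factor is what distinguishes the SL0 attractor from a plain l1 (sign-only) attractor: its pull vanishes on large taps, which is why performance degrades more gracefully as the number of nonzero taps grows.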

  8. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, or particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are also of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.

  9. A Study of the Errors of the Fixed-Node Approximation in Diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Rasch, Kevin M.

    Quantum Monte Carlo techniques stochastically evaluate integrals to solve the many-body Schrodinger equation. QMC algorithms scale favorably in the number of particles simulated and enjoy applicability to a wide range of quantum systems. Advances in the core algorithms of the method and their implementations, paired with the steady development of computational assets, have carried the applicability of QMC beyond analytically treatable systems, such as the homogeneous electron gas, and have extended QMC's domain to treat atoms, molecules, and solids containing as many as several hundred electrons. Fixed-node diffusion Monte Carlo (FN-DMC) projects out the ground state of a wave function subject to constraints imposed by our ansatz for the problem. The constraints imposed by the fixed-node approximation are poorly understood. One key step in developing any scientific theory or method is to qualify where the theory is inaccurate and to quantify how erroneous it is under those circumstances. I investigate fixed-node errors as they evolve over changing charge density, system size, and effective core potentials. I begin by studying a simple system for which the nodes of the trial wave function can be solved almost exactly. By comparing two trial wave functions, a single-determinant wave function flawed in a known way and a nearly exact wave function, I show that the fixed-node error increases when the charge density is increased. Next, I investigate a sequence of lithium systems increasing in size from a single atom, to small molecules, up to the bulk metal form. Over these systems, FN-DMC calculations consistently recover 95% or more of the correlation energy of the system. Given this accuracy, I make a prediction for the binding energy of the Li4 molecule. Last, I turn to analyzing the fixed-node error in first- and second-row atoms and their molecules. With the appropriate pseudo-potentials, these systems are iso-electronic and show similar geometries and states. 
One would expect that, with an identical number of particles involved in the calculation, the errors in the respective total energies of the two iso-electronic species would be quite similar. I observe, instead, that the first-row atoms and their molecules have errors at least twice as large. I identify a cause for this difference in iso-electronic species. The fixed-node errors in all of these cases are calculated by careful comparison to experimental results, showing FN-DMC to be a robust tool for understanding quantum systems and also a method for new investigations into the nature of many-body effects.

  10. An analysis of the massless planet approximation in transit light curve models

    NASA Astrophysics Data System (ADS)

    Millholland, Sarah; Ruch, Gerry

    2015-08-01

    Many extrasolar planet transit light curve models use the approximation of a massless planet. They approximate the planet as orbiting elliptically with the host star at the orbit’s focus instead of depicting the planet and star as both orbiting around a common center of mass. This approximation should generally be very good because the transit is a small fraction of the full-phase curve and the planet to stellar mass ratio is typically very small. However, to fully examine the legitimacy of this approximation, it is useful to perform a robust, all-parameter space-encompassing statistical comparison between the massless planet model and the more accurate model. Towards this goal, we establish two questions: (1) In what parameter domain is the approximation invalid? (2) If characterizing an exoplanetary system in this domain, what is the error of the parameter estimates when using the simplified model? We first address question (1). Given each parameter vector in a finite space, we can generate the simplified and more complete model curves. Associated with these model curves is a measure of the deviation between them, such as the root mean square (RMS). We use Gibbs sampling to generate a sample that is distributed according to the RMS surface. The high-density regions in the sample correspond to a large deviation between the models. To determine the domains of these high-density areas, we first employ the Ordering Points to Identify the Clustering Structure (OPTICS) algorithm. We then characterize the subclusters by performing the Patient Rule Induction Method (PRIM) on the transformed Principal Component spaces of each cluster. This process yields descriptors of the parameter domains with large discrepancies between the models. To consider question (2), we start by generating synthetic transit curve observations in the domains specified by the above analysis. 
We then derive the best-fit parameters of these synthetic light curves according to each model and examine the quality of agreement between the estimated parameters. Taken as a whole, these steps allow for a thorough analysis of the validity of the massless planet approximation.
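The Gibbs-sampling step described above can be sketched on a discretized two-parameter space. The `rms_surface` below is a hypothetical stand-in for the deviation between the massless-planet and two-body transit models, constructed to be peaked in one region; it is not derived from any transit model.

```python
import random

def rms_surface(a, b):
    """Hypothetical stand-in for the RMS deviation between the two transit
    models at parameter vector (a, b); sharply peaked where the
    approximation is (by construction) worst."""
    bump = 1.0 if (a - 0.7) ** 2 + (b - 0.3) ** 2 < 0.01 else 0.0
    return 1e-3 + bump

GRID = [i / 50.0 for i in range(51)]

def gibbs_sample(n_steps, seed=1):
    """Draw (a, b) samples distributed proportionally to the RMS surface by
    alternating draws from the discretized full conditionals."""
    rng = random.Random(seed)
    a, b = 0.5, 0.5
    samples = []
    for _ in range(n_steps):
        a = rng.choices(GRID, weights=[rms_surface(x, b) for x in GRID])[0]
        b = rng.choices(GRID, weights=[rms_surface(a, y) for y in GRID])[0]
        samples.append((a, b))
    return samples

samples = gibbs_sample(3000)
# High-density regions of `samples` mark parameter domains where the two
# models disagree most; a clustering pass (OPTICS/PRIM) would follow.
```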

  11. Three-Dimensional Wiring for Extensible Quantum Computing: The Quantum Socket

    NASA Astrophysics Data System (ADS)

    Béjanin, J. H.; McConkey, T. G.; Rinehart, J. R.; Earnest, C. T.; McRae, C. R. H.; Shiri, D.; Bateman, J. D.; Rohanizadegan, Y.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.; Mariantoni, M.

    2016-10-01

    Quantum computing architectures are on the verge of scalability, a key requirement for the implementation of a universal quantum computer. The next stage in this quest is the realization of quantum error-correction codes, which will mitigate the impact of faulty quantum information on a quantum computer. Architectures with ten or more quantum bits (qubits) have been realized using trapped ions and superconducting circuits. While these implementations are potentially scalable, true scalability will require systems engineering to combine quantum and classical hardware. One technology demanding imminent efforts is the realization of a suitable wiring method for the control and the measurement of a large number of qubits. In this work, we introduce an interconnect solution for solid-state qubits: the quantum socket. The quantum socket fully exploits the third dimension to connect classical electronics to qubits with higher density and better performance than two-dimensional methods based on wire bonding. The quantum socket is based on spring-mounted microwires—the three-dimensional wires—that push directly on a microfabricated chip, making electrical contact. A small wire cross section (approximately 1 mm), nearly nonmagnetic components, and functionality at low temperatures make the quantum socket ideal for operating solid-state qubits. The wires have a coaxial geometry and operate over a frequency range from dc to 8 GHz, with a contact resistance of approximately 150 mΩ, an impedance mismatch of approximately 10 Ω, and minimal cross talk. As a proof of principle, we fabricate and use a quantum socket to measure high-quality superconducting resonators at a temperature of approximately 10 mK. Quantum error-correction codes such as the surface code will largely benefit from the quantum socket, which will make it possible to address qubits located on a two-dimensional lattice. 
The present implementation of the socket could be readily extended to accommodate a quantum processor with a (10 × 10)-qubit lattice, which would allow for the realization of a simple quantum memory.

  12. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    NASA Astrophysics Data System (ADS)

    Xu, Yu-Lin

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem; they are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by developing them in Taylor series to first order, is inadequate for our orbit problem. D. C. Brown proposed an algorithm solving a more general least squares adjustment problem in which the scalar residual function, however, is still constructed by first-order approximation. More recently, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied to our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was modified to yield a definitive solution in cases where the normal approach fails, by combining it with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme always leads to a final solution. 
The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered. The definition of efficiency is revised.

  13. A finite state projection algorithm for the stationary solution of the chemical master equation.

    PubMed

    Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa

    2017-10-21

    The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)] to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP, which we refer to as the stationary FSP (sFSP), that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that, under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10^6 states can be efficiently solved.
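The sFSP idea can be sketched on the simplest birth-death network (∅ → X at rate k, X → ∅ at rate γx), whose exact stationary law is Poisson(k/γ): truncate the state-space, redirect the boundary outflow to a designated state so the truncated process remains a proper, irreducible CTMC, and solve for its stationary distribution. This sketch uses power iteration on the uniformized chain rather than the paper's direct linear solve; all rates are illustrative.

```python
import math

def sfsp_stationary(k=10.0, gamma=1.0, n_max=60, iters=5000):
    """Stationary-FSP sketch for 0 -> X (rate k), X -> 0 (rate gamma * x),
    truncated at n_max copies. The birth transition leaving the truncation
    is redirected to a designated state (state 0), keeping the truncated
    chain a proper CTMC in the spirit of the sFSP construction."""
    n = n_max + 1
    lam = k + gamma * n_max + 1.0          # uniformization rate
    p = [1.0 / n] * n                      # initial guess
    for _ in range(iters):
        q = [0.0] * n
        for x in range(n):
            birth, death = k, gamma * x
            q[x] += p[x] * (1.0 - (birth + death) / lam)
            target = x + 1 if x < n_max else 0   # redirected boundary outflow
            q[target] += p[x] * birth / lam
            if x > 0:
                q[x - 1] += p[x] * death / lam
        p = q
    return p

p = sfsp_stationary()
# Exact stationary law of this network is Poisson(k / gamma) = Poisson(10);
# the truncation error at n_max = 60 is far below floating-point noise here.
```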

  14. A finite state projection algorithm for the stationary solution of the chemical master equation

    NASA Astrophysics Data System (ADS)

    Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa

    2017-10-01

    The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)] to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP, which we refer to as the stationary FSP (sFSP), that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that, under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10^6 states can be efficiently solved.

  15. A Sensitivity Analysis of Circular Error Probable Approximation Techniques

    DTIC Science & Technology

    1992-03-01

    SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES THESIS Presented to the Faculty of the School of Engineering of the Air Force...programming skills. Major Paul Auclair patiently advised me in this endeavor, and Major Andy Howell added numerous insightful contributions. I thank my...techniques. The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. Some
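The trade-off such a thesis studies — cheap closed-form CEP approximations versus slow but accurate integration — can be illustrated in a few lines. The 0.589(σx + σy) rule below is one classical CEP approximation (reasonable when the sigma ratio is near 1); a Monte Carlo median serves here as the accurate reference in place of numerical integration.

```python
import math
import random

def cep_monte_carlo(sx, sy, n=200000, seed=2):
    """Reference CEP (median miss radius) of a zero-mean bivariate normal
    with independent components, estimated by direct simulation."""
    rng = random.Random(seed)
    radii = sorted(math.hypot(rng.gauss(0.0, sx), rng.gauss(0.0, sy))
                   for _ in range(n))
    return radii[n // 2]

def cep_approx(sx, sy):
    """A classical closed-form approximation, CEP ~ 0.589 * (sx + sy),
    reasonable when the sigma ratio is not far from 1."""
    return 0.589 * (sx + sy)

mc = cep_monte_carlo(1.0, 1.0)
approx = cep_approx(1.0, 1.0)
# For sx = sy = 1 the exact CEP is sqrt(2 * ln 2) ~ 1.1774.
```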

  16. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is only applied to a much smaller problem, and the computational cost of using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique preserves the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are nontrivial and differ from existing results in the literature. PMID:27110063

  17. A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.

    PubMed

    Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng

    2016-05-01

    In this paper, an extreme learning control (ELC) framework using the single-hidden-layer feedforward network (SLFN) with random hidden nodes for tracking an unmanned surface vehicle suffering from unknown dynamics and external disturbances is proposed. By combining tracking errors with their derivatives, an error surface and transformed states are defined to encapsulate unknown dynamics and disturbances into a lumped vector field of transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator which requires neither a priori system knowledge nor tuning of input weights. Only the output weights of the SLFN need to be updated, by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
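The SLFN approximator at the heart of this framework is the standard extreme learning machine: hidden-layer weights are drawn at random and never tuned, and only the linear output weights are solved for. A self-contained sketch on a toy 1-D regression problem (plain ridge least squares, not the paper's adaptive Lyapunov-based update laws; all sizes and ranges are illustrative):

```python
import math
import random

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for the small normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(xs, ys, m=20, ridge=1e-8, seed=3):
    """Extreme-learning-machine regression: random, untuned hidden nodes;
    only the output weights are solved for, via ridge normal equations."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(-4, 4), rng.uniform(-4, 4)) for _ in range(m)]
    H = [[math.tanh(w * x + b) for (w, b) in nodes] for x in xs]
    HtH = [[sum(row[i] * row[j] for row in H) + (ridge if i == j else 0.0)
            for j in range(m)] for i in range(m)]
    Hty = [sum(row[i] * y for row, y in zip(H, ys)) for i in range(m)]
    beta = solve_linear(HtH, Hty)
    return lambda x: sum(bb * math.tanh(w * x + b)
                         for bb, (w, b) in zip(beta, nodes))

xs = [i / 50.0 * math.pi for i in range(51)]
f = elm_fit(xs, [math.sin(x) for x in xs])
```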

  18. Critical Mutation Rate Has an Exponential Dependence on Population Size in Haploid and Diploid Populations

    PubMed Central

    Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.

    2013-01-01

    Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. 
The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
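The analytical error-threshold models used for verification reduce, in the simplest infinite-population setting with back mutation neglected, to Eigen's two-class quasispecies: the fittest ("master") class survives only while its copy fidelity (1 - μ)^L exceeds 1/σ. A minimal sketch with hypothetical parameter values (the paper's finite-population, diploid results are not reproduced here):

```python
import math

def master_equilibrium(mu, L=20, sigma=4.0):
    """Equilibrium frequency of the master class in the two-class
    quasispecies model: master fitness sigma, mutant fitness 1, copy
    fidelity Q = (1 - mu)^L, back mutation neglected."""
    Q = (1.0 - mu) ** L
    # Setting sigma*Q equal to the mean fitness sigma*x + (1 - x) gives:
    x = (sigma * Q - 1.0) / (sigma - 1.0)
    return max(x, 0.0)

def critical_mutation_rate(L=20, sigma=4.0):
    """Analytic error threshold: the master survives while (1-mu)^L > 1/sigma,
    i.e. mu_c = 1 - sigma**(-1/L) ~ ln(sigma)/L for small mu."""
    return 1.0 - sigma ** (-1.0 / L)

mu_c = critical_mutation_rate()
# Above mu_c the master frequency collapses to zero (the error catastrophe).
```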

  19. Higgs production via gluon fusion in k_T factorisation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hautmann, F.; Jung, H.; Pandis, V.

    2011-07-15

    Theoretical studies of Higgs production via gluon fusion are frequently carried out in the limit where the top quark mass is much larger than the Higgs mass, an approximation which reduces the top quark loop to an effective vertex. We present a numerical analysis of the error thus introduced by performing a Monte Carlo calculation for gg → h in k_T-factorisation, using the parton shower generator CASCADE. By examining both inclusive and exclusive quantities, we find that retaining the top-mass dependence results in only a small enhancement of the cross-section. We then proceed to compare CASCADE to the collinear Monte Carlos PYTHIA, MC@NLO and POWHEG.

  20. Asymptotic boundary conditions for dissipative waves: General theory

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1990-01-01

    An outstanding issue in the computational analysis of time dependent problems is the imposition of appropriate radiation boundary conditions at artificial boundaries. Accurate conditions are developed which are based on the asymptotic analysis of wave propagation over long ranges. Employing the method of steepest descents, dominant wave groups are identified and simple approximations to the dispersion relation are considered in order to derive local boundary operators. The existence of a small number of dominant wave groups may be expected for systems with dissipation. Estimates of the error as a function of domain size are derived under general hypotheses, leading to convergence results. Some practical aspects of the numerical construction of the asymptotic boundary operators are also discussed.

  1. A comparison of machine learning methods for classification using simulation with multiple real data examples from mental health studies.

    PubMed

    Khondoker, Mizanur; Dobson, Richard; Skirrow, Caroline; Simmons, Andrew; Stahl, Daniel

    2016-10-01

    Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets and sampling error in the estimated performance measures based on single samples are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply better performance on average or on a population level, and simulation studies may be a better alternative for objectively comparing the performances of machine learning algorithms. We compare the classification performance of a number of important and widely used machine learning algorithms, namely Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbour (kNN). Using massively parallel processing on high-performance supercomputers, we compare the generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features. For smaller numbers of correlated features, with the number of features not exceeding approximately half the sample size, LDA was found to be the method of choice in terms of average generalisation error as well as stability (precision) of error estimates. SVM (with RBF kernel) outperforms LDA as well as RF and kNN by a clear margin as the feature set gets larger, provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows, and surpasses that of LDA and RF unless the data variability is too high and/or effect sizes are too small. RF was found to outperform only kNN in some instances where the data are more variable and have smaller effect sizes, in which cases it also provides more stable error estimates than kNN and LDA. 
Applications to a number of real datasets supported the findings from the simulation study. © The Author(s) 2013.
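A pared-down version of such a simulation comparison, in plain Python: draw synthetic two-class data at a chosen effect size and number of features, and estimate the generalisation error of one classifier (kNN) on an independent test set. All settings below are illustrative, not those of the study.

```python
import random

def make_data(n_per_class, n_features, effect, rng):
    """Synthetic two-class data: class means differ by `effect` per feature
    (one cell of a factorial simulation design)."""
    X, y = [], []
    for label, mu in ((0, 0.0), (1, effect)):
        for _ in range(n_per_class):
            X.append([rng.gauss(mu, 1.0) for _ in range(n_features)])
            y.append(label)
    return X, y

def knn_predict(Xtr, ytr, x, k=5):
    """Majority vote among the k nearest training points (squared Euclidean)."""
    order = sorted(range(len(Xtr)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(Xtr[i], x)))
    votes = [ytr[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

rng = random.Random(5)
Xtr, ytr = make_data(30, 10, 1.0, rng)     # training sample size 60
Xte, yte = make_data(200, 10, 1.0, rng)    # independent test set
err = sum(knn_predict(Xtr, ytr, x) != t for x, t in zip(Xte, yte)) / len(yte)
# `err` estimates the generalisation error at this factor combination;
# the study repeats this across classifiers and factor levels.
```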

  2. Scattering from binary optics

    NASA Technical Reports Server (NTRS)

    Ricks, Douglas W.

    1993-01-01

    There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
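The Fourier-optics treatment of random errors can be sketched directly: the far-field diffraction orders are the Fourier coefficients of exp(iφ), and random phase errors (from etch-depth errors, say) pull energy out of the design order by a Strehl-like factor exp(-σ²). The flat design phase and the error level below are hypothetical, not taken from the paper.

```python
import cmath
import math
import random

def order_efficiencies(phase, n_orders=5):
    """Far-field diffraction-order efficiencies of a periodic phase profile:
    squared Fourier coefficients of exp(i * phase) (scalar Fourier optics)."""
    N = len(phase)
    effs = {}
    for m in range(-n_orders, n_orders + 1):
        c = sum(cmath.exp(1j * (p - 2 * math.pi * m * k / N))
                for k, p in enumerate(phase)) / N
        effs[m] = abs(c) ** 2
    return effs

rng = random.Random(4)
sigma = 0.3   # rms phase error in radians (hypothetical etch-depth errors)
phase = [rng.gauss(0.0, sigma) for _ in range(4096)]
effs = order_efficiencies(phase)
# Random phase errors scatter energy out of the design (zero) order:
# eta_0 ~ exp(-sigma^2), with the lost energy spread over many orders.
```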

  3. A fully-nonlocal energy-based formulation and high-performance realization of the quasicontinuum method

    NASA Astrophysics Data System (ADS)

    Amelang, Jeff

    The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. 
Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.

  4. Equilibrium configurations of the conducting liquid surface in a nonuniform electric field

    NASA Astrophysics Data System (ADS)

    Zubarev, N. M.; Zubareva, O. V.

    2011-01-01

    Possible equilibrium configurations of the free surface of a conducting liquid deformed by a nonuniform external electric field are investigated. The liquid rests on an electrode that has the shape of a dihedral angle formed by two intersecting equipotential half-planes (a conducting wedge). It is assumed that the problem has plane symmetry: the surface is invariant under shifts along the edge of the dihedral angle. A one-parameter family of exact solutions for the shape of the surface is found, in which the opening angle of the region above the wedge serves as the parameter. The solutions are valid when the pressure difference between the inside and outside of the liquid is zero. For an arbitrary pressure difference, approximate solutions to the problem are constructed and it is demonstrated that the approximation error is small. It is found that, when the potential difference exceeds a certain threshold value, equilibrium solutions are absent. In this case, the region occupied by the liquid disintegrates, the disintegration scenario depending on the opening angle.

  5. Analysis of remote sensing data for evaluation of vegetation resources

    NASA Technical Reports Server (NTRS)

    1970-01-01

    Research has centered around: (1) completion of a study on the use of remote sensing techniques as an aid to multiple use management; (2) determination of the information transfer at various image resolution levels for wildland areas; and (3) determination of the value of small scale multiband, multidate photography for the analysis of vegetation resources. In addition, a substantial effort was made to upgrade the automatic image classification and spectral signature acquisition capabilities of the laboratory. It was found that: (1) Remote sensing techniques should be useful in multiple use management to provide a first-cut analysis of an area. (2) Imagery with 400-500 feet ground resolvable distance (GRD), such as that expected from ERTS-1, should allow discriminations to be made between woody vegetation, grassland, and water bodies with approximately 80% accuracy. (3) Barley and wheat acreages in Maricopa County, Arizona could be estimated with acceptable accuracies using small scale multiband, multidate photography. Sampling errors for acreages of wheat, barley, small grains (wheat and barley combined), and all cropland were 13%, 11%, 8% and 3% respectively.

  6. Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  7. Experience gained in testing a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  8. Simulation of an automatically-controlled STOL aircraft in a microwave landing system multipath environment

    NASA Technical Reports Server (NTRS)

    Toda, M.; Brown, S. C.; Burrous, C. N.

    1976-01-01

    The simulated response of a STOL aircraft to Microwave Landing System (MLS) multipath errors during final approach and touchdown is described. The MLS azimuth, elevation, and DME multipath errors were computed for a relatively severe multipath environment at Crissy Field, California, utilizing an MLS multipath simulation at MIT Lincoln Laboratory. A NASA/Ames six-degree-of-freedom simulation of an automatically controlled deHavilland C-8A STOL aircraft was used to determine the response to these errors. The results show that the aircraft response to all of the Crissy Field MLS multipath errors was small. The small MLS azimuth and elevation multipath errors did not result in any discernible aircraft motion, and the aircraft response to the relatively large (200-ft (61-m) peak) DME multipath was noticeable but small.

  9. Efficient computation of the joint probability of multiple inherited risk alleles from pedigree data.

    PubMed

    Madsen, Thomas; Braun, Danielle; Peng, Gang; Parmigiani, Giovanni; Trippa, Lorenzo

    2018-06-25

    The Elston-Stewart peeling algorithm enables estimation of an individual's probability of harboring germline risk alleles based on pedigree data, and serves as the computational backbone of important genetic counseling tools. However, it remains limited to the analysis of risk alleles at a small number of genetic loci because its computing time grows exponentially with the number of loci considered. We propose a novel, approximate version of this algorithm, dubbed the peeling and paring algorithm, which scales polynomially in the number of loci. This allows extending peeling-based models to include many genetic loci. The algorithm creates a trade-off between accuracy and speed, and allows the user to control this trade-off. We provide exact bounds on the approximation error and evaluate it in realistic simulations. Results show that the loss of accuracy due to the approximation is negligible in important applications. This algorithm will improve genetic counseling tools by increasing the number of pathogenic risk alleles that can be addressed. To illustrate we create an extended five genes version of BRCAPRO, a widely used model for estimating the carrier probabilities of BRCA1 and BRCA2 risk alleles and assess its computational properties. © 2018 WILEY PERIODICALS, INC.
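The core Elston-Stewart operation — summing a product of founder priors and Mendelian transmission probabilities over unobserved genotypes — can be illustrated on the smallest pedigree, a single trio at one locus. With k loci the genotype space grows to 3^k per individual, which is the exponential blow-up the peeling-and-paring approximation targets; the allele frequency below is hypothetical.

```python
from itertools import product

def genotype_prior(q):
    """Hardy-Weinberg founder prior over genotypes, keyed by the number of
    risk alleles carried (0, 1 or 2); q is a hypothetical allele frequency."""
    return {0: (1 - q) ** 2, 1: 2 * q * (1 - q), 2: q ** 2}

def child_carrier_prob(q):
    """Probability that a child carries at least one risk allele, obtained
    by peeling (summing out) the two parental genotypes."""
    prior = genotype_prior(q)
    total = 0.0
    for gf, gm in product(prior, prior):
        # P(transmitted allele is the risk allele | parent genotype) = g / 2
        p_no_risk = (1 - gf / 2.0) * (1 - gm / 2.0)
        total += prior[gf] * prior[gm] * (1 - p_no_risk)
    return total

# Sanity check: under Hardy-Weinberg, peeling the parents must reproduce
# the population carrier probability 1 - (1 - q)^2 exactly.
p = child_carrier_prob(0.01)
```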

  10. Improved model for correcting the ionospheric impact on bending angle in radio occultation measurements

    NASA Astrophysics Data System (ADS)

    Angling, Matthew J.; Elvidge, Sean; Healy, Sean B.

    2018-04-01

    The standard approach to removing the effects of the ionosphere from neutral-atmosphere GPS radio occultation measurements is to estimate a corrected bending angle from a combination of the L1 and L2 bending angles. This approach is known to produce systematic errors, and an extension to the standard ionospheric correction has been proposed that depends on the squared L1/L2 bending angle difference and a scaling term (κ). The variation of κ with height, time, season, location and solar activity (i.e. the F10.7 flux) has been investigated by applying a 1-D bending angle operator to electron density profiles provided by a monthly median ionospheric climatology model. As expected, the residual bending angle is well correlated (negatively) with the vertical total electron content (TEC). κ is more strongly dependent on the solar zenith angle, indicating that the TEC-dependent component of the residual error is effectively modelled by the squared L1/L2 bending angle difference term in the correction. The residual error from the ionospheric correction is likely to be a major contributor to the overall error budget of neutral-atmosphere retrievals between 40 and 80 km. Over this height range κ is approximately linear with height. A simple κ model has also been developed. It is independent of ionospheric measurements, but incorporates geophysical dependencies (i.e. solar zenith angle, solar flux, altitude). The global mean error (i.e. bias) and the standard deviation of the residual errors are reduced from −1.3×10⁻⁸ rad and 2.2×10⁻⁸ rad for the uncorrected case to −2.2×10⁻¹⁰ rad and 2.0×10⁻⁹ rad, respectively, for corrections using the κ model. Although a fixed scalar κ also reduces the bias of the global average, the selected value of κ (14 rad⁻¹) is appropriate only for a small band of locations around the solar terminator. In the daytime, the scalar κ is consistently too high, which results in an overcorrection of the bending angles and a positive bending angle bias. Similarly, in the nighttime, the scalar κ is too low; however, in this case the bending angles are already small and the impact of the choice of κ is less pronounced.
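    The correction described above can be sketched in a few lines: the linear dual-frequency combination is the standard bending-angle correction, and the κ term is the proposed extension. The sample bending angles below are illustrative values, not taken from the paper.

```python
import numpy as np

# GPS L1/L2 carrier frequencies (Hz)
F1, F2 = 1575.42e6, 1227.60e6

def corrected_bending_angle(alpha1, alpha2, kappa=0.0):
    """Standard linear-combination ionospheric correction of radio
    occultation bending angles (radians), optionally extended with the
    kappa term proportional to the squared L1/L2 difference."""
    alpha_c = (F1**2 * alpha1 - F2**2 * alpha2) / (F1**2 - F2**2)
    return alpha_c + kappa * (alpha1 - alpha2) ** 2

# Illustrative bending angles (rad); kappa in rad^-1
a1, a2 = 0.0200, 0.0203
print(corrected_bending_angle(a1, a2, kappa=14.0))
```

    With κ = 0 the function reduces to the standard correction; the κ term adds a small positive contribution proportional to the squared L1/L2 difference.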

  11. Approaches to stream solute load estimation for solutes with varying dynamics from five diverse small watersheds

    USGS Publications Warehouse

    Aulenbach, Brent T.; Burns, Douglas A.; Shanley, James B.; Yanai, Ruth D.; Bae, Kikang; Wild, Adam; Yang, Yang; Yi, Dong

    2016-01-01

    Estimating streamwater solute loads is a central objective of many water-quality monitoring and research studies, as loads are used for comparison with atmospheric inputs, to infer biogeochemical processes, and to assess whether water quality is improving or degrading. In this study, we evaluate loads and associated errors to determine the best load estimation technique among three methods (a period-weighted approach, the regression-model method, and the composite method) based on a solute's concentration dynamics and sampling frequency. We evaluated a broad range of concentration dynamics, varying with stream flow and season, using four dissolved solutes (sulfate, silica, nitrate, and dissolved organic carbon) at five diverse small watersheds (Sleepers River Research Watershed, VT; Hubbard Brook Experimental Forest, NH; Biscuit Brook Watershed, NY; Panola Mountain Research Watershed, GA; and Río Mameyes Watershed, PR) with fairly high-frequency sampling during a 10- to 11-yr period. Data sets with three different sampling frequencies were derived from the full data set at each site (weekly plus storm/snowmelt events, weekly, and monthly), and errors in loads were assessed for the study period, annually, and monthly. For solutes that had a moderate to strong concentration–discharge relation, the composite method performed best, unless the autocorrelation of the model residuals was <0.2, in which case the regression-model method was most appropriate. For solutes that had a nonexistent or weak concentration–discharge relation (model R² < about 0.3), the period-weighted approach was most appropriate. The lowest errors in loads were achieved for solutes with the strongest concentration–discharge relations. Sample and regression model diagnostics could be used to approximate overall accuracies and annual precisions. For the period-weighted approach, errors were lower when the variance in concentrations was lower, the degree of autocorrelation in the concentrations was higher, and the sampling frequency was higher. The period-weighted approach was the most sensitive to sampling frequency. For the regression-model and composite methods, errors were lower when the variance in model residuals was lower. For the composite method, errors were lower when the autocorrelation in the residuals was higher. Guidelines to determine the best load estimation method based on solute concentration–discharge dynamics and diagnostics are presented, and should be applicable to other studies.
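    As a concrete illustration of the simplest of the three methods, a period-weighted load can be computed by interpolating concentrations between samples and integrating concentration times discharge over the record. This is a minimal sketch with hypothetical data; the paper's error analyses and the regression-model and composite methods are not reproduced here.

```python
import numpy as np

def period_weighted_load(sample_times, sample_conc, flow_times, flow):
    """Period-weighted load: linearly interpolate concentration between
    samples, then integrate C(t) * Q(t) over the flow record with the
    trapezoidal rule."""
    conc = np.interp(flow_times, sample_times, sample_conc)
    cq = conc * flow
    dt = np.diff(flow_times)
    return float(np.sum(0.5 * (cq[1:] + cq[:-1]) * dt))

# Hypothetical record: time in days, concentration in mg/L, discharge in L/day
t_flow = np.linspace(0.0, 30.0, 31)
q = np.full_like(t_flow, 2.0e6)             # constant 2e6 L/day
t_samp = np.array([0.0, 10.0, 20.0, 30.0])
c_samp = np.array([3.0, 4.0, 3.5, 3.0])     # mg/L
print(period_weighted_load(t_samp, c_samp, t_flow, q))  # load in mg
```

    With constant concentration and discharge the estimate reduces to C × Q × duration, which makes the function easy to sanity-check.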

  12. On position/force tracking control problem of cooperative robot manipulators using adaptive fuzzy backstepping approach.

    PubMed

    Baigzadehnoe, Barmak; Rahmani, Zahra; Khosravi, Alireza; Rezaie, Behrooz

    2017-09-01

    In this paper, the position and force tracking control problem of a cooperative robot manipulator system handling a common rigid object with unknown dynamical models and unknown external disturbances is investigated. The universal approximation properties of fuzzy logic systems are employed to estimate the unknown system dynamics. In addition, by defining new state variables based on the integral and differential of position and orientation errors of the grasped object, the error system of the coordinated robot manipulators is constructed. Subsequently, by defining an appropriate change of coordinates and using the backstepping design strategy, an adaptive fuzzy backstepping position tracking control scheme is proposed for multi-robot manipulator systems. By utilizing the properties of internal forces, extra terms are also added to the control signals to address the force tracking problem. Moreover, it is shown that the proposed adaptive fuzzy backstepping position/force control approach ensures that all the signals of the closed-loop system are uniformly ultimately bounded, and that the tracking errors of both positions and forces converge to small desired values under proper selection of the design parameters. Finally, the theoretical results are tested on two three-link planar robot manipulators cooperatively handling a common object to illustrate the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three-dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two-dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
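    The core of the angular spectrum approach is a pair of FFTs and a spectral propagator. The sketch below is a minimal monochromatic, lossless version; the paper's input-plane construction (fast nearfield method) and optimal parameter choices are omitted, and the grid values are illustrative.

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a 2-D complex pressure distribution p0, sampled at
    pitch dx in the source plane, a distance z using the angular
    spectrum method: FFT, multiply by exp(i*kz*z), inverse FFT."""
    k = 2.0 * np.pi / wavelength
    ny, nx = p0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    # Evanescent components get imaginary kz and decay with z
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    P = np.fft.fft2(p0)
    return np.fft.ifft2(P * np.exp(1j * kz * z))

# Illustrative 1-MHz-like case: 1.5 mm wavelength, 0.1 mm pitch
src = np.zeros((64, 64), dtype=complex)
src[24:40, 24:40] = 1.0          # square piston in the input plane
field = angular_spectrum_propagate(src, 1e-4, 1.5e-3, 0.01)
print(np.abs(field).max())
```

    Repeating the call for a stack of z values fills the three-dimensional output volume one plane at a time.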

  14. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single- or multiple-species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single- or multiple-species simple allometry scaling, and the prediction error appears to be mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
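    Simple allometry, the baseline method compared above, fits CL = a·BW^b across species on a log-log scale and extrapolates to human body weight. The species data below are hypothetical, and the MLP/BRW corrections and the monkey liver blood flow method are deliberately omitted for brevity.

```python
import numpy as np

def predict_human_cl(bw_kg, cl, bw_human=70.0):
    """Simple allometric scaling: fit log CL = log a + b log BW across
    preclinical species, then extrapolate to a 70-kg human."""
    b, log_a = np.polyfit(np.log(bw_kg), np.log(cl), 1)
    return float(np.exp(log_a) * bw_human ** b)

# Hypothetical mouse/rat/monkey/dog data: body weight (kg), clearance (mL/min)
bw = np.array([0.02, 0.25, 5.0, 10.0])
cl = np.array([0.9, 4.0, 30.0, 60.0])
print(predict_human_cl(bw, cl))
```

    If the preclinical data follow an exact power law, the fit recovers the exponent b and the extrapolation is exact, which provides a simple check of the implementation.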

  15. Predicting crystalline lens fall caused by accommodation from changes in wavefront error

    PubMed Central

    He, Lin; Applegate, Raymond A.

    2011-01-01

    PURPOSE To develop and illustrate a method for estimating crystalline lens decentration as a function of accommodative response using changes in wavefront error, and to show the method and its limitations using previously published data (2004) from 2 iridectomized monkey eyes, so that clinicians understand how spherical aberration can induce coma, in particular in intraocular lens surgery. SETTINGS College of Optometry, University of Houston, Houston, USA. DESIGN Evaluation of diagnostic test or technology. METHODS Lens decentration was estimated by displacing downward the wavefront error of the lens with respect to the limiting aperture (7.0 mm) and the ocular first-surface wavefront error for each accommodative response (0.00 to 11.00 diopters) until measured values of vertical coma matched previously published experimental data (2007). Lens decentration was also calculated using an approximation formula that included only spherical aberration and vertical coma. RESULTS The change in calculated vertical coma was consistent with downward lens decentration. Calculated downward lens decentration peaked at approximately 0.48 mm of vertical decentration in the right eye and approximately 0.31 mm in the left eye using all Zernike modes through the 7th radial order. Calculated lens decentration using only the coma and spherical aberration formula peaked at approximately 0.45 mm in the right eye and approximately 0.23 mm in the left eye. CONCLUSIONS Lens fall as a function of accommodation was quantified noninvasively using changes in vertical coma driven principally by accommodation-induced changes in spherical aberration. The newly developed method was valid for a large pupil only. PMID:21700108

  16. An improved error assessment for the GEM-T1 gravitational model

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.

    1988-01-01

    Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.

  17. On the Accuracy of Double Scattering Approximation for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Marshak, Alexander L.

    2011-01-01

    Interpretation of multi-angle spectro-polarimetric data in remote sensing of atmospheric aerosols requires fast and accurate methods of solving the vector radiative transfer equation (VRTE). The single and double scattering approximations could provide an analytical framework for the inversion algorithms and are relatively fast; however, accuracy assessments of these approximations for aerosol atmospheres in the atmospheric window channels have been missing. This paper provides such an analysis for a vertically homogeneous aerosol atmosphere with weak and strong asymmetry of scattering. In both cases, the double scattering approximation gives a high-accuracy result (relative error approximately 0.2%) only for a low optical path of about 10⁻². As the error rapidly grows with optical thickness, a full VRTE solution is required for practical remote sensing analysis. It is shown that at low optical thicknesses the scattering anisotropy is not important for either the reflected or the transmitted polarization components of the radiation.

  18. On the dynamics of approximating schemes for dissipative nonlinear equations

    NASA Technical Reports Server (NTRS)

    Jones, Donald A.

    1993-01-01

    Since one can rarely write down the analytical solutions to nonlinear dissipative partial differential equations (PDEs), it is important to understand whether, and in what sense, the behavior of approximating schemes for these equations reflects the true dynamics of the original equations. Further, because standard error estimates between approximations of the true solutions coming from spectral, finite difference, or finite element schemes and the exact solutions grow exponentially in time, such analysis provides little insight into the infinite-time behavior of a given approximating scheme. The notion of the global attractor has been useful in quantifying the infinite-time behavior of dissipative PDEs, such as the Navier-Stokes equations. Loosely speaking, the global attractor is all that remains of a sufficiently large bounded set in phase space mapped infinitely forward in time under the evolution of the PDE. Though the attractor has been shown to have some nice properties (it is compact, connected, and finite dimensional, for example), it is in general quite complicated. Nevertheless, the global attractor gives a way to understand how the infinite-time behavior of approximating schemes, such as those coming from a finite difference, finite element, or spectral method, relates to that of the original PDE. Indeed, one can often show that such approximations also have a global attractor. We therefore only need to understand how the structure of the attractor for the PDE behaves under approximation. This is by no means a trivial task. Several interesting results have been obtained in this direction. However, we will not go into the details. We mention here that approximations generally lose information about the system no matter how accurate they are. There are examples showing that certain parts of the attractor may be lost under arbitrarily small perturbations of the original equations.

  19. Optical Testing of Diamond Machined, Aspheric Mirrors for Groundbased, Near-IR Astronomy

    NASA Technical Reports Server (NTRS)

    Chambers, V. John; Mink, Ronald G.; Ohl, Raymond G.; Connelly, Joseph A.; Mentzell, J. Eric; Arnold, Steven M.; Greenhouse, Matthew A.; Winsor, Robert S.; MacKenty, John W.

    2002-01-01

    The Infrared Multi-Object Spectrometer (IRMOS) is a facility-class instrument for the Kitt Peak National Observatory 4 and 2.1 meter telescopes. IRMOS is a near-IR (0.8-2.5 micron) spectrometer and operates at approximately 80 K. The 6061-T651 aluminum bench and mirrors constitute an athermal design. The instrument produces simultaneous spectra at low- to mid-resolving power (R=lambda/delta lambda= 300-3000) of approximately 100 objects in its 2.8 x 2.0 arcmin field. We describe ambient and cryogenic optical testing of the IRMOS mirrors across a broad range in spatial frequency (figure error, mid-frequency error, and microroughness). The mirrors include three rotationally symmetric, off-axis conic sections, one off-axis biconic, and several flat fold mirrors. The symmetric mirrors include convex and concave prolate and oblate ellipsoids. They range in aperture from 94x86 mm to 286x269 mm and in f-number from 0.9 to 2.4. The biconic mirror is concave and has a 94x76 mm aperture, R(sub x)=377 mm, k(sub x)=0.0778, R(sub y)=407 mm, and k(sub y)=0.1265 and is decentered by -2 mm in X and 227 mm in Y. All of the mirrors have an aspect ratio of approximately 6:1. The surface error fabrication tolerances are less than 10 nm RMS microroughness, 'best effort' for mid-frequency error, and less than 63.3 nm RMS figure error. Ambient temperature (approximately 293 K) testing is performed for each of the three surface error regimes, and figure testing is also performed at approximately 80 K. Operation of the ADE Phaseshift MicroXAM white light interferometer (micro-roughness) and the Bauer Model 200 profilometer (mid-frequency error) is described. Both the sag and conic values of the aspheric mirrors make these tests challenging. Figure testing is performed using a Zygo GPI interferometer, custom computer generated holograms (CGH), and optomechanical alignment fiducials. Cryogenic CGH null testing is discussed in detail. 
We discuss complications such as the change in prescription with temperature and thermal gradients. Correction for the effect of the dewar window is also covered. We discuss the error budget for the optical test and alignment procedure. Data reduction is accomplished using commercial optical design and data analysis software packages. Results from CGH testing at cryogenic temperatures are encouraging thus far.

  20. Impact of spot charge inaccuracies in IMPT treatments.

    PubMed

    Kraan, Aafke C; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M

    2017-08-01

    Spot charge is one parameter of a pencil-beam scanning dose delivery system whose accuracy is typically high but whose required accuracy has not been investigated. In this work we quantify the impact of spot charge inaccuracies on the dose distribution in patients. Knowing the effect of charge errors is relevant for conventional proton machines as well as for new-generation proton machines, where ensuring accurate charge may be challenging. Through perturbation of spot charge in treatment plans for seven patients and a phantom, we evaluated the dose impact of absolute (up to 5×10⁶ protons) and relative (up to 30%) charge errors. We investigated the dependence on beam width by studying scenarios with small, medium and large beam sizes. Treatment plan statistics included the Γ passing rate, dose-volume histograms and dose differences. The allowable absolute charge error for small-spot plans was about 2×10⁶ protons; larger limits would be allowed if larger spots were used. For relative errors, the maximum allowable error size was about 13%, 8% and 6% for small, medium and large spots, respectively. Dose distributions turned out to be surprisingly robust against random spot charge perturbation. Our study suggests that ensuring spot charge errors as small as 1-2%, as is commonly aimed at in conventional proton therapy machines, is clinically not strictly needed. © 2017 American Association of Physicists in Medicine.

  1. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness for one or many machines and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  2. State estimation for autopilot control of small unmanned aerial vehicles in windy conditions

    NASA Astrophysics Data System (ADS)

    Poorman, David Paul

    The use of small unmanned aerial vehicles (UAVs) in both the military and civil realms is growing, largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large-scale aircraft have been well known and understood for decades, and usually involve a complex array of expensive, high-accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation can be more difficult for small UAVs than for larger aircraft because small UAVs carry limited sensor suites to keep costs down and are more susceptible to wind. The purpose of this research is to evaluate the ability of existing state estimation methods for small UAVs to accurately capture the aircraft states necessary for autopilot control in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Two state estimation methods that employ only accelerometer, gyro, and GPS measurements are then introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings-level ascent. It is shown that in zero wind, the first method produces significant steady-state attitude errors in both a coordinated turn and a wings-level ascent. In Dryden wind, it produces large noise on its attitude estimates and a non-zero mean error that increases as gyro bias increases. The second method is shown to exhibit no steady-state error inherent to its design in the tested scenarios. It can correct for attitude errors that arise from both integration error and gyro bias, but it suffers from a lack of attitude error observability. The attitude errors are shown to be more observable in wind, but the increased integration error in wind outweighs the increase in attitude corrections that this added observability brings, resulting in larger attitude errors in wind. Overall, this work highlights technical deficiencies in both state estimation methods that could be addressed in the future to enhance state estimation for small UAVs in windy conditions.

  3. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
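    The variance reduction idea in ingredient (3) can be illustrated with a two-level toy estimator: write E[fine] = E[coarse] + E[fine − coarse] and spend most samples on the cheap surrogate, since the correction term has small variance. Both model functions here are simple stand-ins, not the HDG or reduced-basis solvers of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fine(x):
    """Stand-in for the expensive high-fidelity output."""
    return np.sin(x) + 0.001 * x**2

def coarse(x):
    """Stand-in for a cheap surrogate highly correlated with fine(x)."""
    return np.sin(x)

# Two-level estimator: many cheap samples, few expensive ones
x_many = rng.uniform(0.0, 1.0, 100_000)   # surrogate samples
x_few = rng.uniform(0.0, 1.0, 1_000)      # high-fidelity samples
estimate = coarse(x_many).mean() + (fine(x_few) - coarse(x_few)).mean()
print(estimate)
```

    Because fine − coarse is small and smooth, its Monte Carlo mean converges with few samples, so almost all of the cost goes to the surrogate, mirroring how the multilevel method shifts the burden to the reduced basis approximations.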

  4. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...

  5. ROSAT X-Ray Observation of the Second Error Box for SGR 1900+14

    NASA Technical Reports Server (NTRS)

    Li, P.; Hurley, K.; Vrba, F.; Kouveliotou, C.; Meegan, C. A.; Fishman, G. J.; Kulkarni, S.; Frail, D.

    1997-01-01

    The positions of the two error boxes for the soft gamma repeater (SGR) 1900+14 were determined by the "network synthesis" method, which employs observations by the Ulysses gamma-ray burst and CGRO BATSE instruments. The location of the first error box has been observed at optical, infrared, and X-ray wavelengths, resulting in the discovery of a ROSAT X-ray point source and a curious double infrared source. We have recently used the ROSAT HRI to observe the second error box to complete the counterpart search. A total of six X-ray sources were identified within the field of view. None of them falls within the network synthesis error box, and a 3 sigma upper limit to any X-ray counterpart was estimated to be 6.35×10⁻¹⁴ erg/sq cm/s. The closest source is approximately 3 arcmin away and has an estimated unabsorbed flux of 1.5×10⁻¹² erg/sq cm/s. Unlike the first error box, there is no supernova remnant near the second error box. The closest one, G43.9+1.6, lies approximately 2.6 deg away. For these reasons, we believe that the first error box is more likely to be the correct one.

  6. Model error in covariance structure models: Some implications for power and Type I error

    PubMed Central

    Coffman, Donna L.

    2010-01-01

    The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302
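    The theoretical power referred to above follows MacCallum, Browne, and Sugawara's noncentral chi-square procedure, with noncentrality (N − 1)·df·ε². A brief sketch, with illustrative parameter values, is:

```python
from scipy.stats import ncx2

def power_close_fit(n, df, rmsea_alt, rmsea0=0.05, alpha=0.05):
    """Power of the test of close fit (H0: RMSEA <= rmsea0) when the true
    RMSEA is rmsea_alt, using noncentral chi-square distributions with
    noncentrality (n - 1) * df * rmsea**2 (MacCallum et al., 1996).
    Setting rmsea0 = 0 gives the test of exact fit."""
    nc0 = (n - 1) * df * rmsea0 ** 2
    nc_alt = (n - 1) * df * rmsea_alt ** 2
    crit = ncx2.ppf(1.0 - alpha, df, nc0)
    return 1.0 - ncx2.cdf(crit, df, nc_alt)

print(power_close_fit(n=200, df=50, rmsea_alt=0.08))
```

    Power grows with sample size and with the gap between the hypothesized and true RMSEA, which is the behavior the study compares against empirical power under model error.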

  7. A refinement of the combination equations for evaporation

    USGS Publications Warehouse

    Milly, P.C.D.

    1991-01-01

    Most combination equations for evaporation rely on a linear expansion of the saturation vapor-pressure curve around the air temperature. Because the temperature at the surface may differ from this temperature by several degrees, and because the saturation vapor-pressure curve is nonlinear, this approximation leads to a certain degree of error in those evaporation equations. It is possible, however, to introduce higher-order polynomial approximations for the saturation vapor-pressure curve and to derive a family of explicit equations for evaporation having any desired degree of accuracy. Under the linear approximation, the new family of equations for evaporation reduces, in particular cases, to the combination equations of H. L. Penman (Natural evaporation from open water, bare soil and grass, Proc. R. Soc. London, Ser. A193, 120-145, 1948) and of subsequent workers. Comparison of the linear and quadratic approximations leads to a simple approximate expression for the error associated with the linear case. Equations based on the conventional linear approximation consistently underestimate evaporation, sometimes by a substantial amount. © 1991 Kluwer Academic Publishers.
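    The size of the linearization error is easy to see numerically. The sketch below uses the Tetens formula for the saturation vapor-pressure curve (a common choice, though not necessarily the one used in the paper) and compares it with its linear expansion about the air temperature.

```python
import numpy as np

def esat(t_c):
    """Saturation vapor pressure over water (kPa), Tetens formula, t in deg C."""
    return 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))

def esat_linearized(t_surf, t_air, dt=1e-3):
    """Penman-style linear expansion of esat about the air temperature."""
    slope = (esat(t_air + dt) - esat(t_air - dt)) / (2.0 * dt)
    return esat(t_air) + slope * (t_surf - t_air)

t_air, t_surf = 20.0, 25.0
print(esat(t_surf) - esat_linearized(t_surf, t_air))  # kPa missed by the linear form
```

    Because the curve is convex, the tangent line lies below it on both sides of the air temperature, which is why the linear combination equations consistently underestimate evaporation.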

  8. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with the boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of BVPs. The scheme uses the generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. as an error evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for the approximate solving of singular BVPs.
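    The construction can be illustrated on a regular (non-singular) toy BVP: approximate y with a sine series that satisfies the boundary conditions exactly, and minimise the squared residual of the ODE at collocation points. A bare-bones hill-climbing search stands in for PSO/WCA/HS here; everything below is a hypothetical example, not one of the paper's test problems.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(0.05, 0.95, 19)      # collocation points in (0, 1)

def wrf(a):
    """Weighted residual of y'' = -pi^2 sin(pi x), y(0) = y(1) = 0, for
    y(x) = sum_n a_n sin(n pi x); the sine basis satisfies the boundary
    conditions by construction."""
    n = np.arange(1, len(a) + 1)[:, None]
    ypp = -np.sum(a[:, None] * (n * np.pi) ** 2 * np.sin(n * np.pi * xs), axis=0)
    return np.sum((ypp + np.pi ** 2 * np.sin(np.pi * xs)) ** 2)

# Stochastic hill-climbing with a shrinking step, standing in for a metaheuristic
best = rng.normal(size=3)
best_f = wrf(best)
for i in range(20000):
    cand = best + rng.normal(scale=0.5 * 0.9995 ** i, size=3)
    f = wrf(cand)
    if f < best_f:
        best, best_f = cand, f
print(best)   # the exact solution y = sin(pi x) has coefficients (1, 0, 0)
```

    Replacing the hill-climber with PSO, the water cycle algorithm, or harmony search changes only the search loop; the WRF objective and the boundary-condition-satisfying basis stay the same.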

  9. Padé Approximant and Minimax Rational Approximation in Standard Cosmology

    NASA Astrophysics Data System (ADS)

    Zaninetti, Lorenzo

    2016-02-01

    The luminosity distance in the standard cosmology as given by ΛCDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift z = 10. A similar procedure for the Taylor expansion of the luminosity distance gives an error of 4% already at redshift z = 0.7; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is carried out from a statistical point of view.
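    The Padé-beats-Taylor behaviour reported above is generic for functions whose Taylor series converges slowly. A toy illustration with ln(1+x) (not the actual luminosity-distance integrand): a [2/2] Padé approximant is built from the same five Taylor coefficients as the fourth-order polynomial, yet remains accurate at arguments where the polynomial degrades:

```python
import math

# Taylor coefficients c0..c4 of ln(1+x) about x = 0
c = [0.0, 1.0, -1.0 / 2.0, 1.0 / 3.0, -1.0 / 4.0]

def taylor4(x):
    """Fourth-order Taylor polynomial of ln(1+x)."""
    return sum(c[k] * x ** k for k in range(5))

# [2/2] Pade approximant (p0 + p1 x + p2 x^2) / (1 + q1 x + q2 x^2),
# matched to the same five Taylor coefficients.  The matching conditions
#   c2*q1 + c1*q2 = -c3
#   c3*q1 + c2*q2 = -c4
# form a 2x2 linear system, solved here by Cramer's rule.
det = c[2] * c[2] - c[1] * c[3]
q1 = (c[1] * c[4] - c[2] * c[3]) / det
q2 = (c[3] * c[3] - c[2] * c[4]) / det
p0 = c[0]
p1 = c[1] + c[0] * q1
p2 = c[2] + c[1] * q1 + c[0] * q2

def pade22(x):
    return (p0 + p1 * x + p2 * x * x) / (1.0 + q1 * x + q2 * x * x)

x = 1.0
exact = math.log(1.0 + x)
# At x = 1 the Pade form is far closer to ln 2 than the Taylor polynomial
# built from identical local information.
```

The same construction at higher order is what underlies the luminosity-distance comparison in the record above: both approximants use only local expansion data, but the rational form extends the useful range.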

  10. Estimates of fetch-induced errors in Bowen-ratio energy-budget measurements of evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.

    2004-01-01

    Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.

  11. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for the time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
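    The kind of optimal-rate statement made above can be checked numerically on a much simpler model problem. A hedged sketch, assuming a 1D Poisson problem with linear elements in place of the full Poisson-Nernst-Planck system: the discrete L2 error should shrink like O(h^2), i.e. halving h should quarter the error:

```python
import math

def solve_poisson_fem(n):
    """Linear-element FEM for -u'' = pi^2 sin(pi x) on [0,1], u(0)=u(1)=0.
    On a uniform mesh the stiffness system reduces to the tridiagonal form
    -u_{i-1} + 2 u_i - u_{i+1} = h^2 f(x_i), solved by the Thomas algorithm."""
    h = 1.0 / n
    m = n - 1                        # number of interior nodes
    sub = [-1.0] * m                 # sub-diagonal (sub[0] unused)
    diag = [2.0] * m
    sup = [-1.0] * m                 # super-diagonal (sup[-1] unused)
    d = [h * h * math.pi ** 2 * math.sin(math.pi * (i + 1) * h)
         for i in range(m)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, m):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m
    u[-1] = d[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - sup[i] * u[i + 1]) / diag[i]
    # Discrete L2 error against the exact solution u = sin(pi x).
    err2 = sum((u[i] - math.sin(math.pi * (i + 1) * h)) ** 2
               for i in range(m))
    return math.sqrt(h * err2)

e1, e2 = solve_poisson_fem(16), solve_poisson_fem(32)
rate = math.log2(e1 / e2)   # close to 2: the optimal L2 rate for linear elements
```

The observed rate near 2 is exactly the "optimal" order the abstract refers to; a suboptimal estimate would show up as a rate below the best the element space allows.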

  12. Evaluation of Acoustic Doppler Current Profiler measurements of river discharge

    USGS Publications Warehouse

    Morlock, S.E.

    1996-01-01

    The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.

  13. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.

  14. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060

  15. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    NASA Astrophysics Data System (ADS)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  16. A family of approximate solutions and explicit error estimates for the nonlinear stationary Navier-Stokes problem

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.; Karel, S.

    1975-01-01

    An algorithm for solving the nonlinear stationary Navier-Stokes problem is developed. Explicit error estimates are given. This mathematical technique is potentially adaptable to the separation problem.

  17. Composite Gauss-Legendre Quadrature with Error Control

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
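    The idea summarized above, composite Gauss-Legendre quadrature with a built-in accuracy check, can be sketched compactly. This is a generic sketch, not the article's algorithm: it uses the two-point Gauss-Legendre rule per subinterval and controls the error by doubling the number of subintervals until successive estimates agree to a tolerance:

```python
import math

# Two-point Gauss-Legendre nodes and weights on the reference interval [-1, 1].
GAUSS2 = [(-1.0 / math.sqrt(3.0), 1.0), (1.0 / math.sqrt(3.0), 1.0)]

def composite_gauss2(f, a, b, n):
    """Two-point Gauss-Legendre rule applied on n equal subintervals of [a, b]."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        mid = a + (i + 0.5) * h          # midpoint of subinterval i
        for t, w in GAUSS2:
            total += w * f(mid + 0.5 * h * t)
    return total * 0.5 * h

def gauss2_with_error_control(f, a, b, tol=1e-10, max_n=2 ** 20):
    """Double the subinterval count until successive estimates agree to tol."""
    n = 1
    prev = composite_gauss2(f, a, b, n)
    while n < max_n:
        n *= 2
        cur = composite_gauss2(f, a, b, n)
        if abs(cur - prev) < tol:
            return cur
        prev = cur
    return prev
```

For comparison with Newton-Cotes, as the article does: the two-point Gauss rule is exact for cubics, matching Simpson's rule in order while using one fewer function evaluation per subinterval.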

  18. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.

    2007-01-01

    Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work attempts to address two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators, using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.

  20. A priori error estimates for an hp-version of the discontinuous Galerkin method for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.; Oden, J. Tinsley

    1993-01-01

    A priori error estimates are derived for hp-versions of the finite element method for discontinuous Galerkin approximations of a model class of linear, scalar, first-order hyperbolic conservation laws. These estimates are derived in a mesh-dependent norm in which the coefficients depend upon both the local mesh size h_K and a number p_K, which can be identified with the spectral order of the local approximation over each element.

  1. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

    Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990

  2. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.; ,; ,; ,; ,; ,

    2000-01-01

    Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, the test function would be selected from an acceptable class of candidates by an error minimization procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.

  4. Magnetic Resonance Imaging Distortion and Targeting Errors from Strong Rare Earth Metal Magnetic Dental Implant Requiring Revision.

    PubMed

    Seong-Cheol, Park; Chong Sik, Lee; Seok Min, Kim; Eu Jene, Choi; Do Hee, Lee; Jung Kyo, Lee

    2016-12-22

    Recently, the use of magnetic dental implants has been re-popularized with the introduction of strong rare-earth (e.g., neodymium) magnets. Unrecognized magnetic dental implants can cause critical magnetic resonance image distortions. We report a case involving surgical failure caused by a magnetic dental implant. A 62-year-old man underwent deep brain stimulation for medically insufficiently controlled Parkinson's disease. Stereotactic magnetic resonance imaging performed for the first deep brain stimulation showed that the overdenture had been removed. However, a dental implant remained and contained a neodymium magnet, which was unrecognized at the time of imaging; the magnet caused localized non-linear distortions that were largest around the dental magnets. In the magnetic field, the subthalamic area was distorted by a 4.6 mm right shift and counter-clockwise rotation. However, the distortions were visually subtle in the operative field and small for distant stereotactic markers, with approximately 1-2 mm distortions. The surgeon considered the distortion to be normal asymmetry or variation. Stereotactic marker distortion was calculated to be in the acceptable range in the surgical planning software. Targeting errors, approximately 5 mm on the right side and 2 mm on the left side, occurred postoperatively. Both leads were revised after the removal of the dental magnets. Dental magnets may cause surgical failures and should be checked for and removed before stereotactic surgery. Our findings should be considered when reviewing surgical precautions and making distortion-detection algorithm improvements.

  5. Response effects in the perception of conjunctions of colour and form.

    PubMed

    Chmiel, N

    1989-01-01

    Two experiments addressed the question of whether visual search for a target defined by a conjunction of colour and form requires a central, serial, attentional process, whereas detection of a single feature, such as colour, is preattentive, as proposed by the feature-integration theory of attention. Experiment 1 investigated conjunction and feature search using small array sizes of up to five elements, under conditions which precluded eye-movements, in contrast to previous studies. The results were consistent with the theory: conjunction search showed the effect of adding distractors to the display, and the slopes of the curves relating RT to array size were in the approximate ratio of 2:1, consistent with a central, serial search process, exhaustive for absence responses and self-terminating for presence responses. Feature search showed no significant effect of distractors for presence responses. Experiment 2 manipulated the response requirements in conjunction search, using vocal response in a GO-NO GO procedure, in contrast to Experiment 1, which used key-press responses in a YES-NO procedure. Strikingly, presence-response RT was not affected significantly by the number of distractors in the array. The slope relating RT to array size was 3.92. The absence RT slope was 30.56, producing a slope ratio of approximately 8:1. There was no interaction of errors with array size or the presence and absence conditions, implying that RT-error trade-offs did not produce this slope ratio. This result suggests that feature-integration theory is at least incomplete.

  6. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec^2, the measured tracking error was about 0.1 mrad. The basic components in this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 × 10^-10 W/m^2; the ultimate nighttime sensitivity is approximately 3 × 10^-14 W/m^2. Experimental tests were performed to evaluate both the dynamic characteristics of this system and the system sensitivity. Dynamic performance of the system was obtained using a small rocket covered with retroreflective material, launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity of the system was checked using an efficient retroreflector mounted on a light aircraft; this aircraft was tracked out to a maximum range of 15 km, consistent with the daytime sensitivity measured by other means. The system has also been used to passively track stars and the Echo I satellite, including a +7.5 magnitude star; the signal-to-noise ratio in that experiment indicates that it should be possible to track a +12.5 magnitude star.

  7. Gigantic Circular Shock Acoustic Waves in the Ionosphere Triggered by the Launch of FORMOSAT-5 Satellite

    NASA Astrophysics Data System (ADS)

    Chou, Min-Yang; Shen, Ming-Hsueh; Lin, Charles C. H.; Yue, Jia; Chen, Chia-Hung; Liu, Jann-Yenq; Lin, Jia-Ting

    2018-02-01

    The launch of SpaceX's Falcon 9 rocket delivered Taiwan's FORMOSAT-5 satellite to orbit from Vandenberg Air Force Base in California at 18:51:00 UT on 24 August 2017. To facilitate the delivery of FORMOSAT-5 to its mission orbit altitude of 720 km, the Falcon 9 made a steep initial ascent. During the launch, the supersonic rocket induced gigantic circular shock acoustic waves (SAWs) in total electron content (TEC) over the western United States, beginning approximately 5 min after liftoff. The circular SAWs emanated outward for a duration of 20 min, with horizontal phase velocities of 629-726 m/s, horizontal wavelengths of 390-450 km, and a period of 10.28 ± 1 min. These are the largest rocket-induced circular SAWs on record, extending approximately 114-128°W in longitude and 26-39°N in latitude (approximately 1,500 km in diameter), a result of the unique, nearly vertical attitude of the rocket during orbit insertion. The rocket-exhaust plume subsequently created a large-scale ionospheric plasma hole (approximately 900 km in diameter) with 10-70% TEC depletions in comparison with the reference days. While the circular SAWs, with a relatively small amplitude of TEC fluctuations, likely did not introduce range errors into Global Navigation Satellite Systems navigation and positioning, the subsequent ionospheric plasma hole could have caused spatial gradients in the ionospheric plasma, potentially leading to a range error of approximately 1 m.

  8. Calibration of a stack of NaI scintillators at the Berkeley Bevalac

    NASA Technical Reports Server (NTRS)

    Schindler, S. M.; Buffington, A.; Lau, K.; Rasmussen, I. L.

    1983-01-01

    An analysis of the carbon and argon data reveals that essentially all of the charge-changing fragmentation reactions within the stack can be identified and removed by imposing simple criteria relating the observed energy deposition profiles to the expected Bragg curve depositions. It is noted that these criteria are even capable of identifying approximately one-third of the expected neutron-stripping interactions, which in these cases have anomalous deposition profiles. The contribution of mass error from uncertainty in ΔE has an upper limit of 0.25 percent for Mn; this produces an associated mass error for the experiment of about 0.14 amu. It is believed that this uncertainty will change little with changing gamma. Residual errors in the mapping produce even smaller mass errors for lighter isotopes, whereas photoelectron fluctuations and delta-ray effects are approximately the same independent of the charge and energy deposition.

  9. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

    The acceptance test errors of a computer software project were analyzed to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project comprises approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors, of which 374 were found during acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were then broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The existing development methodology is examined for ways of improvement, and a basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness at avoiding and detecting errors.

  10. Equivalent model optimization with cyclic correction approximation method considering parasitic effect for thermoelectric coolers.

    PubMed

    Wang, Ning; Chen, Jiajun; Zhang, Kun; Chen, Mingming; Jia, Hongzhi

    2017-11-21

    As thermoelectric coolers (TECs) have become highly integrated in high-heat-flux chips and high-power devices, the parasitic effect between component layers has become increasingly obvious. In this paper, a cyclic correction method for the TEC model is proposed using the equivalent parameters of the proposed simplified model, which were refined from the intrinsic parameters and parasitic thermal conductance. The results show that the simplified model agrees well with the data of a commercial TEC under different heat loads. Furthermore, the temperature difference of the simplified model is closer to the experimental data than the conventional model and the model containing parasitic thermal conductance at large heat loads. The average errors in the temperature difference between the proposed simplified model and the experimental data are no more than 1.6 K, and the error is only 0.13 K when the absorbed heat power Q_c is equal to 80% of the maximum achievable absorbed heat power Q_max. The proposed method and model provide a more accurate solution for integrated TECs that are small in size.

  11. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) abandon the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training with optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of an inexact criterion becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable with, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for mitigating finite sample problems in moderate-dimensional PR tasks.
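    The pairwise-fusion idea can be illustrated with a minimal one-vs-one scheme. The sketch below uses nearest-centroid pairwise linear classifiers and majority voting as a simplified stand-in for the K-L and WLW probability-coupling methods studied in the paper; the class centroids are assumed for illustration:

    ```python
    import numpy as np

    # Illustrative class centroids (assumed, not from the paper).
    centroids = {0: np.array([0.0, 0.0]),
                 1: np.array([5.0, 0.0]),
                 2: np.array([0.0, 5.0])}

    def predict(x):
        """Fuse pairwise linear decisions by majority vote."""
        votes = {c: 0 for c in centroids}
        classes = sorted(centroids)
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                a, b = classes[i], classes[j]
                # The nearest-centroid rule is a linear classifier for each pair.
                da = np.linalg.norm(x - centroids[a])
                db = np.linalg.norm(x - centroids[b])
                votes[a if da <= db else b] += 1
        return max(votes, key=votes.get)
    ```

    In the paper, each pairwise decision instead comes from a specially trained SLP, and the fusion weights the pairwise outputs rather than counting votes.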

  12. Adaptive neural network decentralized backstepping output-feedback control for nonlinear large-scale systems with time delays.

    PubMed

    Tong, Shao Cheng; Li, Yong Ming; Zhang, Hua-Guang

    2011-07-01

    In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.

  13. The velocity and vorticity fields of the turbulent near wake of a circular cylinder

    NASA Technical Reports Server (NTRS)

    Wallace, James; Ong, Lawrence; Moin, Parviz

    1995-01-01

    The purpose of this research is to provide a detailed experimental database of velocity and vorticity statistics in the very near wake (x/d less than 10) of a circular cylinder at a Reynolds number of 3900. This study has determined that estimations of the streamwise velocity component in flow fields with large nonzero cross-stream components are not accurate. Similarly, X-wire measurements of the u and v velocity components in flows containing large w are also subject to errors due to binormal cooling. Using the look-up table (LUT) technique, and by calibrating the X-wire probe used here to include the range of expected angles of attack (+/- 40 deg), accurate X-wire measurements of the instantaneous u and v velocity components in the very near wake region of a circular cylinder have been accomplished. The approximate two-dimensionality of the present flow field was verified with four-wire probe measurements and, to some extent, with the spanwise correlation measurements with the multisensor rake. Hence, binormal cooling errors in the present X-wire measurements are small.

  14. Radar detection with the Neyman-Pearson criterion using supervised-learning-machines trained with the cross-entropy error

    NASA Astrophysics Data System (ADS)

    Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel

    2013-12-01

    The application of supervised learning machines trained to minimize the Cross-Entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that realizes a discriminant function, whose output is compared with a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function the learning machine approximates during training, and on the application of a sufficient condition for a discriminant function to be usable as an approximation of the optimum Neyman-Pearson (NP) detector. In this article, the function a supervised learning machine approximates after being trained to minimize the Cross-Entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection while maintaining the probability of false alarm below or equal to a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
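    A minimal numerical illustration of this principle: a logistic unit trained with the cross-entropy error approximates the posterior P(H1 | x), and thresholding its output at a level estimated from noise-only data fixes the probability of false alarm. The Gaussian signal model and all constants below are assumptions for the sketch, not the article's experiments:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x0 = rng.normal(0.0, 1.0, 4000)          # H0: noise only
    x1 = rng.normal(2.0, 1.0, 4000)          # H1: signal + noise (assumed shift)
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(4000), np.ones(4000)])

    # Train a single logistic unit by gradient descent on the cross-entropy error.
    w, b = 0.0, 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= 0.1 * np.mean((p - y) * x)
        b -= 0.1 * np.mean(p - y)

    # Fix P_fa = 0.1 by thresholding at the 90th percentile of H0 scores.
    scores0 = 1.0 / (1.0 + np.exp(-(w * x0 + b)))
    scores1 = 1.0 / (1.0 + np.exp(-(w * x1 + b)))
    thr = np.quantile(scores0, 0.9)
    p_fa = np.mean(scores0 > thr)            # close to 0.1 by construction
    p_d = np.mean(scores1 > thr)             # the resulting detection probability
    ```

    Because the trained output is a monotone function of the likelihood ratio in this model, the thresholded detector behaves like an NP detector at the chosen false-alarm level.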

  15. The Dipole Segment Model for Axisymmetrical Elongated Asteroids

    NASA Astrophysics Data System (ADS)

    Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong

    2018-02-01

    Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
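    A sketch of the dipole segment potential may help fix ideas: the closed-form potential of a homogeneous straight segment plus the potentials of two point masses at its extremities. The gravitational constant is standard; the geometry and masses used in the far-field check are illustrative assumptions:

    ```python
    import numpy as np

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def segment_potential(r1, r2, m_s, L):
        """Potential of a homogeneous rod of mass m_s and length L;
        r1 and r2 are distances from the field point to the rod's ends."""
        return -G * m_s / L * np.log((r1 + r2 + L) / (r1 + r2 - L))

    def dipole_segment_potential(p, end_a, end_b, m_s, m1, m2):
        """Segment plus two point masses m1, m2 at the extremities."""
        r1 = np.linalg.norm(p - end_a)
        r2 = np.linalg.norm(p - end_b)
        L = np.linalg.norm(end_b - end_a)
        return (segment_potential(r1, r2, m_s, L)
                - G * m1 / r1 - G * m2 / r2)
    ```

    Far from the body the expression tends to the point-mass potential -G(m_s + m1 + m2)/r, which is a quick sanity check; the paper's fitting step instead minimizes the acceleration error against a reference model in a test zone near the asteroid.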

  16. E2 and SN2 Reactions of X(-) + CH3CH2X (X = F, Cl); an ab Initio and DFT Benchmark Study.

    PubMed

    Bento, A Patrícia; Solà, Miquel; Bickelhaupt, F Matthias

    2008-06-01

    We have computed consistent benchmark potential energy surfaces (PESs) for the anti-E2, syn-E2, and SN2 pathways of X(-) + CH3CH2X with X = F and Cl. This benchmark has been used to evaluate the performance of 31 popular density functionals, covering local-density approximation, generalized gradient approximation (GGA), meta-GGA, and hybrid density-functional theory (DFT). The ab initio benchmark has been obtained by exploring the PESs using a hierarchical series of ab initio methods [up to CCSD(T)] in combination with a hierarchical series of Gaussian-type basis sets (up to aug-cc-pVQZ). Our best CCSD(T) estimates show that the overall barriers for the various pathways increase in the order anti-E2 (X = F) < SN2 (X = F) < SN2 (X = Cl) ∼ syn-E2 (X = F) < anti-E2 (X = Cl) < syn-E2 (X = Cl). Thus, anti-E2 dominates for F(-) + CH3CH2F, and SN2 dominates for Cl(-) + CH3CH2Cl, while syn-E2 is in all cases the least favorable pathway. Best overall agreement with our ab initio benchmark is obtained by representatives from each of the three categories of functionals, GGA, meta-GGA, and hybrid DFT, with mean absolute errors in, for example, central barriers of 4.3 (OPBE), 2.2 (M06-L), and 2.0 kcal/mol (M06), respectively. Importantly, the hybrid functional BHandH and the meta-GGA M06-L yield incorrect trends and qualitative features of the PESs (in particular, an erroneous preference for SN2 over the anti-E2 in the case of F(-) + CH3CH2F) even though they are among the best functionals as measured by their small mean absolute errors of 3.3 and 2.2 kcal/mol in reaction barriers. OLYP and B3LYP have somewhat higher mean absolute errors in central barriers (5.6 and 4.8 kcal/mol, respectively), but the error distribution is somewhat more uniform, and as a consequence, the correct trends are reproduced.

  17. Subsystem density functional theory with meta-generalized gradient approximation exchange-correlation functionals.

    PubMed

    Śmiga, Szymon; Fabiano, Eduardo; Laricchia, Savio; Constantin, Lucian A; Della Sala, Fabio

    2015-04-21

    We analyze the methodology and the performance of subsystem density functional theory (DFT) with meta-generalized gradient approximation (meta-GGA) exchange-correlation functionals for non-bonded molecular systems. Meta-GGA functionals depend on the Kohn-Sham kinetic energy density (KED), which is not known as an explicit functional of the density. Therefore, they cannot be directly applied in subsystem DFT calculations. We propose a Laplacian-level approximation to the KED which overcomes this limitation and provides a simple and accurate way to apply meta-GGA exchange-correlation functionals in subsystem DFT calculations. The density and energy errors so obtained, with respect to the corresponding supermolecular calculations, are comparable with those of conventional approaches, depending almost exclusively on the approximations in the non-additive kinetic embedding term. An embedding energy error decomposition explains the accuracy of our method.

  18. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Inviscid Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular to irregular, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes, namely two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of the node-centered (NC) schemes is somewhat lower than that of the cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options, with the lowest complexity and second-order discretization errors.
On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with weighted LSQ method provides accurate gradients. Defect correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme that offers low complexity, second-order discretization errors, and fast convergence.
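    The least-squares gradient reconstruction at the heart of several of these schemes can be sketched in a few lines. The stencil below is an illustrative assumption; for an exactly linear field the unweighted LSQ fit recovers the gradient exactly, which is the basis of the accuracy comparisons above:

    ```python
    import numpy as np

    def lsq_gradient(xc, phic, xs, phis):
        """Unweighted least-squares gradient: fit g so that
        phic + g . (x_k - xc) best matches each neighbor value phi_k."""
        A = xs - xc                      # rows: offsets to stencil neighbors
        b = phis - phic                  # value differences
        g, *_ = np.linalg.lstsq(A, b, rcond=None)
        return g

    # Illustrative (assumed) cell center and perturbed stencil.
    xc = np.array([0.0, 0.0])
    xs = np.array([[1.0, 0.1], [-0.9, 0.2], [0.1, 1.1], [0.2, -1.0]])
    phi = lambda p: 3.0 * p[..., 0] - 2.0 * p[..., 1] + 1.0
    g = lsq_gradient(xc, phi(xc), xs, phi(xs))   # recovers [3, -2] exactly
    ```

    A weighted variant scales each row of A and entry of b by an inverse-distance weight; as the abstract notes, that choice affects the robustness of the defect-correction iterations on stretched, perturbed grids.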

  19. Analysis of DSN software anomalies

    NASA Technical Reports Server (NTRS)

    Galorath, D. D.; Hecht, H.; Hecht, M.; Reifer, D. J.

    1981-01-01

    A categorized database of software errors discovered during the various stages of development and operational use of the Deep Space Network (DSN) Mark 3 System was developed. A study team identified several existing error classification schemes (taxonomies), prepared a detailed annotated bibliography of the error taxonomy literature, and produced a new classification scheme that was tuned to the DSN anomaly reporting system and encapsulated the work of others. Based upon the DSN/RCI error taxonomy, error data on approximately 1000 reported DSN/Mark 3 anomalies were analyzed, interpreted, and classified. The error data were then summarized, and histograms were produced highlighting key tendencies.

  20. Development of a Germanium Small-Animal SPECT System

    NASA Astrophysics Data System (ADS)

    Johnson, Lindsay C.; Ovchinnikov, Oleg; Shokouhi, Sepideh; Peterson, Todd E.

    2015-10-01

    Advances in fabrication techniques, electronics, and mechanical cooling systems have given rise to germanium detectors suitable for biomedical imaging. We are developing a small-animal SPECT system that uses a double-sided Ge strip detector. The detector's excellent energy resolution may help to reduce scatter and simplify processing of multi-isotope imaging, while its ability to measure depth of interaction has the potential to mitigate parallax error in pinhole imaging. The detector's energy resolution is <1% FWHM at 140 keV and its spatial resolution is approximately 1.5 mm FWHM. The prototype system described has a single-pinhole collimator with a 1-mm diameter and a 70-degree opening angle with a focal length variable between 4.5 and 9 cm. Phantom images from the gantry-mounted system are presented, including the NEMA NU-2008 phantom and a hot-rod phantom. Additionally, the benefit of energy resolution is demonstrated by imaging a dual-isotope phantom with 99mTc and 123I without cross-talk correction.

  1. Spatial resolution properties of motion-compensated tomographic image reconstruction methods.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A

    2012-07-01

    Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

  2. How long will asteroids on retrograde orbits survive?

    NASA Astrophysics Data System (ADS)

    Kankiewicz, Paweł; Włodarczyk, Ireneusz

    2018-05-01

    Generally, no common scenario exists for the origin of minor planets with high orbital inclinations. This applies especially to objects whose orbital inclinations are much greater than 90° (retrograde asteroids). Since the discovery of Dioretsa in 1999, approximately 100 small bodies have been classified as retrograde asteroids. A small number of them were reclassified as comets due to cometary activity. There are only 25 multi-opposition retrograde asteroids with a relatively large number of observations and well-determined orbits. We studied the orbital evolution of numbered and multi-opposition retrograde asteroids by numerical integration up to 1 Gy forward and backward in time. Additionally, we analyzed the propagation of orbital elements with the observational errors, determined dynamical lifetimes, and studied their chaotic properties. In this way, we obtained quantitative parameters describing the long-term stability of the orbits in both the past and the future, which in turn allowed us to estimate the lifetimes of these objects and how long they will survive in the Solar System.

  3. Gyrokinetic neoclassical study of the bootstrap current in the tokamak edge pedestal with fully non-linear Coulomb collisions

    DOE PAGES

    Hager, Robert; Chang, C. S.

    2016-04-08

    As a follow-up on the drift-kinetic study of the non-local bootstrap current in the steep edge pedestal of tokamak plasma by Koh et al. [Phys. Plasmas 19, 072505 (2012)], a gyrokinetic neoclassical study is performed with gyrokinetic ions and drift-kinetic electrons. Besides the gyrokinetic improvement of ion physics from the drift-kinetic treatment, a fully non-linear Fokker-Planck collision operator—that conserves mass, momentum, and energy—is used instead of Koh et al.'s linearized collision operator in consideration of the possibility that the ion distribution function is non-Maxwellian in the steep pedestal. An inaccuracy in Koh et al.'s result is found in the steep edge pedestal that originated from a small error in the collisional momentum conservation. The present study concludes that (1) the bootstrap current in the steep edge pedestal is generally smaller than what has been predicted from the small banana-width (local) approximation [e.g., Sauter et al., Phys. Plasmas 6, 2834 (1999) and Belli et al., Plasma Phys. Controlled Fusion 50, 095010 (2008)], (2) the plasma flow evaluated from the local approximation can significantly deviate from the non-local results, and (3) the bootstrap current in the edge pedestal, where the passing particle region is small, can be dominantly carried by the trapped particles in a broad trapped boundary layer. In conclusion, a new analytic formula based on numerous gyrokinetic simulations using various magnetic equilibria and plasma profiles with self-consistent Grad-Shafranov solutions is constructed.

  6. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed that minimizes the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on MSE criteria. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.
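    The two key ingredients, KDE of the error PDF and a quadratic deviation index against a target PDF, can be sketched in one dimension (the paper works with a 2D PDF over time and error; the bandwidth and target parameters here are illustrative assumptions):

    ```python
    import numpy as np

    def kde(errors, grid, h=0.1):
        """Gaussian kernel density estimate of the modeling-error PDF."""
        u = (grid[:, None] - errors[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (len(errors) * h * np.sqrt(2 * np.pi))

    def pdf_deviation_index(errors, grid, target_sigma=0.1):
        """Quadratic deviation between the estimated error PDF and a
        narrow zero-mean Gaussian target PDF (the shaping objective)."""
        p_hat = kde(errors, grid)
        p_tgt = np.exp(-0.5 * (grid / target_sigma)**2) / (target_sigma * np.sqrt(2 * np.pi))
        dg = grid[1] - grid[0]
        return np.sum((p_hat - p_tgt)**2) * dg

    grid = np.linspace(-2.0, 2.0, 401)
    rng = np.random.default_rng(1)
    small = pdf_deviation_index(rng.normal(0.0, 0.1, 2000), grid)   # near-target errors
    large = pdf_deviation_index(rng.normal(0.5, 0.4, 2000), grid)   # biased, wide errors
    ```

    In the proposed method this index, not the MSE, is what gradient descent drives down when updating the WNN parameters.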

  9. Towards more reliable automated multi-dose dispensing: retrospective follow-up study on medication dose errors and product defects.

    PubMed

    Palttala, Iida; Heinämäki, Jyrki; Honkanen, Outi; Suominen, Risto; Antikainen, Osmo; Hirvonen, Jouni; Yliruusi, Jouko

    2013-03-01

    To date, little is known about the applicability of different types of pharmaceutical dosage forms in an automated high-speed multi-dose dispensing process. The purpose of the present study was to identify and further investigate various process-induced and/or product-related limitations associated with the multi-dose dispensing process. The rates of product defects and dose dispensing errors in automated multi-dose dispensing were retrospectively investigated during a 6-month follow-up period. The study was based on the analysis of process data from a total of nine automated high-speed multi-dose dispensing systems. Special attention was paid to the dependence of multi-dose dispensing errors/product defects on pharmaceutical tablet properties (such as shape, dimensions, weight, scored lines, coatings, etc.) in order to profile the tablet forms most suitable for automated dose dispensing systems. The relationship between the risk of dose dispensing errors and tablet characteristics was visualized by creating a principal component analysis (PCA) model for the outcome of dispensed tablets. The two most common process-induced failures identified in multi-dose dispensing are tablet defects and unexpected product transitions in the medication cassette (dose dispensing errors). The tablet defects are product-dependent failures, while the tablet transitions depend on the automated multi-dose dispensing system used. The occurrence of tablet defects is approximately twice as common as that of tablet transitions. The optimal tablet for high-speed multi-dose dispensing would be round, relatively small or middle-sized, and film-coated, without any scored line. Commercial tablet products can be profiled and classified based on their suitability for a high-speed multi-dose dispensing process.
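    The PCA step can be sketched with a standardized feature matrix of tablet properties; the features and values below are entirely illustrative assumptions (rows are products; columns could be diameter, thickness, weight, and a coating flag):

    ```python
    import numpy as np

    # Assumed, illustrative tablet-property matrix (not the study's data).
    X = np.array([[8.0, 3.0, 200.0, 1.0],
                  [9.0, 3.5, 250.0, 1.0],
                  [12.0, 5.0, 600.0, 0.0],
                  [6.0, 2.5, 120.0, 1.0],
                  [11.0, 4.5, 500.0, 0.0]])

    Z = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize each feature
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt.T                            # principal-component scores
    explained = s**2 / np.sum(s**2)              # variance explained per component
    ```

    Plotting the first two columns of `scores`, colored by observed error rate, is the kind of visualization a PCA model of dispensing outcomes supports.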

  10. Study of a Solar Sensor for use in Space Vehicle Orientation Control Systems

    NASA Technical Reports Server (NTRS)

    Spencer, Paul R.

    1961-01-01

    The solar sensor described herein may be used for a variety of space operations requiring solar orientation. The use of silicon solar cells as the sensing elements provides the sensor with sufficient capability to withstand the hazards of a space environment. A method of arranging the cells in a sensor consists simply of mounting them at a large angle to the base. The use of an opaque shield placed between the cells and perpendicular to the base enhances the small-angle sensitivity while adding slightly to the bulk of the sensor. The difference in illumination of these cells as the result of an oblique incidence of the light rays from the reference source causes an electrical error signal which, when used in a battery-bridge circuit, requires a minimum of electrical processing for use in a space-vehicle orientation control system. An error which could occur after prolonged operation of the sensor is that resulting from asymmetrical aging of opposite cells. This could be periodically corrected with a balance potentiometer. A more routine error in the sensor is that produced by reflected earth radiation. This error may be eliminated over a large portion of the operation time by restricting the field of view and, consequently, the capture capability. A more sophisticated method of eliminating this error is to use separate sensors, for capture and fine pointing, along with a switching device. An experimental model has been constructed and tested to yield an output sensitivity of 1.2 millivolts per second of arc with a load resistance of 1,000 ohms and a reference light source of approximately 1,200 foot-candles delivered at the sensor.

  11. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
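The paper derives an analytic correction specific to the maximal reliability estimator; as a generic illustration of the bias-correction idea, the jackknife below removes the O(1/n) bias of a plug-in estimator. For the plug-in (biased) variance the jackknife correction happens to be exact, which makes a convenient check:

```python
import random
import statistics

def jackknife_bias_corrected(estimator, sample):
    """Generic jackknife bias correction: n*theta_hat - (n-1)*mean(leave-one-out).

    This is a stand-in illustration, not the paper's analytic correction.
    """
    n = len(sample)
    theta_full = estimator(sample)
    loo = [estimator(sample[:i] + sample[i + 1:]) for i in range(n)]
    theta_bar = sum(loo) / n
    return n * theta_full - (n - 1) * theta_bar

# Example: correcting the biased plug-in variance estimator.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(30)]
plugin_var = lambda xs: statistics.pvariance(xs)   # divides by n (biased)
corrected = jackknife_bias_corrected(plugin_var, data)
unbiased = statistics.variance(data)               # divides by n-1 (unbiased)
```

For the variance, `corrected` coincides with the unbiased estimator up to floating-point error, a classical property of the jackknife.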

  12. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
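The approximation step can be sketched as follows; the variances and time constants are assumed values, not the paper's. Each first-order Markov (Gauss-Markov) process contributes a Lorentzian bump to the power spectral density, so a sum of five processes with spread time constants can track the power-law PSD implied by an Allan variance model:

```python
import math
import random

random.seed(1)

def gauss_markov(sigma, tau, dt, n):
    """First-order Gauss-Markov process x' = -x/tau + white noise.

    The discrete form is x[k+1] = phi*x[k] + q*w[k]; q is chosen so the
    stationary standard deviation equals sigma.
    """
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + q * random.gauss(0.0, 1.0)
        out.append(x)
    return out

# Five processes with time constants spread over four decades (assumed),
# summed to approximate the clock's random frequency fluctuation.
taus = [1.0, 10.0, 100.0, 1000.0, 10000.0]
procs = [gauss_markov(1.0, t, dt=1.0, n=2000) for t in taus]
freq_noise = [sum(vals) for vals in zip(*procs)]
```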

  13. Full-scale Wind-tunnel Tests to Determine a Satisfactory Location for a Service Pitot-static Tube on a Low-wing Monoplane

    NASA Technical Reports Server (NTRS)

    Parsons, John F

    1936-01-01

    Surveys of the air flow over the upper surface of four different airfoils were made in the full-scale wind tunnel to determine a satisfactory location for a fixed Pitot-static tube on a low-wing monoplane. The selection was based on small interference errors, less than 5 percent, and on a consideration of structural and ground handling problems. The most satisfactory location on the airfoils without flaps that were investigated was 10 percent of the chord aft and 25 percent of the chord above the trailing edge of a section approximately 40 percent of the semispan inboard of the wing tip. No satisfactory location was found near the wing when the flaps were deflected.

  14. Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Brown, James W.; Evans, Robert H.

    1988-01-01

    The radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols has been determined with an exact multiple scattering code to improve the analysis of Nimbus-7 CZCS imagery. It is shown that the single scattering approximation normally used to compute this radiance can result in errors of up to 5 percent for small and moderate solar zenith angles. A scheme to include the effect of variations in the surface pressure in the exact computation of the Rayleigh radiance is discussed. The results of an application of these computations to CZCS imagery suggest that accurate atmospheric corrections can be obtained for solar zenith angles at least as large as 65 deg.

  15. Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.

    PubMed

    Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong

    2015-03-01

    This brief considers the problem of neural-network (NN)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown nonlinear functions of the PMSM drive system, and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Under the proposed adaptive neural DSC, only one adaptive parameter is required, and the structure of the designed neural controller is much simpler than in some existing results in the literature, while still guaranteeing that the tracking error converges to a small neighborhood of the origin. Simulations are given to illustrate the effectiveness and potential of the new design technique.
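The key DSC device is the first-order filter that replaces analytic differentiation of each virtual control; a toy version of that single step (filter time constant and rates assumed, not taken from the brief):

```python
def dsc_filter(z, v_cmd, tau, dt):
    """One Euler step of the DSC low-pass filter tau*z' + z = v_cmd.

    Backstepping would differentiate the virtual control v_cmd analytically
    at every design step (the 'explosion of complexity'); DSC instead
    differentiates the filter state, z' = (v_cmd - z)/tau.
    """
    return z + dt * (v_cmd - z) / tau

# The filter output tracks a (here constant) virtual control command.
z = 0.0
for _ in range(1000):
    z = dsc_filter(z, v_cmd=1.0, tau=0.05, dt=0.001)
```

After 1 s of simulated time (20 filter time constants), `z` has converged to the command to within floating-point noise.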

  16. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. 
The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit at the cost of reduced duty cycle. The error reduction allows the clinical target volume to planning target volume (CTV-PTV) margin to be reduced, leading to decreased normal-tissue toxicity and possible dose escalation. The CTV-PTV margin is also evaluated to quantify clinical benefits of EKF-GPRN+ prediction.
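The gating rule itself reduces to simple thresholding on the covariance trace, as the abstract notes; the trace values and threshold below are hypothetical:

```python
def gate_beam(trace_cov, threshold):
    """Keep the beam on only while the trace of the approximate predictive
    covariance (the prediction-uncertainty proxy) stays below a threshold."""
    return trace_cov <= threshold

# Hypothetical per-time-step covariance traces over one breathing cycle (mm^2)
traces = [0.4, 0.5, 1.8, 2.6, 1.1, 0.6]
beam_on = [gate_beam(t, threshold=1.0) for t in traces]

# Pausing the beam over uncertain instances trades duty cycle for accuracy.
duty_cycle = sum(beam_on) / len(beam_on)
```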

  17. On the robustness of bucket brigade quantum RAM

    NASA Astrophysics Data System (ADS)

    Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa

    2015-12-01

    We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Using a result of Regev and Schiff (ICALP '08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^{-n/2}) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion (Harrow et al 2009 Phys. Rev. Lett. 103 150502) or quantum machine learning (Rebentrost et al 2014 Phys. Rev. Lett. 113 130503) that require only a polynomial number of queries, the error rate need only be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the architecture to lose its primary advantage of a small number of 'active' gates, since all components have to be actively error corrected.

  18. The effect of toe marker placement error on joint kinematics and muscle forces using OpenSim gait simulation.

    PubMed

    Xu, Hang; Merryweather, Andrew; Bloswick, Donald; Mao, Qi; Wang, Tong

    2015-01-01

    Marker placement can be a significant source of error in biomechanical studies of human movement. Toe marker placement error is amplified by footwear, since placement of the toe marker on the shoe relies only on an approximation of the underlying anatomical landmarks. Three total knee replacement subjects were recruited, and three self-selected-speed gait trials per subject were collected. The effect of height variation between toe and heel markers across four types of footwear was evaluated through joint kinematics and muscle forces computed in OpenSim; the reference condition placed the toe and heel markers at the same vertical height. The results showed that the residual variances of joint kinematics had an approximately linear relationship with toe marker placement error for the lower limb joints. Ankle dorsiflexion/plantarflexion is most sensitive to toe marker placement error. The influence of toe marker placement error is generally larger for hip flexion/extension and rotation than for hip abduction/adduction and knee flexion/extension. The muscle forces responded to the residual variance of joint kinematics to varying degrees, depending on each muscle's function for the specific joint motion. This study demonstrates the importance of evaluating marker error for joint kinematics and muscle forces when interpreting clinical gait analyses and treatment interventions.

  19. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  20. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate the Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Real geomagnetic measurements are not subject to any model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  1. Assumption-free estimation of the genetic contribution to refractive error across childhood.

    PubMed

    Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy

    2015-01-01

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: Across age 7-15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8-9 years old. Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. 
This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.

  2. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  3. Free vibration of multiwall carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Wang, C. Y.; Ru, C. Q.; Mioduchowski, A.

    2005-06-01

    A multiple-elastic shell model is applied to systematically study free vibration of multiwall carbon nanotubes (MWNTs). Using Flugge [Stresses in Shells (Springer, Berlin, 1960)] equations of elastic shells, vibrational frequencies and associated modes are calculated for MWNTs of innermost radii 5 and 0.65 nm, respectively. The emphasis is placed on the effect of interlayer van der Waals (vdW) interaction on free vibration of MWNTs. Our results show that the interlayer vdW interaction has a crucial effect on radial (R) modes of large-radius MWNTs (e.g., of the innermost radius 5 nm), but is less pronounced for R modes of small-radius MWNTs (e.g., of the innermost radius 0.65 nm), and usually negligible for torsional (T) and longitudinal (L) modes of MWNTs. This is attributed to the fact that the interlayer vdW interaction, characterized by a radius-independent vdW interaction coefficient, depends on radial deflections only, and is dominant only for large-radius MWNTs of lower radial rigidity but less pronounced for small-radius MWNTs of much higher radial rigidity. As a result, the R modes of large-radius MWNTs are typically collective motions of almost all nested tubes, and the R modes of small-radius MWNTs, as well as the T and L modes of MWNTs, are basically vibrations of individual tubes. In particular, an approximate single-shell model is suggested to replace the multiple-shell model in calculating the lowest frequency of R mode of thin MWNTs (defined by the innermost radius-to-thickness ratio not less than 4) with relative errors less than 10%. In addition, the simplified Flugge single equation is adopted to substitute the exact Flugge equations in determining the R-mode frequencies of MWNTs with relative errors less than 10%.

  4. New, small, fast acting blood glucose meters--an analytical laboratory evaluation.

    PubMed

    Weitgasser, Raimund; Hofmann, Manuela; Gappmayer, Brigitta; Garstenauer, Christa

    2007-09-22

    Patients and medical personnel are eager to use blood glucose meters that are easy to handle and fast acting. We questioned whether the accuracy and precision of these new, small, and lightweight devices would meet analytical laboratory standards, and tested four meters under the above-mentioned conditions. Approximately 300 capillary blood samples were collected and tested using two devices of each brand and two different types of glucose test strips. Blood from the same samples was used for comparison. Results were evaluated using maximum deviations of 5% and 10% from the comparative method, the error grid analysis, the overall deviation of the devices, linear regression analysis, and the CVs for measurement in series. Of all 1196 measurements, deviations of less than 5% and 10% from the reference method, respectively, were found for the FreeStyle (FS) meter in 69.5% and 96% of cases, the Glucocard X Meter (GX) in 44% and 75%, the One Touch Ultra (OT) in 29% and 60%, and the Wellion True Track (WT) in 28.5% and 58%. The error grid analysis gave 99.7% for FS, 99% for GX, 98% for OT and 97% for WT in zone A. The remainder of the values lay within zone B. Linear regression analysis gave similar results. CVs for measurement in series showed higher deviations for OT and WT compared to FS and GX. The four new, small, and fast-acting glucose meters fulfil clinically relevant analytical laboratory requirements, making them appropriate for use by medical personnel. However, with regard to the tight and restrictive limits of the ADA recommendations, the devices are still in need of improvement. This should be taken into account when the devices are used by primarily inexperienced persons and is relevant for further industrial development of such devices.
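The 5%/10% deviation criterion used above reduces to a few lines of arithmetic; the paired readings below are invented for illustration:

```python
def within_limits(meter, reference, limit):
    """Fraction of meter readings within +/-limit (relative) of the reference."""
    hits = sum(1 for m, r in zip(meter, reference)
               if abs(m - r) / r <= limit)
    return hits / len(meter)

# Hypothetical paired readings (mg/dL): meter vs. laboratory reference
ref =   [90, 100, 110, 150, 200, 250]
meter = [93,  96, 118, 149, 212, 240]

p5 = within_limits(meter, ref, 0.05)    # fraction within 5% of reference
p10 = within_limits(meter, ref, 0.10)   # fraction within 10% of reference
```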

  5. A Monte Carlo Study on the Performance of a Corrected Formula for Epsilon Approximate Suggested by Lecoutre.

    ERIC Educational Resources Information Center

    Chen, Ru San; Dunlap, William P.

    1994-01-01

    The present simulation study confirms that the corrected epsilon approximate test of B. Lecoutre yields a less biased estimation of population epsilon and reduces Type I error rates when compared to the epsilon approximate test of H. Huynh and L. S. Feldt. (SLD)

  6. Assessment of Spectral Doppler in Preclinical Ultrasound Using a Small-Size Rotating Phantom

    PubMed Central

    Yang, Xin; Sun, Chao; Anderson, Tom; Moran, Carmel M.; Hadoke, Patrick W.F.; Gray, Gillian A.; Hoskins, Peter R.

    2013-01-01

    Preclinical ultrasound scanners are used to measure blood flow in small animals, but the potential errors in blood velocity measurements have not been quantified. This investigation rectifies this omission through the design and use of phantoms and evaluation of measurement errors for a preclinical ultrasound system (Vevo 770, Visualsonics, Toronto, ON, Canada). A ray model of geometric spectral broadening was used to predict velocity errors. A small-scale rotating phantom, made from tissue-mimicking material, was developed. True and Doppler-measured maximum velocities of the moving targets were compared over a range of angles from 10° to 80°. Results indicate that the maximum velocity was overestimated by up to 158% by spectral Doppler. There was good agreement (<10%) between theoretical velocity errors and measured errors for beam-target angles of 50°–80°. However, for angles of 10°–40°, the agreement was not as good (>50%). The phantom is capable of validating the performance of blood velocity measurement in preclinical ultrasound. PMID:23711503
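A simplified form of a ray model of geometric spectral broadening illustrates why the overestimation grows with beam-target angle: the aperture edge insonates the target at an angle smaller than the nominal beam angle used for correction. The half-aperture value below is assumed, and this one-line formula is only a sketch of the fuller model used in the paper:

```python
import math

def max_velocity_overestimate(theta_deg, half_aperture_deg):
    """Ratio of estimated to true maximum velocity under a simple ray model.

    The aperture edge makes an angle (theta - half_aperture) with the flow,
    while the angle correction uses theta itself, inflating the estimate by
    cos(theta - half_aperture) / cos(theta). Assumed simplification.
    """
    t = math.radians(theta_deg)
    a = math.radians(half_aperture_deg)
    return math.cos(t - a) / math.cos(t)

# Percent overestimation at two beam-target angles (10 deg half-aperture assumed)
err_60 = (max_velocity_overestimate(60.0, 10.0) - 1.0) * 100.0
err_80 = (max_velocity_overestimate(80.0, 10.0) - 1.0) * 100.0
```

The overestimate grows steeply toward large angles, consistent with the large errors the phantom study reports at high beam-target angles.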

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
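A distribution-free convex approximation of a single chance constraint can be illustrated with a Chebyshev-style tightening; the voltage limit, error standard deviation, and violation probability below are assumed, and the paper's formulation of course covers the full multi-period AC OPF rather than one scalar constraint:

```python
import math

def tightened_voltage_limit(v_max, sigma, eps):
    """Distributionally robust tightening of a voltage chance constraint.

    By the one-sided Chebyshev inequality, enforcing
        v <= v_max - sqrt((1 - eps) / eps) * sigma
    guarantees Pr[v <= v_max] >= 1 - eps for *any* forecasting-error
    distribution with standard deviation sigma (a conservative bound).
    """
    return v_max - math.sqrt((1.0 - eps) / eps) * sigma

# Assumed numbers: 1.05 p.u. limit, 0.005 p.u. error std, 5% violation budget
limit = tightened_voltage_limit(1.05, sigma=0.005, eps=0.05)
```

Replacing the probabilistic constraint by this deterministic, tighter linear constraint is what keeps the reformulated OPF convex.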

  8. Local error estimates for adaptive simulation of the Reaction–Diffusion Master Equation via operator splitting

    PubMed Central

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2015-01-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
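A toy version of the adaptive idea, on a scalar problem u' = 0.5u - u^2 split into a linear growth part and a nonlinear decay part (both with exact flows, so only the splitting error remains); the tolerance and controller constants are assumed, and the real RDME setting splits reaction and diffusion operators instead:

```python
import math

# Split du/dt = 0.5*u - u*u into two exactly solvable parts.
flow_a = lambda u, dt: u / (1.0 + u * dt)        # exact flow of u' = -u^2
flow_b = lambda u, dt: u * math.exp(0.5 * dt)    # exact flow of u' = 0.5*u

def lie_step(u, dt):
    """One first-order Lie (sequential) splitting step."""
    return flow_b(flow_a(u, dt), dt)

def adaptive_split(u0, t_end, dt, tol):
    """Lie splitting with a step-doubling estimate of the local error."""
    u, t = u0, 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        coarse = lie_step(u, dt)
        fine = lie_step(lie_step(u, dt / 2.0), dt / 2.0)
        err = abs(fine - coarse)       # estimates the local splitting error
        if err <= tol:                 # accept the more accurate fine step
            u, t = fine, t + dt
        # controller for a method with O(dt^2) local error; reject/retry otherwise
        dt *= 0.9 * max(0.3, min(3.0, math.sqrt(tol / max(err, 1e-16))))
    return u

u_num = adaptive_split(1.0, 1.0, dt=0.5, tol=1e-6)
u_exact = 0.5 / (1.0 - 0.5 * math.exp(-0.5))     # logistic solution at t = 1
```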

  9. Local error estimates for adaptive simulation of the Reaction-Diffusion Master Equation via operator splitting.

    PubMed

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2014-06-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity.

  10. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif, E-mail: ertekin@illinois.edu

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both the cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  11. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    PubMed

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. 
The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in (1)H-NMR methodology and the more general application of quantitative metabolomics.
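A miniature version of the detection step: the paper's nonparametric smoother is replaced here by a simple moving average, and the metabolite trends, dilution size, and threshold are invented for illustration. A dilution affects all metabolites in a sample at once, which is why the median percent deviation across trends at each timepoint is the detection statistic:

```python
import statistics

def detect_dilution_points(trends, threshold=0.025):
    """Flag timepoints where the median percent deviation from a smooth fit
    (3-point moving average, standing in for a nonparametric smoother)
    exceeds the threshold across all metabolite trends."""
    n = len(trends[0])
    deviations = [[] for _ in range(n)]
    for trend in trends:
        for i in range(n):
            lo, hi = max(0, i - 1), min(n, i + 2)
            smooth = sum(trend[lo:hi]) / (hi - lo)
            deviations[i].append((trend[i] - smooth) / smooth)
    return [i for i in range(n)
            if abs(statistics.median(deviations[i])) > threshold]

# Hypothetical metabolite time-courses with a common 10% dilution at t = 3
base = [
    [1.0, 1.2, 1.4, 1.6, 1.8, 2.0],
    [5.0, 4.5, 4.0, 3.5, 3.0, 2.5],
    [2.0, 2.0, 2.1, 2.1, 2.2, 2.2],
]
diluted = [[v * (0.90 if i == 3 else 1.0) for i, v in enumerate(t)]
           for t in base]

flagged = detect_dilution_points(diluted, threshold=0.025)
```

Note the crude moving average also smears the deviation onto neighbors of the diluted timepoint; the paper's smoother and threshold choice are designed to localize the error more sharply.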

  12. A variational regularization of Abel transform for GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of the measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity at lower altitudes. In particular, it builds up a negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not involve integrating the error-bearing measurements and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. 
A noteworthy finding is that at heights and in areas where the measurement bias is presumably small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). Based on the results presented in this study, it is concluded that VR offers a definite advantage over AI in the quality of refractivity.
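
    The contrast between applying the analytic inverse and inverting the forward transform can be illustrated with a toy discretization (the onion-peeling weights and Tikhonov parameter below are illustrative choices, not the paper's adjoint-based formulation):

```python
import numpy as np

def abel_forward_matrix(n, dr=1.0):
    """Forward Abel transform P_i = sum_j W[i, j] * f_j, with f taken
    piecewise-constant on annuli [j*dr, (j+1)*dr] (onion-peeling weights)."""
    W = np.zeros((n, n))
    for i in range(n):
        y = i * dr
        for j in range(i, n):
            r0, r1 = j * dr, (j + 1) * dr
            W[i, j] = 2.0 * (np.sqrt(r1**2 - y**2)
                             - np.sqrt(max(r0**2 - y**2, 0.0)))
    return W

n = 50
r = np.arange(n) + 0.5
f = np.exp(-((r / 10.0) ** 2))   # toy radial refractivity-like profile
W = abel_forward_matrix(n)
p = W @ f                        # simulated line-of-sight measurement

# invert the *forward* transform as a regularized least-squares problem
# rather than applying the analytic inverse to the measurement directly
lam = 1e-6
f_rec = np.linalg.solve(W.T @ W + lam * np.eye(n), W.T @ p)
```

    With noisy data, the regularization term (here a bare Tikhonov penalty; in the paper, error covariance matrices) is what keeps the measurement error from propagating downward through the integral.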

  13. Nurses' Perceived Skills and Attitudes About Updated Safety Concepts: Impact on Medication Administration Errors and Practices.

    PubMed

    Armstrong, Gail E; Dietrich, Mary; Norman, Linda; Barnsteiner, Jane; Mion, Lorraine

    Approximately a quarter of medication errors in the hospital occur at the administration phase, which is solely under the purview of the bedside nurse. The purpose of this study was to assess bedside nurses' perceived skills and attitudes about updated safety concepts and examine their impact on medication administration errors and adherence to safe medication administration practices. Findings support the premise that medication administration errors result from an interplay among system-, unit-, and nurse-level factors.

  14. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. 
(iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
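
    For orientation, the plug-in estimator discussed above is simply the empirical entropy of length-w words divided by w (a minimal sketch; the choice of word length drives the trade-off noted in conclusion (iv)):

```python
from collections import Counter
import math

def plugin_entropy_rate(bits, word_len):
    """Plug-in estimate: empirical entropy (in bits per symbol) of
    overlapping words of length `word_len`, normalized by word length."""
    words = [tuple(bits[i:i + word_len])
             for i in range(len(bits) - word_len + 1)]
    n = len(words)
    counts = Counter(words)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / word_len

# a deterministic alternating sequence has true entropy rate 0, but a
# short word length cannot see that structure -- conclusion (iv) in action
est = plugin_entropy_rate([0, 1] * 50, word_len=2)
```

    Here the estimate stays near 0.5 bits/symbol even though the sequence is perfectly predictable, because two-symbol words cannot capture the longer-range determinism.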

  15. The potential for intelligent decision support systems to improve the quality and consistency of medication reviews.

    PubMed

    Bindoff, I; Stafford, A; Peterson, G; Kang, B H; Tenni, P

    2012-08-01

    Drug-related problems (DRPs) are of serious concern worldwide, particularly for the elderly who often take many medications simultaneously. Medication reviews have been demonstrated to improve medication usage, leading to reductions in DRPs and potential savings in healthcare costs. However, medication reviews are not always of a consistently high standard, and there is often room for improvement in the quality of their findings. Our aim was to produce computerized intelligent decision support software that can improve the consistency and quality of medication review reports, by helping to ensure that DRPs relevant to a patient are overlooked less frequently. A system that largely achieved this goal was previously published, but refinements have been made. This paper examines the results of both the earlier and newer systems. Two prototype multiple-classification ripple-down rules medication review systems were built, the second being a refinement of the first. Each of the systems was trained incrementally using a human medication review expert. The resultant knowledge bases were analysed and compared, showing factors such as accuracy, time taken to train, and potential errors avoided. The two systems performed well, achieving accuracies of approximately 80% and 90%, after being trained on only a small number of cases (126 and 244 cases, respectively). Through analysis of the available data, it was estimated that without the system intervening, the expert training the first prototype would have missed approximately 36% of potentially relevant DRPs, and the second 43%. However, the system appeared to prevent the majority of these potential expert errors by correctly identifying the DRPs for them, leaving only an estimated 8% error rate for the first expert and 4% for the second. 
These intelligent decision support systems have shown a clear potential to substantially improve the quality and consistency of medication reviews, which should in turn translate into improved medication usage if they were implemented into routine use. © 2011 Blackwell Publishing Ltd.

  16. Gestational age estimates from singleton births conceived using assisted reproductive technology.

    PubMed

    Callaghan, William M; Schieve, Laura A; Dietz, Patricia M

    2007-09-01

    Information on gestational age for public health research and surveillance in the US is usually obtained from vital records and is primarily based on the first day of the woman's last menstrual period (LMP). However, using LMP as a marker of conception is subject to a variety of errors and results in misclassification of gestational age. Pregnancies conceived through assisted reproductive technology (ART) are unique in that the estimates of gestational age are not based on the LMP, but on the date when fertilisation actually occurred, and thus most gestational age errors are likely to be due to errors introduced in recording and data entry. The purpose of this paper was to examine the birthweight distribution by gestational age for ART singleton livebirths reported to a national ART surveillance system. Gestational age was categorised as 20-27, 28-31, 32-36 and 37-44 weeks; birthweight distributions were plotted for each category. The distributions of very-low-birthweight (VLBW; <1500 g), moderately low-birthweight (1500-2499 g) and normal-birthweight infants for each gestational week were examined. At both 20-27 and 28-31 weeks, there was an extended right tail to the distribution and a small second mode. At 32-36 weeks, there were long tails in either direction and at 37-44 weeks, an extended tail to the left. There was a high proportion of VLBW infants at low gestational ages and a decreasing proportion of VLBW infants with increasing gestational age. However, there was also a fairly constant proportion of normal-birthweight infants at every gestational age below 34 weeks, which suggested misclassification of gestational age. Approximately 12% of ART births classified as 28-31 weeks' gestation had a birthweight in the second mode of the birthweight distribution compared with approximately 29% in national vital statistics data. 
Even when the birthweight and dates of conception and birth are known, questions remain regarding the residual amount of misclassification and the true nature of the birthweight distributions.

  17. Probabilistic Analysis of Pattern Formation in Monotonic Self-Assembly

    PubMed Central

    Moore, Tyler G.; Garzon, Max H.; Deaton, Russell J.

    2015-01-01

    Inspired by biological systems, self-assembly aims to construct complex structures. It functions through piece-wise, local interactions among component parts and has the potential to produce novel materials and devices at the nanoscale. Algorithmic self-assembly models the product of self-assembly as the output of some computational process, and attempts to control the process of assembly algorithmically. Though providing fundamental insights, these computational models have yet to fully account for the randomness that is inherent in experimental realizations, which tend to be based on trial and error methods. In order to develop a method of analysis that addresses experimental parameters, such as error and yield, this work focuses on the capability of assembly systems to produce a pre-determined set of target patterns, either accurately or perhaps only approximately. Self-assembly systems that assemble patterns that are similar to the targets in a significant percentage are “strong” assemblers. In addition, assemblers should predominantly produce target patterns, with a small percentage of errors or junk. These definitions approximate notions of yield and purity in chemistry and manufacturing. By combining these definitions, a criterion for efficient assembly is developed that can be used to compare the ability of different assembly systems to produce a given target set. Efficiency is a composite measure of the accuracy and purity of an assembler. Typical examples in algorithmic assembly are assessed in the context of these metrics. In addition to validating the method, they also provide some insight that might be used to guide experimentation. 
Finally, some general results are established that, for efficient assembly, imply that every target pattern is guaranteed to be assembled with a minimum common positive probability, regardless of its size, and that a trichotomy exists to characterize the global behavior of typical efficient, monotonic self-assembly systems in the literature. PMID:26421616

  18. Error estimation in the neural network solution of ordinary differential equations.

    PubMed

    Filici, Cristian

    2010-06-01

    In this article a method of error estimation for the neural approximation of the solution of an Ordinary Differential Equation is presented. Some examples of the application of the method support the theory presented. Copyright 2010. Published by Elsevier Ltd.

  19. Comparison of base flows to selected streamflow statistics representative of 1930-2002 in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2012-01-01

    Base flows were compared with published streamflow statistics to assess climate variability and to determine the published statistics that can be substituted for annual and seasonal base flows of unregulated streams in West Virginia. The comparison study was done by the U.S. Geological Survey, in cooperation with the West Virginia Department of Environmental Protection, Division of Water and Waste Management. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Differences in mean annual base flows for five record sub-periods (1930-42, 1943-62, 1963-69, 1970-79, and 1980-2002) range from -14.9 to 14.6 percent when compared to the values for the period 1930-2002. Differences between mean seasonal base flows and values for the period 1930-2002 are less variable for winter and spring, -11.2 to 11.0 percent, than for summer and fall, -47.0 to 43.6 percent. Mean summer base flows (July-September) and mean monthly base flows for July, August, September, and October are approximately equal, within 7.4 percentage points of mean annual base flow. The mean of each of annual, spring, summer, fall, and winter base flows are approximately equal to the annual 50-percent (standard error of 10.3 percent), 45-percent (error of 14.6 percent), 75-percent (error of 11.8 percent), 55-percent (error of 11.2 percent), and 35-percent duration flows (error of 11.1 percent), respectively. The mean seasonal base flows for spring, summer, fall, and winter are approximately equal to the spring 50- to 55-percent (standard error of 6.8 percent), summer 45- to 50-percent (error of 6.7 percent), fall 45-percent (error of 15.2 percent), and winter 60-percent duration flows (error of 8.5 percent), respectively. 
Annual and seasonal base flows representative of the period 1930-2002 at unregulated streamflow-gaging stations and ungaged locations in West Virginia can be estimated using previously published values of statistics and procedures.
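
    The p-percent duration flow used throughout is the flow equaled or exceeded p percent of the time, which can be read directly off the empirical flow distribution (a sketch with a synthetic daily-flow record; the function name is illustrative):

```python
import numpy as np

def duration_flow(daily_flows, exceedance_pct):
    """Flow equaled or exceeded `exceedance_pct` percent of the time;
    e.g. the 50-percent duration flow is the median daily flow."""
    return np.percentile(daily_flows, 100.0 - exceedance_pct)

flows = np.arange(1.0, 101.0)    # synthetic record of daily flows
q50 = duration_flow(flows, 50)   # median flow
q75 = duration_flow(flows, 75)   # low-flow statistic
```

    Higher exceedance percentages correspond to lower flows, which is why summer base flows pair with the 75-percent duration flow while winter base flows pair with the 35- to 60-percent duration flows.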

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, H.; Vay, J. -L.

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications occur, for instance, when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with a domain decomposition technique, which may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and of their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. They also confirm that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.
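
    A one-dimensional analogue of such discretization error is the numerical dispersion of the standard second-order Yee scheme, whose dispersion relation can be evaluated in closed form (a sketch only, not the paper's three-dimensional, arbitrary-order analysis):

```python
import math

def fdtd_phase_velocity(k, dx, c=1.0, courant=0.5):
    """Numerical phase velocity of the 1D second-order Yee scheme, from
    its dispersion relation sin(w*dt/2) = S*sin(k*dx/2), S = c*dt/dx."""
    dt = courant * dx / c
    w = (2.0 / dt) * math.asin(courant * math.sin(k * dx / 2.0))
    return w / k

# well-resolved waves travel at (almost) the exact speed c; poorly
# resolved waves lag behind -- a purely numerical modification of
# Maxwell's equations of the kind the paper quantifies
v_fine = fdtd_phase_velocity(k=0.01, dx=0.1)
v_coarse = fdtd_phase_velocity(k=20.0, dx=0.1)
```

    The wavelength- and angle-dependent lag is exactly the kind of spurious signal whose magnitude such an analytical model predicts.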

  1. Repeatability of standard metabolic rate (SMR) in a small fish, the spined loach (Cobitis taenia).

    PubMed

    Maciak, Sebastian; Konarzewski, Marek

    2010-10-01

    Significant repeatability of a trait of interest is an essential assumption for undertaking studies of phenotypic variability. It is especially important in studies of highly variable traits, such as metabolic rates. Recent publications suggest that the resting/basal metabolic rate of homeotherms is repeatable across a wide range of species. In contrast, studies on the consistency of standard metabolic rate (SMR) in ectotherms, particularly fish, are scarce. Here we present a comprehensive analysis of several important technical aspects of body mass-corrected SMR measurements and their repeatability in a small (average weight approximately 3 g) fish, the spined loach (Cobitis taenia). First, we demonstrated that release of oxygen from the walls of metabolic chambers exposed to hypoxic conditions did not confound SMR measurements. Next, using the principle of propagation of measurement uncertainties, we demonstrated that in aquatic systems, measurement error is significantly higher in open than in closed respirometry setups. The measurement error for the SMR of a small fish determined in a closed aquatic system is comparable to that obtainable with top-notch open-flow systems used for air-breathing terrestrial animals. Using a closed respirometer, we demonstrated that body mass-corrected SMR in spined loaches was repeatable under both normoxia and hypoxia over a 5-month period (Pearson correlation r=0.68 and r=0.73, respectively), as well as across both conditions (intraclass correlation coefficient tau=0.30). In these analyses we accounted for the possible effect of oxygen consumption by the oxygen electrode on the repeatability of SMR. Significant SMR consistency was accompanied by significant repeatability of body mass (intraclass correlation coefficient tau=0.86). To our knowledge, this is the first study showing long-term repeatability of body mass and SMR in a small fish, and it is consistent with the existence of heritable variation in these two traits. 2010 Elsevier Inc. All rights reserved.

  2. On basis set superposition error corrected stabilization energies for large n-body clusters.

    PubMed

    Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael

    2011-10-07

    In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
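
    For orientation, the Boys-Bernardi counterpoise idea underlying these schemes is, for a dimer, to evaluate the monomer energies in the full dimer basis so that the basis set superposition largely cancels (a minimal sketch with hypothetical energy values; the paper's site-site and Valiron-Mayer schemes generalize this to clusters with many subunits):

```python
def counterpoise_interaction(e_ab, e_a_in_ab, e_b_in_ab):
    """Counterpoise-corrected interaction energy of a dimer AB: all three
    energies are computed in the same (dimer) basis set, so each monomer
    'borrows' the same ghost functions it borrows in the dimer calculation."""
    return e_ab - e_a_in_ab - e_b_in_ab

# hypothetical total energies in hartree (illustrative numbers only)
e_int = counterpoise_interaction(-152.500, -76.248, -76.247)
```

    The cost issue the paper addresses is visible even here: the exact Valiron-Mayer scheme requires monomer calculations in many composite bases, which is what the proposed approximation truncates.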

  3. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
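
    In the linear Gaussian setting described above, the iteration can be sketched in a few lines (the toy forward models and covariances below are illustrative; the paper's algorithm also updates the model-error covariance, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # accurate model
B = np.eye(n)                                       # reduced model
x_true = rng.standard_normal(n)
sigma = 0.01
y = A @ x_true + sigma * rng.standard_normal(n)

Gi = np.eye(n) / sigma**2      # inverse noise covariance
C0i = np.eye(n)                # inverse prior covariance (zero prior mean)

m = np.zeros(n)                # current model-error mean estimate
for it in range(10):
    # Gaussian posterior mean for x under the model y ~ B x + m + noise
    x_hat = np.linalg.solve(B.T @ Gi @ B + C0i, B.T @ Gi @ (y - m))
    if it == 0:
        x_naive = x_hat        # estimate that ignores the model error
    # refresh the model-error estimate from the current solution
    m = (A - B) @ x_hat

err_naive = np.linalg.norm(x_naive - x_true)
err_iter = np.linalg.norm(x_hat - x_true)
```

    Each sweep uses only the cheap reduced model B plus one application of the accurate model A to the current estimate, which is why the scheme needs few full model evaluations.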

  4. Solar and Magnetic Attitude Determination for Small Spacecraft

    NASA Technical Reports Server (NTRS)

    Woodham, Kurt; Blackman, Kathie; Sanneman, Paul

    1997-01-01

    During the Phase B development of the NASA New Millennium Program (NMP) Earth Orbiter-1 (EO-1) spacecraft, detailed analyses were performed for on-board attitude determination using the Sun and the Earth's magnetic field. This work utilized the TRMM 'Contingency Mode' as a starting point but concentrated on implementation for a small spacecraft without a high performance mechanical gyro package. The analyses and simulations performed demonstrate a geographic dependence due to diurnal variations in the Earth's magnetic field with respect to the Sun-synchronous, nearly polar orbit. Sensitivity to uncompensated residual magnetic fields of the spacecraft and field modeling errors is shown to be the most significant obstacle to maximizing performance. Performance has been evaluated with a number of inertial reference units and various mounting orientations for the two-axis Fine Sun Sensors. Attitude determination accuracy using the six-state Kalman filter executing at 2 Hz is approximately 0.2 deg, 3-sigma, per axis. Although EO-1 was subsequently driven to a stellar-based attitude determination system as a result of tighter pointing requirements, solar/magnetic attitude determination is demonstrated to be applicable to a range of small spacecraft with medium precision pointing requirements.

  5. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems.

    PubMed

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-21

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims to give a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also check to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small-volume approximations.
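
    As a point of reference for the fully discrete end of the model hierarchy, a Gillespie simulation of a toy negatively self-regulated gene looks like the following (illustrative rate constants and rate law, not the paper's example system):

```python
import random

def ssa_self_repression(k=10.0, K=5.0, gamma=1.0, t_end=50.0, seed=2):
    """Stochastic simulation algorithm (exact CME sampling) for one
    species x with self-repressed production rate k/(1 + x/K) and
    first-order degradation at rate gamma*x."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while t < t_end:
        a_prod = k / (1.0 + x / K)      # production propensity
        a_deg = gamma * x               # degradation propensity
        a_tot = a_prod + a_deg
        t += rng.expovariate(a_tot)     # exponential waiting time
        if rng.random() * a_tot < a_prod:
            x += 1
        else:
            x -= 1
    return x

# the deterministic balance k/(1 + x/K) = gamma*x gives x = 5 here, so
# trajectories fluctuate around a small copy number -- exactly the regime
# where deterministic and small-volume approximations degrade
x_final = ssa_self_repression()
```

    Hybrid schemes replace part of this jump process by Langevin or ODE dynamics; the small copy numbers kept discrete here are the part such approximations handle worst.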

  6. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems

    NASA Astrophysics Data System (ADS)

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-01

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims to give a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also check to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small-volume approximations.

  7. Dealing with dietary measurement error in nutritional cohort studies.

    PubMed

    Freedman, Laurence S; Schatzkin, Arthur; Midthune, Douglas; Kipnis, Victor

    2011-07-20

    Dietary measurement error creates serious challenges to reliably discovering new diet-disease associations in nutritional cohort studies. Such error causes substantial underestimation of relative risks and reduction of statistical power for detecting associations. On the basis of data from the Observing Protein and Energy Nutrition Study, we recommend the following approaches to deal with these problems. Regarding data analysis of cohort studies using food-frequency questionnaires, we recommend 1) using energy adjustment for relative risk estimation; 2) reporting estimates adjusted for measurement error along with the usual relative risk estimates, whenever possible (this requires data from a relevant, preferably internal, validation study in which participants report intakes using both the main instrument and a more detailed reference instrument such as a 24-hour recall or multiple-day food record); 3) performing statistical adjustment of relative risks, based on such validation data, if they exist, using univariate (only for energy-adjusted intakes such as densities or residuals) or multivariate regression calibration. We note that whereas unadjusted relative risk estimates are biased toward the null value, statistical significance tests of unadjusted relative risk estimates are approximately valid. Regarding study design, we recommend increasing the sample size to remedy loss of power; however, it is important to understand that this will often be an incomplete solution because the attenuated signal may be too small to distinguish from unmeasured confounding in the model relating disease to reported intake. Future work should be devoted to alleviating the problem of signal attenuation, possibly through the use of improved self-report instruments or by combining dietary biomarkers with self-report instruments.
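
    The attenuation and its univariate regression-calibration fix can be seen in a small simulation (synthetic data with a known attenuation factor of 0.5; variable names are illustrative and the "validation" data here is simply the simulated truth):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_intake = rng.normal(0.0, 1.0, n)
reported = true_intake + rng.normal(0.0, 1.0, n)   # classical error model
outcome = 0.5 * true_intake + rng.normal(0.0, 1.0, n)

# the naive slope is attenuated toward zero by the factor
# lambda = cov(reported, true) / var(reported)
beta_naive = np.cov(reported, outcome)[0, 1] / np.var(reported, ddof=1)

# univariate regression calibration: estimate lambda from validation data
# where a reference instrument is available, then rescale the naive slope
lam = np.cov(reported, true_intake)[0, 1] / np.var(reported, ddof=1)
beta_cal = beta_naive / lam
```

    The correction restores the magnitude of the slope but not the lost power: the test of the naive slope remains approximately valid, which is the point made above about significance tests.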

  8. Impact of spatial resolution on cirrus infrared satellite retrievals in the presence of cloud heterogeneity

    NASA Astrophysics Data System (ADS)

    Fauchez, T.; Platnick, S. E.; Meyer, K.; Zhang, Z.; Cornet, C.; Szczap, F.; Dubuisson, P.

    2015-12-01

    Cirrus clouds are an important part of the Earth's radiation budget, but an accurate assessment of their role remains highly uncertain. Cirrus optical properties such as Cloud Optical Thickness (COT) and ice crystal effective particle size are often retrieved with a combination of Visible/Near-InfraRed (VNIR) and ShortWave-InfraRed (SWIR) reflectance channels. Alternatively, Thermal InfraRed (TIR) techniques, such as the Split Window Technique (SWT), have demonstrated better accuracy for effective radius retrievals of thin cirrus with small effective radii. However, current global operational algorithms for both retrieval methods assume that cloudy pixels are horizontally homogeneous (Plane Parallel Approximation, PPA) and independent (Independent Pixel Approximation, IPA). The impact of these approximations on ice cloud retrievals needs to be understood and, as far as possible, corrected. Horizontal heterogeneity effects in the TIR spectrum are dominated mainly by the PPA bias, which depends primarily on the COT subpixel heterogeneity; for solar reflectance channels, in addition to the PPA bias, the IPA can lead to significant retrieval errors due to substantial horizontal photon transport between cloudy columns, as well as brightening and shadowing effects that are more difficult to quantify. Since TIR techniques retain better accuracy than solar reflectance techniques for thin cirrus with small effective radii, the TIR range is particularly relevant for characterizing thin cirrus clouds as accurately as possible. Heterogeneity effects in the TIR are evaluated as a function of spatial resolution in order to estimate the optimal spatial resolution for TIR retrieval applications. 
These investigations are performed using a cirrus 3D cloud generator (3DCloud), a 3D radiative transfer code (3DMCPOL), and two retrieval algorithms, namely the operational MODIS retrieval algorithm (MOD06) and a research-level SWT algorithm.

  9. An intermediate orbit calculated from three position vectors: accuracy of approximation of a perturbed motion. (Russian Title: Промежуточная орбита, вычисленная по трем векторам положения: точность аппроксимации возмущенного движения)

    NASA Astrophysics Data System (ADS)

    Shefer, V. A.

    2015-12-01

    We examine the intermediate perturbed orbit proposed previously by the author, determined from three position vectors of a small celestial body. It is shown theoretically that, for a small reference time interval covering the body's positions, the accuracy with which this orbit approximates the real motion corresponds approximately to the fourth order of tangency; the smaller the reference time interval, the better this correspondence. Laws governing the variation of the methodical errors in constructing the intermediate orbit as a function of the length of the reference time interval are deduced. According to these laws, the convergence rate of the method to the exact solution (as the reference time interval is reduced) is in the general case three orders of magnitude higher than that of conventional methods using the Keplerian unperturbed orbit. The considered orbit is among the most accurate in the set of orbits of its class, as determined by the order of tangency. The theoretical results are validated by numerical examples. The work was supported by the Ministry of Education and Science of the Russian Federation, project no. 2014/223(1567).

  10. An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base

    NASA Astrophysics Data System (ADS)

    Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi

    The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, its approximate solutions have been guaranteed by error bounds; however, the numerical computation of such bounds is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner base. The algebraic representation considerably decreases the computational cost of the error bound. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound by collisions.

  11. Design of compact freeform lens for application specific Light-Emitting Diode packaging.

    PubMed

    Wang, Kai; Chen, Fei; Liu, Zongyuan; Luo, Xiaobing; Liu, Sheng

    2010-01-18

    Application-specific LED packaging (ASLP) is an emerging technology for high-performance LED lighting. We introduce a practical design method for a compact freeform lens for the extended sources used in ASLP. A new ASLP for road lighting was successfully obtained by integrating a polycarbonate compact freeform lens of small form factor with traditional LED packaging. The optical performance of the ASLP was investigated both by numerical simulation based on the Monte Carlo ray-tracing method and by experiments. Results demonstrated that, compared with a traditional LED module integrated with secondary optics, the ASLP had the advantages of much smaller volume (approximately 1/8), higher system lumen efficiency (approximately 8.1%), lower cost, and more convenience for customers to design and assemble, enabling much wider applications of LEDs for general road lighting. Tolerance analyses were also conducted. Installation errors of horizontal and vertical deviation had a greater effect on the shape and uniformity of the radiation pattern than rotational deviation. The tolerances of horizontal, vertical, and rotational deviations of this lens were 0.11 mm, 0.14 mm, and 2.4 degrees respectively, which are acceptable in engineering.

  12. An Ultra-Low Power Charge Redistribution Successive Approximation Register A/D Converter for Biomedical Applications.

    PubMed

    Koppa, Santosh; Mohandesi, Manouchehr; John, Eugene

    2016-12-01

    Power consumption is one of the key design constraints in biomedical devices such as pacemakers, which are powered by small non-rechargeable batteries over their entire lifetime. In these systems, Analog-to-Digital Converters (ADCs) serve as the interface between the analog world and the digital domain and play a key role. In this paper we present the design of an 8-bit Charge Redistribution Successive Approximation Register (CR-SAR) analog-to-digital converter in standard TSMC 0.18 μm CMOS technology for low-power, low-data-rate devices such as pacemakers. The optimized 8-bit CR-SAR ADC consumes less than 250 nW at a conversion rate of 1 KB/s. It achieves integral nonlinearity (INL) and differential nonlinearity (DNL) of less than 0.22 and 0.04 least significant bits (LSB) respectively, compared with the standard requirement that INL and DNL errors be less than 0.5 LSB. The designed ADC operates at a 1 V supply voltage, converting inputs ranging from 0 V to 250 mV.
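
    The charge-redistribution SAR conversion described above is, at its core, a binary search of the input voltage against a DAC output. The following is a minimal sketch of that ideal search loop (ignoring capacitor mismatch, noise, and comparator offset); the function name is illustrative, and the 0-250 mV full scale is taken from the abstract:

```python
def sar_adc_convert(vin, vref=0.25, bits=8):
    """Ideal successive-approximation conversion: binary search of vin in [0, vref).

    Models only the SAR logic, not the charge-redistribution circuit itself.
    """
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)          # tentatively set this bit
        vdac = vref * trial / (1 << bits)  # DAC output for the trial code
        if vin >= vdac:                    # comparator decision
            code = trial                   # keep the bit
    return code

# Example: a mid-scale input (125 mV of a 250 mV range) resolves to code 128 of 256
print(sar_adc_convert(0.125))  # -> 128
```

    Each of the 8 comparator decisions fixes one bit, which is why an n-bit SAR ADC needs only n comparisons per sample.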

  13. Temperature scaling in a dense vibrofluidized granular material.

    PubMed

    Sunthar, P; Kumaran, V

    1999-08-01

    The leading order "temperature" of a dense two-dimensional granular material fluidized by external vibrations is determined. The grain interactions are characterized by inelastic collisions, but the coefficient of restitution is considered to be close to 1, so that the dissipation of energy during a collision is small compared to the average energy of a particle. An asymptotic solution is obtained where the particles are considered to be elastic in the leading approximation. The velocity distribution is a Maxwell-Boltzmann distribution in the leading approximation. The density profile is determined by solving the momentum balance equation in the vertical direction, where the relation between the pressure and density is provided by the virial equation of state. The temperature is determined by relating the source of energy due to the vibrating surface and the energy dissipation due to inelastic collisions. The predictions of the present analysis show good agreement with simulation results at higher densities where theories for a dilute vibrated granular material, with the pressure-density relation provided by the ideal gas law, are in error.

  14. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 1: Dynamic models and computer simulations for the ERBE nonscanner, scanner and solar monitor sensors

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Choi, Sang H.; Chrisman, Dan A., Jr.; Samms, Richard W.

    1987-01-01

    Dynamic models and computer simulations were developed for the radiometric sensors utilized in the Earth Radiation Budget Experiment (ERBE). The models were developed to understand performance, improve measurement accuracy by updating model parameters, and provide the constants needed for the count conversion algorithms. Model simulations were compared with the sensors' actual responses demonstrated in the ground and in-flight calibrations. The models consider thermal and radiative exchange effects, surface specularity, the spectral dependence of a filter, radiative interactions among an enclosure's nodes, partially specular and diffuse enclosure surface characteristics, and steady-state and transient sensor responses. Relatively few sensor nodes were chosen for the models, since there is an accuracy tradeoff between increasing the number of nodes and approximating parameters such as the sensor's size, material properties, geometry, and enclosure surface characteristics. Given that the temperature gradients within a node and between nodes are small enough, approximating with only a few nodes does not jeopardize the accuracy required to perform the parameter estimates and error analyses.

  15. A-priori testing of sub-grid models for chemically reacting nonpremixed turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Jimenez, J.; Linan, A.; Rogers, M. M.; Higuera, F. J.

    1996-01-01

    The beta-assumed-pdf approximation of Cook & Riley (1994) is tested as a subgrid model for the LES computation of nonpremixed turbulent reacting flows, in the limit of cold infinitely fast chemistry, for two plane turbulent mixing layers with different degrees of intermittency. Excellent results are obtained for the computation of integral properties such as product mass fraction, and the model is applied to other quantities such as powers of the temperature and the pdf of the scalar itself. Even in these cases the errors are small enough to be useful in practical applications. The analysis is extended to slightly out-of-equilibrium problems such as the generation of radicals, and formulated in terms of the pdf of the scalar gradients. It is shown that the conditional gradient distribution is universal in a wide range of cases whose limits are established. Within those limits, engineering approximations to the radical concentration are also possible. It is argued that the experiments in this paper are essentially in the limit of infinite Reynolds number.

  16. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP)Observations: Beam Maps and Window Functions

    NASA Technical Reports Server (NTRS)

    Hill, R.S.; Weiland, J.L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C.L.; Halpern, M.; Kogut, A.; Page, L.; hide

    2008-01-01

    Cosmology and other scientific results from the WMAP mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of approximately 2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main-beam solid angle. An increase in main-beam solid angle of approximately 1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of approximately 2 compared with the three-year analysis. We present radiometry of the planet Jupiter as a test of beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are approximately 0.5%.

  17. A broadband fast multipole accelerated boundary element method for the three dimensional Helmholtz equation.

    PubMed

    Gumerov, Nail A; Duraiswami, Ramani

    2009-01-01

    The development of a fast multipole method (FMM) accelerated iterative solution of the boundary element method (BEM) for the Helmholtz equation in three dimensions is described. The FMM for the Helmholtz equation is significantly different for problems with low and high kD (where k is the wavenumber and D the domain size), and for large problems the method must be switched between levels of the hierarchy. The BEM requires several approximate computations (numerical quadrature, approximation of the boundary shapes using elements), and these errors must be balanced against the approximations introduced by the FMM and the convergence criterion for the iterative solution. These different errors must all be chosen in a way that, on the one hand, excess work is not done and, on the other, the error achieved by the overall computation is acceptable. Details of translation operators for low and high kD, the choice of representations, and BEM quadrature schemes, all consistent with these approximations, are described. A novel preconditioner using a low-accuracy FMM accelerated solver as a right preconditioner is also described. Results of the developed solvers for large boundary value problems with 0.0001 ≲ kD ≲ 500 are presented and shown to perform close to theoretical expectations.

  18. Error Modeling of Multibaseline Optical Truss: Part 1: Modeling of System Level Performance

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Korechoff, R. E.; Zhang, L. D.

    2004-01-01

    Global astrometry is the measurement of stellar positions and motions. These are typically characterized by five parameters: two position parameters, two proper-motion parameters, and parallax. The Space Interferometry Mission (SIM) will derive these parameters for a grid of approximately 1300 stars covering the celestial sphere to an accuracy of approximately 4 μas, representing a two-orders-of-magnitude improvement over the most precise current star catalogues. Narrow-angle astrometry will be performed to 1 μas accuracy. A wealth of scientific information will be obtained from these accurate measurements, encompassing many aspects of both galactic and extragalactic science. SIM will be subject to a number of instrument errors that can potentially degrade performance. Many of these errors are systematic in that they are relatively static and repeatable with respect to the time frame and direction of the observation. This paper and its companion define the modeling of the contributing factors to these errors and the analysis of how they impact SIM's ability to perform astrometric science.

  19. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions such as the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Four axioms alone guarantee optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows. First, a general framework is presented which covers the existing literature on optimality of adaptive schemes; the abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is needed neither to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson (2007). Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390

  20. Fluid dynamic design and experimental study of an aspirated temperature measurement platform used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Renhui

    2016-08-01

    Due to the solar radiation effect, current air temperature sensors inside a thermometer screen or radiation shield may produce measurement errors of 0.8 °C or higher. To improve observation accuracy, an aspirated temperature measurement platform is designed. A computational fluid dynamics (CFD) method is implemented to analyze and calculate the radiation error of the aspirated temperature measurement platform under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using a genetic algorithm (GA) method. To verify the performance of the temperature sensor, the aspirated temperature measurement platform, temperature sensors with a naturally ventilated radiation shield, and a thermometer screen are characterized in the same environment to conduct an intercomparison. The average radiation errors of the sensors in the naturally ventilated radiation shield and the thermometer screen are 0.44 °C and 0.25 °C, respectively. In contrast, the radiation error of the aspirated temperature measurement platform is as low as 0.05 °C. This aspirated temperature sensor reduces the radiation error by approximately 88.6% compared to the naturally ventilated radiation shield and by approximately 80% compared to the thermometer screen. The mean absolute error and root mean square error between the correction equation and experimental results are 0.032 °C and 0.036 °C, respectively, which demonstrates the accuracy of the CFD and GA methods proposed in this research.

  1. Fluid dynamic design and experimental study of an aspirated temperature measurement platform used in climate observation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jie, E-mail: yangjie396768@163.com; School of Atmospheric Physics, Nanjing University of Information Science and Technology, Nanjing 210044; Liu, Qingquan

    Due to the solar radiation effect, current air temperature sensors inside a thermometer screen or radiation shield may produce measurement errors of 0.8 °C or higher. To improve observation accuracy, an aspirated temperature measurement platform is designed. A computational fluid dynamics (CFD) method is implemented to analyze and calculate the radiation error of the aspirated temperature measurement platform under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using a genetic algorithm (GA) method. To verify the performance of the temperature sensor, the aspirated temperature measurement platform, temperature sensors with a naturally ventilated radiation shield, and a thermometer screen are characterized in the same environment to conduct an intercomparison. The average radiation errors of the sensors in the naturally ventilated radiation shield and the thermometer screen are 0.44 °C and 0.25 °C, respectively. In contrast, the radiation error of the aspirated temperature measurement platform is as low as 0.05 °C. This aspirated temperature sensor reduces the radiation error by approximately 88.6% compared to the naturally ventilated radiation shield and by approximately 80% compared to the thermometer screen. The mean absolute error and root mean square error between the correction equation and experimental results are 0.032 °C and 0.036 °C, respectively, which demonstrates the accuracy of the CFD and GA methods proposed in this research.

  2. Systematic study of error sources in supersonic skin-friction balance measurements

    NASA Technical Reports Server (NTRS)

    Allen, J. M.

    1976-01-01

    An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.

  3. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach

    PubMed Central

    Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric

    2016-01-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927
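
    The folded allele frequency spectrum named in the abstract as one of PopSizeABC's two summary-statistic classes can be computed directly from an unpolarized 0/1 allele-indicator matrix. Below is a minimal sketch; the function name and matrix layout are illustrative and not the package's actual API:

```python
import numpy as np

def folded_afs(genotypes):
    """Folded allele frequency spectrum from an (n_sites, n_haplotypes)
    0/1 matrix of unpolarized allele indicators.

    Entry k-1 counts sites where the minor allele appears k times,
    for k = 1 .. n_haplotypes // 2 (folding removes the need to know
    which allele is ancestral).
    """
    n = genotypes.shape[1]
    counts = genotypes.sum(axis=1)          # allele count per site
    minor = np.minimum(counts, n - counts)  # fold: take the minor allele count
    return np.bincount(minor, minlength=n // 2 + 1)[1 : n // 2 + 1]

# 4 haplotypes, 3 segregating sites with minor-allele counts 1, 2, 1
g = np.array([[0, 0, 0, 1],
              [1, 1, 0, 0],
              [1, 1, 1, 0]])
print(folded_afs(g))  # -> [2 1]
```

    Because only allele counts enter the computation, the statistic is insensitive to phasing, which is one reason the abstract stresses its suitability for unphased, unpolarized SNP data.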

  4. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least twice as fast in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
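
    The offline-surrogate idea can be illustrated in one dimension with a plain least-squares polynomial fit to forward-model evaluations at prior samples. This is a simplified stand-in for the Christoffel least squares / generalized polynomial chaos construction the abstract describes, not that method itself; all names are illustrative:

```python
import numpy as np

def poly_surrogate(forward, prior_samples, degree=5):
    """Least-squares polynomial surrogate of a 1-D forward model,
    fitted offline on forward-model evaluations at prior samples."""
    x = np.asarray(prior_samples, dtype=float)
    y = np.array([forward(xi) for xi in x])  # the expensive offline solves
    coeffs = np.polyfit(x, y, degree)        # least-squares polynomial fit
    return lambda xnew: np.polyval(coeffs, xnew)

# Surrogate of a smooth toy forward map over a uniform prior on [-1, 1];
# the surrogate error shrinks as degree and sample size grow
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, 50)
surr = poly_surrogate(np.sin, xs, degree=5)
err = max(abs(surr(t) - np.sin(t)) for t in np.linspace(-1.0, 1.0, 101))
```

    The point of the paper's error bound is that once this prior-weighted forward error is small, the induced posterior error (in the KL sense) decays at least twice as fast, so an inexpensive surrogate of moderate accuracy can already be adequate for inference.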

  5. Obtaining Approximate Values of Exterior Orientation Elements of Multi-Intersection Images Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, S. W.

    2012-07-01

    In this paper, an efficient global optimization algorithm from the field of artificial intelligence, Particle Swarm Optimization (PSO), is introduced into close-range photogrammetric data processing. PSO can be applied to obtain the approximate values of exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by American social psychologist J. Kennedy and electrical engineer R. C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking or fish schooling. The strategy for obtaining the approximate values of exterior orientation elements using PSO is as follows. From the observed image coordinates and the space coordinates of a few control points, equations for the image coordinate residual errors can be written: the residual error is defined as the difference between an observed image coordinate and the image coordinate computed through the collinearity condition equation, and the sum of the absolute values of these residuals is minimized as the objective function. First a coarse search region for the exterior orientation elements is given, and the other parameters are then adjusted so that the particles fly within this region. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close-range photogrammetry can be avoided. This method can thus greatly improve surveying efficiency while decreasing surveying cost. Throughout the process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements on the spatial distribution of control points.
In order to verify the effectiveness of this algorithm, two experiments are carried out. In the first experiment, images of a standard grid board are taken with a digital camera according to multi-intersection photography. Three or six points located on the lower-left corner of the standard grid are regarded as control points, respectively, and the exterior orientation elements of each image are computed through PSO and compared with the elements computed through bundle adjustment. In the second experiment, the exterior orientation elements obtained from the first experiment are used as approximate values in bundle adjustment, and the space coordinates of the other grid points on the board are computed. The differences between these computed space coordinates and the known coordinates of the grid points are used to compute the accuracy. The point accuracies in the two experiments are ±0.76 mm and ±0.43 mm, respectively. These experiments confirm the effectiveness of PSO for computing approximate values of exterior orientation elements in close-range photogrammetry and show that the algorithm can meet higher accuracy requirements. In short, PSO can obtain better results in a faster, cheaper way than other surveying methods in close-range photogrammetry.
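
    The personal-best/global-best update scheme underlying this approach can be sketched as follows, applied to a toy objective rather than the collinearity-equation residuals of the paper; all constants and names are illustrative:

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over a box-bounded search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the sum of absolute image-coordinate residuals:
# a shifted sphere function with its minimum at (1, 1, 1)
best, best_val = pso(lambda p: sum((x - 1.0) ** 2 for x in p), [(-5.0, 5.0)] * 3)
```

    In the photogrammetric setting the objective would instead evaluate the collinearity equations at each candidate set of exterior orientation elements, and the coarse search region mentioned in the abstract would supply the bounds.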

  6. Unfolding the Second Riemann sheet with Pade Approximants: hunting resonance poles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masjuan, Pere; Departamento de Fisica Teorica y del Cosmos, Universidad de Granada, Campus de Fuentenueva, E-18071 Granada

    2011-05-23

    Based on Pade Theory, a new procedure for extracting the pole mass and width of resonances is proposed. The method is systematic and provides a model-independent treatment for the prediction and the errors of the approximation.

  7. Optimal estimation of large structure model errors. [in Space Shuttle controller design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.

  8. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean-scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic, in order to improve its robustness in small samples. A simple simulation study shows that this third-moment-adjusted statistic asymptotically performs on par with previously proposed methods and, at very small sample sizes, offers superior Type I error rates under a properly specified model. Data from Mardia, Kent, and Bibby's study of students tested for their ability in five content areas, with tests either open or closed book, are used to illustrate the real-world performance of this statistic.
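
    The scale-and-degrees-of-freedom adjustment idea can be illustrated with the classical two-moment (Satterthwaite-style) matching of a statistic to a scaled chi-square. This is a simplified sketch of the general mechanism, not the paper's third-moment formula:

```python
def scaled_chisq_params(mean, var):
    """Match a test statistic's first two moments to c * chi2(df).

    From E[T] = c*df and Var[T] = 2*c^2*df it follows that
        c  = var / (2*mean)   and   df = 2*mean^2 / var,
    so a statistic with inflated variance is referred to a rescaled
    chi-square with adjusted degrees of freedom.
    """
    c = var / (2.0 * mean)
    df = 2.0 * mean ** 2 / var
    return c, df

# A statistic already behaving like chi2(10) (mean 10, variance 20)
# needs no rescaling:
c, df = scaled_chisq_params(10.0, 20.0)
print(c, df)  # -> 1.0 10.0
```

    The paper's extension pushes the same idea one moment further, choosing the scale and degrees of freedom from the skewness of the obtained statistic rather than from its variance alone.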

  9. On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.

    1993-01-01

    We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local-in-time, constant-coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable-coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant-coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.

  10. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning

    PubMed Central

    Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated the EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition, with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase once a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed that was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study in which the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. 
This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours. PMID:26963919

  11. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning.

    PubMed

    Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo

    2016-01-01

Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigate EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and, then, scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore if theta oscillations may facilitate learning. To our knowledge this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback.
This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours.
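The per-participant median split described in the abstract is a simple operation; a minimal Python sketch (variable names are illustrative, and the paper's handling of trials falling exactly at the median is assumed here to place them in the low-error condition):

```python
import numpy as np

def median_split(angular_errors):
    """Partition trial indices into low- and high-error conditions using
    the participant's median angular error (ties assigned to 'low')."""
    errors = np.asarray(angular_errors, dtype=float)
    med = np.median(errors)
    low = np.where(errors <= med)[0]
    high = np.where(errors > med)[0]
    return low, high

# six hypothetical angular errors (degrees) for one participant
errors_deg = [2.0, 8.0, 3.0, 11.0, 5.0, 9.0]
low_idx, high_idx = median_split(errors_deg)  # epoch indices per condition
```

The returned index arrays can then be used to average ERSP maps separately within each condition.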

  12. Quantum chemical approach for positron annihilation spectra of atoms and molecules beyond plane-wave approximation

    NASA Astrophysics Data System (ADS)

    Ikabata, Yasuhiro; Aiba, Risa; Iwanade, Toru; Nishizawa, Hiroaki; Wang, Feng; Nakai, Hiromi

    2018-05-01

    We report theoretical calculations of positron-electron annihilation spectra of noble gas atoms and small molecules using the nuclear orbital plus molecular orbital method. Instead of a nuclear wavefunction, the positronic wavefunction is obtained as the solution of the coupled Hartree-Fock or Kohn-Sham equation for a positron and the electrons. The molecular field is included in the positronic Fock operator, which allows an appropriate treatment of the positron-molecule repulsion. The present treatment succeeds in reproducing the Doppler shift, i.e., full width at half maximum (FWHM) of experimentally measured annihilation (γ-ray) spectra for molecules with a mean absolute error less than 10%. The numerical results indicate that the interpretation of the FWHM in terms of a specific molecular orbital is not appropriate.

  13. Calibration of the forward-scattering spectrometer probe - Modeling scattering from a multimode laser beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

    Scattering calculations using a detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 microns, but the difference increases to approximately 10 percent at diameters of 50 microns. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.

  14. Calibration of the Forward-scattering Spectrometer Probe: Modeling Scattering from a Multimode Laser Beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

Scattering calculations using a more detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out by using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 micrometers, but the difference increases to approximately 10% at diameters of 50 micrometers. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.

  15. Incomplete fuzzy data processing systems using artificial neural network

    NASA Technical Reports Server (NTRS)

    Patyra, Marek J.

    1992-01-01

In this paper, the implementation of a fuzzy data processing system using an artificial neural network (ANN) is discussed. The binary representation of fuzzy data is assumed, where the universe of discourse is discretized into n equal intervals. The value of a membership function is represented by a binary number. It is proposed that incomplete fuzzy data processing be performed in two stages. The first stage performs the 'retrieval' of incomplete fuzzy data, and the second stage performs the desired operation on the retrieved data. The proposed method of incomplete fuzzy data retrieval is based on linear approximation of the missing values of the membership function. The ANN implementation of the proposed system is presented. The system was computationally verified and showed a relatively small total error.
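The 'retrieval' stage based on linear approximation of missing membership values can be sketched as follows (an illustration only; the paper's ANN implementation and binary encoding are not reproduced here, and NaN is used to mark missing samples):

```python
import numpy as np

def retrieve_membership(mu):
    """Fill missing membership-function samples (marked NaN) by linear
    approximation between the nearest known values over the n equal
    intervals of the universe of discourse."""
    mu = np.asarray(mu, dtype=float)
    idx = np.arange(mu.size)
    known = ~np.isnan(mu)
    # np.interp linearly approximates each missing point from its neighbours
    return np.interp(idx, idx[known], mu[known])

# membership values with three missing samples
mu = [0.0, np.nan, 0.5, np.nan, np.nan, 1.0]
filled = retrieve_membership(mu)
```

The filled vector can then be passed to the second (operation) stage of the system.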

  16. Comparison of neutron spectra measured with three sizes of organic liquid scintillators using differentiation analysis

    NASA Technical Reports Server (NTRS)

    Shook, D. F.; Pierce, C. R.

    1972-01-01

Proton recoil distributions were obtained by using organic liquid scintillators of different sizes. The measured distributions are converted to neutron spectra by differentiation analysis for comparison to the unfolded spectra of the largest scintillator. The approximations involved in the differentiation analysis are shown to have small effects on the precision of neutron spectra measured with the smaller scintillators but introduce significant error for the largest scintillator. In the case of the smallest cylindrical scintillator, nominally 1.2 by 1.3 cm, the efficiency is shown to be insensitive to multiple scattering and to the angular distribution of the incident flux. These characteristics of the smaller scintillator make it possible to use it to measure scalar flux spectra within media where high efficiency is not required.

  17. Terminal Sliding Mode-Based Consensus Tracking Control for Networked Uncertain Mechanical Systems on Digraphs.

    PubMed

    Chen, Gang; Song, Yongduan; Guan, Yanfeng

    2018-03-01

    This brief investigates the finite-time consensus tracking control problem for networked uncertain mechanical systems on digraphs. A new terminal sliding-mode-based cooperative control scheme is developed to guarantee that the tracking errors converge to an arbitrarily small bound around zero in finite time. All the networked systems can have different dynamics and all the dynamics are unknown. A neural network is used at each node to approximate the local unknown dynamics. The control schemes are implemented in a fully distributed manner. The proposed control method eliminates some limitations in the existing terminal sliding-mode-based consensus control methods and extends the existing analysis methods to the case of directed graphs. Simulation results on networked robot manipulators are provided to show the effectiveness of the proposed control algorithms.

  18. Total variation-based neutron computed tomography

    NASA Astrophysics Data System (ADS)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to compute solutions for this problem efficiently, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We demonstrate the effectiveness of the algorithm in the case of significantly under-resolved angular sampling, using synthetic test problems as well as data obtained from a high-flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles is used.
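The split Bregman iteration for a total-variation penalty can be illustrated on a 1-D denoising problem (a sketch of the general technique, not the paper's tomographic implementation; parameter values are illustrative):

```python
import numpy as np

def tv_denoise_split_bregman(f, mu=5.0, lam=5.0, iters=100):
    """1-D total-variation denoising via split Bregman:
    minimize |Du|_1 + (mu/2)||u - f||^2, with D a forward difference.
    The u-update is a quadratic solve (done exactly here; the paper
    notes it can be done very inexactly), followed by shrinkage and
    a Bregman update of the splitting variable."""
    n = f.size
    D = np.zeros((n - 1, n))               # forward-difference operator
    i = np.arange(n - 1)
    D[i, i], D[i, i + 1] = -1.0, 1.0
    A = mu * np.eye(n) + lam * D.T @ D
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        # d-update: soft-thresholding (shrinkage)
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - 1.0 / lam, 0.0)
        b = b + Du - d                      # Bregman (error-forgetting) update
    return u

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(20), np.ones(20)])   # piecewise-constant signal
f = clean + 0.1 * rng.standard_normal(clean.size)     # noisy measurements
u = tv_denoise_split_bregman(f)                       # TV-regularized estimate
```

The same splitting carries over to tomography by replacing the identity data term with the projection operator and D with a 2-D gradient.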

  19. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials.

  20. Ion beam machining error control and correction for small scale optics.

    PubMed

    Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi

    2011-09-20

Ion beam figuring (IBF) technology for small scale optical components is discussed. Since a small removal function can be obtained in IBF, it makes it possible for computer-controlled optical surfacing technology to machine precision centimeter- or millimeter-scale optical components deterministically. When using a small ion beam to machine small optical components, several key problems must be considered carefully, such as positioning the small ion beam on the optical surface, estimating the material removal rate, and controlling the ion beam scanning pitch on the optical surface. A small ion beam is more sensitive to these problems than a large one because of its smaller beam diameter and lower material removal rate. In this paper, we discuss these problems and their influence on machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is derived for correcting the positioning error of the ion beam, with the material removal rate estimated at a selected optimal scanning pitch. Experiments on ϕ10 mm Zerodur planar and spherical samples were performed, and the final surface errors, measured with a Zygo GPI interferometer, are both smaller than λ/100.

  1. Comparison of the Radiative Two-Flux and Diffusion Approximations

    NASA Technical Reports Server (NTRS)

    Spuckler, Charles M.

    2006-01-01

Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was performed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two-flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent difference in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations are presented as a function of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption, except for the back surface temperature of the plane gray layer, where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two-flux theory is used instead of a reflection obtained from the Fresnel equation.
The Kubelka-Munk reflectance accounts for surface reflection and radiation scattered back by internal scattering sites while the Fresnel reflection only accounts for surface reflections.

  2. Comparison of MLC error sensitivity of various commercial devices for VMAT pre-treatment quality assurance.

    PubMed

    Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi

    2018-05-01

The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (Scandidos Delta4, PTW 2D-array, iRT systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used, and nine MLC error plans were generated for all nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1 mm, 2%/2 mm, and 3%/3 mm) and dose difference (DD; 1%, 2%, and 3%) criteria were applied between the baseline and error-induced measurements. Some deviations of the MLC error sensitivity for the evaluation metrics and MLC error ranges were observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices showed a good linear correlation between the cumulative signal difference and the magnitude of MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both the Delta4 and the 2D-array. However, small MLC errors for small aperture sizes, such as for lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC errors because of its large sensitive volume, while the other devices could not detect this error for some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
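The global gamma analysis used in such QA comparisons combines a dose-difference criterion with a distance-to-agreement criterion; a minimal 1-D sketch (illustrative only — clinical implementations interpolate the evaluated dose and work in 2-D/3-D):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, pos_mm, dd=0.03, dta_mm=3.0):
    """Global gamma analysis in 1-D: for each reference point, gamma is
    the minimum over evaluated points of the combined dose-difference /
    distance-to-agreement metric; a point passes when gamma <= 1.
    The dose tolerance is global (a fraction of the maximum reference dose)."""
    tol = dd * np.max(dose_ref)
    ddiff = (dose_eval[None, :] - dose_ref[:, None]) / tol
    dist = (pos_mm[None, :] - pos_mm[:, None]) / dta_mm
    gamma = np.sqrt(ddiff ** 2 + dist ** 2).min(axis=1)
    return float(np.mean(gamma <= 1.0))

x = np.arange(0.0, 10.0, 1.0)                    # detector positions in mm
ref = np.ones(10)                                # flat reference dose
rate_2pct = gamma_pass_rate(ref, ref * 1.02, x)  # uniform +2% error: passes 3%/3 mm
rate_5pct = gamma_pass_rate(ref, ref * 1.05, x)  # uniform +5% error: fails everywhere
```

This also illustrates why a plain dose-difference (DD) criterion can be stricter than gamma: the distance term lets small spatial shifts compensate for dose errors.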

  3. A comparison of finite element and analytic models of acoustic scattering from rough poroelastic interfaces.

    PubMed

    Bonomo, Anthony L; Isakson, Marcia J; Chotiros, Nicholas P

    2015-04-01

The finite element method is used to model acoustic scattering from rough poroelastic surfaces. Both monostatic and bistatic scattering strengths are calculated and compared with three analytic models: perturbation theory, the Kirchhoff approximation, and the small-slope approximation. It is found that the small-slope approximation is in very close agreement with the finite element results for all cases studied and that perturbation theory and the Kirchhoff approximation can be considered valid in those instances where their predictions match those given by the small-slope approximation.

  4. Validity of mail survey data on bagged waterfowl

    USGS Publications Warehouse

    Atwood, E.L.

    1956-01-01

    Knowledge of the pattern of occurrence and characteristics of response errors obtained during an investigation of the validity of post-season surveys of hunters was used to advantage to devise a two-step method for removing the response-bias errors from the raw survey data. The method was tested on data with known errors and found to have a high efficiency in reducing the effect of response-bias errors. The development of this method for removing the effect of the response-bias errors, and its application to post-season hunter-take survey data, increased the reliability of the data from below the point of practical management significance up to the approximate reliability limits corresponding to the sampling errors.

  5. Frequency-domain optical tomographic image reconstruction algorithm with the simplified spherical harmonics (SP3) light propagation model.

    PubMed

    Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H

    2017-06-01

We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd order absorption coefficients (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and size that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average error in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than those with the SP1 model. Therefore, this work shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy, as it requires significantly lower CPU time than the FD-ERT (S12) and is also more accurate than the FD-SP1.

  6. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude compared with previous approaches, as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
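The parametric bootstrap underlying FIESTA can be sketched generically: simulate datasets from the model fitted at the point estimate, re-estimate on each, and take percentiles of the re-estimates (this shows the general principle only, not FIESTA's accelerated stochastic-approximation search; the function names and the toy variance example are hypothetical):

```python
import numpy as np

def parametric_bootstrap_ci(theta_hat, simulate, estimate,
                            n_boot=1000, alpha=0.05, seed=0):
    """Percentile CI from a parametric bootstrap: draw datasets from the
    model fitted at theta_hat, re-estimate on each, and take quantiles."""
    rng = np.random.default_rng(seed)
    boots = np.array([estimate(simulate(theta_hat, rng))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi

# toy example: CI for a normal variance estimated from n = 50 samples
n = 50
simulate = lambda var, rng: rng.normal(0.0, np.sqrt(var), size=n)
estimate = lambda y: np.var(y, ddof=1)
lo, hi = parametric_bootstrap_ci(1.0, simulate, estimate)
```

Because the interval comes from the simulated sampling distribution itself, it respects a bounded parameter space (e.g. heritability in [0, 1]) where asymptotic SEs do not.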

  7. Error analysis of analytic solutions for self-excited near-symmetric rigid bodies - A numerical study

    NASA Technical Reports Server (NTRS)

    Kia, T.; Longuski, J. M.

    1984-01-01

    Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.

  8. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  9. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve the performance of a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher order statistical information, and its minimization provides improved convergence and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, one of which is the MMSE solution, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has characteristics similar to those of the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.

  10. Meta-regression approximations to reduce publication selection bias.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2014-03-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
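PEESE is a weighted least-squares regression of the estimated effects on their squared standard errors (quadratic in SE, with no linear term), the intercept being taken as the corrected effect; a minimal sketch on noise-free toy data (illustrative, not the authors' code):

```python
import numpy as np

def peese(effects, ses):
    """PEESE: weighted least squares of effect estimates on SE^2,
    with weights 1/SE^2; the intercept is the corrected estimate
    of the underlying effect."""
    X = np.column_stack([np.ones_like(ses), ses ** 2])
    w = 1.0 / ses ** 2
    XtW = X.T * w                       # apply precision weights
    beta = np.linalg.solve(XtW @ X, XtW @ effects)
    return beta[0]

# toy data: true effect 0.2 plus an SE^2-proportional selection artifact
ses = np.array([0.05, 0.1, 0.2, 0.3, 0.4])
effects = 0.2 + 1.5 * ses ** 2
corrected = peese(effects, ses)
```

Because the generated effects lie exactly on the quadratic model, the WLS intercept recovers the true effect; with real, noisy estimates the intercept is only an approximation to the selection-corrected mean.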

  11. Modeling Morphogenesis with Reaction-Diffusion Equations Using Galerkin Spectral Methods

    DTIC Science & Technology

    2002-05-06

reaction-diffusion equation is a difficult problem in analysis that will not be addressed here. Errors will also arise from numerically approx solutions to...the ODEs. When comparing the approximate solution to actual reaction-diffusion systems found in nature, we must also take into account errors that...

  12. A theory of cerebellar cortex and adaptive motor control based on two types of universal function approximation capability.

    PubMed

    Fujita, Masahiko

    2016-03-01

Lesions of the cerebellum result in large errors in movements. The cerebellum adaptively controls the strength and timing of motor command signals depending on the internal and external environments of movements. The present theory describes how the cerebellar cortex can control signals for accurate and timed movements. A model network of the cerebellar Golgi and granule cells is shown to be equivalent to a multiple-input (from mossy fibers) hierarchical neural network with a single hidden layer of threshold units (granule cells) that receive a common recurrent inhibition (from a Golgi cell). The weighted sum of the hidden unit signals (Purkinje cell output) is theoretically analyzed regarding the capability of the network to perform two types of universal function approximation. The hidden units begin firing as the excitatory inputs exceed the recurrent inhibition. This simple threshold feature leads to the first approximation theory, and the network's final output can be any continuous function of the multiple inputs. When the input is constant, this output becomes stationary. However, when the recurrent unit activity is triggered to decrease or the recurrent inhibition is triggered to increase through a certain mechanism (metabotropic modulation or extrasynaptic spillover), the network can generate any continuous signals for a prolonged period of change in the activity of recurrent signals, as the second approximation theory shows. By incorporating such a cerebellar capability for two types of approximation into a motor system, in which learning proceeds through repeated movement trials with accompanying corrections, accurate and timed responses for reaching the target can be adaptively acquired. Simple models of motor control can solve the motor error vs. sensory error problem, as well as the structural aspects of the credit (or error) assignment problem.
Two physiological experiments are proposed for examining the delay and trace conditioning of eyelid responses, as well as saccade adaptation, to investigate this novel idea of cerebellar processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya

    2003-01-01

The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error. This was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. Performance of the hybrid method was intermediate in comparison to the individual approximators. Error in the response variable is smaller than that shown in the figure because of a distortion scale factor.
The overall performance of the approximators was considered to be satisfactory because aircraft analysis with NASA Langley Research Center s FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.
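
The surrogate idea in this record can be sketched generically: sample an expensive analysis at a modest number of design points, fit a cheap regression model, and check its relative error on held-out points. The `expensive_analysis` function below is a made-up stand-in for a code such as FLOPS, and the quadratic design matrix is just one simple choice of approximator; none of this reproduces COMETBOARDS itself.

```python
import numpy as np

# Hypothetical stand-in for an expensive aircraft-analysis code (e.g. FLOPS):
# maps two design variables (wing area, thrust) to one response (weight).
def expensive_analysis(area, thrust):
    return 0.05 * area**1.1 + 0.002 * thrust + 0.0001 * area * thrust

rng = np.random.default_rng(0)
area = rng.uniform(100.0, 200.0, 50)
thrust = rng.uniform(1e4, 5e4, 50)
y = expensive_analysis(area, thrust)

# Quadratic regression surrogate: design matrix with linear, cross and square terms.
X = np.column_stack([np.ones_like(area), area, thrust,
                     area * thrust, area**2, thrust**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the surrogate's relative error on held-out design points.
a_t, t_t = rng.uniform(100.0, 200.0, 20), rng.uniform(1e4, 5e4, 20)
X_t = np.column_stack([np.ones_like(a_t), a_t, t_t, a_t * t_t, a_t**2, t_t**2])
rel_err = np.abs(X_t @ coef - expensive_analysis(a_t, t_t)) / expensive_analysis(a_t, t_t)
print(f"max relative error: {rel_err.max():.2%}")
```

Swapping the least-squares fit for a small neural network gives the other approximator family the record compares.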

  14. Limitations of the paraxial Debye approximation.

    PubMed

    Sheppard, Colin J R

    2013-04-01

    In the paraxial form of the Debye integral for focusing, higher order defocus terms are ignored, which can result in errors in dealing with aberrations, even for low numerical aperture. These errors can be avoided by using a different integration variable. The aberrations of a glass slab, such as a coverslip, are expanded in terms of the new variable, and expressed in terms of Zernike polynomials to assist with aberration balancing. Tube length error is also discussed.
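
The size of the neglected higher-order defocus terms can be checked with a short numerical sketch (my own illustration, not taken from the paper): compare the exact defocus phase factor 1 - cos(theta) with the paraxial leading term sin^2(theta)/2 at the marginal-ray angle.

```python
import numpy as np

# Defocus phase per unit axial displacement: exact k*(1 - cos(theta)) versus
# the paraxial leading term k*sin(theta)^2 / 2.  Even for modest NA the
# neglected higher-order terms are not negligible when balancing aberrations.
for na in (0.1, 0.3, 0.5):
    theta = np.arcsin(na)          # marginal-ray angle (refractive index n = 1 assumed)
    exact = 1 - np.cos(theta)
    parax = np.sin(theta)**2 / 2
    rel = (exact - parax) / exact
    print(f"NA={na}: relative error of paraxial defocus term = {rel:.3%}")
```

Even at NA = 0.1 the paraxial term is off by a fraction of a percent, which matters once aberration terms are balanced against each other.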

  15. An Astronomical Test of CCD Photometric Precision

    NASA Technical Reports Server (NTRS)

    Koch, David; Dunham, Edward; Borucki, William; Jenkins, Jon; DeVingenzi, D. (Technical Monitor)

    1998-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.
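
A plain linear-algebra analogue of the duality argument (a sketch, not the DG formulation of the paper): for A u = f and a linear functional J(u) = c·u, solving the adjoint problem A^T psi = c makes psi·(f - A u_h) an exact representation of the functional error of any approximation u_h.

```python
import numpy as np

# Duality-based exact error representation for a linear functional J(u) = c.u
# of the solution of A u = f: with the adjoint solve A^T psi = c, the
# functional error of any approximation u_h equals psi . (f - A u_h) exactly.
rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test system
f = rng.normal(size=n)
c = rng.normal(size=n)

u = np.linalg.solve(A, f)                     # "exact" solution
u_h = u + 1e-3 * rng.normal(size=n)           # perturbed approximation
psi = np.linalg.solve(A.T, c)                 # adjoint (dual) solution

true_err = c @ u - c @ u_h
dwr_est = psi @ (f - A @ u_h)                 # dual-weighted residual
print(true_err, dwr_est)
```

For nonlinear problems the identity holds only approximately, which is why the paper evaluates weighted residual approximations of the exact formula.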

  16. Analysis of vestibular schwannoma size in multiple dimensions: a comparative cohort study of different measurement techniques.

    PubMed

    Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M

    2010-04-01

    In this volumetric study of the vestibular schwannoma (VS), we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that must be measured for it to be attributable to an actual difference rather than to retest error. We also found empirical proportionality coefficients for the different methods. DESIGN/SETTING AND PARTICIPANTS: A methodological study of three different VS measurement methods compared with a reference method based on serial slice volume estimates. The approximation methods were based on: (i) a single diameter, (ii) three orthogonal diameters or (iii) the maximal slice area. Altogether 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, as relative percentages, were determined by repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each approximation method and the reference method. The tendency of the approximation methods to systematically overestimate or underestimate different-sized tumours was also assessed with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered: it showed greater retest errors than area-based measurements (25% and 15%, respectively), and it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can furthermore be more reliable for smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose instead the use of measurement modalities that take growth in multiple dimensions into account.
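
The geometric content of the three approximation families can be sketched as follows; the coefficients here are the idealised sphere/ellipsoid ones, whereas the study fitted empirical proportionality coefficients to its reference volumes.

```python
import math

# Illustrative volume approximations for a roughly ellipsoidal tumour
# (idealised geometric coefficients; the study derived empirical ones).
def vol_single_diameter(d):
    return math.pi * d**3 / 6                        # sphere from max diameter

def vol_three_diameters(d1, d2, d3):
    return math.pi * d1 * d2 * d3 / 6                # orthogonal-diameter ellipsoid

def vol_max_slice_area(a):
    return 4.0 / (3.0 * math.sqrt(math.pi)) * a**1.5 # sphere from its great-circle area

# A mildly elongated tumour, 20 x 15 x 12 mm:
d1, d2, d3 = 20.0, 15.0, 12.0
v_ellipsoid = vol_three_diameters(d1, d2, d3)
v_sphere = vol_single_diameter(d1)   # single max diameter overestimates elongated tumours
print(v_ellipsoid, v_sphere)
```

The single-diameter sphere gives roughly twice the ellipsoid volume here, illustrating why growth measured in one dimension can misrepresent volumetric change.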

  17. Uncertainty of InSAR velocity fields for measuring long-wavelength displacement

    NASA Astrophysics Data System (ADS)

    Fattahi, H.; Amelung, F.

    2014-12-01

    Long-wavelength artifacts in InSAR data are the main limitation to measuring long-wavelength displacement; they are traditionally attributed mainly to inaccuracy of the satellite orbits (orbital errors). However, most satellites are precisely tracked, with orbit uncertainties of 2-10 cm. Orbits of these satellites are thus precise enough to obtain velocity fields with uncertainties better than 1 mm/yr/100 km for older satellites (e.g. Envisat) and better than 0.2 mm/yr/100 km for modern satellites (e.g. TerraSAR-X and Sentinel-1) [Fattahi & Amelung, 2014]. Such accurate velocity fields are achievable if long-wavelength artifacts from sources other than orbital errors are identified and corrected for. We present a modified Small Baseline approach to measure long-wavelength deformation and evaluate the uncertainty of these measurements. We use a redundant network of interferograms for detection and correction of unwrapping errors to ensure the unbiased estimation of phase history. We distinguish between different sources of long-wavelength artifacts and correct those introduced by atmospheric delay, topographic residuals, timing errors, processing approximations and hardware issues. We evaluate the uncertainty of the velocity fields using a covariance matrix with contributions from orbital errors and residual atmospheric delay. For the contribution from orbital errors we consider the standard deviation of velocity gradients in the range and azimuth directions as a function of orbital uncertainty. For the contribution from residual atmospheric delay we use several approaches, including the structure functions of InSAR time-series epochs, the delay predicted by numerical weather models and the wet delay estimated from optical imagery. 
We validate this InSAR approach for measuring long-wavelength deformation by comparing InSAR velocity fields over a ~500 km long swath across the southern San Andreas fault system with independent GPS velocities and examine the estimated uncertainties in several non-deforming areas. We show the efficiency of the approach to study the continental deformation across the Chaman fault system at the western Indian plate boundary. Ref: Fattahi, H., & Amelung, F. (2014), InSAR uncertainty due to orbital errors, Geophys. J. Int. (in press).

  18. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
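
The kind of coverage-and-bias check the study advocates can be sketched with a small simulation (illustrative only, not the ordinal-CFA setting): generate many skewed small samples, form nominal 95% intervals for the mean, and count how often they actually contain the truth and on which side the misses fall.

```python
import numpy as np

# Coverage study for a nominal 95% z-interval on skewed small-sample data.
rng = np.random.default_rng(42)
true_mean, n, reps = 0.0, 15, 20000
z = 1.96
hits = miss_above = 0
for _ in range(reps):
    x = rng.exponential(1.0, size=n) - 1.0       # right-skewed data with mean 0
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    lo, hi = m - z * se, m + z * se
    if lo <= true_mean <= hi:
        hits += 1
    elif lo > true_mean:                         # interval entirely above the truth
        miss_above += 1
coverage = hits / reps
print(f"empirical coverage: {coverage:.3f} (nominal 0.95), "
      f"misses above: {miss_above}, below: {reps - hits - miss_above}")
```

Undercoverage and asymmetric misses of exactly this kind are what only an interval-level analysis, as opposed to a standard-error comparison, can reveal.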

  20. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
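
An implied-decimal check of the kind described can be sketched as below; `classify_psa` and its thresholds are hypothetical illustrations, not the SEER editing rules.

```python
# Minimal sketch of an implied-decimal-point check: registry fields that store
# PSA in tenths of ng/mL read "45" for 4.5 ng/mL, so a consolidated value that
# is exactly ten times the value quoted in the abstracted text is flagged as a
# likely implied-decimal error.  Categories mirror the four in the abstract.
def classify_psa(recorded_ng_ml, text_ng_ml, tol=1e-9):
    if abs(recorded_ng_ml - text_ng_ml) < tol:
        return "correct"
    if abs(recorded_ng_ml - 10.0 * text_ng_ml) < tol:
        return "implied decimal error"
    if abs(recorded_ng_ml - text_ng_ml) < 1.0:
        return "nonsignificant change (<1 ng/mL)"
    return "abstraction/coding error"

print(classify_psa(45.0, 4.5))   # value likely stored without the implied decimal
```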

  1. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Technical Reports Server (NTRS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when sqrt(n_i) instead of sqrt(n̄_i) is used to approximate the uncertainties sigma_i in the data, where n̄_i = E(n_i), the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. 
Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately = 1.2 keV, HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
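
The anticorrelation bias described above is easy to reproduce: weighting by the observed counts (sigma_i^2 = n_i) when estimating a constant Poisson rate gives the harmonic mean, which sits systematically below the arithmetic mean that Poisson maximum likelihood (and PLLSQ) returns. A minimal sketch:

```python
import numpy as np

# Fit a constant rate to Poisson counts two ways:
#  - naive weighted least squares with weights 1/n_i (observed counts as
#    variance estimates) -> harmonic mean, anticorrelated with the data, biased low;
#  - Poisson maximum likelihood -> arithmetic mean, unbiased.
rng = np.random.default_rng(7)
true_rate = 20.0
n = rng.poisson(true_rate, size=5000).astype(float)
n = n[n > 0]                           # guard; zero counts are vanishingly rare at this rate

wls_naive = n.size / np.sum(1.0 / n)   # harmonic mean
ml_est = n.mean()                      # Poisson ML estimate
print(f"naive WLS: {wls_naive:.3f}, ML: {ml_est:.3f}, truth: {true_rate}")
```

The naive fit comes out low by roughly var/mean = 1 count, independent of sample size, so averaging more data does not remove the bias.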

  2. The added mass forces in insect flapping wings.

    PubMed

    Liu, Longgui; Sun, Mao

    2018-01-21

    The added mass forces of three-dimensional (3D) flapping wings of some representative insects, and the accuracy of the often used simple two-dimensional (2D) method, are studied. The added mass force of a flapping wing is calculated by both 3D and 2D methods, and the total aerodynamic force of the wing is calculated by the CFD method. Our findings are as following. The added mass force has a significant contribution to the total aerodynamic force of the flapping wings during and near the stroke reversals, and the shorter the stroke amplitude is, the larger the added mass force becomes. Thus the added mass force could not be neglected when using the simple models to estimate the aerodynamics force, especially for insects with relatively small stroke amplitudes. The accuracy of the often used simple 2D method is reasonably good: when the aspect ratio of the wing is greater than about 3.3, error in the added mass force calculation due to the 2D assumption is less than 9%; even when the aspect ratio is 2.8 (approximately the smallest for an insect), the error is no more than 13%. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. For numerical differentiation, dimensionality can be a blessing!

    NASA Astrophysics Data System (ADS)

    Anderssen, Robert S.; Hegland, Markus

    Finite difference methods, such as the mid-point rule, have been applied successfully to the numerical solution of ordinary and partial differential equations. If such formulas are applied to observational data, in order to determine derivatives, the results can be disastrous. The reason for this is that measurement errors, and even rounding errors in computer approximations, are strongly amplified in the differentiation process, especially if small step-sizes are chosen and higher derivatives are required. A number of authors have examined the use of various forms of averaging which allows the stable computation of low order derivatives from observational data. The size of the averaging set acts like a regularization parameter and has to be chosen as a function of the grid size h. In this paper, it is initially shown how first (and higher) order single-variate numerical differentiation of higher dimensional observational data can be stabilized with a smaller loss of accuracy than occurs for the corresponding differentiation of one-dimensional data. The result is then extended to the multivariate differentiation of higher dimensional data. The nature of the trade-off between convergence and stability is explicitly characterized, and the complexity of various implementations is examined.
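
The noise-amplification effect, and the stabilising role of averaging, can be demonstrated in a few lines (an illustration of the general phenomenon, not the authors' multivariate scheme):

```python
import numpy as np

# Central differencing of noisy data: the noise term scales like eps/h, so it
# blows up as h -> 0, while averaging over a window of step sizes regularises.
rng = np.random.default_rng(3)
eps = 1e-3                                   # observational noise level
x0 = 1.0                                     # differentiate f(x) = sin(x) at x0

def noisy_f(x):
    return np.sin(x) + eps * rng.standard_normal(np.shape(x))

def central_diff(h):
    return (noisy_f(x0 + h) - noisy_f(x0 - h)) / (2 * h)

err_tiny_h = abs(central_diff(1e-6) - np.cos(x0))   # noise-dominated, huge
err_good_h = abs(central_diff(0.05) - np.cos(x0))   # truncation vs noise balanced

# Averaging many central differences over a window acts as regularisation:
hs = np.linspace(0.02, 0.08, 25)
err_avg = abs(np.mean([central_diff(h) for h in hs]) - np.cos(x0))
print(err_tiny_h, err_good_h, err_avg)
```

The averaging window here plays exactly the role of the regularisation parameter discussed above: too small and noise dominates, too large and the derivative is smeared.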

  4. Analytical model and error analysis of arbitrary phasing technique for bunch length measurement

    NASA Astrophysics Data System (ADS)

    Chen, Qushan; Qin, Bin; Chen, Wei; Fan, Kuanjun; Pei, Yuanji

    2018-05-01

    An analytical model of an RF phasing method using arbitrary phase scanning for bunch length measurement is reported. We set up a statistical model instead of a linear chirp approximation to analyze the energy modulation process. It is found that, assuming a short bunch (σ_φ/2π → 0) and small relative energy spread (σ_γ/γ_r → 0), the energy spread (Y = σ_γ^2) at the exit of the traveling wave linac has a parabolic relationship with the cosine of the injection phase (X = cos φ_r|_{z=0}), i.e., Y = AX^2 + BX + C. Analogous to quadrupole strength scanning for emittance measurement, this phase scanning method can be used to obtain the bunch length by measuring the energy spread at different injection phases. The injection phases can be randomly chosen, which is significantly different from the commonly used zero-phasing method. Further, the systematic error of the reported method, such as the influence of the space charge effect, is analyzed. This technique will be especially useful at low energies when the beam quality is dramatically degraded and is hard to measure using the zero-phasing method.
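
The scanning-and-fitting step can be sketched as follows; the parabola coefficients are invented for illustration, and extracting the bunch length from them follows the paper's analytical model, which is not reproduced here.

```python
import numpy as np

# Arbitrary-phase scanning: measure the energy spread Y = sigma_gamma^2 at a
# handful of randomly chosen injection phases, then fit the parabola
# Y = A X^2 + B X + C in X = cos(phi_r).  A, B, C below are invented.
rng = np.random.default_rng(5)
A_true, B_true, C_true = 4.0, -1.2, 0.9

phases = rng.uniform(-np.pi, np.pi, 12)        # arbitrary injection phases
X = np.cos(phases)
Y = A_true * X**2 + B_true * X + C_true
Y += 1e-3 * rng.standard_normal(Y.size)        # measurement noise

A_fit, B_fit, C_fit = np.polyfit(X, Y, 2)      # recover the parabola coefficients
print(A_fit, B_fit, C_fit)
```

This is the direct analogue of fitting a parabola in quadrupole strength during an emittance scan, which is the comparison the abstract draws.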

  5. Covariance analyses of satellite-derived mesoscale wind fields

    NASA Technical Reports Server (NTRS)

    Maddox, R. A.; Vonder Haar, T. H.

    1979-01-01

    Statistical structure functions have been computed independently for nine satellite-derived mesoscale wind fields that were obtained on two different days. Small cumulus clouds were tracked at 5 min intervals, but since these clouds occurred primarily in the warm sectors of midlatitude cyclones the results cannot be considered representative of the circulations within cyclones in general. The field structure varied considerably with time and was especially affected if mesoscale features were observed. The wind fields on the 2 days studied were highly anisotropic with large gradients in structure occurring approximately normal to the mean flow. Structure function calculations for the combined set of satellite winds were used to estimate random error present in the fields. It is concluded for these data that the random error in vector winds derived from cumulus cloud tracking using high-frequency satellite data is less than 1.75 m/s. Spatial correlation functions were also computed for the nine data sets. Normalized correlation functions were considerably different for u and v components and decreased rapidly as data point separation increased for both components. The correlation functions for transverse and longitudinal components decreased less rapidly as data point separation increased.
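
A one-dimensional structure function of the kind computed for these wind fields can be sketched on a synthetic correlated field (illustrative only):

```python
import numpy as np

# Structure function D(r) = <(u(x+r) - u(x))^2> of a synthetic wind component.
# For a correlated field D grows with separation before saturating at twice
# the field variance; the saturation level and shape are what the wind-field
# structure-function analyses quantify.
rng = np.random.default_rng(11)
n = 4096
# smooth random field: running mean of white noise gives correlated "wind"
u = np.convolve(rng.standard_normal(n), np.ones(50) / 50, mode="valid")

def structure_function(u, r):
    d = u[r:] - u[:-r]
    return np.mean(d * d)

D_small = structure_function(u, 2)     # well inside the correlation length
D_large = structure_function(u, 200)   # beyond the correlation length
print(D_small, D_large)
```

The small-separation limit of D(r) is also what the record uses to estimate the random error in the derived vectors: uncorrelated measurement noise shows up as a nonzero intercept as r → 0.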

  6. Refined energetic ordering for sulphate-water (n = 3-6) clusters using high-level electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Lambrecht, Daniel S.; McCaslin, Laura; Xantheas, Sotiris S.; Epifanovsky, Evgeny; Head-Gordon, Martin

    2012-10-01

    This work reports refinements of the energetic ordering of the known low-energy structures of sulphate-water clusters SO4^2-(H2O)n (n = 3-6) using high-level electronic structure methods. Coupled cluster singles and doubles with perturbative triples (CCSD(T)) is used in combination with an estimate of basis set effects up to the complete basis set limit using second-order Møller-Plesset theory. Harmonic zero-point energy (ZPE), included at the B3LYP/6-311++G(3df,3pd) level, was found to have a significant effect on the energetic ordering. In fact, we show that the energetic ordering is a result of a delicate balance between the electronic and vibrational energies. Limitations of the ZPE calculations, both due to electronic structure errors, and use of the harmonic approximation, probably constitute the largest remaining errors. Due to the often small energy differences between cluster isomers, and the significant role of ZPE, deuteration can alter the relative energies of low-lying structures, and, when it is applied in conjunction with calculated harmonic ZPEs, even alters the global minimum for n = 5. Experiments on deuterated clusters, as well as more sophisticated vibrational calculations, may therefore be quite interesting.
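
How ZPE differences can reorder nearly degenerate isomers, and how deuteration shifts the balance, can be seen in a toy calculation; all energies and frequencies below are invented, not the paper's values.

```python
import math

# Toy illustration of ZPE-driven reordering.  Harmonic ZPE = sum_i(omega_i)/2
# (in cm^-1 here); deuteration lowers X-H stretch frequencies by roughly
# 1/sqrt(2), so a ZPE gap that barely outweighs an electronic gap with H can
# fail to do so with D, flipping the ordering.
def zpe(freqs_cm):
    return 0.5 * sum(freqs_cm)

E_elec = {"A": 0.0, "B": 30.0}                     # cm^-1, nearly degenerate
freqs = {"A": [3700.0, 3650.0], "B": [3640.0, 3638.0]}  # only the stretches that differ

def total(isomer, deuterated=False):
    scale = 1.0 / math.sqrt(2.0) if deuterated else 1.0
    return E_elec[isomer] + zpe([scale * f for f in freqs[isomer]])

gap_h = total("A") - total("B")          # positive: A lies above B with H
gap_d = total("A", True) - total("B", True)  # negative: ordering flips with D
print(gap_h, gap_d)
```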

  7. Using cognitive status to predict crash risk: blazing new trails?

    PubMed

    Staplin, Loren; Gish, Kenneth W; Sifrit, Kathy J

    2014-02-01

    A computer-based version of an established neuropsychological paper-and-pencil assessment tool, the Trail-Making Test, was applied with approximately 700 drivers aged 70 years and older in offices of the Maryland Motor Vehicle Administration. This was a volunteer sample that received a small compensation for study participation, with an assurance that their license status would not be affected by the results. Analyses revealed that the study sample was representative of Maryland older drivers with respect to age and indices of prior driving safety. The relationship between drivers' scores on the Trail-Making Test and prospective crash experience was analyzed using a new outcome measure that explicitly takes into account error responses as well as correct responses, the error-compensated completion time. For the only reliable predictor of crash risk, Trail-Making Test Part B, this measure demonstrated a modest gain in specificity and was a more significant predictor of future safety risk than the simple time-to-completion measure. Improved specificity and the potential for autonomous test administration are particular advantages of this measure for use with large populations, in settings such as health care or driver licensing. © 2013.

  8. Local-metrics error-based Shepard interpolation as surrogate for highly non-linear material models in high dimensions

    NASA Astrophysics Data System (ADS)

    Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian

    2017-10-01

    Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high-dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
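
A baseline (isotropic-metric) Shepard interpolant, the starting point the authors modify with local metrics and local error estimates, can be sketched as:

```python
import numpy as np

# Plain inverse-distance-weighted (Shepard) interpolation in 3-D.  The paper's
# method additionally learns an anisotropic metric and an error estimate per
# data point; isotropic Euclidean distances here illustrate the baseline.
def shepard(x_query, x_data, y_data, p=4, tiny=1e-12):
    d = np.linalg.norm(x_data - x_query, axis=1)
    if d.min() < tiny:                 # query coincides with a node
        return y_data[np.argmin(d)]
    w = d**-p                          # inverse-distance weights
    return np.sum(w * y_data) / np.sum(w)

rng = np.random.default_rng(2)
x_data = rng.uniform(-1, 1, size=(200, 3))
y_data = np.sin(x_data[:, 0]) + x_data[:, 1]**2      # smooth test function

q = np.array([0.1, 0.2, -0.3])
approx = shepard(q, x_data, y_data)
exact = np.sin(q[0]) + q[1]**2
print(approx, exact)
```

When the function varies rapidly only along a few directions, replacing the Euclidean distance with a learned local metric concentrates the weights along the benign directions, which is the key idea the abstract describes.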

  10. Ground-based remote sensing of thin clouds in the Arctic

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.; Zhao, C.

    2012-11-01

    This paper describes a method for using interferometer measurements of downwelling thermal radiation to retrieve the properties of single-layer clouds. Cloud phase is determined from ratios of thermal emission in three "micro-windows" where absorption by water vapor is particularly small. Cloud microphysical and optical properties are retrieved from thermal emission in two micro-windows, constrained by the transmission through clouds of stratospheric ozone emission. Assuming a cloud does not approximate a blackbody, the estimated 95% confidence retrieval errors in effective radius, visible optical depth, number concentration, and water path are, respectively, 10%, 20%, 38% (55% for ice crystals), and 16%. Applied to data from the Atmospheric Radiation Measurement program (ARM) North Slope of Alaska - Adjacent Arctic Ocean (NSA-AAO) site near Barrow, Alaska, retrievals show general agreement with ground-based microwave radiometer measurements of liquid water path. Compared to other retrieval methods, advantages of this technique include its ability to characterize thin clouds year round, that water vapor is not a primary source of retrieval error, and that the retrievals of microphysical properties are only weakly sensitive to retrieved cloud phase. The primary limitation is the inapplicability to thicker clouds that radiate as blackbodies.

  11. Adaptive Control of Small Outboard-Powered Boats for Survey Applications

    NASA Technical Reports Server (NTRS)

    VanZwieten, T.S.; VanZwieten, J.H.; Fisher, A.D.

    2009-01-01

    Four autopilot controllers have been developed in this work that can both hold a desired heading and follow a straight line. These PID, adaptive PID, neuro-adaptive, and adaptive augmenting control algorithms have all been implemented into a numerical simulation of a 33-foot center console vessel with wind, waves, and current disturbances acting in the perpendicular (across-track) direction of the boat's desired trajectory. Each controller is tested for its ability to follow a desired heading in the presence of these disturbances and then to follow a straight line at two different throttle settings for the same disturbances. These controllers were tuned for an input thrust of 2000 N and all four controllers showed good performance with none of the controllers significantly outperforming the others when holding a constant heading and following a straight line at this engine thrust. Each controller was then tested for a reduced engine thrust of 1200 N per engine where each of the three adaptive controllers reduced heading error and across-track error by approximately 50% after a 300 second tuning period when compared to the fixed gain PID, showing that significant robustness to changes in throttle setting was gained by using an adaptive algorithm.
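
A fixed-gain PID heading controller of the kind used as the baseline here can be sketched on a first-order yaw model; all gains and model constants below are invented for illustration:

```python
import numpy as np

# PID heading hold on a Nomoto-like first-order yaw model, with a constant
# disturbance standing in for wind/current.  The study's adaptive variants
# additionally retune gains of this kind online.
dt, T, K = 0.1, 3.0, 0.5          # time step [s], yaw time constant, rudder gain
kp, ki, kd = 4.0, 0.4, 2.0        # PID gains (illustrative)
psi_ref = np.radians(30.0)        # commanded heading

psi = r = integral = 0.0          # heading, yaw rate, integral state
disturbance = 0.05                # constant yaw moment from wind/current
for _ in range(3000):             # 300 s of simulated time, forward Euler
    err = psi_ref - psi
    integral += err * dt
    rudder = kp * err + ki * integral - kd * r   # derivative on measured yaw rate
    r += dt * (-r / T + K * rudder / T + disturbance)
    psi += dt * r

print(np.degrees(psi_ref - psi))   # residual heading error in degrees
```

The integral term is what rejects the constant disturbance; an adaptive scheme would instead adjust kp, ki, kd as the thrust setting, and hence the plant, changes.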

  12. Coherent control of molecular alignment of homonuclear diatomic molecules by analytically designed laser pulses.

    PubMed

    Zou, Shiyang; Sanz, Cristina; Balint-Kurti, Gabriel G

    2008-09-28

    We present an analytic scheme for designing laser pulses to manipulate the field-free molecular alignment of a homonuclear diatomic molecule. The scheme is based on the use of a generalized pulse-area theorem and makes use of pulses constructed around two-photon resonant frequencies. In the proposed scheme, the populations and relative phases of the rovibrational states of the molecule are independently controlled utilizing changes in the laser intensity and in the carrier-envelope phase difference, respectively. This allows us to create the correct coherent superposition of rovibrational states needed to achieve optimal molecular alignment. The validity and efficiency of the scheme are demonstrated by explicit application to the H2 molecule. The analytically designed laser pulses are tested by exact numerical solutions of the time-dependent Schrödinger equation including laser-molecule interactions to all orders of the field strength. The design of a sequence of pulses to further enhance molecular alignment is also discussed and tested. It is found that the rotating wave approximation used in the analytic design of the laser pulses leads to small errors in the prediction of the relative phase of the rotational states. It is further shown how these errors may be easily corrected.

  13. Experiments with explicit filtering for LES using a finite-difference method

    NASA Technical Reports Server (NTRS)

    Lund, T. S.; Kaltenbach, H. J.

    1995-01-01

    The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is also 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and these errors will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES has repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh. 
The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, an interesting question arises: since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from an LES perspective. Very little is known about this tradeoff, and the objective of this work is to study it in high-Reynolds-number channel flow simulations using a second-order finite-difference method.
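The effect of an explicit filter can be illustrated with the simplest discrete top-hat stencil. This is a generic one-dimensional sketch, not the specific filter used in the CTR simulations, and `tophat_filter` is a hypothetical helper name:

```python
import math

def tophat_filter(u):
    """Apply the 3-point discrete top-hat filter
    u_bar_i = (u_{i-1} + 2*u_i + u_{i+1}) / 4 with periodic wrap-around."""
    n = len(u)
    return [(u[(i - 1) % n] + 2.0 * u[i] + u[(i + 1) % n]) / 4.0
            for i in range(n)]

n = 16
# Highest resolved (two-point) wave: alternating +1/-1.
nyquist = [(-1.0) ** i for i in range(n)]
# A well-resolved low-wavenumber wave.
smooth = [math.sin(2.0 * math.pi * i / n) for i in range(n)]

print(max(abs(v) for v in tophat_filter(nyquist)))  # the two-point wave is annihilated
print(max(abs(v) for v in tophat_filter(smooth)))   # the smooth wave is barely attenuated
```

The transfer function of this stencil is cos^2(k*dx/2): the poorly represented two-point wave is removed entirely, while the energy-containing low wavenumbers pass almost unchanged, which is exactly the trade of resolution for numerical accuracy discussed above.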

  14. Optimization of selected molecular orbitals in group basis sets.

    PubMed

    Ferenczy, György G; Adams, William H

    2009-04-07

We derive a local basis equation which may be used to determine the orbitals of a group of electrons in a system when the orbitals of that group are represented by a group basis set, i.e., not the basis set one would normally use but a subset suited to a specific electronic group. The group orbitals determined by the local basis equation minimize the energy of a system when a group basis set is used and the orbitals of other groups are frozen. In contrast, under the constraint of a group basis set, the group orbitals satisfying the Huzinaga equation do not minimize the energy. In a test of the local basis equation on HCl, the group basis set included only 12 of the 21 functions in a basis set one might ordinarily use, but the calculated active orbital energies were within 0.001 hartree of the values obtained by solving the Hartree-Fock-Roothaan (HFR) equation using all 21 basis functions. The total energy found was just 0.003 hartree higher than the HFR value. The errors with the group basis set approximation to the Huzinaga equation were larger by over two orders of magnitude. Similar results were obtained for PCl3 with the group basis approximation. Retaining more basis functions allows even higher accuracy, as shown by the perfect reproduction of the HFR energy of HCl with 16 out of 21 basis functions in the valence basis set. When the core basis set was also truncated, no additional error was introduced in the calculations performed for HCl with various basis sets. The same calculations with fixed core orbitals taken from isolated heavy atoms added a small error of about 10^-4 hartree. This offers a practical way to calculate wave functions with predetermined fixed core and reduced-basis valence orbitals at reduced computational cost. The local basis equation can also be used to combine the above approximations with the assignment of local basis sets to groups of localized valence molecular orbitals and to derive a priori localized orbitals. 
An appropriately chosen localization and basis set assignment allowed the energy of n-hexane to be reproduced with an error of 10^-5 hartree, while the energy difference between its two conformers was reproduced with similar accuracy for several combinations of localizations and basis set assignments. These calculations include localized orbitals extending over 4-5 heavy atoms and thus require solving reduced-dimension secular equations. The dimensions are not expected to grow with system size, so the local basis equation may find use in linear-scaling electronic structure calculations.
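The variational behavior described above, in which a truncated group basis can only raise the computed energy, can be illustrated with a toy Rayleigh-Ritz calculation. The 3-function "Hamiltonian" below is invented purely for illustration and has nothing to do with HCl or the paper's basis sets:

```python
def lowest_eigenvalue_power(a, shift, iters=500):
    """Lowest eigenvalue of a symmetric matrix a, via power iteration on
    (shift*I - a); assumes shift exceeds the largest eigenvalue of a."""
    n = len(a)
    b = [[(shift if i == j else 0.0) - a[i][j] for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(b[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged vector gives the top eigenvalue of b.
    lam_b = sum(v[i] * sum(b[i][j] * v[j] for j in range(n)) for i in range(n))
    return shift - lam_b

# Toy symmetric "Hamiltonian" in a 3-function basis (illustrative values).
h = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
e_full = lowest_eigenvalue_power(h, shift=6.0)

# Truncate to the first two basis functions (the "group basis"):
h2 = [[2.0, 1.0],
      [1.0, 3.0]]
e_group = lowest_eigenvalue_power(h2, shift=6.0)

print(e_full, e_group)  # the truncated-basis energy lies above the full-basis one
```

For this matrix the full minimum is 3 - sqrt(3) while the 2-function subspace gives 2.5 - sqrt(1.25); the gap between them is the truncation error that the abstract's group basis set keeps down to millihartrees.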

  15. A revised radiation package of G-packed McICA and two-stream approximation: Performance evaluation in a global weather forecasting model

    NASA Astrophysics Data System (ADS)

    Baek, Sunghye

    2017-07-01

For more efficient and accurate computation of radiative flux, improvements have been achieved in two aspects: integration of the radiative transfer equation over space and over angle. First, the treatment of the Monte Carlo independent column approximation (McICA) is modified, focusing on efficiency, using a reduced number of random samples ("G-packed") within a reconstructed and unified radiation package. The original McICA accounts for 20% of the radiation CPU time in the Global/Regional Integrated Model systems (GRIMs). The CPU time consumption of McICA is reduced by 70% without compromising accuracy. Second, parameterizations of shortwave two-stream approximations are revised to reduce errors with respect to the 16-stream discrete ordinate method. The delta-scaled two-stream approximation (TSA) is used almost universally in general circulation models (GCMs) but contains systematic errors that overestimate forward-peak scattering as solar elevation decreases. These errors are alleviated by adjusting the parameterizations of each scattering element (aerosols and liquid, ice, and snow cloud particles). Parameterizations are determined with 20,129 atmospheric columns of GRIMs data and tested with 13,422 independent data columns. The result shows that the root-mean-square error (RMSE) over all atmospheric layers is reduced by 39% on average without a significant increase in computational time. The revised TSA, developed and validated with a separate one-dimensional model, is then implemented in GRIMs for medium-range numerical weather forecasting. Monthly averaged global forecast skill scores are unchanged with the revised TSA, but the temperature in the lower atmosphere (pressure ≥ 700 hPa) is slightly increased (< 0.5 K) by the corrected atmospheric absorption.
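For reference, the delta scaling the abstract refers to is conventionally carried out with the delta-Eddington transformation, in which the forward-peak fraction f = g^2 of the phase function is removed before the two-stream solve. The sketch below shows that standard transformation, not the paper's revised parameterizations, and the layer values are made up:

```python
def delta_scale(tau, omega, g):
    """Delta-Eddington scaling of a layer's optical depth (tau), single-scattering
    albedo (omega), and asymmetry factor (g): the forward-peak fraction f = g**2
    is treated as unscattered light and folded out of the two-stream inputs."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)   # algebraically equal to g / (1 + g)
    return tau_s, omega_s, g_s

# Strongly forward-scattering cloud layer (hypothetical values).
print(delta_scale(tau=10.0, omega=0.99, g=0.85))
```

With g = 0 the transformation is the identity; for strongly forward-scattering particles it shrinks the effective optical depth and asymmetry, which is why mis-tuned scaling shows up as the low-sun forward-peak bias the abstract describes.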

  16. Uniform analytic approximation of Wigner rotation matrices

    NASA Astrophysics Data System (ADS)

    Hoffmann, Scott E.

    2018-02-01

    We derive the leading asymptotic approximation, for small angle θ, of the Wigner rotation matrix elements d^j_{m1 m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.
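Exact elements for comparison with such approximations can be generated from Wigner's explicit sum formula. A minimal pure-Python version for integer j is sketched below (Condon-Shortley phase convention assumed); for small θ and large j these elements indeed track a Bessel function of order m1 - m2, as the abstract states:

```python
from math import cos, factorial, sin

def wigner_d(j, m1, m2, beta):
    """Wigner small-d matrix element d^j_{m1,m2}(beta) for integer j,
    evaluated from Wigner's explicit sum formula."""
    pre = (factorial(j + m1) * factorial(j - m1)
           * factorial(j + m2) * factorial(j - m2)) ** 0.5
    total = 0.0
    # Sum over all s keeping every factorial argument non-negative.
    for s in range(max(0, m2 - m1), min(j + m2, j - m1) + 1):
        sign = (-1.0) ** (m1 - m2 + s)
        den = (factorial(j + m2 - s) * factorial(s)
               * factorial(m1 - m2 + s) * factorial(j - m1 - s))
        total += (sign / den) * cos(beta / 2.0) ** (2 * j + m2 - m1 - 2 * s) \
                              * sin(beta / 2.0) ** (m1 - m2 + 2 * s)
    return pre * total

# Sanity checks: d^1_{0,0}(beta) = cos(beta), d^1_{1,0}(beta) = -sin(beta)/sqrt(2).
print(wigner_d(1, 0, 0, 0.3), cos(0.3))
print(wigner_d(1, 1, 0, 0.3), -sin(0.3) / 2.0 ** 0.5)
```

Since d^j(β) is a real orthogonal matrix, its rows must be orthonormal for any β, which gives a convention-independent correctness check on the implementation.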

  17. Robust approximation-free prescribed performance control for nonlinear systems and its application

    NASA Astrophysics Data System (ADS)

    Sun, Ruisheng; Na, Jing; Zhu, Bin

    2018-02-01

    This paper presents a robust prescribed performance control approach and its application to nonlinear tail-controlled missile systems with unknown dynamics and uncertainties. The idea of a prescribed performance function (PPF) is incorporated into the control design, so that both the steady-state and transient control performance can be strictly guaranteed. Unlike conventional PPF-based control methods, we further tailor a recently proposed systematic control design procedure (i.e. approximation-free control) using the transformed tracking error dynamics, which yields a proportional-like control action. Hence, the function approximators (e.g. neural networks, fuzzy systems) that are widely used to address the unknown nonlinearities in nonlinear control designs are not needed. The proposed design leads to a robust yet simplified function-approximation-free control for nonlinear systems. The closed-loop system stability and the control error convergence are rigorously proved. Finally, comparative simulations are conducted on nonlinear missile systems to validate the improved response and the robustness of the proposed control method.
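The flavor of approximation-free PPF control can be conveyed by a scalar toy simulation: the tracking error is normalized by a decaying performance function ρ(t) and passed through a barrier-type transformation, giving a proportional-like action with no function approximator. The plant, gains, and PPF parameters below are illustrative, not the paper's missile model:

```python
import math

def simulate(t_end=5.0, dt=1e-3):
    """Approximation-free prescribed-performance regulation of x -> 0 for the
    hypothetical scalar plant xdot = x + u (the drift term x is unknown to
    the controller; only the funnel ρ(t) and a gain k are used)."""
    rho0, rho_inf, decay, k = 1.0, 0.1, 2.0, 2.0
    x, t = 0.5, 0.0
    history = []
    while t < t_end:
        rho = (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf  # performance funnel
        xi = x / rho                                             # normalized error, |xi| < 1
        history.append((t, x, rho))
        u = -k * math.log((1.0 + xi) / (1.0 - xi))               # barrier-type feedback
        x += dt * (x + u)                                        # Euler step of the plant
        t += dt
    return history

hist = simulate()
print(hist[-1])  # error ends deep inside the steady-state band rho_inf = 0.1
```

The logarithmic barrier blows up as |x| approaches ρ(t), so the error is pushed back into the funnel before it can touch the boundary; that is the mechanism by which transient and steady-state performance are guaranteed without approximating the unknown dynamics.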

  18. Online Recorded Data-Based Composite Neural Control of Strict-Feedback Systems With Application to Hypersonic Flight Dynamics.

    PubMed

    Xu, Bin; Yang, Daipeng; Shi, Zhongke; Pan, Yongping; Chen, Badong; Sun, Fuchun

    2017-09-25

    This paper investigates online recorded data-based composite neural control of uncertain strict-feedback systems in the backstepping framework. In each step of the virtual control design, a neural network (NN) is employed for uncertainty approximation. In previous works, most designs aim directly at system stability, ignoring how the NN actually works as an approximator. In this paper, to enhance the learning ability, a novel prediction error signal is constructed to provide additional correction information for the NN weight update using online recorded data. In this way, the neural approximation precision is greatly improved, and convergence is faster. Furthermore, a sliding mode differentiator is employed to approximate the derivative of the virtual control signal, so that the complex analysis of the backstepping design can be avoided. The closed-loop stability is rigorously established, and the boundedness of the tracking error is guaranteed. In simulations of hypersonic flight dynamics, the proposed approach exhibits better tracking performance.
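The core of the prediction-error correction can be sketched in isolation: the weights of an RBF approximator are updated from the prediction error on recorded input-output data, which is the extra information channel the abstract adds to the usual tracking-error-driven update. This is a simplified stand-alone sketch, not the paper's composite law, and all names and parameter values are illustrative:

```python
import math

def rbf(x, centers, width=0.5):
    """Gaussian radial basis features evaluated at scalar input x."""
    return [math.exp(-((x - c) ** 2) / (2.0 * width ** 2)) for c in centers]

def train(recorded, centers, eta=0.05, epochs=2000):
    """LMS-style weight update driven by the prediction error on recorded data.
    (In the composite scheme this correction term is added to the
    tracking-error-driven update; only the recorded-data term is shown here.)"""
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in recorded:
            phi = rbf(x, centers)
            eps = y - sum(wi * pi for wi, pi in zip(w, phi))  # prediction error
            w = [wi + eta * eps * pi for wi, pi in zip(w, phi)]
    return w

centers = [0.5 * c - 3.0 for c in range(13)]   # 13 RBF centers on [-3, 3]
# Recorded samples of an "unknown" nonlinearity, here sin(x) for illustration.
recorded = [(0.25 * k - 3.0, math.sin(0.25 * k - 3.0)) for k in range(25)]
w = train(recorded, centers)
```

Driving the weights with the prediction error on stored data gives the NN persistent excitation it would not get from the instantaneous tracking error alone, which is why the abstract reports sharper approximation and faster convergence.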

  19. 3DHZETRN: Inhomogeneous Geometry Issues

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.

    2017-01-01

    Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.

  20. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed that eliminates the zeroth-order effect in lateral shearing interferometry. An analytical expression for the calibration error function is deduced, and the relationship between the phase-restoration error and the calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase-shifting error and the zeroth-order effect. The calibration method is verified by simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase-shifting error and the zeroth-order effect when the phase-shifting error is less than 2° and the zeroth-order effect is less than 0.2. The experimental results show that, compared with the conventional method using 9-frame interferograms, the calibration method using 5-frame interferograms achieves nearly the same restoration accuracy.
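The abstract does not spell out which phase-shifting algorithms are used, but a conventional 5-frame retrieval (the Hariharan algorithm, assumed here purely for illustration) shows the basic machinery of recovering phase from phase-stepped interferograms:

```python
import math

def hariharan_phase(frames):
    """Recover the phase from five intensity frames taken at pi/2 phase steps,
    via the Hariharan 5-frame formula: tan(phi) = 2(I2 - I4) / (2*I3 - I1 - I5)."""
    i1, i2, i3, i4, i5 = frames
    return math.atan2(2.0 * (i2 - i4), 2.0 * i3 - i1 - i5)

def make_frames(phi, bias=1.0, mod=0.5):
    """Synthesize ideal frames I_k = bias + mod*cos(phi + delta_k)
    at phase shifts -pi, -pi/2, 0, pi/2, pi (no shifting error, no zeroth order)."""
    shifts = (-math.pi, -math.pi / 2.0, 0.0, math.pi / 2.0, math.pi)
    return [bias + mod * math.cos(phi + d) for d in shifts]

phi_est = hariharan_phase(make_frames(0.8))
print(phi_est)  # recovers 0.8 up to floating point
```

With ideal frames the recovery is exact; the paper's subject is precisely what happens when the frames are not ideal, i.e. when a phase-shifting error and a zeroth-order term contaminate the intensities.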
