Sample records for small numerical errors

  1. Prediction of matching condition for a microstrip subsystem using artificial neural network and adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim

    2016-11-01

    In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm2), small insertion loss (0.08 dB) and wide fractional bandwidth (FBW) (61%). To design the proposed LNA, compact microstrip cells, a field effect transistor, and only a lumped capacitor are used. It has a low supply voltage and a low return loss (-40 dB) at the operation frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested against the numerical data using four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, and only a small error between the predicted values and the numerical solution is obtained.

  2. Error Control with Perfectly Matched Layer or Damping Layer Treatments for Computational Aeroacoustics with Jet Flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    2009-01-01

    In this paper we show by means of numerical experiments that the error introduced in a numerical domain because of a Perfectly Matched Layer or Damping Layer boundary treatment can be controlled. These experimental demonstrations are for acoustic propagation with the Linearized Euler Equations with both uniform and steady jet flows. The propagating signal is driven by a time harmonic pressure source. Combinations of Perfectly Matched and Damping Layers are used with different damping profiles. These layer and profile combinations allow the relative error introduced by a layer to be kept as small as desired, in principle. Tradeoffs between error and cost are explored.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features, and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  4. The NLO jet vertex in the small-cone approximation for kt and cone algorithms

    NASA Astrophysics Data System (ADS)

    Colferai, D.; Niccoli, A.

    2015-04-01

    We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculations of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both analytic and numerical level, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented, for various observables of phenomenological interest. For a jet "radius" of R = 0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of the cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measurement of the azimuthal decorrelation of dijets.

  5. Symmetry boost of the fidelity of Shor factoring

    NASA Astrophysics Data System (ADS)

    Nam, Y. S.; Blümel, R.

    2018-05-01

    In Shor's algorithm, quantum subroutines occur with the structure F U F^-1, where F is a unitary transform and U performs a quantum computation. Examples are quantum adders and subunits of quantum modulo adders. In this paper we show, both analytically and numerically, that if, in analogy to spin echoes, F and F^-1 can be implemented symmetrically when executing Shor's algorithm on actual, imperfect quantum hardware, such that F and F^-1 have the same hardware errors, a symmetry boost in the fidelity of the combined F U F^-1 quantum operation results when compared to the case in which the errors in F and F^-1 are independently random. Running the complete gate-by-gate implemented Shor algorithm, we show that the symmetry-induced fidelity boost can be as large as a factor of 4. While most of our analytical and numerical results concern the case of over- and under-rotation of controlled rotation gates, in the numerically accessible case of Shor's algorithm with a small number of qubits, we show explicitly that the symmetry boost is robust with respect to more general types of errors. While, expectedly, additional error types reduce the symmetry boost, we show explicitly, by implementing general off-diagonal SU(N) errors (N = 2, 4, 8), that the boost factor scales like a Lorentzian in δ/σ, where σ and δ are the error strengths of the diagonal over- and under-rotation errors and the off-diagonal SU(N) errors, respectively. The Lorentzian shape also shows that, while the boost factor may become small with increasing δ, it declines slowly (essentially like a power law) and is never completely erased. We also investigate the effect of diagonal nonunitary errors, which, in analogy to unitary errors, reduce but never erase the symmetry boost. Going beyond the case of small quantum processors, we present analytical scaling results that show that the symmetry boost persists in the practically interesting case of a large number of qubits. We illustrate this result explicitly for the case of Shor factoring of the semiprime RSA-1024, where, analytically, focusing on over- and under-rotation errors, we obtain a boost factor of about 10. In addition, we provide a proof of the fidelity product formula, including its range of applicability.
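
    As a toy illustration of the spin-echo-like cancellation described above (a single-qubit stand-in, not the authors' gate-by-gate Shor simulation), the sketch below conjugates a computation U by a transform F implemented with an over-/under-rotation error, and compares the average fidelity when F and F^-1 share the same error against the case of independent errors. The gate choices, angles, and error strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rx(theta):
    """Single-qubit rotation about x."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def rz(theta):
    """Single-qubit rotation about z."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def fidelity(a, b):
    return abs(np.vdot(a, b)) ** 2

theta_f, theta_u, sigma = 0.7, 1.3, 0.2        # illustrative angles and error strength
psi0 = np.array([1.0, 0.0], dtype=complex)
ideal = rx(-theta_f) @ rz(theta_u) @ rx(theta_f) @ psi0    # exact F^-1 U F

f_sym, f_ind = [], []
for _ in range(20000):
    e1, e2 = rng.normal(0.0, sigma, 2)
    # symmetric implementation: F and F^-1 carry the *same* over-/under-rotation
    sym = rx(-(theta_f + e1)) @ rz(theta_u) @ rx(theta_f + e1) @ psi0
    # independent errors in F and F^-1
    ind = rx(-(theta_f + e2)) @ rz(theta_u) @ rx(theta_f + e1) @ psi0
    f_sym.append(fidelity(ideal, sym))
    f_ind.append(fidelity(ideal, ind))

inf_sym, inf_ind = 1 - np.mean(f_sym), 1 - np.mean(f_ind)
print(f"mean infidelity, symmetric errors:   {inf_sym:.4f}")
print(f"mean infidelity, independent errors: {inf_ind:.4f}")
print(f"symmetry boost factor:               {inf_ind / inf_sym:.2f}")
```

    The boost factor printed here is specific to this toy model and should not be read as the factor of 4 (or 10) reported in the abstract.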

  6. The influence of graphic display format on the interpretations of quantitative risk information among adults with lower education and literacy: a randomized experimental study.

    PubMed

    McCaffery, Kirsten J; Dixon, Ann; Hayen, Andrew; Jansen, Jesse; Smith, Sian; Simpson, Judy M

    2012-01-01

    To test optimal graphic risk communication formats for presenting small probabilities using graphics with a denominator of 1000 to adults with lower education and literacy. A randomized experimental study, which took place in adult basic education classes in Sydney, Australia. The participants were 120 adults with lower education and literacy. An experimental computer-based manipulation compared 1) pictographs in 2 forms, shaded "blocks" and unshaded "dots"; and 2) bar charts across different orientations (horizontal/vertical) and numerator size (small <100, medium 100-499, large 500-999). Accuracy (size of error) and ease of processing (reaction time) were assessed on a gist task (estimating the larger chance of survival) and a verbatim task (estimating the size of difference). Preferences for different graph types were also assessed. Accuracy on the gist task was very high across all conditions (>95%) and not tested further. For the verbatim task, optimal graph type depended on the numerator size. For small numerators, pictographs resulted in fewer errors than bar charts (blocks: odds ratio [OR] = 0.047, 95% confidence interval [CI] = 0.023-0.098; dots: OR = 0.049, 95% CI = 0.024-0.099). For medium and large numerators, bar charts were more accurate (e.g., medium dots: OR = 4.29, 95% CI = 2.9-6.35). Pictographs were generally processed faster for small numerators (e.g., blocks: 14.9 seconds v. bars: 16.2 seconds) and bar charts for medium or large numerators (e.g., large blocks: 41.6 seconds v. bars: 26.7 seconds). Vertical formats were processed slightly faster than horizontal graphs with no difference in accuracy. Most participants preferred bar charts (64%); however, there was no relationship with performance. For adults with low education and literacy, pictographs are likely to be the best format to use when displaying small numerators (<100/1000) and bar charts for larger numerators (>100/1000).

  7. Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth’s surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.

  8. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.

  9. Rotational degree-of-freedom synthesis: An optimised finite difference method for non-exact data

    NASA Astrophysics Data System (ADS)

    Gibbons, T. J.; Öztürk, E.; Sims, N. D.

    2018-01-01

    Measuring the rotational dynamic behaviour of a structure is important for many areas of dynamics such as passive vibration control, acoustics, and model updating. Specialist and dedicated equipment is often needed, unless the rotational degree-of-freedom is synthesised from translational data. However, this involves numerically differentiating the translational mode shapes to approximate the rotational modes, for example using a finite difference algorithm. A key challenge with this approach is choosing the measurement spacing between the data points, an issue which has often been overlooked in the published literature. The present contribution proves, for the first time, that a finite difference approach can be unstable for beam-like structures when non-exact measured data and a small measurement spacing are used. A generalised analytical error analysis is then used to propose an optimised measurement spacing, which balances the numerical error of the finite difference equation against the propagation error from the perturbed data. The approach is demonstrated using both numerical and experimental investigations. It is shown that by obtaining a small number of test measurements it is possible to optimise the measurement accuracy, without any further assumptions on the boundary conditions of the structure.
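
    The spacing trade-off described above (finite-difference truncation error grows with the spacing, while noise propagation error grows as the spacing shrinks) can be reproduced with a short numerical sketch. The synthetic mode shape, noise level, and the textbook central-difference optimum-spacing estimate below are illustrative assumptions, not the paper's generalised error analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

k = 8.0                                    # spatial wavenumber of the synthetic mode shape
sigma = 1e-3                               # assumed noise on the translational measurements
x0 = 0.3                                   # location where the rotation is synthesised
n_trials = 2000

def mode(x):
    return np.sin(k * x)                   # synthetic translational mode shape

def rotation_exact(x):
    return k * np.cos(k * x)               # its exact first derivative (the rotational DOF)

spacings = np.logspace(-4, -0.5, 40)
rms_err = []
for h in spacings:
    # central difference built from noisy measurements at x0 +/- h
    plus = mode(x0 + h) + rng.normal(0, sigma, n_trials)
    minus = mode(x0 - h) + rng.normal(0, sigma, n_trials)
    est = (plus - minus) / (2 * h)
    rms_err.append(np.sqrt(np.mean((est - rotation_exact(x0)) ** 2)))
rms_err = np.array(rms_err)

# Textbook balance: truncation error ~ h^2*|f'''|/6 versus noise propagation ~ sigma/(sqrt(2)*h)
f3 = abs(k ** 3 * np.cos(k * x0))          # |third derivative| of the mode shape at x0
h_opt = (3.0 * sigma / (np.sqrt(2.0) * f3)) ** (1.0 / 3.0)

print(f"spacing with smallest observed RMS error: {spacings[np.argmin(rms_err)]:.3e}")
print(f"spacing predicted by the error balance:   {h_opt:.3e}")
```

    The observed RMS error falls and then rises again as the spacing shrinks; the minimum sits near the spacing predicted by balancing the two error terms.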

  10. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector-correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a Geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and higher for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
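
    As a rough sketch of the semiempirical construction described above, the correction factor combines a calculated dose-to-water ratio with a measured detector reading ratio; the definition below is the standard k_Qclin,Qmsr(fclin,fmsr) form, and the numbers are placeholders, not data from the study.

```python
def k_small_field(dose_water_clin, dose_water_msr, reading_clin, reading_msr):
    """Small-field detector correction factor k_Qclin,Qmsr(fclin,fmsr):
    (dose-to-water ratio) divided by (detector reading ratio)."""
    return (dose_water_clin / dose_water_msr) / (reading_clin / reading_msr)

# Hypothetical numbers only: a Monte Carlo dose-to-water ratio for a small field
# relative to the reference field, combined with a measured detector reading ratio.
k = k_small_field(dose_water_clin=0.712, dose_water_msr=1.000,
                  reading_clin=0.739, reading_msr=1.000)
print(f"k_Qclin,Qmsr = {k:.3f}")   # < 1 indicates the detector over-responds in the small field
```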

  11. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with the same computational efficiency as EE.
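
    A minimal sketch of the two update rules compared above, applied to a single gating equation dm/dt = (m_inf(t) - m)/tau with a time-varying steady state. The drive, time constant, and step sizes are illustrative assumptions, and the sketch is not intended to reproduce the paper's error bounds.

```python
import numpy as np

def m_inf(t):
    """Time-varying steady-state value, mimicking a voltage-dependent gating variable."""
    return 0.5 + 0.4 * np.sin(2 * np.pi * t / 10.0)

def integrate(method, dt, t_end, tau=2.0, m0=0.1):
    m = m0
    for i in range(int(round(t_end / dt))):
        target = m_inf(i * dt)
        if method == "euler":                       # forward Euler update
            m += dt * (target - m) / tau
        else:                                       # exponential Euler update
            m = target + (m - target) * np.exp(-dt / tau)
    return m

t_end = 20.0
reference = integrate("exponential_euler", 1e-4, t_end)     # fine-step reference solution
for dt in (0.01, 0.1, 0.5, 1.0):
    for method in ("euler", "exponential_euler"):
        err = abs(integrate(method, dt, t_end) - reference)
        print(f"dt = {dt:5.2f}   {method:18s} |error| = {err:.2e}")
```

    Note that for a constant m_inf the exponential update would be exact; it is the time-varying drive that makes both methods approximate here.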

  12. Effects of Random Circuit Fabrication Errors on Small Signal Gain and on Output Phase In a Traveling Wave Tube

    NASA Astrophysics Data System (ADS)

    Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.

    2011-10-01

    Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification in the average small signal gain and in the output phase is derived from the third order ordinary differential equation that governs the forward wave interaction in a TWT in the presence of random error that is distributed along the axis of the tube. Analytical results compare favorably with numerical results, in both gain and phase modifications as a result of random error in the phase velocity of the slow wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work was supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.

  13. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    NASA Astrophysics Data System (ADS)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October-January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March-June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March-June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968-99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.


  14. Hardware-Independent Proofs of Numerical Programs

    NASA Technical Reports Server (NTRS)

    Boldo, Sylvie; Nguyen, Thi Minh Tuyen

    2010-01-01

    On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation whatever the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are entirely and automatically proved.

  15. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and optimal interpolation (OI) filter are examined for their effectiveness as gain matrices, using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
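
    A scalar sketch of the comparison described above: a Kalman-type filter that updates its gain from the forecast error variance versus an optimal-interpolation-style filter that uses a prescribed constant gain. The one-dimensional dynamics, noise variances, and the fixed gain value are illustrative assumptions, not the shallow-water configuration of the study.

```python
import numpy as np

rng = np.random.default_rng(2)
a, q, r = 0.95, 0.05, 0.5        # dynamics coefficient, process and observation noise variances
n = 2000

# Simulate the true state and noisy observations of it
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), n)

def run_filter(fixed_gain=None):
    """Sequential filter; Kalman gain if fixed_gain is None, else a prescribed constant gain."""
    xhat, p = 0.0, 1.0
    errs = []
    for k in range(n):
        xhat, p = a * xhat, a * a * p + q                 # forecast step
        gain = p / (p + r) if fixed_gain is None else fixed_gain
        xhat = xhat + gain * (y[k] - xhat)                # analysis (update) step
        p = (1.0 - gain) * p
        errs.append(xhat - x[k])
    return np.sqrt(np.mean(np.square(errs)))

print(f"RMS estimation error, Kalman gain:             {run_filter():.3f}")
print(f"RMS estimation error, fixed OI-style gain 0.6: {run_filter(fixed_gain=0.6):.3f}")
```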

  16. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  17. Generation of a crowned pinion tooth surface by a surface of revolution

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Zhang, J.; Handschuh, R. F.

    1988-01-01

    A method of generating crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc sec for the numerical examples). Tooth contact analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and determine the bearing contact.

  18. Measurement-free implementations of small-scale surface codes for quantum-dot qubits

    NASA Astrophysics Data System (ADS)

    Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.

    2018-01-01

    The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that only corrects bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.

  19. QIKAIM, a fast seminumerical algorithm for the generation of minute-of-arc accuracy satellite predictions

    NASA Astrophysics Data System (ADS)

    Vermeer, M.

    1981-07-01

    A program was designed to replace AIMLASER for the generation of aiming predictions, to achieve a major saving in computing time, and to keep the program small enough for use even on small systems. An approach was adopted that incorporated the numerical integration of the orbit through a pass, limiting the computation of osculating elements to only one point per pass. The numerical integration method, which is fourth order in delta t in the cumulative error after a given time lapse, is presented. Algorithms are explained and a flowchart and listing of the program are provided.

  20. Perturbation approach for nuclear magnetic resonance solid-state quantum computation

    DOE PAGES

    Berman, G. P.; Kamenev, D. I.; Tsifrinovich, V. I.

    2003-01-01

    The dynamics of a nuclear-spin quantum computer with a large number (L = 1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in an implementation of an entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed up to different orders of the perturbation theory and tested using an exact numerical solution.

  1. Numerical modelling of multimode fibre-optic communication lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidelnikov, O S; Fedoruk, M P; Sygletos, S

    The results of numerical modelling of nonlinear propagation of an optical signal in multimode fibres with a small differential group delay are presented. It is found that the dependence of the error vector magnitude (EVM) on the differential group delay can be reduced by increasing the number of ADC samples per symbol in the numerical implementation of the differential group delay compensation algorithm in the receiver. The possibility of using multimode fibres with a small differential group delay for data transmission in modern digital communication systems is demonstrated. It is shown that with an increasing number of modes the strong coupling regime provides a lower EVM level than the weak coupling one. (fibre-optic communication lines)
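
    For reference, the error vector magnitude quoted above is commonly defined as the RMS error between received and reference constellation symbols, normalised by the RMS reference amplitude. The sketch below uses illustrative QPSK symbols and additive noise; it is not the mode-coupling simulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def evm_percent(received, reference):
    """RMS error vector magnitude, normalised by the RMS reference amplitude, in percent."""
    err = received - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

# Illustrative QPSK reference constellation and a noisy "received" version of it
symbols = rng.integers(0, 4, 10_000)
reference = np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))   # unit-amplitude QPSK symbols
received = reference + (rng.normal(0, 0.05, symbols.size)
                        + 1j * rng.normal(0, 0.05, symbols.size))

print(f"EVM = {evm_percent(received, reference):.2f} %")
```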

  2. Solution of elastic-plastic stress analysis problems by the p-version of the finite element method

    NASA Technical Reports Server (NTRS)

    Szabo, Barna A.; Actis, Ricardo L.; Holzer, Stefan M.

    1993-01-01

    The solution of small strain elastic-plastic stress analysis problems by the p-version of the finite element method is discussed. The formulation is based on the deformation theory of plasticity and the displacement method. Practical realization of controlling discretization errors for elastic-plastic problems is the main focus. Numerical examples which include comparisons between the deformation and incremental theories of plasticity under tight control of discretization errors are presented.

  3. Small-Caliber Projectile Target Impact Angle Determined From Close Proximity Radiographs

    DTIC Science & Technology

    2006-10-01

    discrete motion data that can be numerically modeled using linear aerodynamic theory or 6-degrees-of-freedom equations of motion. The values of Fφ...Prediction Excel® Spreadsheet shown in figure 9. The Gamma at Impact Spreadsheet uses the linear aerodynamics model, equations 5 and 6, to calculate αT...trajectory angle error via consideration of the RMS fit errors of the actual firings. However, the linear aerodynamics model does not include this effect

  4. Time-dependent grid adaptation for meshes of triangles and tetrahedra

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.

    1993-01-01

    This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.

  5. A numerical method for solving systems of linear ordinary differential equations with rapidly oscillating solutions

    NASA Technical Reports Server (NTRS)

    Bernstein, Ira B.; Brookshaw, Leigh; Fox, Peter A.

    1992-01-01

    The present numerical method for the accurate and efficient solution of systems of linear ordinary differential equations proceeds by numerically developing a set of basis solutions characterized by slowly varying dependent variables. The solutions thus obtained are shown to have a computational overhead largely independent of the small size of the scale length which characterizes the solutions; in many cases, the technique obviates series solutions near singular points, and its known sources of error can be easily controlled without a substantial increase in computational time.

  6. Validity of flowmeter data in heterogeneous alluvial aquifers

    NASA Astrophysics Data System (ADS)

    Bianchi, Marco

    2017-04-01

    Numerical simulations are performed to evaluate the impact of medium-scale sedimentary architecture and small-scale heterogeneity on the validity of the borehole flowmeter test, a widely used method for measuring hydraulic conductivity (K) at the scale required for detailed groundwater flow and solute transport simulations. Reference data from synthetic K fields representing the range of structures and small-scale heterogeneity typically observed in alluvial systems are compared with estimated values from numerical simulations of flowmeter tests. Systematic errors inherent in the flowmeter K estimates are significant when the reference K field structure deviates from the hypothetical perfectly stratified conceptual model at the basis of the interpretation method of flowmeter tests. Because of these errors, the true variability of the K field is underestimated and the distributions of the reference K data and log-transformed spatial increments are also misconstrued. The presented numerical analysis shows that the validity of flowmeter-based K data depends on measurable parameters defining the architecture of the hydrofacies, the conductivity contrasts between the hydrofacies, and the sub-facies-scale K variability. A preliminary geological characterization is therefore essential for evaluating the optimal approach for accurate K field characterization.

  7. Improvements to photometry. Part 1: Better estimation of derivatives in extinction and transformation equations

    NASA Technical Reports Server (NTRS)

    Young, Andrew T.

    1988-01-01

    Atmospheric extinction in wideband photometry is examined both analytically and through numerical simulations. If the derivatives that appear in the Stromgren-King theory are estimated carefully, it appears that wideband measurements can be transformed to outside the atmosphere with errors no greater than a millimagnitude. A numerical analysis approach is used to estimate derivatives of both the stellar and atmospheric extinction spectra, avoiding previous assumptions that the extinction follows a power law. However, it is essential to satisfy the requirements of the sampling theorem to keep aliasing errors small. Typically, this means that band separations cannot exceed half of the full width at half-peak response. Further work is needed to examine higher order effects, which may well be significant.

  8. Crowned spur gears - Methods for generation and Tooth Contact Analysis. II - Generation of the pinion tooth surface by a surface of revolution

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Handschuh, R. F.; Zhang, J.

    1988-01-01

    A method for generation of crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc second for the numerical examples). Tooth Contact Analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and the bearing contact.

  9. Application Of Multi-grid Method On China Seas' Temperature Forecast

    NASA Astrophysics Data System (ADS)

    Li, W.; Xie, Y.; He, Z.; Liu, K.; Han, G.; Ma, J.; Li, D.

    2006-12-01

    Correlation scales have been used in the traditional scheme of 3-dimensional variational (3D-Var) data assimilation to estimate the background error covariance for numerical forecasts and reanalyses of the atmosphere and ocean for decades. However, there are still some drawbacks to this scheme. First, the correlation scales are difficult to determine accurately. Second, the positive definiteness of the first-guess error covariance matrix cannot be guaranteed unless the correlation scales are sufficiently small. Xie et al. (2005) indicated that a traditional 3D-Var only corrects errors at certain wavelengths and that its accuracy depends on the accuracy of the first-guess covariance. In general, short-wavelength errors cannot be well corrected until the long-wavelength ones are corrected, and an inaccurate first-guess covariance may mistakenly take long-wave errors as short-wave ones and result in an erroneous analysis. For the purpose of quickly minimizing the errors of long and short waves successively, a new 3D-Var data assimilation scheme, called the multi-grid data assimilation scheme, is proposed in this paper. By assimilating shipboard SST and temperature profile data into a numerical model of the China Seas, we applied this scheme in a two-month data assimilation and forecast experiment, which ended in a favorable result. Compared with the traditional 3D-Var scheme, the new scheme has higher forecast accuracy and a lower forecast root-mean-square (RMS) error. Furthermore, this scheme was applied to assimilate shipboard SST, AVHRR Pathfinder Version 5.0 SST and temperature profiles at the same time, and a ten-month forecast experiment on the sea temperature of the China Seas was carried out, in which a successful forecast result was obtained. In particular, the new scheme demonstrated great numerical efficiency in these analyses.

  10. Quantum-state anomaly detection for arbitrary errors using a machine-learning technique

    NASA Astrophysics Data System (ADS)

    Hara, Satoshi; Ono, Takafumi; Okamoto, Ryo; Washio, Takashi; Takeuchi, Shigeki

    2016-10-01

    The accurate detection of small deviations in a given density matrix is important for quantum information processing, but it is a difficult task because of the intrinsic fluctuation in density matrices reconstructed using a limited number of experiments. We previously proposed a method for decoherence error detection using a machine-learning technique [S. Hara, T. Ono, R. Okamoto, T. Washio, and S. Takeuchi, Phys. Rev. A 89, 022104 (2014), 10.1103/PhysRevA.89.022104]. However, the previous method is not valid when the errors are just changes in phase. Here, we propose a method that is valid for arbitrary errors in density matrices. The performance of the proposed method is verified using both numerical simulation data and real experimental data.

  11. Effect of Random Circuit Fabrication Errors on Small Signal Gain and Phase in Helix Traveling Wave Tubes

    NASA Astrophysics Data System (ADS)

    Pengvanich, P.; Chernin, D. P.; Lau, Y. Y.; Luginsland, J. W.; Gilgenbach, R. M.

    2007-11-01

    Motivated by the current interest in mm-wave and THz sources, which use miniature, difficult-to-fabricate circuit components, we evaluate the statistical effects of random fabrication errors on a helix traveling wave tube amplifier's small signal characteristics. The small signal theory is treated in a continuum model in which the electron beam is assumed to be monoenergetic, and axially symmetric about the helix axis. Perturbations that vary randomly along the beam axis are introduced in the dimensionless Pierce parameters: b, the beam-wave velocity mismatch; C, the gain parameter; and d, the cold tube circuit loss. Our study shows, as expected, that perturbation in b dominates the other two. The extensive numerical data have been confirmed by our analytic theory. They show in particular that the standard deviation of the output phase is linearly proportional to the standard deviation of the individual perturbations in b, C, and d. Simple formulas have been derived which yield the output phase variations in terms of the statistical random manufacturing errors. This work was supported by AFOSR and by ONR.

  12. Optimization of the Hartmann-Shack microlens array

    NASA Astrophysics Data System (ADS)

    de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William

    2011-04-01

    In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular microlens arrays with a larger number of microlenses by arrays with fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for a known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or the number of necessary microlenses in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can be used to produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics is known.

  13. Simplified mathematics for customized refractive surgery.

    PubMed

    Preussner, Paul Rolf; Wahl, Jochen

    2003-03-01

    To describe a simple mathematical approach to customized corneal refractive surgery or customized intraocular lens (IOL) design that allows "hypervision" and to investigate the accuracy limits. University eye hospital, Mainz, Germany. Corneal shape and at least 1 IOL surface are approximated by the well-known Cartesian conic section curves (ellipsoid, paraboloid, or hyperboloid). They are characterized by only 2 parameters, the vertex radius and the numerical eccentricity. Residual refraction errors for this approximation are calculated by numerical ray tracing. These errors can be displayed as a 2-dimensional refraction map across the pupil or by blurring the image of a Landolt ring superimposed on the retinal receptor grid, giving an overall impression of the visual outcome. If the eye is made emmetropic for paraxial rays and if the numerical eccentricities of the cornea and lens are appropriately fitted to each other, the residual refractive errors are small enough to allow hypervision. Visual acuity of at least 2.0 (20/10) appears to be possible, particularly for mesopic pupil diameters. However, customized optics may have limited application due to their sensitivity to misalignment errors such as decentrations or rotations. The mathematical approach described by Descartes 350 years ago is adequate to calculate hypervision optics for the human eye. The availability of suitable mathematical tools should, however, not be viewed with too much optimism as long as the accuracy of the implementation in surgical procedures is limited.

  14. Event-triggered attitude control of spacecraft

    NASA Astrophysics Data System (ADS)

    Wu, Baolin; Shen, Qiang; Cao, Xibin

    2018-02-01

    The problem of spacecraft attitude stabilization control system with limited communication and external disturbances is investigated based on an event-triggered control scheme. In the proposed scheme, information of attitude and control torque only need to be transmitted at some discrete triggered times when a defined measurement error exceeds a state-dependent threshold. The proposed control scheme not only guarantees that spacecraft attitude control errors converge toward a small invariant set containing the origin, but also ensures that there is no accumulation of triggering instants. The performance of the proposed control scheme is demonstrated through numerical simulation.

  15. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  16. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  17. Residents' numeric inputting error in computerized physician order entry prescription.

    PubMed

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human computer interaction (HCI), produce different error rates and types, but this has received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, which poses a great risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad is recommended because it had lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
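
    The error categories mentioned above (omission, substitution, transposition, intrusion) can be told apart mechanically by comparing the typed digit string with the intended one. A rough sketch of such a classifier is shown below; the rules are simplified assumptions, not the coding scheme used in the study.

```python
def classify_numeric_error(intended: str, typed: str) -> str:
    """Crude classification of a single numeric-entry error (simplified rules)."""
    if typed == intended:
        return "correct"
    if len(typed) == len(intended) - 1 and any(
            intended[:i] + intended[i + 1:] == typed for i in range(len(intended))):
        return "omission"        # one digit missing
    if len(typed) == len(intended) + 1 and any(
            typed[:i] + typed[i + 1:] == intended for i in range(len(typed))):
        return "intrusion"       # one extra digit inserted
    if len(typed) == len(intended):
        diffs = [i for i, (a, b) in enumerate(zip(intended, typed)) if a != b]
        if len(diffs) == 1:
            return "substitution"    # one digit replaced by another
        if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                and intended[diffs[0]] == typed[diffs[1]]
                and intended[diffs[1]] == typed[diffs[0]]):
            return "transposition"   # two adjacent digits swapped
    return "other"

for intended, typed in [("250", "25"), ("250", "260"), ("389", "839"), ("0.75", "0.755")]:
    print(f"{intended!r} -> {typed!r}: {classify_numeric_error(intended, typed)}")
```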

  18. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    PubMed

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the above-mentioned three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, this algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a certain contact angle error are obtained through a significant amount of computation. To improve the precision of the static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, while maintaining the advantages of the three algorithms and overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm. The ease of erroneous judgment in static contact angle measurements is avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop images with different hydrophobicity values and volumes.
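
    A rough sketch of the circle-fitting branch discussed above: fit a circle to digitised drop-profile points by algebraic least squares and compute the contact angle where the circle meets the baseline y = 0. The synthetic profile and noise level are illustrative; the ellipse-fitting and ADSA-P branches are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_circle(x, y):
    """Algebraic least-squares circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

def contact_angle_deg(yc, radius):
    """Interior angle between the baseline y = 0 and the fitted circle at the contact point."""
    return np.degrees(np.arccos(np.clip(-yc / radius, -1.0, 1.0)))

# Synthetic sessile-drop profile: a circular cap with known contact angle plus "pixel" noise
theta_true, r_true, noise = 70.0, 1.0, 0.002
yc_true = -r_true * np.cos(np.radians(theta_true))            # circle centre below the baseline
t = np.linspace(-np.radians(theta_true), np.radians(theta_true), 300)
x = r_true * np.sin(t) + rng.normal(0, noise, t.size)
y = yc_true + r_true * np.cos(t) + rng.normal(0, noise, t.size)

xc_fit, yc_fit, r_fit = fit_circle(x, y)
print(f"true contact angle:       {theta_true:.1f} deg")
print(f"circle-fit contact angle: {contact_angle_deg(yc_fit, r_fit):.1f} deg")
```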

  19. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation γθ*, where γ is an interpolation matrix and θ* is a stochastic vector of parameters. Vector θ* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(γθ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(γθ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate θ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(γθ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  20. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
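
    The precision issues discussed above come down to how rounding errors accumulate over very long arithmetic chains. A generic, self-contained illustration (not the GRACE processing chain itself) compares single precision, double precision, and compensated (Kahan) summation of one million small increments:

```python
import numpy as np

def kahan_sum(values):
    """Compensated (Kahan) summation: re-adds the rounding error lost at each step."""
    total, comp = 0.0, 0.0
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return total

n, increment = 1_000_000, 0.1
exact = n * increment                          # 100000 in exact arithmetic

total32 = np.float32(0.0)
total64 = 0.0
for _ in range(n):                             # naive running sums
    total32 += np.float32(increment)
    total64 += increment

print(f"single precision accumulation error: {abs(float(total32) - exact):.3e}")
print(f"double precision accumulation error: {abs(total64 - exact):.3e}")
print(f"Kahan-compensated (double) error:    {abs(kahan_sum([increment] * n) - exact):.3e}")
```

    Moving from double to higher precision, as the abstract suggests, shifts the same accumulation effect further below the measurement noise floor rather than removing it.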

  1. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  2. Causal impulse response for circular sources in viscous media

    PubMed Central

    Kelly, James F.; McGough, Robert J.

    2008-01-01

    The causal impulse response of the velocity potential for the Stokes wave equation is derived for calculations of transient velocity potential fields generated by circular pistons in viscous media. The causal Green’s function is numerically verified using the material impulse response function approach. The causal, lossy impulse response for a baffled circular piston is then calculated within the near field and the far field regions using expressions previously derived for the fast near field method. Transient velocity potential fields in viscous media are computed with the causal, lossy impulse response and compared to results obtained with the lossless impulse response. The numerical error in the computed velocity potential field is quantitatively analyzed for a range of viscous relaxation times and piston radii. Results show that the largest errors are generated in locations near the piston face and for large relaxation times, and errors are relatively small otherwise. Unlike previous frequency-domain methods that require numerical inverse Fourier transforms for the evaluation of the lossy impulse response, the present approach calculates the lossy impulse response directly in the time domain. The results indicate that this causal impulse response is ideal for time-domain calculations that simultaneously account for diffraction and quadratic frequency-dependent attenuation in viscous media. PMID:18397018

  3. A Numerical Method for Calculating Stellar Occultation Light Curves from an Arbitrary Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Chamberlain, D. M.; Elliot, J. L.

    1997-01-01

    We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.

  4. The Power of the Spectrum: Combining Numerical Proxy System Models with Analytical Error Spectra to Better Understand Timescale Dependent Proxy Uncertainty

    NASA Astrophysics Data System (ADS)

    Dolman, A. M.; Laepple, T.; Kunz, T.

    2017-12-01

    Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" bitbucket.org/ecus/sedproxy numerically simulates the creation, archiving and observation of marine sediment archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions, and their smoothed/derived products, where individual measurements have been aggregated to coarser time scales or time-slices. To address this, we derive closed-form expressions for power spectral density of the various error sources. The power spectra describe both the magnitude and autocorrelation structure of the error, allowing timescale dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, and some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time-series of the last millennia, Holocene, and the deglaciation. While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows for a comprehensive exploration of parameter space and mapping of climate signal re-constructability, conditional on the climate and sampling conditions.

  5. Evaluating concentration estimation errors in ELISA microarray experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
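
    A minimal sketch of the propagation-of-error mechanics is given below; it assumes a simple linear standard curve with a hypothetical fit covariance rather than the sigmoidal standard curves typically fitted to ELISA data, but it shows how measurement and curve-fit uncertainties are propagated into a predicted concentration.

```python
# Minimal sketch of propagation of error through a calibration curve. A linear
# standard curve with a hypothetical fit covariance is used here in place of the
# sigmoidal curves usually fitted to ELISA data; the mechanics are the same.
import numpy as np

def concentration_and_error(y, sy, slope, intercept, cov):
    """Predict c = (y - intercept)/slope and propagate the uncertainties.

    sy is the standard deviation of the measured response y; cov is the 2x2
    covariance matrix of (slope, intercept) from the standard-curve fit.
    """
    c = (y - intercept) / slope
    dc_dy = 1.0 / slope                       # partial derivatives of c
    dc_dm = -(y - intercept) / slope**2
    dc_db = -1.0 / slope
    var_c = (dc_dy * sy) ** 2 \
        + dc_dm**2 * cov[0, 0] + dc_db**2 * cov[1, 1] \
        + 2.0 * dc_dm * dc_db * cov[0, 1]
    return c, np.sqrt(var_c)

# Made-up fit results purely for illustration
cov = np.array([[1e-4, -2e-5], [-2e-5, 4e-4]])
print(concentration_and_error(y=1.8, sy=0.05, slope=0.92, intercept=0.10, cov=cov))
```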

  6. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    NASA Astrophysics Data System (ADS)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.

  7. Hamiltonian lattice field theory: Computer calculations using variational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zako, Robert L.

    1991-12-03

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
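
    The Rayleigh-Ritz upper bound and a Temple-type lower bound can be demonstrated on a small matrix; the sketch below uses a toy Hermitian matrix with a known spectrum and a noisy trial vector, not the lattice Hamiltonian or Fock basis of the thesis, and the lower bound on the second eigenvalue is taken from the known spectrum purely for illustration.

```python
# Minimal sketch of the Rayleigh-Ritz upper bound and a Temple-type lower bound on
# the lowest eigenvalue, using a toy Hermitian matrix with a known spectrum and a
# noisy trial vector (not the thesis' lattice Hamiltonian or Fock basis).
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
spectrum = np.concatenate(([0.0], np.linspace(2.0, 10.0, 199)))
H = Q @ np.diag(spectrum) @ Q.T               # toy "Hamiltonian" with E0=0, E1=2
E0, E1 = spectrum[0], spectrum[1]

psi = Q[:, 0] + 0.01 * rng.standard_normal(200)   # trial state: ground state plus noise
psi /= np.linalg.norm(psi)

rho = psi @ H @ psi                           # Rayleigh quotient: upper bound on E0
sigma2 = psi @ H @ H @ psi - rho**2           # energy variance of the trial state
E1_lb = E1 - 0.01                             # a valid lower bound on E1 (illustrative)
temple_lower = rho - sigma2 / (E1_lb - rho)   # Temple's formula, valid since rho < E1_lb

print(f"exact E0 = {E0:.4f},  Temple lower = {temple_lower:.4f},  Ritz upper = {rho:.4f}")
```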

  8. Performance and structure of single-mode bosonic codes

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang

    2018-03-01

    The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.

  9. Small scale structure on cosmic strings

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1989-01-01

    The current understanding of cosmic string evolution is discussed, and the focus is placed on the question of small scale structure on strings, where most of the disagreements lie. A physical picture designed to put the role of the small scale structure into more intuitive terms is presented. In this picture it can be seen how the small scale structure can feed back in a major way on the overall scaling solution. It is also argued that it is easy for small scale numerical errors to feed back in just such a way. The intuitive discussion presented here may form the basis for an analytic treatment of the small scale structure, which, it is argued, would in any case be extremely valuable in filling the gaps in the present understanding of cosmic string evolution.

  10. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  11. Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology

    NASA Astrophysics Data System (ADS)

    Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya

    2017-09-01

    Thermal error monitoring technology is the key technological support to solve the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Currently, there are many review articles introducing the thermal error research of CNC machine tools, but those mainly focus on the thermal issues in small and medium-sized CNC machine tools and seldom introduce thermal error monitoring technologies. This paper gives an overview of the research on the thermal error of CNC machine tools and emphasizes the study of thermal error of the heavy-duty CNC machine tool in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology called the "fiber Bragg grating (FBG) distributed sensing technology" for heavy-duty CNC machine tools is introduced in detail. This technology forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills a gap in this kind of review article, guiding the development of this industrial field and opening up new areas of research on heavy-duty CNC machine tool thermal error.

  12. Numerical modeling of the divided bar measurements

    NASA Astrophysics Data System (ADS)

    LEE, Y.; Keehm, Y.

    2011-12-01

    The divided-bar technique has been used to measure thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors, which are not systematically quantified yet. We used an FEM and performed a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected against lateral heat loss on the sides of rock samples, and the thermal resistance at the contacts between the rock sample and the bar. We first investigated how the amount of these corrections would change by the thickness and thermal conductivity of rock samples through numerical modeling. When we fixed the sample thickness as 10 mm and varied thermal conductivity, errors in the measured thermal conductivity range from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. While we fixed thermal conductivity as 1.38 W/m/K and varied the sample thickness, we found that the error ranges from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After corrections, a variety of error analyses for divided-bar measurements were conducted numerically. Thermal conductivity of two thin standard disks (2 mm in thickness) located at the top and the bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error would reach up to -2.29% for the same sample when thermal conductivity of two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements strongly depends on thermal conductivity and the thickness of thermal compound that is applied to reduce thermal resistance at contacts between the rock sample and the bar. When the thickness of thermal compound (0.29 W/m/K) is 0.03 mm, we found that the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. Then, we fixed the thickness (0.03 mm) and varied thermal conductivity of the thermal compound. We found that the relative error with a 1.0 W/m/K compound is 1.28%, and the relative error with a 0.29 W/m/K compound is 4.06%. When we repeated this test with a different thickness of the thermal compound (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and that with a 0.29 W/m/K compound is 12.2%. In addition, the cell technique by Sass et al. (1971), which is widely used to measure thermal conductivity of rock fragments, was evaluated using the FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to test numerically the accuracy of the cell technique. The result shows a relative error of -9.61% for rock fragments with the thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and the cell technique for thermal conductivity measurements for rocks and fragments. We found that the FEM modeling can accurately mimic these measurement techniques and can help us to estimate measurement errors quantitatively.

  13. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  14. Weighing Rocky Exoplanets with Improved Radial Velocimetry

    NASA Astrophysics Data System (ADS)

    Xuesong Wang, Sharon; Wright, Jason; California Planet Survey Consortium

    2016-01-01

    The synergy between Kepler and the ground-based radial velocity (RV) surveys has made numerous discoveries of small and rocky exoplanets, opening the age of Earth analogs. However, most (29/33) of the RV-detected exoplanets that are smaller than 3 Earth radii do not have their masses constrained to better than 20% - limited by the current RV precision (1-2 m/s). Our work improves the RV precision of the Keck telescope, which is responsible for most of the mass measurements for small Kepler exoplanets. We have discovered and verified, for the first time, two of the dominant terms in Keck's RV systematic error budget: modeling errors (mostly in deconvolution) and telluric contamination. These two terms contribute 1 m/s and 0.6 m/s, respectively, to the RV error budget (RMS in quadrature), and they create spurious signals at periods of one sidereal year and its harmonics with amplitudes of 0.2-1 m/s. Left untreated, these errors can mimic the signals of Earth-like or Super-Earth planets in the Habitable Zone. Removing these errors will bring better precision to ten years' worth of Keck data and better constraints on the masses and compositions of small Kepler planets. As more precise RV instruments come online, we need advanced data analysis tools to overcome issues like these in order to detect the Earth twin (RV amplitude 8 cm/s). We are developing a new, open-source RV data analysis tool in Python, which uses Bayesian MCMC and Gaussian processes, to fully exploit the hardware improvements brought by new instruments like MINERVA and NASA's WIYN/EPDS.

  15. Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)

    DTIC Science & Technology

    2014-10-30

    to single photon detection, at comparable detection efficiencies. On the other hand, error-correction codes are better developed for small-alphabet...protocol is several orders of magnitude better than the Shapiro protocol, which needs entangled states. The bits/mode performance achieved by our...putting together a software tool implemented in MATLAB , which talks to the MODTRAN database via an intermediate numerical dump of transmission data

  16. Stochastic stability of sigma-point Unscented Predictive Filter.

    PubMed

    Cao, Lu; Tang, Yu; Chen, Xiaoqian; Zhao, Yong

    2015-07-01

    In this paper, the Unscented Predictive Filter (UPF) is derived based on the unscented transformation for nonlinear estimation, which breaks the confines of conventional sigma-point filters, which have employed only the Kalman filter as the subject of investigation. In order to facilitate the new method, the algorithm flow of the UPF is given first. Then, the theoretical analyses demonstrate that the estimation accuracy of the model error and the system state for the UPF is higher than that of the conventional PF. Moreover, the authors analyze the stochastic boundedness and the error behavior of the Unscented Predictive Filter (UPF) for general nonlinear systems in a stochastic framework. In particular, the theoretical results show that the estimation error remains bounded and the covariance keeps stable if the system's initial estimation error, disturbing noise terms as well as the model error are small enough, which is the core part of the UPF theory. All of the results have been demonstrated by numerical simulations for a nonlinear example system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  17. On the implementation of an accurate and efficient solver for convection-diffusion equations

    NASA Astrophysics Data System (ADS)

    Wu, Chin-Tien

    In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singularly perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase the accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfurth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show that the refined mesh obtained from the iterative solution which satisfies the second criterion is similar to the refined mesh obtained from the finite element solution.
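
    The idea of stopping the iterative solver once the algebraic residual falls below a discretization-error level can be sketched on a one-dimensional convection-diffusion problem; the upwind discretization, Gauss-Seidel iteration and the simple tolerance proportional to h below are illustrative stand-ins for the streamline diffusion, multigrid and a posteriori estimators discussed in the dissertation.

```python
# Minimal sketch, not the dissertation's method: a 1D upwind discretization of
# -eps*u'' + b*u' = 1 on (0,1) with u(0)=u(1)=0, solved by Gauss-Seidel sweeps that
# stop once the algebraic residual drops below a tolerance proportional to h, a
# crude stand-in for a residual-type a posteriori stopping criterion.
import numpy as np

eps, b, n = 1e-2, 1.0, 200
h = 1.0 / n
# Upwind scheme for b > 0:
# (-eps/h^2 - b/h)*u[i-1] + (2*eps/h^2 + b/h)*u[i] + (-eps/h^2)*u[i+1] = 1
lower = -eps / h**2 - b / h
diag = 2 * eps / h**2 + b / h
upper = -eps / h**2
f = np.ones(n - 1)

u = np.zeros(n + 1)                # includes the boundary values u[0] = u[n] = 0
tol = 0.1 * h                      # assumed discretization-error level, not Verfurth's estimator
for sweep in range(100000):
    for i in range(1, n):          # one Gauss-Seidel sweep over the interior nodes
        u[i] = (f[i - 1] - lower * u[i - 1] - upper * u[i + 1]) / diag
    residual = f - (lower * u[:-2] + diag * u[1:-1] + upper * u[2:])
    if np.linalg.norm(residual, np.inf) < tol:
        break
print("stopped after", sweep + 1, "sweeps; value just inside the outflow layer:", u[-2])
```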

  18. A Case Study of the Impact of AIRS Temperature Retrievals on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Reale, O.; Atlas, R.; Jusem, J. C.

    2004-01-01

    Large errors in numerical weather prediction are often associated with explosive cyclogenesis. Most studies focus on the under-forecasting error, i.e. cases of rapidly developing cyclones which are poorly predicted in numerical models. However, the over-forecasting error (i.e., to predict an explosively developing cyclone which does not occur in reality) is a very common error that severely impacts the forecasting skill of all models and may also present economic costs if associated with operational forecasting. Unnecessary precautions taken by marine activities can result in severe economic loss. Moreover, frequent occurrence of over-forecasting can undermine the reliance on operational weather forecasting. Therefore, it is important to understand and reduce the predictions of extreme weather associated with explosive cyclones which do not actually develop. In this study we choose a very prominent case of over-forecasting error in the northwestern Pacific. A 960 hPa cyclone develops in less than 24 hours in the 5-day forecast, with a deepening rate of about 30 hPa in one day. The cyclone is not present in the analyses and is thus a case of severe over-forecasting. By assimilating AIRS data, the error is largely eliminated. By following the propagation of the anomaly that generates the spurious cyclone, it is found that a small mid-tropospheric geopotential height negative anomaly over the northern part of the Indian subcontinent in the initial conditions, propagates westward, is amplified by orography, and generates a very intense jet streak in the subtropical jet stream, with consequent explosive cyclogenesis over the Pacific. The AIRS assimilation eliminates this anomaly that may have been caused by erroneous upper-air data, and represents the jet stream more correctly. The energy associated with the jet is distributed over a much broader area and as a consequence a multiple, but much more moderate cyclogenesis is observed.

  19. Lagrangian numerical techniques for modelling multicomponent flow in the presence of large viscosity contrasts: Markers-in-bulk versus Markers-in-chain

    NASA Astrophysics Data System (ADS)

    Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard

    2015-04-01

    Many problems in geodynamic applications may be described as viscous flow of chemically heterogeneous materials. Examples include subduction of compositionally stratified lithospheric plates, folding of rheologically layered rocks, and thermochemical convection of the Earth's mantle. The associated time scales are significantly shorter than that of chemical diffusion, which justifies the commonly featured phenomena in geodynamic flow models termed contact discontinuities. These are spatially sharp interfaces separating regions of different material properties. Numerical modelling of advection of fields with sharp interfaces is challenging. Typical errors include numerical diffusion, which arises due to the repeated action of numerical interpolation. Mathematically, a material field can be represented by discrete indicator functions, whose values are interpreted as logical statements (e.g. whether or not the location is occupied by a given material). Interpolation of a discrete function boils down to determining where in the intermediate node-positions one material ends, and the other begins. The numerical diffusion error thus manifests itself as an erroneous location of the material-interface. Lagrangian advection-schemes are known to be less prone to numerical diffusion errors, compared to their Eulerian counterparts. The tracer-ratio method, where Lagrangian markers are used to discretize the bulk of materials filling the entire domain, is a popular example of such methods. The Stokes equation in this case is solved on a separate, static grid, and in order to do it - material properties must be interpolated from the markers to the grid. This involves the difficulty related to interpolation of discrete fields. The material distribution, and thus material-properties like viscosity and density, seen by the grid is polluted by the interpolation error, which enters the solution of the momentum equation. Errors due to the uncertainty of interface-location can be avoided when using interface tracking methods for advection. Marker-chain method is one such approach, where rather than discretizing the volume of each material, only their interface is discretized by a connected set of markers. Together with the boundary of the domain, the marker-chain constitutes closed polygon-boundaries which enclose the regions spanned by each material. Communicating material properties to the static grid can be done by determining which polygon each grid-node (or integration point) falls into, eliminating the need for interpolation. In our chosen implementation, an efficient parallelized algorithm for the point-in-polygon location is used, so this part of the code takes up only a small fraction of the CPU-time spent on each time step, and allows for spatial resolution of the compositional field beyond that which is practical with markers-in-bulk methods. An additional advantage of using marker-chains for material advection is that it offers a possibility to use some of its markers, or even edges, to generate a FEM grid. One can tailor a grid for obtaining a Stokes solution with optimal accuracy, while controlling the quality and size of its elements. Where geometry of the interface allows - element-edges may be aligned with it, which is known to significantly improve the quality of Stokes solution, compared to when the interface cuts through the elements (Moresi et al., 1996; Deubelbeiss and Kaus, 2008). In more geometrically complex interface-regions, the grid may simply be refined to reduce the error. 
As materials get deformed in the course of a simulation, the interface may get stretched and entangled. Addition of new markers along the chain may be required in order to properly resolve the increasingly complicated geometry. Conversely, some markers may be removed from regions where they get clustered. Such resampling of the interface requires additional computational effort (although small compared to other parts of the code), and introduces an error in the interface-location (similar to numerical diffusion). Our implementation of this procedure, which utilizes an auxiliary high-resolution structured grid, allows a high degree of control on the magnitude of this error, although it cannot eliminate the error completely. We will present our chosen numerical implementation of the markers-in-bulk and markers-in-chain methods outlined above, together with the simulation results of specially designed benchmarks that demonstrate the relative successes and limitations of these methods.
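
    The point-in-polygon query at the heart of the markers-in-chain material assignment can be written compactly with the standard even-odd (ray casting) rule; the serial sketch below uses a hypothetical marker chain and viscosity values and does not reproduce the parallel implementation described above.

```python
# Minimal sketch of the even-odd (ray casting) point-in-polygon test that lets a
# marker-chain discretization assign material properties to grid nodes without
# interpolation; the marker chain and viscosities below are hypothetical, and the
# parallel implementation described above is not reproduced.
def point_in_polygon(px, py, poly):
    """Return True if (px, py) lies inside the closed polygon [(x, y), ...]."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does a horizontal ray from (px, py) towards +x cross this edge?
        if (y1 > py) != (y2 > py) and px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

chain = [(0.2, 0.2), (0.8, 0.25), (0.7, 0.8), (0.3, 0.7)]   # hypothetical marker chain
viscosity_inside, viscosity_outside = 1e22, 1e20
for node in [(0.5, 0.5), (0.05, 0.9)]:
    eta = viscosity_inside if point_in_polygon(*node, chain) else viscosity_outside
    print(node, eta)
```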

  20. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1996-01-01

    We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.

  1. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low precision sensors. To deal with this problem, a new algorithm for the attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is proposed based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted only to Gaussian noises; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low precision sensors than the traditional unscented Kalman filter (UKF).

  2. A divergence-cleaning scheme for cosmological SPMHD simulations

    NASA Astrophysics Data System (ADS)

    Stasyszyn, F. A.; Dolag, K.; Beck, A. M.

    2013-01-01

    In magnetohydrodynamics (MHD), the magnetic field is evolved by the induction equation and coupled to the gas dynamics by the Lorentz force. We perform numerical smoothed particle magnetohydrodynamics (SPMHD) simulations and study the influence of a numerical magnetic divergence. For instabilities arising from ∇·B related errors, we find the hyperbolic/parabolic cleaning scheme suggested by Dedner et al. to give good results and prevent numerical artefacts from growing. Additionally, we demonstrate that certain current SPMHD implementations of magnetic field regularizations give rise to unphysical instabilities in long-time simulations. We also find this effect when employing Euler potentials (divergenceless by definition), which are not able to follow the winding-up process of magnetic field lines properly. Furthermore, we present cosmological simulations of galaxy cluster formation at extremely high resolution including the evolution of magnetic fields. We show synthetic Faraday rotation maps and derive structure functions to compare them with observations. Comparing all the simulations with and without divergence cleaning, we are able to confirm the results of previous simulations performed with the standard implementation of MHD in SPMHD at normal resolution. However, at extremely high resolution, a cleaning scheme is needed to prevent the growth of numerical ∇·B errors at small scales.
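
    A simple grid-based stand-in for monitoring divergence errors is sketched below: it evaluates the dimensionless measure h·|∇·B|/|B| for a field contaminated by a small non-solenoidal perturbation. It only illustrates the diagnostic, not the SPMHD kernel estimate or the cleaning scheme itself.

```python
# Minimal sketch of a grid-based divergence diagnostic, h*|div B|/|B|, evaluated for
# a field contaminated by a small non-solenoidal perturbation. This only illustrates
# the monitoring quantity; it is not the SPMHD kernel estimate or the Dedner et al.
# cleaning scheme used in the paper.
import numpy as np

n = 128
h = 1.0 / n
y, x = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
# Divergence-free field plus a 1% monopole-like (spurious) component
Bx = -np.sin(2 * np.pi * y) + 0.01 * (x - 0.5)
By = np.sin(2 * np.pi * x) + 0.01 * (y - 0.5)

divB = np.gradient(Bx, h, axis=1) + np.gradient(By, h, axis=0)
Bmag = np.sqrt(Bx**2 + By**2) + 1e-30
rel_err = h * np.abs(divB) / Bmag
print("max relative divergence error:", rel_err.max())
```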

  3. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
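
    A compact stand-in for this workflow, assuming scikit-learn is available, is sketched below: fit a logistic model of contrail occurrence to simulated predictors and score it with PC and HKD at the two critical probability thresholds; the predictor distributions and coefficients are invented rather than taken from the ARPS analyses.

```python
# Minimal sketch of the workflow (assumes scikit-learn is available): fit a logistic
# model of contrail occurrence to simulated predictors, then score it with the percent
# correct (PC) and the Hanssen-Kuipers discriminant (HKD) at two probability thresholds.
# The predictor distributions and coefficients are invented, not taken from ARPS data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
T = rng.normal(-55.0, 5.0, n)           # temperature, deg C
RH = rng.normal(70.0, 20.0, n)          # relative humidity, percent
w = rng.normal(0.0, 5.0, n)             # vertical velocity, cm/s
logit = 0.15 * (RH - 100.0) - 0.2 * (T + 50.0) + 0.05 * w
occurred = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([T, RH, w])
model = LogisticRegression(max_iter=1000).fit(X, occurred)
prob = model.predict_proba(X)[:, 1]

for threshold in (0.5, occurred.mean()):            # 0.5 vs climatological frequency
    yes = prob >= threshold
    hits = np.sum(yes & (occurred == 1)); misses = np.sum(~yes & (occurred == 1))
    false_alarms = np.sum(yes & (occurred == 0)); correct_neg = np.sum(~yes & (occurred == 0))
    pc = (hits + correct_neg) / n
    hkd = hits / (hits + misses) - false_alarms / (false_alarms + correct_neg)
    print(f"threshold = {threshold:.2f}   PC = {pc:.3f}   HKD = {hkd:.3f}")
```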

  4. Use of Numerical Groundwater Model and Analytical Empirical Orthogonal Function for Calibrating Spatiotemporal pattern of Pumpage, Recharge and Parameter

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.

    2016-12-01

    This study develops a novel methodology for the spatiotemporal groundwater calibration of mega-quantitative recharge and parameters by coupling a specialized numerical model and analytical empirical orthogonal function (EOF). The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back propagation neural network-based response matrix with the electrical consumption analysis. The spatiotemporal patterns of the recharge from surface water and hydrogeological parameters (i.e. horizontal hydraulic conductivity and vertical leakance) are calibrated by EOF with the simulated error hydrograph of groundwater storage, in order to qualify the multiple error sources and quantify the revised volume. The objective function of the optimization model is minimizing the root mean square error of the simulated storage error percentage across multiple aquifers, meanwhile subject to mass balance of the groundwater budget and the governing equation in transient state. The established method was applied to the groundwater system of Chou-Shui River Alluvial Fan. The simulated period is from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance and recharge from surface water among four aquifers are 126, 96 and 1080, respectively. Results showed that the RMSE during the calibration process decreased dramatically and converged within the 6th iteration, because of efficient filtration of the transmission induced by the estimated error and recharge across the boundary. Moreover, the average simulated error percentage according to groundwater level corresponding to the calibrated budget variables and parameters of aquifer one is as small as 0.11%. This indicates that the developed methodology not only can effectively detect the flow tendency and error source in all aquifers to achieve accurate spatiotemporal calibration, but also can capture the peak and fluctuation of groundwater level in the shallow aquifer.
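
    The EOF step by itself reduces to a singular value decomposition of the space-time anomaly matrix of simulated errors; the sketch below shows only that generic decomposition with invented dimensions, and does not reproduce the coupling to the numerical groundwater model or the pumpage estimation.

```python
# Minimal sketch of the EOF step in isolation: a singular value decomposition of a
# space-time anomaly matrix of simulated storage errors. The array sizes and the
# error field are invented; the coupling to the groundwater model and the pumpage
# estimation in the paper are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_cells = 36, 120                            # hypothetical months x model cells
error_field = rng.standard_normal((n_time, n_cells)) # stand-in simulated error hydrographs

anomaly = error_field - error_field.mean(axis=0)     # remove the temporal mean
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)

explained = s**2 / np.sum(s**2)
eofs = Vt                    # spatial patterns (one per row)
pcs = U * s                  # principal-component time series
print("variance explained by the first 3 EOFs:", explained[:3])
```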

  5. Improving Software Quality and Management Through Use of Service Level Agreements

    DTIC Science & Technology

    2005-03-01

    many who believe that the quality of the development process is the best predictor of software product quality. ( Fenton ) Repeatable software processes...reduced errors per KLOC for small projects ( Fenton ), and the quality management metric (QMM) (Machniak, Osmundson). There are also numerous IEEE 14...attention to cosmetic user interface issues and any problems that may arise with the prototype. (Sawyer) The validation process is also another check

  6. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  7. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.

  8. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
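
    For readers unfamiliar with the Richardson extrapolation baseline used for comparison, the sketch below estimates the fine-grid discretization error from two grid levels of a simple trapezoid-rule quadrature, for which the true error is known exactly.

```python
# Minimal sketch of the Richardson-extrapolation baseline mentioned above, applied to
# a trapezoid-rule quadrature so that the "true error" is known exactly; it is not the
# ETE methodology of the paper.
import numpy as np

def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

exact = 1.0 - np.cos(1.0)                 # integral of sin over [0, 1]
coarse = trapezoid(np.sin, 0.0, 1.0, 32)  # spacing h
fine = trapezoid(np.sin, 0.0, 1.0, 64)    # spacing h/2
p = 2                                     # formal order of the trapezoid rule
estimated_error = (fine - coarse) / (2**p - 1)   # Richardson estimate of (exact - fine)

print("estimated error on the fine grid:", estimated_error)
print("true error on the fine grid:     ", exact - fine)
```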

  9. Improving the numerical integration solution of satellite orbits in the presence of solar radiation pressure using modified back differences

    NASA Technical Reports Server (NTRS)

    Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.

  10. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small satellite sensors, high-performance estimation theories remain the most popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have achieved considerable success. However, most of the existing methods make use only of the current time-step's a priori measurement residuals to complete the measurement update and state estimation, which ignores the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which also puts forward higher performance requirements for the classical KF in the attitude estimation problem. Therefore, the novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy the above requirements for small satellite attitude estimation with low precision sensors. It is assumed that the system state estimation errors can be exhibited in the measurement residual; therefore, the new method derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and unscented transform (UT) strategy are introduced to strengthen the robustness and enhance the performance of the novel Kalman filter, in order to reduce the influence of existing uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).

  11. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least square ellipse fitting technology is proposed, which could be used for phase extraction from two or three interferograms. The DC term of background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. Performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. Then the background intensity error and the corrected error could be compensated by the least square ellipse fitting method. Finally, the phase could be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
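
    The two-frame Gram-Schmidt step can be sketched on synthetic fringe patterns as below; the DC term is suppressed here by a plain mean subtraction and the least-squares ellipse-fitting correction is omitted, so this illustrates only the orthonormalization and arctangent phase recovery, not the full proposed algorithm.

```python
# Minimal sketch of the two-frame Gram-Schmidt step on synthetic fringe patterns. The
# DC term is removed by plain mean subtraction instead of the paper's high-pass filter,
# and the least-squares ellipse-fitting correction is omitted.
import numpy as np

n = 256
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
phase_true = 2 * np.pi * (x + 0.3 * y) / 40.0       # synthetic phase, many fringes
delta = 1.2                                         # unknown phase shift between frames
I1 = 100 + 50 * np.cos(phase_true)
I2 = 100 + 50 * np.cos(phase_true + delta)

u1 = I1 - I1.mean()                                 # crude DC suppression
u2 = I2 - I2.mean()
u1_hat = u1 / np.linalg.norm(u1)
u2_orth = u2 - np.sum(u2 * u1_hat) * u1_hat         # Gram-Schmidt orthogonalization
u2_hat = u2_orth / np.linalg.norm(u2_orth)

phase = np.arctan2(-u2_hat, u1_hat)                 # wrapped phase estimate
err = np.angle(np.exp(1j * (phase - phase_true)))   # wrapped difference to the truth
print("RMS wrapped-phase error (rad):", np.sqrt(np.mean(err**2)))
```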

  12. Running coupling constant from lattice studies of gluon and ghost propagators

    NASA Astrophysics Data System (ADS)

    Cucchieri, A.; Mendes, T.

    2004-12-01

    We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the β function and of a function coinciding with the anomalous dimension of the propagator in the momentum subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical error and hyper-cubic effects are very small. Our present result for ΛMS is 200^{+60}_{-40} MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.

  13. Error analysis of satellite attitude determination using a vision-based approach

    NASA Astrophysics Data System (ADS)

    Carozza, Ludovico; Bevilacqua, Alessandro

    2013-09-01

    Improvements in communication and processing technologies have opened the doors to exploit on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remote sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through on-purpose simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for visual based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for all the affine applicative domains which require an accurate estimation of three-dimensional orientation parameters (i.e., robotics, airborne stabilization).

  14. Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials

    PubMed Central

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels

    2013-01-01

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants’ math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN. PMID:24236212

  15. For numerical differentiation, dimensionality can be a blessing!

    NASA Astrophysics Data System (ADS)

    Anderssen, Robert S.; Hegland, Markus

    Finite difference methods, such as the mid-point rule, have been applied successfully to the numerical solution of ordinary and partial differential equations. If such formulas are applied to observational data, in order to determine derivatives, the results can be disastrous. The reason for this is that measurement errors, and even rounding errors in computer approximations, are strongly amplified in the differentiation process, especially if small step-sizes are chosen and higher derivatives are required. A number of authors have examined the use of various forms of averaging which allows the stable computation of low order derivatives from observational data. The size of the averaging set acts like a regularization parameter and has to be chosen as a function of the grid size h. In this paper, it is initially shown how first (and higher) order single-variate numerical differentiation of higher dimensional observational data can be stabilized with a smaller loss of accuracy than occurs for the corresponding differentiation of one-dimensional data. The result is then extended to the multivariate differentiation of higher dimensional data. The nature of the trade-off between convergence and stability is explicitly characterized, and the complexity of various implementations is examined.
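
    The instability described above is easy to reproduce. The following sketch (not from the paper) differentiates noisy samples of sin(x) with a central difference, first on the raw data and then after a moving-average pre-smoothing whose window size m plays the role of the regularization parameter; the step size, noise level, and window are illustrative.

```python
import numpy as np

def central_diff(y, h):
    """Mid-point (central) difference approximation of the first derivative."""
    return (y[2:] - y[:-2]) / (2.0 * h)

rng = np.random.default_rng(1)
h = 1e-3                                   # small step: noise is amplified like sigma/h
x = np.arange(0.0, 1.0, h)
sigma = 1e-4
y = np.sin(x) + sigma * rng.normal(size=x.size)

raw = central_diff(y, h)

m = 51                                     # size of the averaging set (regularization parameter)
y_avg = np.convolve(y, np.ones(m) / m, mode="same")
smoothed = central_diff(y_avg, h)

truth = np.cos(x[1:-1])
interior = slice(m, -m)                    # ignore convolution edge effects
print("max error, raw data      :", np.max(np.abs(raw[interior] - truth[interior])))
print("max error, averaged data :", np.max(np.abs(smoothed[interior] - truth[interior])))
```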

  16. Effects of waveform model systematics on the interpretation of GW150914

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. 
L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. 
A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. 
J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.

  17. Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms

    NASA Astrophysics Data System (ADS)

    Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.

    2017-08-01

    Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure meridional flow is very high, small systematic errors have a noticeable impact on measured meridional flow profiles. Additionally, the uncertainties of these measurements have historically been underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements which are unreliable near the solar limb. Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to remove these systematic errors and in the process improve the accuracy of the measurements by an order of magnitude. We also present a detailed analysis of uncertainties in these measurements using synthetic magnetograms, and the quantification, as a function of latitude, of an upper limit below which meridional flow measurements cannot be trusted.
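
    The abstract does not spell out the tracking algorithm, so the sketch below shows the generic building block assumed here: an FFT-based cross-correlation between two image tiles with a parabolic sub-pixel peak fit, the stage where small systematic (pixel-locking) biases of the kind discussed typically enter. The test image and shift are hypothetical.

```python
import numpy as np

def measure_shift(a, b):
    """Shift of image b relative to a from the cross-correlation peak (sub-pixel)."""
    corr = np.fft.fftshift(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real)
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic(cm, c0, cp):
        # Vertex of the parabola through three samples around the peak.
        denom = cm - 2.0 * c0 + cp
        return 0.5 * (cm - cp) / denom if denom != 0.0 else 0.0

    dy = py + parabolic(corr[py - 1, px], corr[py, px], corr[py + 1, px]) - a.shape[0] // 2
    dx = px + parabolic(corr[py, px - 1], corr[py, px], corr[py, px + 1]) - a.shape[1] // 2
    return dy, dx

# Hypothetical magnetogram patch: a smooth feature rolled by 3 pixels in y and 1 in x.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 30.0) ** 2 + (y - 28.0) ** 2) / 50.0)
print(measure_shift(img, np.roll(img, (3, 1), axis=(0, 1))))   # expect roughly (3.0, 1.0)
```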

  18. ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers

    PubMed Central

    Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.

    2009-01-01

    Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211

  19. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for mitigating this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10⁻¹² is reasonable.
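
    A hedged sketch of the two procedures as described in words above: flushing entries below a tolerance to zero, and one Hotelling/Schulz refinement pass X ← X(2I − AX) applied to an approximate inverse. The matrix, perturbation, and tolerance are illustrative and not taken from the report.

```python
import numpy as np

TOL = 0.1e-12        # tolerance reflecting ~18 significant digits carried per calculation

def flush_small(M, tol=TOL):
    """Procedure 1: round to zero any computed value whose magnitude is below the tolerance."""
    M = M.copy()
    M[np.abs(M) < tol] = 0.0
    return M

def reinvert(A, X):
    """Procedure 2: one Hotelling/Schulz pass improving an approximate inverse X of A."""
    return X @ (2.0 * np.eye(A.shape[0]) - A @ X)

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 50))
X = np.linalg.inv(A) + 1e-8 * rng.normal(size=A.shape)    # inverse polluted by round-off-like noise

for label, Xk in (("before", X), ("after ", flush_small(reinvert(A, X)))):
    print(f"{label} reinversion: ||A X - I|| = {np.linalg.norm(A @ Xk - np.eye(50)):.2e}")
```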

  20. Linear motor drive system for continuous-path closed-loop position control of an object

    DOEpatents

    Barkman, William E.

    1980-01-01

    A precision numerical controlled servo-positioning system is provided for continuous closed-loop position control of a machine slide or platform driven by a linear-induction motor. The system utilizes filtered velocity feedback to provide system stability required to operate with a system gain of 100 inches/minute/0.001 inch of following error. The filtered velocity feedback signal is derived from the position output signals of a laser interferometer utilized to monitor the movement of the slide. Air-bearing slides mounted to a stable support are utilized to minimize friction and small irregularities in the slideway which would tend to introduce positioning errors. A microprocessor is programmed to read command and feedback information and converts this information into the system following error signal. This error signal is summed with the negative filtered velocity feedback signal at the input of a servo amplifier whose output serves as the drive power signal to the linear motor position control coil.
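
    A minimal discrete-time sketch of the control law described above, with the velocity feedback derived from successive position readings and passed through a first-order low-pass filter; the gains, filter constant, and slide dynamics are hypothetical values, not those of the patent.

```python
# Illustrative discrete-time loop: following error plus negative filtered velocity feedback.
dt = 1e-3                                  # control period (s), hypothetical
Kp, Kv, alpha = 50.0, 4.0, 0.1             # hypothetical gain, velocity gain, filter constant
mass, damping = 2.0, 5.0                   # hypothetical slide dynamics

pos = vel = vel_filt = prev_pos = 0.0
command = 1.0                              # step command (arbitrary position units)
for _ in range(5000):
    meas_vel = (pos - prev_pos) / dt       # velocity derived from interferometer positions
    prev_pos = pos
    vel_filt += alpha * (meas_vel - vel_filt)      # first-order low-pass filter
    following_error = command - pos
    drive = Kp * following_error - Kv * vel_filt   # summed at the servo amplifier input
    acc = (drive - damping * vel) / mass
    vel += acc * dt
    pos += vel * dt
print(f"final position = {pos:.4f}, following error = {command - pos:.2e}")
```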

  1. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  2. A novel approach to evaluation of pest insect abundance in the presence of noise.

    PubMed

    Embleton, Nina; Petrovskaya, Natalia

    2014-03-01

    Evaluation of pest abundance is an important task of integrated pest management. It has recently been shown that evaluation of pest population size from discrete sampling data can be done by using the ideas of numerical integration. Numerical integration of the pest population density function is a computational technique that readily gives us an estimate of the pest population size, where the accuracy of the estimate depends on the number of traps installed in the agricultural field to collect the data. However, in a standard mathematical problem of numerical integration, it is assumed that the data are precise, so that the random error is zero when the data are collected. This assumption does not hold in ecological applications. An inherent random error is often present in field measurements, and therefore it may strongly affect the accuracy of evaluation. In our paper, we offer a novel approach to evaluate the pest insect population size under the assumption that the data about the pest population include a random error. The evaluation is not based on statistical methods but is done using a spatially discrete method of numerical integration where the data obtained by trapping as in pest insect monitoring are converted to values of the population density. It will be discussed in the paper how the accuracy of evaluation differs from the case where the same evaluation method is employed to handle precise data. We also consider how the accuracy of the pest insect abundance evaluation can be affected by noise when the data available from trapping are sparse. In particular, we show that, contrary to intuitive expectations, noise does not have any considerable impact on the accuracy of evaluation when the number of traps is small as is conventional in ecological applications.
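
    A one-dimensional sketch of the underlying idea (the paper treats two-dimensional fields): trap data are treated as samples of the density and integrated with the trapezoidal rule, comparing precise and noisy data for several trap counts. The density profile and 20% noise level are illustrative assumptions.

```python
import numpy as np

def estimate_population(x_traps, density):
    """Estimate total pest abundance by trapezoidal integration of the density samples."""
    return np.trapz(density, x_traps)

rng = np.random.default_rng(4)
true_density = lambda x: 200.0 * np.exp(-((x - 40.0) / 15.0) ** 2)   # pests per unit length

x_fine = np.linspace(0.0, 100.0, 10001)
true_total = np.trapz(true_density(x_fine), x_fine)

for n_traps in (5, 9, 17, 33):
    x = np.linspace(0.0, 100.0, n_traps)
    exact = estimate_population(x, true_density(x))
    noisy = estimate_population(x, true_density(x) * (1.0 + 0.2 * rng.normal(size=n_traps)))
    print(f"{n_traps:2d} traps: error {abs(exact - true_total) / true_total:6.2%} (precise data), "
          f"{abs(noisy - true_total) / true_total:6.2%} (20% noise)")
```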

  3. Data Mining on Numeric Error in Computerized Physician Order Entry System Prescriptions.

    PubMed

    Wu, Xue; Wu, Changxu

    2017-01-01

    This study revealed the numeric error patterns related to dosage when doctors prescribed in a computerized physician order entry system. Error categories showed that the '6', '7', and '9' keys produced a higher incidence of errors in Numpad typing, while the '2', '3', and '0' keys produced a higher incidence of errors in main keyboard digit line typing. Errors categorized as omission and substitution were more prevalent than transposition and intrusion errors.

  4. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
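
    A minimal sketch of the two snapshot strategies on a toy linear system (not one studied in the paper): the POD basis is taken from the SVD of the snapshot matrix, and method M2 augments the snapshots with time-derivative snapshots before projecting the dynamics. Dimensions, snapshot count, and the random system matrix are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy linear system x' = A x, used only to illustrate the two snapshot strategies.
rng = np.random.default_rng(5)
n, r = 60, 6                                      # full and reduced dimensions
A = -np.eye(n) + 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)
x0 = rng.normal(size=n)

t_snap = np.linspace(0.0, 1.0, 8)                 # a small number of snapshots
S = solve_ivp(lambda t, x: A @ x, (0.0, 1.0), x0, t_eval=t_snap, rtol=1e-10).y

def pod_basis(snapshots, r):
    U = np.linalg.svd(snapshots, full_matrices=False)[0]
    return U[:, :r]

V1 = pod_basis(S, r)                              # M1: solution snapshots only
V2 = pod_basis(np.hstack([S, A @ S]), r)          # M2: augmented with time-derivative snapshots

t_test = np.linspace(0.0, 1.0, 101)
ref = solve_ivp(lambda t, x: A @ x, (0.0, 1.0), x0, t_eval=t_test, rtol=1e-10).y
for name, V in (("M1", V1), ("M2", V2)):
    Ar, z0 = V.T @ A @ V, V.T @ x0                # Galerkin-projected reduced model
    red = solve_ivp(lambda t, z: Ar @ z, (0.0, 1.0), z0, t_eval=t_test, rtol=1e-10).y
    print(f"{name}: relative trajectory error "
          f"{np.linalg.norm(V @ red - ref) / np.linalg.norm(ref):.2e}")
```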

  5. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  6. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
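
    The quantity being estimated can be written down directly when the model is known. The sketch below (not the Bayesian MMSE estimator itself) designs an LDA discriminant from a small sample and evaluates its exact true error under the Gaussian model with the normal CDF, the reference against which estimator RMS is judged in such simulations; dimensions, sample size, and class means are illustrative, and equal covariances and priors are assumed.

```python
import numpy as np
from scipy.stats import norm

def lda_true_error(a, b, mu0, mu1, Sigma):
    """Exact error of the rule 'decide class 1 if a @ x + b > 0' under the Gaussian model."""
    s = np.sqrt(a @ Sigma @ a)
    return 0.5 * (norm.cdf((a @ mu0 + b) / s) + norm.cdf(-(a @ mu1 + b) / s))

rng = np.random.default_rng(6)
d, n = 5, 20                                    # small-sample setting (illustrative)
mu0, mu1 = np.zeros(d), 0.8 * np.ones(d) / np.sqrt(d)
Sigma = np.eye(d)

# Design LDA from a small sample, then evaluate its true error analytically.
X0 = rng.multivariate_normal(mu0, Sigma, size=n)
X1 = rng.multivariate_normal(mu1, Sigma, size=n)
Sp = ((n - 1) * np.cov(X0.T) + (n - 1) * np.cov(X1.T)) / (2 * n - 2)
a = np.linalg.solve(Sp, X1.mean(axis=0) - X0.mean(axis=0))
b = -a @ (X0.mean(axis=0) + X1.mean(axis=0)) / 2.0

print("true error of the designed LDA classifier:", lda_true_error(a, b, mu0, mu1, Sigma))
print("Bayes error for this model              :", norm.cdf(-np.linalg.norm(mu1 - mu0) / 2.0))
```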

  7. High-speed photogrammetry system for measuring the kinematics of insect wings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Iain D.; Lawson, Nicholas J.; Harvey, Andrew R.

    2006-06-10

    We describe and characterize an experimental system to perform shape measurements on deformable objects using high-speed close-range photogrammetry. The eventual application is to extract the kinematics of several marked points on an insect wing during tethered and hovering flight. We investigate the performance of the system with a small number of views and determine an empirical relation between the mean pixel error of the optimization routine and the position error. Velocity and acceleration are calculated by numerical differencing, and their relation to the position errors is verified. For a field of view of approximately 40 mm × 40 mm, an rms accuracy of 30 μm in position, 150 mm/s in velocity, and 750 m/s² in acceleration at 5000 frames/s is achieved. This accuracy is sufficient to measure the kinematics of hoverfly flight.

  8. Nonspinning numerical relativity waveform surrogates: assessing the model

    NASA Astrophysics Data System (ADS)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  9. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
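
    A minimal sketch of one angular spectrum step as commonly formulated (not the authors' optimized implementation): the input pressure plane is Fourier transformed, multiplied by the propagation transfer function exp(i k_z z) with evanescent components decaying, and transformed back. The source geometry, frequency, and sampling are illustrative.

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a sampled 2-D pressure plane p0 a distance z with the angular spectrum approach."""
    ny, nx = p0.shape
    k = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))   # imaginary for evanescent waves
    return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))

# Hypothetical 1 MHz square piston in water, sampled at lambda/4 on the source plane.
c, f = 1500.0, 1.0e6
lam = c / f
dx, n = lam / 4.0, 256
coords = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(coords, coords)
p0 = np.where((np.abs(X) < 5e-3) & (np.abs(Y) < 5e-3), 1.0, 0.0)

p_z = angular_spectrum_propagate(p0, dx, lam, z=30e-3)
print("peak |p| on the plane z = 30 mm:", np.abs(p_z).max())
```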

  10. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.

  11. Terminal iterative learning control based station stop control of a train

    NASA Astrophysics Data System (ADS)

    Hou, Zhongsheng; Wang, Yi; Yin, Chenkun; Tang, Tao

    2011-07-01

    The terminal iterative learning control (TILC) method is introduced for the first time into the field of train station stop control, and three TILC-based algorithms are proposed in this study. The TILC-based train station stop control approach utilises the terminal stop position error in the previous braking process to update the current control profile. The initial braking position, or the braking force, or their combination is chosen as the control input, and a corresponding learning law is developed. The terminal stop position error of each algorithm is guaranteed, through rigorous analysis, to converge to a small region related to the initial offset of the braking position. The validity of the proposed algorithms is verified by illustrative numerical examples.
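
    A hedged sketch of the general idea for one of the three cases, updating the initial braking position from the previous run's terminal stop error with a constant learning gain; the train model, gain, and disturbance level are illustrative and are not the learning laws derived in the study.

```python
import numpy as np

def stop_position(brake_position, v0=20.0, decel=1.0, disturbance=0.0):
    """Very simple train model: coast to brake_position, then brake at roughly decel m/s^2."""
    return brake_position + v0**2 / (2.0 * (decel + disturbance))

target = 1000.0                    # desired stopping point along the track (m)
gamma = 0.8                        # learning gain (illustrative)
brake_position = 750.0             # initial guess for the braking point (m)

rng = np.random.default_rng(7)
for run in range(8):
    error = target - stop_position(brake_position, disturbance=0.02 * rng.normal())
    print(f"run {run}: terminal stop error = {error:+7.2f} m")
    brake_position += gamma * error          # terminal ILC update from the previous run
```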

  12. An equilibrium-preserving discretization for the nonlinear Rosenbluth-Fokker-Planck operator in arbitrary multi-dimensional geometry

    NASA Astrophysics Data System (ADS)

    Taitano, W. T.; Chacón, L.; Simakov, A. N.

    2017-06-01

    The Fokker-Planck collision operator is an advection-diffusion operator which describes dynamical systems such as weakly coupled plasmas [1,2], photonics in high-temperature environments [3,4], biological systems [5], and even social systems [6]. For plasmas in the continuum, the Fokker-Planck collision operator supports such important physical properties as conservation of number, momentum, and energy, as well as positivity. It also obeys Boltzmann's H-theorem [7-11], i.e., the operator increases the system entropy while simultaneously driving the distribution function towards a Maxwellian. In the discrete, when these properties are not ensured, numerical simulations can either fail catastrophically or suffer from significant numerical pollution [12,13]. There is strong emphasis in the literature on developing numerical techniques to solve the Fokker-Planck equation while preserving these properties [12-24]. In this short note, we focus on the analytical equilibrium preserving property, meaning that the Fokker-Planck collision operator vanishes when acting on an analytical Maxwellian distribution function. The equilibrium preservation property is especially important, for example, when one is attempting to capture subtle transport physics. Since transport arises from small O(ε) corrections to the equilibrium [25] (where ε is a small expansion parameter), numerical truncation error present in the equilibrium solution may dominate, overwhelming transport dynamics.

  13. Quantifying errors in trace species transport modeling.

    PubMed

    Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M

    2008-12-16

    One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO2 using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error.

  14. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
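
    A toy illustration of the comparison described above (the reservoir model, tolerances, and step sizes are illustrative, not the paper's six models or its catchments): a daily fixed-step explicit Euler scheme versus an adaptive Heun scheme with a simple local error control on a nonlinear storage equation.

```python
import numpy as np

def dSdt(S, precip=5.0, k=0.3, n=1.5):
    """Toy nonlinear reservoir: inflow minus outflow k*S^n (mm/day)."""
    return precip - k * max(S, 0.0) ** n

def fixed_euler(S0, t_end, dt):
    S, t = S0, 0.0
    while t < t_end - 1e-12:
        S += dt * dSdt(S)
        t += dt
    return S

def adaptive_heun(S0, t_end, tol):
    S, t, dt = S0, 0.0, 1.0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = dSdt(S)
        k2 = dSdt(S + dt * k1)
        err = 0.5 * dt * abs(k2 - k1)                 # Heun-vs-Euler local error estimate
        if err <= tol:                                # accept the step
            S += 0.5 * dt * (k1 + k2)
            t += dt
        dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / (err + 1e-15))))
    return S

ref = adaptive_heun(10.0, 30.0, tol=1e-10)            # tight-tolerance reference solution
print("daily fixed-step explicit Euler error:", abs(fixed_euler(10.0, 30.0, 1.0) - ref))
print("adaptive Heun (tol = 1e-3) error     :", abs(adaptive_heun(10.0, 30.0, 1e-3) - ref))
```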

  15. A strategy for reducing gross errors in the generalized Born models of implicit solvation

    PubMed Central

    Onufriev, Alexey V.; Sigalov, Grigori

    2011-01-01

    The “canonical” generalized Born (GB) formula [C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔG_el of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2k_BT relative to a numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones.
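
    For reference, the canonical pairwise Green function being discussed is straightforward to write down; the sketch below implements the Still et al. cross-term for a single pair of charges (the proposed replacement involving radius gradients is not reproduced here). Units, dielectric constants, and the example pair are illustrative.

```python
import numpy as np

KE = 332.06          # Coulomb constant in kcal*Angstrom/(mol*e^2), approximate

def gb_pair_energy(qi, qj, rij, Ri, Rj, eps_in=1.0, eps_out=78.5):
    """Canonical (Still et al.) GB cross-term for a single pair of charges."""
    f_gb = np.sqrt(rij**2 + Ri * Rj * np.exp(-rij**2 / (4.0 * Ri * Rj)))
    tau = 1.0 / eps_in - 1.0 / eps_out
    return -KE * tau * qi * qj / f_gb

# Illustrative pair: opposite unit charges 5 Angstrom apart with effective radii of 2 and 3 Angstrom.
print("pairwise GB energy (kcal/mol):", gb_pair_energy(+1.0, -1.0, rij=5.0, Ri=2.0, Rj=3.0))
```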

  16. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Vezewski, D. J.

    1980-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary, differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
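
    A hedged sketch of the general idea on the simplest possible example (not the paper's formulation): after each explicit Euler step on a harmonic oscillator, the state is rescaled so that the known energy integral is restored, and the global error is compared with and without the correction. Step size and integration time are illustrative.

```python
import numpy as np

def euler_step(x, v, dt):
    """Explicit Euler step for the oscillator x'' = -x."""
    return x + dt * v, v - dt * x

def integrate(x0, v0, dt, n_steps, project=False):
    x, v = x0, v0
    E0 = 0.5 * (x0**2 + v0**2)                    # energy: the integral of motion
    for _ in range(n_steps):
        x, v = euler_step(x, v, dt)
        if project:                               # enforce the integral after every step
            scale = np.sqrt(E0 / (0.5 * (x**2 + v**2)))
            x, v = scale * x, scale * v
    return x, v

dt, T = 1e-3, 20.0
exact = (np.cos(T), -np.sin(T))                   # solution for x(0) = 1, v(0) = 0
for project in (False, True):
    x, v = integrate(1.0, 0.0, dt, int(T / dt), project=project)
    print(f"projection onto the integral = {project}: "
          f"global error = {np.hypot(x - exact[0], v - exact[1]):.2e}")
```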

  17. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1979-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.

  18. Effects of stinger axial dynamics and mass compensation methods on experimental modal analysis

    NASA Astrophysics Data System (ADS)

    Hu, Ximing

    1992-06-01

    A longitudinal bar model that includes both stinger elastic and inertia properties is used to analyze the stinger's axial dynamics as well as the mass compensation that is required to obtain accurate input forces when a stinger is installed between the excitation source, force transducer, and the structure under test. Stinger motion transmissibility and force transmissibility, axial resonance and excitation energy transfer problems are discussed in detail. Stinger mass compensation problems occur when the force transducer is mounted on the exciter end of the stinger. These problems are studied theoretically, numerically, and experimentally. It is found that the measured Frequency Response Function (FRF) can be underestimated if mass compensation is based on the stinger exciter-end acceleration and can be overestimated if the mass compensation is based on the structure-end acceleration due to the stinger's compliance. A new mass compensation method that is based on two accelerations is introduced and is seen to improve the accuracy considerably. The effects of the force transducer's compliance on the mass compensation are also discussed. A theoretical model is developed that describes the measurement system's FRF around a test structure's resonance. The model shows that very large measurement errors occur when there is a small relative phase shift between the force and acceleration measurements. These errors can be hundreds of percent, corresponding to a phase error on the order of one or two degrees. The physical reasons for this unexpected error pattern are explained. This error is currently unknown to the experimental modal analysis community. Two sample structures consisting of a rigid mass and a double cantilever beam are used in the numerical calculations and experiments.

  19. Numerical simulations to assess the tracer dilution method for measurement of landfill methane emissions.

    PubMed

    Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T

    2016-10-01

    Landfills are a significant contributor to anthropogenic methane emissions, but measuring these emissions can be challenging. This work uses numerical simulations to assess the accuracy of the tracer dilution method, which is used to estimate landfill emissions. Atmospheric dispersion simulations with the Weather Research and Forecast model (WRF) are run over Sandtown Landfill in Delaware, USA, using observation data to validate the meteorological model output. A steady landfill methane emissions rate is used in the model, and methane and tracer gas concentrations are collected along various transects downwind from the landfill for use in the tracer dilution method. The calculated methane emissions are compared to the methane emissions rate used in the model to find the percent error of the tracer dilution method for each simulation. The roles of different factors are examined: measurement distance from the landfill, transect angle relative to the wind direction, speed of the transect vehicle, tracer placement relative to the hot spot of methane emissions, complexity of topography, and wind direction. Results show that percent error generally decreases with distance from the landfill, where the tracer and methane plumes become well mixed. Tracer placement has the largest effect on percent error, and topography and wind direction both have significant effects, with measurement errors ranging from -12% to 42% over all simulations. Transect angle and transect speed have small to negligible effects on the accuracy of the tracer dilution method. These tracer dilution method simulations provide insight into measurement errors that might occur in the field, enhance understanding of the method's limitations, and aid interpretation of field data.
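
    The arithmetic at the core of the tracer dilution method, as commonly stated, is the ratio of plume-integrated mixing ratios scaled by the known tracer release rate and a molar-mass correction; the sketch below uses hypothetical Gaussian transect data and an assumed SF6 tracer, not the study's simulation output.

```python
import numpy as np

def tracer_dilution_emission(ch4_ppb, tracer_ppb, q_tracer_kg_h,
                             m_ch4=16.04, m_tracer=146.05):
    """Methane emission rate from one downwind transect via the tracer dilution method.

    ch4_ppb, tracer_ppb : background-subtracted mixing ratios along the transect
    q_tracer_kg_h       : known tracer release rate (kg/h)
    """
    ratio = np.trapz(ch4_ppb) / np.trapz(tracer_ppb)      # plume-integrated ratio
    return q_tracer_kg_h * ratio * (m_ch4 / m_tracer)     # molar-mass correction

# Hypothetical transect: Gaussian plumes, with the methane plume slightly wider than the tracer's.
x = np.linspace(-500.0, 500.0, 201)
ch4 = 80.0 * np.exp(-(x / 120.0) ** 2)      # ppb above background
sf6 = 2.0 * np.exp(-(x / 100.0) ** 2)       # ppb above background
print("estimated CH4 emission (kg/h):", tracer_dilution_emission(ch4, sf6, q_tracer_kg_h=1.5))
```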

  20. Comparison of vertical discretization techniques in finite-difference models of ground-water flow; example from a hypothetical New England setting

    USGS Publications Warehouse

    Harte, Philip T.

    1994-01-01

    Proper discretization of a ground-water-flow field is necessary for the accurate simulation of ground-water flow by models. Although discretization guidelines are available to ensure numerical stability, current guidelines are flexible enough (particularly in vertical discretization) to allow for some ambiguity of model results. Testing of two common types of vertical-discretization schemes (horizontal and nonhorizontal-model-layer approach) was done to simulate sloping hydrogeologic units characteristic of New England. Differences of results of model simulations using these two approaches are small. Numerical errors associated with use of nonhorizontal model layers are small (4 percent), even though this discretization technique does not adhere to the strict formulation of the finite-difference method. It was concluded that vertical discretization by means of the nonhorizontal layer approach has advantages in representing the hydrogeologic units tested and in simplicity of model-data input. In addition, vertical distortion of model cells by this approach may improve the representation of shallow flow processes.

  1. Five-equation and robust three-equation methods for solution verification of large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dutta, Rabijit; Xing, Tao

    2018-02-01

    This study evaluates the recently developed general framework for solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and therefore can be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows a monotonic convergence behavior of the predicted numerical benchmark (S_C), and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests error cancellation plays an essential role in LES. When the Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.

  2. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  3. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Stâhl, Fredrik; Gulliksson, Mârten

    2008-09-01

    Inverse simulations of musculoskeletal models compute the internal forces such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors for the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank by including a regularization term in the objective that handles the mechanical over-determinacy. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing rather than pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a start to ensure better results of inverse musculoskeletal simulations from a numerical point of view.
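
    A hedged sketch of the regularization idea on a toy static problem (not the authors' forward-dynamics formulation): three muscles must jointly produce a required joint moment, and a small penalty on the activation norm, together with [0, 1] bounds, selects a unique well-behaved solution. Moment arms, strengths, and the weighting are illustrative; scipy's lsq_linear is used for the bounded least-squares solve.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical single-joint example: three muscles must produce a 30 Nm flexion moment.
moment_arms = np.array([0.04, 0.025, 0.03])       # m
max_forces = np.array([1500.0, 900.0, 1200.0])    # N
required_moment = 30.0                            # Nm

# Moment balance: sum_i moment_arm_i * max_force_i * a_i = required_moment, with a_i in [0, 1].
# The activation-norm regularization discussed above is added as a weighted least-squares term.
moment_row = (moment_arms * max_forces)[None, :] / required_moment
lam = 1e-3                                        # regularization weight (illustrative)
A = np.vstack([moment_row, np.sqrt(lam) * np.eye(3)])
b = np.concatenate([[1.0], np.zeros(3)])

result = lsq_linear(A, b, bounds=(0.0, 1.0))
activations = result.x
print("activations          :", np.round(activations, 3))
print("produced moment (Nm) :", (moment_row @ activations)[0] * required_moment)
```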

  4. Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves

    NASA Astrophysics Data System (ADS)

    Misra, R.; Bora, A.; Dewangan, G.

    2018-04-01

    Temporal analysis of radiation from astrophysical sources like Active Galactic Nuclei, X-ray Binaries and Gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally the estimates are obtained by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase and time lag between two shorter light curves when they cannot be divided into segments. The estimates presented here thus allow for analysis of light curves with a relatively small number of points, as well as for obtaining information on the longest time-scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the Active Galactic Nucleus Akn 564. The example shows that the estimates presented here allow for analysis of light curves with a relatively small (∼ 1000) number of points.
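
    The paper's analytic error expressions are not reproduced in the abstract, so the sketch below only illustrates the quantities involved: a cross-correlation function and a peak-lag estimate for two evenly sampled, artificially lagged light curves. All signal parameters are invented for illustration.

```python
# Minimal sketch: cross-correlation and time-lag estimate for two evenly
# sampled light curves; the paper's analytic error estimates are not shown.
import numpy as np

rng = np.random.default_rng(0)
dt, n, true_lag = 1.0, 1000, 5          # sampling step, points, lag in bins
signal = np.convolve(rng.normal(size=n + 50), np.ones(10) / 10, mode="same")
lc1 = signal[:n] + 0.1 * rng.normal(size=n)                       # band 1
lc2 = np.roll(signal, true_lag)[:n] + 0.1 * rng.normal(size=n)    # band 2, delayed

def ccf(a, b, max_lag):
    # Normalized cross-correlation at integer lags (circular, for brevity).
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.mean(a * np.roll(b, -k)) for k in lags])

lags, corr = ccf(lc1, lc2, max_lag=20)
print("peak correlation:", corr.max(), "at time lag", lags[np.argmax(corr)] * dt)
```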

  5. On the error propagation of semi-Lagrange and Fourier methods for advection problems

    PubMed Central

    Einkemmer, Lukas; Ostermann, Alexander

    2015-01-01

    In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
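
    A minimal sketch of a Fourier-based advection solver of the kind discussed above, assuming a periodic 1D domain: each step is exact up to round-off, so the error measured against the exact solution shows the round-off accumulation over many time steps (the paper's modified Cooley–Tukey variant is not implemented here).

```python
# Minimal sketch: spectral solution of u_t + a u_x = 0 on a periodic domain.
# Each step multiplies the Fourier coefficients by exp(-i*a*k*dt), which is
# exact up to round-off, so the reported error is round-off accumulation.
import numpy as np

n, a, dt, steps = 256, 1.0, 1e-3, 20000
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
u = np.exp(np.sin(x))                      # smooth periodic initial condition
phase = np.exp(-1j * a * k * dt)           # exact one-step advection in Fourier space

for step in range(1, steps + 1):
    u = np.fft.ifft(np.fft.fft(u) * phase).real   # one FFT pair per time step
    if step % 5000 == 0:
        exact = np.exp(np.sin(x - a * dt * step))
        print(f"step {step:6d}: max error {np.max(np.abs(u - exact)):.3e}")
```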

  6. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  7. Coupling a Reactive Transport Code with a Global Land Surface Model for Mechanistic Biogeochemistry Representation: 1. Addressing the Challenge of Nonnegativity

    DOE PAGES

    Tang, Guoping; Yuan, Fengming; Bisht, Gautam; ...

    2016-01-01

    Reactive transport codes (e.g., PFLOTRAN) are increasingly used to improve the representation of biogeochemical processes in terrestrial ecosystem models (e.g., the Community Land Model, CLM). As CLM and PFLOTRAN use explicit and implicit time stepping, implementation of CLM biogeochemical reactions in PFLOTRAN can result in negative concentrations, which are not physical and can cause numerical instability and errors. The objective of this work is to address the nonnegativity challenge to obtain accurate, efficient, and robust solutions. We illustrate the implementation of a reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant nitrogen uptake reactions and test the implementation at arctic, temperate, and tropical sites. We examine the use of scaling back the update during each iteration (SU), log transformation (LT), and downregulating the reaction rate to account for reactant availability limitation to enforce nonnegativity. Both SU and LT guarantee nonnegativity but with implications. When a very small scaling factor occurs due to either consumption or numerical overshoot, and the iterations are deemed converged because of too small an update, SU can introduce excessive numerical error. LT involves multiplication of the Jacobian matrix by the concentration vector, which increases the condition number, decreases the time step size, and increases the computational cost. Neither SU nor LT prevents zero concentration. When the concentration is close to machine precision or 0, a small positive update stops all reactions for SU, and LT can fail due to a singular Jacobian matrix. The consumption rate has to be downregulated such that the solution to the mathematical representation is positive. A first-order rate downregulates consumption and is nonnegative, and adding a residual concentration makes it positive. For a zero-order rate, or when the reaction rate is not a function of a reactant, representing the availability limitation of each reactant with a Monod substrate limiting function provides a smooth transition between a zero-order rate when the reactant is abundant and a first-order rate when the reactant becomes limiting. When the half saturation is small, marching through the transition may require small time step sizes to resolve the sharp change within a small range of concentration values. Our results from simple tests and CLM-PFLOTRAN simulations caution against use of SU and indicate that accurate, stable, and relatively efficient solutions can be achieved with LT and downregulation with a Monod substrate limiting function and residual concentration.
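
    A minimal sketch (not PFLOTRAN's actual implementation) of the downregulation idea described above: a Monod substrate limiting function plus a residual concentration yields a rate that is effectively zero-order when the reactant is abundant and first-order as it becomes limiting, keeping the solution positive. The function and parameter names are illustrative.

```python
# Minimal sketch: downregulate a consumption rate with a Monod substrate
# limiting function plus a residual concentration, so the rate transitions
# smoothly from zero-order (abundant reactant) to first-order (limiting
# reactant) and the concentration stays positive.
import numpy as np

def downregulated_rate(conc, k_max, half_saturation, residual=1e-20):
    # Monod factor -> 1 when conc >> half_saturation and
    # -> conc / half_saturation when conc << half_saturation; the residual
    # stops consumption before the concentration reaches zero.
    c = np.maximum(conc - residual, 0.0)
    return k_max * c / (half_saturation + c)

for c in [1e-12, 1e-8, 1e-6, 1e-3]:
    rate = downregulated_rate(c, k_max=1e-6, half_saturation=1e-9)
    print(f"concentration {c:.1e} -> rate {rate:.3e}")
```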

  8. Deployment and evaluation of a dual-sensor autofocusing method for on-machine measurement of patterns of small holes on freeform surfaces.

    PubMed

    Chen, Xiaomei; Longstaff, Andrew; Fletcher, Simon; Myers, Alan

    2014-04-01

    This paper presents and evaluates an active dual-sensor autofocusing system that combines an optical vision sensor and a tactile probe for autofocusing on arrays of small holes on freeform surfaces. The system has been tested on a two-axis test rig and then integrated onto a three-axis computer numerical control (CNC) milling machine, where the aim is to rapidly and controllably measure the hole position errors while the part is still on the machine. The principle of operation is for the tactile probe to locate the nominal positions of holes, and the optical vision sensor follows to focus and capture the images of the holes. The images are then processed to provide hole position measurement. In this paper, the autofocusing deviations are analyzed. First, the deviations caused by the geometric errors of the axes on which the dual-sensor unit is deployed are estimated to be 11 μm when deployed on a test rig and 7 μm on the CNC machine tool. Subsequently, the autofocusing deviations caused by the interaction of the tactile probe, surface, and small hole are mathematically analyzed and evaluated. The deviations are a result of the tactile probe radius, the curvatures at the positions where small holes are drilled on the freeform surface, and the effect of the position error of the hole on focusing. An example case study is provided for the measurement of a pattern of small holes on an elliptical cylinder on the two machines. The absolute sum of the autofocusing deviations is 118 μm on the test rig and 144 μm on the machine tool. This is much less than the 500 μm depth of field of the optical microscope. Therefore, the method is capable of capturing a group of clear images of the small holes on this workpiece for either implementation.

  9. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

    We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^{-4}. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10^{-4}. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.

  10. Beam collimation and focusing and error analysis of LD and fiber coupling system based on ZEMAX

    NASA Astrophysics Data System (ADS)

    Qiao, Lvlin; Zhou, Dejian; Xiao, Lei

    2017-10-01

    Laser diodes have many advantages, such as high efficiency, small volume, low cost and easy integration, so they are widely used. However, their poor beam quality has seriously hampered the application of semiconductor lasers. To address this, the ZEMAX optical design software is used to simulate the far-field characteristics of the semiconductor laser beam, and the coupling module between the semiconductor laser and the optical fiber is designed and optimized. The beam is coupled into an optical fiber with core diameter d = 200 µm and numerical aperture NA = 0.22, and the coupled output power can reach 95%. Finally, the influence of the three docking (alignment) errors on the coupling efficiency during installation is analyzed.

  11. A constrained-gradient method to control divergence errors in numerical MHD

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-10-01

    In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, `divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike `locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or `8-wave' cleaning can produce order-of-magnitude errors.
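
    For context, a minimal sketch of the diagnostic that cleaning and constrained-gradient schemes aim to minimize, the dimensionless divergence error h|∇·B|/|B|, computed here on a uniform periodic grid with centred differences; this is not the CG scheme itself, and the test field is arbitrary.

```python
# Minimal sketch: dimensionless divergence error h*|div B|/|B| on a uniform
# periodic grid, using centred differences. Only the diagnostic is shown.
import numpy as np

n, h = 64, 1.0 / 64
x, y = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")

# A periodic field with a deliberate non-solenoidal component for illustration.
bx = np.cos(2 * np.pi * y) + 0.05 * np.sin(2 * np.pi * x)
by = np.sin(2 * np.pi * x)

div_b = ((np.roll(bx, -1, axis=0) - np.roll(bx, 1, axis=0)) +
         (np.roll(by, -1, axis=1) - np.roll(by, 1, axis=1))) / (2 * h)
b_mag = np.sqrt(bx**2 + by**2) + 1e-30
print("max dimensionless divergence error:", np.max(h * np.abs(div_b) / b_mag))
```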

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, H.; Vay, J. -L.

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary order Maxwell solver with domain decomposition technique that may under some condition involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary order finite-difference Maxwell solver, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of domain decomposition technique and PMLs, when these are used with very high-order Maxwell solver, as well as in the infinite order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solver and a reasonably low number of guard cells with negligible effects on the whole accuracy of the simulation.

  13. Strength conditions for the elastic structures with a stress error

    NASA Astrophysics Data System (ADS)

    Matveev, A. D.

    2017-10-01

    As is known, constraints (strength conditions) are established for the safety factor of elastic structures and design details of a particular class, e.g. aviation structures: the safety factor values of such structures should lie within a given range. These constraints are set for safety factors corresponding to analytical (exact) solutions of the elasticity problems posed for the structures. Developing analytical solutions for most structures, especially irregularly shaped ones, is very difficult. Approximate approaches to solving the elasticity problems, e.g. the technical theories of deformation of homogeneous and composite plates, beams and shells, are widely used for a great number of structures. Technical theories based on simplifying hypotheses give rise to approximate (technical) solutions with an irreducible error whose exact value is difficult to determine. In static strength calculations with a narrow specified range for the safety factors, applying technical (strength-of-materials) solutions is therefore difficult. However, numerical methods exist that produce approximate solutions of elasticity problems with arbitrarily small errors. In the present paper, adjusted reference (specified) strength conditions are proposed for the structural safety factor corresponding to an approximate solution of the elasticity problem; these conditions take the stress error estimate into account. It is shown that, in order to satisfy the specified strength conditions for the safety factor corresponding to an exact solution, adjusted strength conditions for the safety factor corresponding to an approximate solution are required. The stress error estimate underlying the adjusted strength conditions is determined from the specified strength conditions, and adjusted strength conditions expressed in terms of allowable stresses are also suggested. The adjusted strength conditions make it possible to identify the set of approximate solutions for which the specified strength conditions are met. Examples are given of specified strength conditions that can be satisfied using technical (strength-of-materials) solutions, as well as examples of strength conditions that can be satisfied using approximate solutions with a small error.

  14. On the inversion of geodetic integrals defined over the sphere using 1-D FFT

    NASA Astrophysics Data System (ADS)

    García, R. V.; Alejo, C. A.

    2005-08-01

    An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems while reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as those of the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine's integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed, and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. As with the CG method, the number of iterations needed to reach the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
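
    A minimal 1-D sketch of the projected Landweber iteration applied to a cyclic (FFT-based) convolution, assuming a made-up Gaussian kernel and a non-negativity projection; the spherical geometry and Hotine kernel of the paper are not modelled.

```python
# Minimal 1-D sketch of projected Landweber iteration for a cyclic convolution
# g = k * f, solved with FFTs; the projection enforces non-negativity.
import numpy as np

rng = np.random.default_rng(1)
n = 256
f_true = np.zeros(n)
f_true[60:120] = 1.0                                        # "unknown" signal
kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 6.0) ** 2)
kernel = np.roll(kernel / kernel.sum(), -n // 2)            # centred, normalised
K = np.fft.fft(kernel)

g = np.fft.ifft(K * np.fft.fft(f_true)).real + 0.01 * rng.normal(size=n)

tau = 1.0 / np.max(np.abs(K)) ** 2          # step size below 2 / ||K||^2
f = np.zeros(n)
for _ in range(200):
    residual = g - np.fft.ifft(K * np.fft.fft(f)).real
    f = f + tau * np.fft.ifft(np.conj(K) * np.fft.fft(residual)).real
    f = np.maximum(f, 0.0)                  # projection onto the constraint set
print("relative reconstruction error:",
      np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```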

  15. Robust preview control for a class of uncertain discrete-time systems with time-varying delay.

    PubMed

    Li, Li; Liao, Fucheng

    2018-02-01

    This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. Firstly, a model transformation is employed for an uncertain discrete system with time-varying delay. Then, auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This leads to the tracking problem being transformed into a regulator problem. Finally, for the augmented error system, a sufficient condition for asymptotic stability is derived and the preview controller design method is proposed based on the scaled small gain theorem and the linear matrix inequality (LMI) technique. The method proposed in this paper not only resolves the difficulty of applying the difference operator to time-varying matrices but also simplifies the structure of the augmented error system. A numerical simulation example illustrates the effectiveness of the results presented in the paper.

  16. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.

  17. Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation

    NASA Astrophysics Data System (ADS)

    Bermudez, A.; Xu, X.; Nigmatullin, R.; O'Gorman, J.; Negnevitsky, V.; Schindler, P.; Monz, T.; Poschinger, U. G.; Hempel, C.; Home, J.; Schmidt-Kaler, F.; Biercuk, M.; Blatt, R.; Benjamin, S.; Müller, M.

    2017-10-01

    A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before their sizes are sufficiently big to consider the well-known QEC threshold. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that will be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC and a physically motivated error model that goes beyond standard simplifications in the QEC literature. We focus on laser-based quantum gates realized in two-species trapped-ion crystals in high-optical aperture segmented traps. Our large-scale numerical analysis shows that, with the foreseen technological improvements described here, this platform is a very promising candidate for fault-tolerant quantum computation.

  18. Numerical Experiments in Error Control for Sound Propagation Using a Damping Layer Boundary Treatment

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    2017-01-01

    This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O[10^-2] to O[10^-7].

  19. A method to map errors in the deformable registration of 4DCT images

    PubMed Central

    Vaman, Constantin; Staub, David; Williamson, Jeffrey; Murphy, Martin J.

    2010-01-01

    Purpose: To present a new approach to the problem of estimating errors in deformable image registration (DIR) applied to sequential phases of a 4DCT data set. Methods: A set of displacement vector fields (DVFs) are made by registering a sequence of 4DCT phases. The DVFs are assumed to display anatomical movement, with the addition of errors due to the imaging and registration processes. The positions of physical landmarks in each CT phase are measured as ground truth for the physical movement in the DVF. Principal component analysis of the DVFs and the landmarks is used to identify and separate the eigenmodes of physical movement from the error eigenmodes. By subtracting the physical modes from the principal components of the DVFs, the registration errors are exposed and reconstructed as DIR error maps. The method is demonstrated via a simple numerical model of 4DCT DVFs that combines breathing movement with simulated maps of spatially correlated DIR errors. Results: The principal components of the simulated DVFs were observed to share the basic properties of principal components for actual 4DCT data. The simulated error maps were accurately recovered by the estimation method. Conclusions: Deformable image registration errors can have complex spatial distributions. Consequently, point-by-point landmark validation can give unrepresentative results that do not accurately reflect the registration uncertainties away from the landmarks. The authors are developing a method for mapping the complete spatial distribution of DIR errors using only a small number of ground truth validation landmarks. PMID:21158288
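
    A minimal sketch of the separation idea only (not the authors' pipeline): principal component analysis of synthetic 1-D displacement fields, with the leading mode treated as the physical breathing motion and the remainder treated as the registration-error map. All data below are simulated for illustration.

```python
# Minimal sketch: PCA of synthetic 1-D "DVFs"; the leading mode is taken as
# physical (breathing-like) motion and the residual as the DIR error map.
import numpy as np

rng = np.random.default_rng(2)
n_phases, n_voxels = 10, 500
x = np.linspace(0, 1, n_voxels)
breathing = np.sin(np.linspace(0, 2 * np.pi, n_phases))[:, None] * np.exp(-5 * x)
error_maps = 0.05 * rng.normal(size=(n_phases, n_voxels))  # simulated DIR errors
dvfs = breathing + error_maps

mean = dvfs.mean(axis=0)
u, s, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
physical = mean + np.outer(u[:, 0] * s[0], vt[0])   # leading "physical" mode
recovered_error = dvfs - physical                   # estimated error maps
print("RMS of recovered error maps:", np.sqrt(np.mean(recovered_error**2)))
```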

  20. Stochastic Evolution Equations Driven by Fractional Noises

    DTIC Science & Technology

    2016-11-28

    …rate of convergence to zero of the error and the limit in distribution of the error fluctuations. We have studied time-discrete numerical schemes based on Taylor expansions, and variations of these time-discrete Taylor schemes, for rough differential equations and for stochastic differential equations driven by fractional Brownian motion…

  1. Numerical stability in problems of linear algebra.

    NASA Technical Reports Server (NTRS)

    Babuska, I.

    1972-01-01

    Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. A numerical process is then defined as a prescribed recurrence of elementary operations realizing the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how a priori information, for instance knowledge of the row sums of the matrix, can be utilized. Information of this type is frequently available where the system arises in connection with the numerical solution of differential equations.
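
    As one concrete example of the kind of process such stability analyses examine, the sketch below implements the standard Thomas algorithm (forward elimination and back substitution) for a tridiagonal system; it is not Babuska's specific formulation.

```python
# Minimal sketch: the Thomas algorithm for a tridiagonal linear system,
# a standard process whose round-off behaviour stability analyses examine.
import numpy as np

def thomas(lower, diag, upper, rhs):
    # lower[0] and upper[-1] are unused; inputs are copied and left unchanged.
    n = len(diag)
    b = np.array(diag, float)
    c = np.array(upper, float)
    d = np.array(rhs, float)
    for i in range(1, n):                      # forward elimination
        w = lower[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# Example: discrete 1-D Laplacian with a unit right-hand side.
n = 6
print(thomas(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0), np.ones(n)))
```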

  2. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Yu, Simin; Cai, Jianping

    2016-12-01

    The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, the eigenvalue method yields increasingly accurate Lyapunov exponents as the number of iterations grows, and the limits exist. In practice, however, because of the finite precision of computer arithmetic and other factors, the results can overflow, become unrecognizable, or be inaccurate, as follows: (1) the number of iterations cannot be too large, otherwise the simulation result appears as an error message of NaN or Inf; (2) even when NaN or Inf does not appear, as the number of iterations grows all computed Lyapunov exponents approach the largest Lyapunov exponent, which leads to inaccurate results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms, based on QR orthogonal decomposition and SVD orthogonal decomposition, to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
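
    A minimal sketch of the standard QR-reorthogonalisation approach for a discrete-time map (the Hénon map as an example): accumulating the logarithms of the diagonal of R avoids the overflow and the collapse onto the largest exponent described above. This illustrates the general technique, not the authors' exact algorithms.

```python
# Minimal sketch: Lyapunov exponents of the Henon map via repeated QR
# decompositions of the propagated Jacobian; summing log|diag(R)| avoids
# overflow at large iteration counts.
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])

def henon_jacobian(x, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x[0], 1.0],
                     [b, 0.0]])

x = np.array([0.1, 0.1])
q = np.eye(2)
log_sums = np.zeros(2)
iterations = 50000
for _ in range(iterations):
    q, r = np.linalg.qr(henon_jacobian(x) @ q)   # re-orthogonalise each step
    log_sums += np.log(np.abs(np.diag(r)))
    x = henon(x)
print("Lyapunov exponents:", log_sums / iterations)   # roughly 0.42 and -1.62
```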

  3. Chamber measurement of surface-atmosphere trace gas exchange: Numerical evaluation of dependence on soil, interfacial layer, and source/sink properties

    NASA Astrophysics Data System (ADS)

    Hutchinson, G. L.; Livingston, G. P.; Healy, R. W.; Striegl, R. G.

    2000-04-01

    We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.

  4. Chamber measurement of surface-atmosphere trace gas exchange--Numerical evaluation of dependence on soil interfacial layer, and source/sink products

    USGS Publications Warehouse

    Hutchinson, G.L.; Livingston, G.P.; Healy, R.W.; Striegl, Robert G.

    2000-01-01

    We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.

  5. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but are interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only for inviscid flows without lateral boundaries in a shallow-water framework and is hence of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information depends on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that a sensible parameter can be chosen by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process, which is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References: [1] F. Rauser: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010. [2] F. Rauser, J. Marotzke, P. Korn: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted.

  6. Asymptotic formulae for flow in superhydrophobic channels with longitudinal ridges and protruding menisci

    NASA Astrophysics Data System (ADS)

    Kirk, Toby L.

    2018-03-01

    This paper presents new analytical formulae for flow in a channel with one or both walls patterned with a longitudinal array of ridges and arbitrarily protruding menisci. Derived from a matched asymptotic expansion, they extend results by Crowdy (J. Fluid Mech., vol. 791, 2016, R7) for shear flow, and thus make no restriction on the protrusion into or out of the liquid. The slip length formula is compared against full numerical solutions and, despite the assumption of small ridge period in its derivation, is found to have a very large range of validity; relative errors are small even for periods large enough for the protruding menisci to degrade the flow and touch the opposing wall.

  7. Faster and More Accurate Transport Procedures for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.

    2010-01-01

    Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  8. Analysis of the spectral vanishing viscosity method for periodic conservation laws

    NASA Technical Reports Server (NTRS)

    Maday, Yvon; Tadmor, Eitan

    1988-01-01

    The convergence of the spectral vanishing viscosity method for both the spectral and pseudospectral discretizations of the inviscid Burgers' equation is analyzed. It is proven that this kind of vanishing viscosity is responsible for a spectral decay of those Fourier coefficients located toward the end of the computed spectrum; consequently, the discretization error is shown to be spectrally small independent of whether the underlying solution is smooth or not. This in turn implies that the numerical solution remains uniformly bounded and convergence follows by compensated compactness arguments.

  9. How genetic errors in GPCRs affect their function: Possible therapeutic strategies

    PubMed Central

    Stoy, Henriette; Gurevich, Vsevolod V.

    2015-01-01

    Activating and inactivating mutations in numerous human G protein-coupled receptors (GPCRs) are associated with a wide range of disease phenotypes. Here we use several class A GPCRs with a particularly large set of identified disease-associated mutations, many of which were biochemically characterized, along with known GPCR structures and current models of GPCR activation, to understand the molecular mechanisms yielding pathological phenotypes. Based on this mechanistic understanding we also propose different therapeutic approaches, both conventional, using small molecule ligands, and novel, involving gene therapy. PMID:26229975

  10. A Bayesian Hierarchical Model for Glacial Dynamics Based on the Shallow Ice Approximation and its Evaluation Using Analytical Solutions

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur

    2018-03-01

    Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach will allow glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it will also allow for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model that uses exact analytical solutions of the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful physical parameter posterior distributions and predictions. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction, using a statistical model, of numerical errors induced by the numerical solution. This error-correcting process models numerical errors that accumulate forward in time and the spatial variation of numerical errors between the dome, interior, and margin of a glacier.

  11. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.

  12. Dynamic analysis and numerical experiments for balancing of the continuous single-disc and single-span rotor-bearing system

    NASA Astrophysics Data System (ADS)

    Wang, Aiming; Cheng, Xiaohan; Meng, Guoying; Xia, Yun; Wo, Lei; Wang, Ziyi

    2017-03-01

    Identification of rotor unbalance is critical for normal operation of rotating machinery. The single-disc and single-span rotor, as the most fundamental rotor-bearing system, has attracted research attention over a long time. In this paper, the continuous single-disc and single-span rotor is modeled as a homogeneous and elastic Euler-Bernoulli beam, and the forces applied by bearings and disc on the shaft are considered as point forces. A fourth-order non-homogeneous partial differential equation set with homogeneous boundary condition is solved for analytical solution, which expresses the unbalance response as a function of position, rotor unbalance and the stiffness and damping coefficients of bearings. Based on this analytical method, a novel Measurement Point Vector Method (MPVM) is proposed to identify rotor unbalance while operating. Only a measured unbalance response registered for four selected cross-sections of the rotor-shaft under steady-state operating conditions is needed when using the method. Numerical simulation shows that the detection error of the proposed method is very small when measurement error is negligible. The proposed method provides an efficient way for rotor balancing without test runs and external excitations.

  13. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE PAGES

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.; ...

    2017-10-17

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.

  14. ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT METHOD APPLIED TO SOIL FREEZING.

    USGS Publications Warehouse

    Hromadka, T.V.; Guymon, G.L.

    1985-01-01

    An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.

  15. Modeling Bloch oscillations in ultra-small Josephson junctions

    NASA Astrophysics Data System (ADS)

    Vora, Heli; Kautz, Richard; Nam, Sae Woo; Aumentado, Jose

    In a seminal paper, Likharev et al. developed a theory for ultra-small Josephson junctions with Josephson coupling energy (Ej) less than the charging energy (Ec) and showed that such junctions demonstrate Bloch oscillations, which could be used to make a fundamental current standard that is a dual of the Josephson volt standard. Here, based on the model of Geigenmüller and Schön, we numerically calculate the current-voltage relationship of such an ultra-small junction, including various error processes present in a nanoscale Josephson junction, such as random quasiparticle tunneling events and Zener tunneling between bands. This model allows us to explore the parameter space to see the effect of each process on the width and height of the Bloch step, and serves as a guide to determine whether it is possible to build a quantum current standard of metrological precision using Bloch oscillations.

  16. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  17. Numerical artifacts in the Generalized Porous Medium Equation: Why harmonic averaging itself is not to blame

    NASA Astrophysics Data System (ADS)

    Maddix, Danielle C.; Sampaio, Luiz; Gerritsen, Margot

    2018-05-01

    The degenerate parabolic Generalized Porous Medium Equation (GPME) poses numerical challenges due to self-sharpening and its sharp corner solutions. For these problems, we show results for two subclasses of the GPME whose coefficient k(p) is differentiable with respect to p, namely the Porous Medium Equation (PME) and the superslow diffusion equation. Spurious temporal oscillations and nonphysical locking and lagging have been reported in the literature. These issues have been attributed to harmonic averaging of the coefficient k(p) for small p, and arithmetic averaging has been suggested as an alternative. We show that harmonic averaging is not solely responsible and that an improved discretization can mitigate these issues. Here, we investigate the causes of these numerical artifacts using modified equation analysis. The modified equation framework can be used for any type of discretization. We show results for the second-order finite volume method. The observed problems with harmonic averaging can be traced to two leading error terms in its modified equation. This is also illustrated numerically through a Modified Harmonic Method (MHM) that can locally modify the critical terms to remove the aforementioned numerical artifacts.
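
    A minimal numerical illustration of the small-p behaviour discussed above: harmonic versus arithmetic averaging of a degenerate coefficient k(p) = p^m at a cell face when one neighbouring value of p is near zero. The coefficient form and the values are arbitrary and only for illustration.

```python
# Minimal sketch: harmonic vs arithmetic averaging of a degenerate coefficient
# k(p) = p**m at a cell face. When one neighbouring p is near zero the harmonic
# mean collapses to (almost) zero while the arithmetic mean does not.
import numpy as np

def k(p, m=2):
    return p**m

p_left, p_right = 1.0, 1e-8
kl, kr = k(p_left), k(p_right)
harmonic = 2.0 * kl * kr / (kl + kr)
arithmetic = 0.5 * (kl + kr)
print(f"harmonic face coefficient:   {harmonic:.3e}")
print(f"arithmetic face coefficient: {arithmetic:.3e}")
```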

  18. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
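
    A minimal sketch of the integration step described above, assuming a cantilever with hypothetical point strain readings: strain is converted to curvature and integrated twice with the trapezoidal rule to estimate the tip displacement. The sensor spatial weightings studied in the paper are not modelled.

```python
# Minimal sketch: estimate cantilever tip deflection from discrete surface
# strain readings by converting strain to curvature (strain / half-thickness)
# and integrating twice with the trapezoidal rule.
import numpy as np

length, half_thickness, n_sensors = 1.0, 0.005, 5
x = np.linspace(0.0, length, n_sensors)

# Hypothetical strain readings from a tip point load (linear along the beam).
strain = 1e-4 * (1.0 - x / length)

curvature = strain / half_thickness
slope = np.concatenate(([0.0],
        np.cumsum(0.5 * (curvature[1:] + curvature[:-1]) * np.diff(x))))
deflection = np.concatenate(([0.0],
        np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))
print(f"estimated tip deflection: {deflection[-1] * 1e3:.3f} mm")
```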

  19. Mathematical and field analysis of longitudinal reservoir infill

    NASA Astrophysics Data System (ADS)

    Ke, W. T.; Capart, H.

    2016-12-01

    In reservoirs, severe problems are caused by infilled sediment deposits. In the long term, sediment accumulation reduces reservoir storage capacity and flood-control benefits; in the short term, the deposits affect water-supply and hydroelectricity intakes. For reservoir management, it is therefore important to understand the deposition process and to predict sedimentation in the reservoir. To investigate the behavior of sediment deposits, we propose a simplified one-dimensional theory derived from the Exner equation to predict the longitudinal sedimentation distribution in idealized reservoirs. The theory models the geomorphic actions of reservoir infill for three scenarios: delta progradation, near-dam bottom deposition, and final infill. These yield three kinds of self-similar analytical solutions for the reservoir bed profiles under different boundary conditions, involving the error function, the complementary error function, and the imaginary error function, respectively. The theory is also solved numerically with a finite volume method to test the analytical solutions. The theoretical and numerical predictions are in good agreement with a one-dimensional small-scale laboratory experiment. As the theory is simple to apply, with both analytical solutions and numerical computation, we propose applications that simulate the long-profile evolution of field reservoirs, focusing on the infill sediment volume and the resulting uplift of the near-dam bottom elevation. The field reservoirs considered here are Wushe Reservoir, Tsengwen Reservoir, and Mudan Reservoir in Taiwan, Lago Dos Bocas in Puerto Rico, and Sakuma Dam in Japan.

  20. Detailed analysis of the effects of stencil spatial variations with arbitrary high-order finite-difference Maxwell solver

    DOE PAGES

    Vincenti, H.; Vay, J. -L.

    2015-11-22

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary order Maxwell solver with domain decomposition technique that may under some condition involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary order finite-difference Maxwell solver, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of domain decomposition technique and PMLs, when these are used with very high-order Maxwell solver, as well as in the infinite order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solver and a reasonably low number of guard cells with negligible effects on the whole accuracy of the simulation.

  1. Comparison of numerical predictions of horizontal nonisothermal jet in a room with three turbulence models -- {kappa}-{epsilon} EVM, ASM, and DSM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murakami, Shuzo; Kato, Shinsuke; Ooka, Ryozo

    1994-12-31

    A three-dimensional nonisothermal jet in a room is analyzed numerically by the standard {kappa}-{epsilon} eddy viscosity model (EVM) and two second-moment closure models -- the algebraic stress model (ASM) (Hossain and Rodi 1982) and the differential stress model (DSM) (Launder et al. 1975). Numerical results given by these turbulence models are compared with experimental results, and the prediction errors existing in the results are examined, thus clarifying the relative structural differences between the {kappa}-{epsilon} EVM and the second-moment closure models. Since the second-moment closure models clearly manifest the turbulence structures of the flow field, they are more accurate than the {kappa}-{epsilon} EVM. A small difference between the DSM and the ASM -- one based on an inappropriate approximation of the convection and diffusion terms in the Reynolds stress transport equations in the ASM -- is also observed.

  2. Galaxy Strategy for Ligo-Virgo Gravitational Wave Counterpart Searches

    NASA Technical Reports Server (NTRS)

    Gehrels, Neil; Cannizzo, John K.; Kanner, Jonah; Kasliwal, Mansi M.; Nissanke, Samaya; Singer, Leo P.

    2016-01-01

    In this work we continue a line of inquiry begun in Kanner et al. which detailed a strategy for utilizing telescopes with narrow fields of view, such as the Swift X-Ray Telescope (XRT), to localize gravitational wave (GW) triggers from LIGO (Laser Interferometer Gravitational-Wave Observatory) / Virgo. If one considers the brightest galaxies that produce 50 percent of the light, then the number of galaxies inside typical GW error boxes will be several tens. We have found that this result applies both in the early years of Advanced LIGO, when the range is small and the error boxes large, and in the later years, when the error boxes will be small and the range large. This strategy has the beneficial property of reducing the number of telescope pointings by a factor of 10 to 100 compared with tiling the entire error box. Additional galaxy count reduction will come from a GW rapid distance estimate which will restrict the radial slice in search volume. Combining the bright galaxy strategy with a convolution based on anticipated GW localizations, we find that the searches can be restricted to about 18 plus or minus 5 galaxies for 2015, about 23 plus or minus 4 for 2017, and about 11 plus or minus for 2020. This assumes a distance localization at the putative neutron star-neutron star (NS-NS) merger range mu for each target year, and these totals are integrated out to the range. Integrating out to the horizon would roughly double the totals. For localizations with r (rotation) greatly less than mu the totals would decrease. The galaxy strategy we present in this work will enable numerous sensitive optical and X-ray telescopes with small fields of view to participate meaningfully in searches wherein the prospects for rapidly fading afterglow place a premium on a fast response time.

  3. Effect of atmospheric turbulence on the bit error probability of a space to ground near infrared laser communications link using binary pulse position modulation and an avalanche photodiode detector

    NASA Technical Reports Server (NTRS)

    Safren, H. G.

    1987-01-01

    The effect of atmospheric turbulence on the bit error rate of a space-to-ground near infrared laser communications link is investigated for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
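
    The exact and approximate formulas of the report are not reproduced here, but the role played by the error function can be illustrated with the generic Gaussian approximation for a binary decision: with a Q-factor built from the mean separation and noise standard deviations of the decision statistic, the error probability is 0.5*erfc(Q/sqrt(2)). The photon counts in the sketch are illustrative, not the link parameters of the study.

        import numpy as np
        from scipy.special import erfc

        def ber_gaussian(mu1, mu0, sigma1, sigma0):
            """Gaussian-approximation bit error probability for a binary decision.

            Q = (mu1 - mu0) / (sigma1 + sigma0) is the usual Q-factor; the error
            probability is then 0.5 * erfc(Q / sqrt(2)).  This is a generic textbook
            approximation, not the specific APD/PPM formula derived in the report.
            """
            q = (mu1 - mu0) / (sigma1 + sigma0)
            return 0.5 * erfc(q / np.sqrt(2.0))

        if __name__ == "__main__":
            # Illustrative numbers only: mean photoelectron counts in the signal and
            # empty PPM slots, with shot-noise-like standard deviations.
            for n_signal in (100.0, 200.0, 400.0):
                print(n_signal,
                      ber_gaussian(n_signal, 10.0, np.sqrt(n_signal), np.sqrt(10.0)))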

  4. A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow

    NASA Technical Reports Server (NTRS)

    Baker, Gregory; Siegel, Michael; Tanveer, Saleh

    1995-01-01

    We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. This situation is disastrous for numerical computation, as small round-off errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out.

  5. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
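
    INTLAB is a MATLAB toolbox and its interface is not shown here; the underlying idea of propagating uncertainty bounds through a formula can be sketched with a small hand-rolled interval type (illustrative only: a real interval library also rounds the endpoints outward to account for floating-point error).

        from dataclasses import dataclass

        @dataclass
        class Interval:
            lo: float
            hi: float

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __sub__(self, other):
                return Interval(self.lo - other.hi, self.hi - other.lo)

            def __mul__(self, other):
                products = (self.lo * other.lo, self.lo * other.hi,
                            self.hi * other.lo, self.hi * other.hi)
                return Interval(min(products), max(products))

            def width(self):
                return self.hi - self.lo

        if __name__ == "__main__":
            # Each input carries a +/- 1e-3 uncertainty; the result interval bounds
            # the propagated error of the composite formula a*b - c*c.
            a = Interval(2.0 - 1e-3, 2.0 + 1e-3)
            b = Interval(3.0 - 1e-3, 3.0 + 1e-3)
            c = Interval(2.4 - 1e-3, 2.4 + 1e-3)
            result = a * b - c * c
            print(result, "width =", result.width())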

  6. Numerical and experimental study of expiratory flow in the case of major upper airway obstructions with fluid structure interaction

    NASA Astrophysics Data System (ADS)

    Chouly, F.; van Hirtum, A.; Lagrée, P.-Y.; Pelorson, X.; Payan, Y.

    2008-02-01

    This study deals with the numerical prediction and experimental description of the flow-induced deformation in a rapidly convergent-divergent geometry which stands for a simplified tongue, in interaction with an expiratory airflow. An original in vitro experimental model is proposed, which allows measurement of the deformation of the artificial tongue in conditions of major initial airway obstruction. The experimental model accounts for asymmetries in geometry and tissue properties, which are two major physiological upper airway characteristics. The numerical method for prediction of the fluid structure interaction is described. The theory of linear elasticity in small deformations has been chosen to compute the mechanical behaviour of the tongue. The main features of the flow are taken into account using a boundary layer theory. The overall numerical method entails finite element solving of the solid problem and finite-difference solving of the fluid problem. First, the numerical method predicts the deformation of the tongue with an overall error of the order of 20%, which can be seen as a preliminary successful validation of the theory and simulations. Moreover, expiratory flow limitation is predicted in this configuration. As a result, both the physical and numerical models could be useful to understand this phenomenon reported in heavy snorers and apneic patients during sleep.

  7. CFD simulation of pulsation noise in a small centrifugal compressor with volute and resonance tube

    NASA Astrophysics Data System (ADS)

    Wakaki, Daich; Sakuka, Yuta; Inokuchi, Yuzo; Ueda, Kosuke; Yamasaki, Nobuhiko; Yamagata, Akihiro

    2015-02-01

    The rotational frequency tone noise emitted from the automobile turbocharger is called the pulsation noise. The cause of the pulsation noise is not fully understood, but is considered to be due to manufacturing errors, which are called mistuning. The effects of the mistuning of the impeller blade on the noise field inside the flow passage of the compressor are numerically investigated. Here, the flow passage includes the volute and duct located downstream of the compressor impeller. Our numerical approach is found to successfully capture the wavelength of the pulsation noise at given rotational speeds by comparison with experiments. One of the significant findings is that the noise field of the pulsation noise in the duct is highly one-dimensional, although the flow fields are highly three-dimensional.

  8. Numerical marching techniques for fluid flows with heat transfer

    NASA Technical Reports Server (NTRS)

    Hornbeck, R. W.

    1973-01-01

    The finite difference formulation and method of solution is presented for a wide variety of fluid flow problems with associated heat transfer. Only a few direct results from these formulations are given as examples, since the book is intended primarily to serve as a discussion of the techniques and as a starting point for further investigations; however, the formulations are sufficiently complete that a workable computer program may be written from them. In the appendixes a number of topics are discussed which are of interest with respect to the finite difference equations presented. These include a very rapid method for solving certain sets of linear algebraic equations, a discussion of numerical stability, the inherent error in flow rate for confined flow problems, and a method for obtaining high accuracy with a relatively small number of mesh points.

  9. Some Surprising Errors in Numerical Differentiation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2012-01-01

    Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
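
    One of the patterns the article examines can be reproduced in a few lines: tabulating the forward difference-quotient error for sin(x) over a range of step sizes shows the truncation error shrinking like h while the round-off error grows like eps/h, with a minimum near h ~ sqrt(eps). The sketch below is a generic illustration, not the article's data.

        import numpy as np

        def forward_diff_error(f, dfdx, x, h):
            """Error of the Newton difference quotient (f(x+h) - f(x)) / h."""
            return abs((f(x + h) - f(x)) / h - dfdx(x))

        if __name__ == "__main__":
            x0 = 1.0
            for h in 10.0 ** -np.arange(1, 13):
                err = forward_diff_error(np.sin, np.cos, x0, h)
                print(f"h = {h:.0e}   error = {err:.3e}")
            # The error first decreases (truncation ~ h/2 * |f''|) and then grows
            # again (round-off ~ eps/h), with a minimum near h ~ 1e-8 in double precision.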

  10. Computational investigations and grid refinement study of 3D transient flow in a cylindrical tank using OpenFOAM

    NASA Astrophysics Data System (ADS)

    Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.

    2016-10-01

    The study of the fluid physics of a liquid draining inside a tank is easily accessible using numerical simulation. However, numerical simulation is expensive when the liquid draining involves a multi-phase problem. Since an accurate numerical simulation can be obtained only if a proper method of error estimation is applied, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox and is well known among researchers and institutions because it is freely available and ready to use. In this study, three grid resolutions are used: coarse, medium and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained in this study, which shows that the grid convergence error has been progressively reduced. The fine grid has a GCI value below 1%. The extrapolated value from Richardson extrapolation is within the range of the GCI obtained.
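
    The grid data of the study are not listed in the abstract, but the GCI bookkeeping it refers to follows Roache's standard recipe: from a quantity of interest computed on fine, medium and coarse grids with constant refinement ratio r, estimate the observed order p, a Richardson-extrapolated value, and the fine-grid GCI. A minimal sketch with made-up sample values:

        import math

        def gci_three_grids(f_fine, f_medium, f_coarse, r, fs=1.25):
            """Observed order, Richardson extrapolation and fine-grid GCI (Roache).

            f_fine, f_medium, f_coarse are a scalar quantity of interest computed on
            three systematically refined grids with constant refinement ratio r.
            """
            p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
            f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
            rel_eps = abs((f_fine - f_medium) / f_fine)
            gci_fine = fs * rel_eps / (r**p - 1.0)
            return p, f_exact, gci_fine

        if __name__ == "__main__":
            # Sample values only, not the drain-flow results of the paper.
            p, f_ext, gci = gci_three_grids(0.9713, 0.9688, 0.9610, r=2.0)
            print(f"observed order p = {p:.2f}")
            print(f"Richardson-extrapolated value = {f_ext:.4f}")
            print(f"fine-grid GCI = {100 * gci:.2f} %")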

  11. Interferometric correction system for a numerically controlled machine

    DOEpatents

    Burleson, Robert R.

    1978-01-01

    An interferometric correction system for a numerically controlled machine is provided to improve the positioning accuracy of a machine tool, for example, for a high-precision numerically controlled machine. A laser interferometer feedback system is used to monitor the positioning of the machine tool which is being moved by command pulses to a positioning system to position the tool. The correction system compares the commanded position as indicated by a command pulse train applied to the positioning system with the actual position of the tool as monitored by the laser interferometer. If the tool position lags the commanded position by a preselected error, additional pulses are added to the pulse train applied to the positioning system to advance the tool closer to the commanded position, thereby reducing the lag error. If the actual tool position is leading in comparison to the commanded position, pulses are deleted from the pulse train where the advance error exceeds the preselected error magnitude to correct the position error of the tool relative to the commanded position.

  12. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
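
    The trade-off described in the article is easy to reproduce: integrate y' = y, y(0) = 1 up to t = 1 with Euler's method at decreasing step sizes and compare with exp(1). The sketch below runs the steps in single precision purely to make the rounding contribution visible at modest step counts; that choice is an illustrative assumption, not part of the article.

        import numpy as np

        def euler_exp_error(n_steps, dtype=np.float32):
            """Global error at t = 1 of Euler's method for y' = y, y(0) = 1."""
            h = dtype(1.0) / dtype(n_steps)
            y = dtype(1.0)
            for _ in range(n_steps):
                y = y + h * y                 # one Euler step, carried out in `dtype`
            return abs(float(y) - np.exp(1.0))

        if __name__ == "__main__":
            for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
                print(f"n = {n:>8d}   error = {euler_exp_error(n):.3e}")
            # The error first decreases roughly like h = 1/n (discretization error)
            # and then grows again as the accumulated rounding error takes over.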

  13. Analysis of imperfections in the coherent optical excitation of single atoms to Rydberg states

    NASA Astrophysics Data System (ADS)

    de Léséleuc, Sylvain; Barredo, Daniel; Lienhard, Vincent; Browaeys, Antoine; Lahaye, Thierry

    2018-05-01

    We study experimentally various physical limitations and technical imperfections that lead to damping and finite contrast of optically driven Rabi oscillations between ground and Rydberg states of a single atom. Finite contrast is due to preparation and detection errors, and we show how to model and measure them accurately. Part of these errors originates from the finite lifetime of Rydberg states, and we observe its n^3 scaling with the principal quantum number n. To explain the damping of Rabi oscillations, we use simple numerical models taking into account independently measured experimental imperfections and show that the observed damping actually results from the accumulation of several small effects, each at the level of a few percent. We discuss prospects for improving the coherence of ground-Rydberg Rabi oscillations in view of applications in quantum simulation and quantum information processing with arrays of single Rydberg atoms.

  14. Theoretical study on the laser-driven ion-beam trace probe in toroidal devices with large poloidal magnetic field

    NASA Astrophysics Data System (ADS)

    Yang, X.; Xiao, C.; Chen, Y.; Xu, T.; Yu, Y.; Xu, M.; Wang, L.; Wang, X.; Lin, C.

    2018-03-01

    Recently, a new diagnostic method, the Laser-driven Ion-beam Trace Probe (LITP), has been proposed to reconstruct 2D profiles of the poloidal magnetic field (Bp) and radial electric field (Er) in tokamak devices. A linear assumption and a test particle model were used in those reconstructions. In some toroidal devices, such as the spherical tokamak and the Reversed Field Pinch (RFP), Bp is not small enough to meet the linear assumption. In those cases, the error of reconstruction increases quickly when Bp is larger than 10% of the toroidal magnetic field (Bt), and the previous test particle model may cause large errors in the tomography process. Here a nonlinear reconstruction method is proposed for those cases. Preliminary numerical results show that LITP could be applied not only in tokamak devices, but also in other toroidal devices, such as the spherical tokamak, RFP, etc.

  15. A successive overrelaxation iterative technique for an adaptive equalizer

    NASA Technical Reports Server (NTRS)

    Kosovych, O. S.

    1973-01-01

    An adaptive strategy for the equalization of pulse-amplitude-modulated signals in the presence of intersymbol interference and additive noise is reported. The successive overrelaxation iterative technique is used as the algorithm for the iterative adjustment of the equalizer coefficients during a training period for the minimization of the mean square error. With 2-cyclic and nonnegative Jacobi matrices, substantial improvement is demonstrated in the rate of convergence over the commonly used gradient techniques. The Jacobi theorems are also extended to nonpositive Jacobi matrices. Numerical examples strongly indicate that the improvements obtained for the special cases are possible for general channel characteristics. The technique is analytically demonstrated to decrease the mean square error at each iteration for a large range of parameter values for light or moderate intersymbol interference and for small intervals for general channels. Analytically, convergence of the relaxation algorithm was proven in a noisy environment and the coefficient variance was demonstrated to be bounded.
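
    The equalizer setting of the report is not reproduced here, but the successive overrelaxation iteration itself is short enough to sketch for a generic linear system Ax = b; the matrix, right-hand side and relaxation factor omega below are illustrative.

        import numpy as np

        def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
            """Successive overrelaxation for Ax = b (A assumed SPD or 2-cyclic)."""
            n = len(b)
            x = np.zeros(n)
            for iteration in range(max_iter):
                x_old = x.copy()
                for i in range(n):
                    # New values for j < i, old values for j > i (Gauss-Seidel sweep).
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    return x, iteration + 1
            return x, max_iter

        if __name__ == "__main__":
            # A small tridiagonal SPD test system (illustrative, not an equalizer).
            A = np.array([[4.0, -1.0, 0.0],
                          [-1.0, 4.0, -1.0],
                          [0.0, -1.0, 4.0]])
            b = np.array([1.0, 2.0, 3.0])
            x, iters = sor_solve(A, b, omega=1.2)
            print(x, "in", iters, "iterations; residual", np.linalg.norm(A @ x - b))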

  16. Entropic Barriers for Two-Dimensional Quantum Memories

    NASA Astrophysics Data System (ADS)

    Brown, Benjamin J.; Al-Shimary, Abbas; Pachos, Jiannis K.

    2014-03-01

    Comprehensive no-go theorems show that information encoded over local two-dimensional topologically ordered systems cannot support macroscopic energy barriers, and hence will not maintain stable quantum information at finite temperatures for macroscopic time scales. However, it is still well motivated to study low-dimensional quantum memories due to their experimental amenability. Here we introduce a grid of defect lines to Kitaev's quantum double model where different anyonic excitations carry different masses. This setting produces a complex energy landscape which entropically suppresses the diffusion of excitations that cause logical errors. We show numerically that entropically suppressed errors give rise to superexponential inverse temperature scaling and polynomial system size scaling for small system sizes over a low-temperature regime. Curiously, these entropic effects are not present below a certain low temperature. We show that we can vary the system to modify this bound and potentially extend the described effects to zero temperature.

  17. Online machining error estimation method of numerical control gear grinding machine tool based on data analysis of internal sensors

    NASA Astrophysics Data System (ADS)

    Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin

    2016-12-01

    This paper presents an online estimation method for cutting error based on the analysis of internal sensor readings. The internal sensors of the numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model for estimating the cutting error is proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. In order to verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated and the factors which induce cutting error were analyzed. The simulation and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.

  18. Simulation of eye-tracker latency, spot size, and ablation pulse depth on the correction of higher order wavefront aberrations with scanning spot laser systems.

    PubMed

    Bueeler, Michael; Mrochen, Michael

    2005-01-01

    The aim of this theoretical work was to investigate the robustness of scanning spot laser treatments with different laser spot diameters and peak ablation depths in the case of incomplete compensation of eye movements due to eye-tracker latency. Scanning spot corrections of 3rd to 5th Zernike order wavefront errors were numerically simulated. Measured eye-movement data were used to calculate the positioning error of each laser shot assuming eye-tracker latencies of 0, 5, 30, and 100 ms, and for the case of no eye tracking. The single spot ablation depth ranged from 0.25 to 1.0 microm and the spot diameter from 250 to 1000 microm. The quality of the ablation was rated by the postoperative surface variance and the Strehl intensity ratio, which was calculated after a low-pass filter was applied to simulate epithelial surface smoothing. Treatments performed with nearly ideal eye tracking (latency approximately 0) provide the best results with a small laser spot (0.25 mm) and a small ablation depth (0.25 microm). However, combinations of a large spot diameter (1000 microm) and a small ablation depth per pulse (0.25 microm) yield better results for latencies above a certain threshold to be determined specifically. Treatments performed with tracker latencies on the order of 100 ms yield similar results to treatments done completely without eye-movement compensation. CONCLUSIONS: Reduction of spot diameter was shown to make the correction more susceptible to eye-movement-induced error. A smaller spot size is only beneficial when eye movement is neutralized with a tracking system with a latency <5 ms.

  19. Numerical characterization of plasma breakdown in reversed field pinches

    NASA Astrophysics Data System (ADS)

    Peng, Yanli; Zhang, Ya; Mao, Wenzhe; Yang, Zhoujun; Hu, Xiwei; Jiang, Wei

    2018-02-01

    In the reversed field pinch, there is considerable interest in investigating the plasma breakdown. Indeed, the plasma formed during the breakdown may have an influence on the confinement and maintenance in the latter process. However, up to now there has been no related work, experimentally or in simulation, regarding plasma breakdown in the reversed field pinch (RFP). In order to figure out the physical mechanism behind plasma breakdown, the effects of the toroidal and error magnetic fields, as well as the loop voltage, have been studied. We find that the error magnetic field cannot be neglected even though it is quite small in the short plasma breakdown phase. As the toroidal magnetic field increases, the averaged electron energy is reduced after plasma breakdown is complete, which is disadvantageous for the latter process. In addition, unlike the voltage limits in the tokamak, loop voltages can be quite high because there are no requirements for superconductivity. Volt-second consumption differs only slightly under different loop voltages. The breakdown delay still exists in various loop voltage cases, but it is much shorter compared to that in the tokamak case. In all, successful breakdowns are possible in the RFP under a fairly broad range of parameters.

  20. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are sure not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen with a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.

  1. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Wu, Zedong; Alkhalifah, Tariq

    2018-07-01

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion. Indeed, it represents the core computational cost of these highly advanced seismic processing methods. However, the conventional finite-difference method suffers from severe numerical dispersion errors and S-wave artifacts when solving the acoustic wave equation for anisotropic media. We propose a method to obtain the finite-difference coefficients by comparing their numerical dispersion with the exact form. We find the optimal finite-difference coefficients that share the dispersion characteristics of the exact equation with minimal dispersion error. The method is extended to solve the acoustic wave equation in transversely isotropic (TI) media without S-wave artifacts. Numerical examples show that the method is highly accurate and efficient.
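
    The anisotropic derivation of the paper is not reproduced here, but the underlying idea of matching a stencil's numerical dispersion to the exact form can be sketched for a centred first-derivative stencil: the numerical wavenumber of an antisymmetric (2M+1)-point stencil is sum_m 2*c_m*sin(m*k*h)/h, and the coefficients can be fitted in a least-squares sense to the exact wavenumber over a target band. The stencil half-width, grid spacing and band limit below are assumptions made for illustration.

        import numpy as np

        def optimized_first_derivative_coeffs(half_width, h=1.0, kmax_frac=0.8, n_samples=200):
            """Least-squares fit of centred first-derivative stencil coefficients.

            Chooses c_1..c_M so that the numerical wavenumber
                k_num(k) = sum_m 2 * c_m * sin(m * k * h) / h
            matches the exact wavenumber k over 0 < k < kmax_frac * pi / h,
            reducing the dispersion error inside that band.
            """
            k = np.linspace(1e-6, kmax_frac * np.pi / h, n_samples)
            # Design matrix: column m holds 2 * sin(m * k * h) / h.
            A = np.column_stack([2.0 * np.sin(m * k * h) / h
                                 for m in range(1, half_width + 1)])
            c, *_ = np.linalg.lstsq(A, k, rcond=None)
            return c

        if __name__ == "__main__":
            c_opt = optimized_first_derivative_coeffs(half_width=3)
            print("optimized coefficients:", c_opt)
            # Conventional 6th-order Taylor coefficients for comparison: 3/4, -3/20, 1/60.
            print("Taylor coefficients:   ", [3/4, -3/20, 1/60])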

  2. Numerical stability of the error diffusion concept

    NASA Astrophysics Data System (ADS)

    Weissbach, Severin; Wyrowski, Frank

    1992-10-01

    The error diffusion algorithm is an easy implementable mean to handle nonlinearities in signal processing, e.g. in picture binarization and coding of diffractive elements. The numerical stability of the algorithm depends on the choice of the diffusion weights. A criterion for the stability of the algorithm is presented and evaluated for some examples.
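
    As an illustration of the algorithm whose stability is analyzed, a one-dimensional error-diffusion binarization can be written in a few lines: each sample is quantized and the quantization error is pushed forward to the not-yet-processed samples with the chosen diffusion weights. The weights below are illustrative; the choice of weights is exactly what determines whether the accumulated error stays bounded.

        import numpy as np

        def error_diffusion_1d(signal, weights=(0.7, 0.3), threshold=0.5):
            """Binarize `signal` (values in [0, 1]) with forward error diffusion.

            Each quantization error is distributed to the following samples according
            to `weights`.  Weights that sum to 1 keep the diffused error bounded;
            other choices can make the accumulated error grow.
            """
            work = np.asarray(signal, dtype=float).copy()
            out = np.zeros_like(work)
            for i in range(len(work)):
                out[i] = 1.0 if work[i] >= threshold else 0.0
                err = work[i] - out[i]
                for j, w in enumerate(weights, start=1):
                    if i + j < len(work):
                        work[i + j] += w * err
            return out

        if __name__ == "__main__":
            ramp = np.linspace(0.0, 1.0, 20)
            print(error_diffusion_1d(ramp).astype(int))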

  3. Stability of finite difference numerical simulations of acoustic logging-while-drilling with different perfectly matched layer schemes

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Tao, Guo; Shang, Xue-Feng; Fang, Xin-Ding; Burns, Daniel R.

    2013-12-01

    In acoustic logging-while-drilling (ALWD) finite difference in time domain (FDTD) simulations, the large drill collar occupies most of the fluid-filled borehole and divides the borehole fluid into two thin fluid columns (radius ˜27 mm). Fine grids and large computational models are required to model the thin fluid region between the tool and the formation. As a result, a small time step and more iterations are needed, which increases the cumulative numerical error. Furthermore, due to the high impedance contrast between the drill collar and the fluid in the borehole (the difference is >30 times), the stability and efficiency of the perfectly matched layer (PML) scheme are critical to simulate complicated wave modes accurately. In this paper, we compared four different PML implementations in a staggered-grid finite difference in time domain (FDTD) ALWD simulation, including field-splitting PML (SPML), multiaxial PML (M-PML), non-splitting PML (NPML), and complex frequency-shifted PML (CFS-PML). The comparison indicated that NPML and CFS-PML can absorb the guided wave reflection from the computational boundaries more efficiently than SPML and M-PML. For large simulation times, SPML, M-PML, and NPML are numerically unstable; however, the stability of M-PML can be improved further to some extent. Based on the analysis, we propose that the CFS-PML method be used in FDTD to eliminate the numerical instability and to improve the efficiency of absorption in the PML layers for LWD modeling. The optimal values of the CFS-PML parameters in the LWD simulation were investigated based on thousands of 3D simulations. For typical LWD cases, the best maximum value of the quadratic damping profile was obtained using one d0. The optimal parameter space for the maximum value of the linear frequency-shifted factor (α0) and the scaling factor (β0) depended on the thickness of the PML layer. For typical formations, if the PML thickness is 10 grid points, the global error can be reduced to <1% using the optimal PML parameters, and the error will decrease as the PML thickness increases.

  4. Numerical ‘health check’ for scientific codes: the CADNA approach

    NASA Astrophysics Data System (ADS)

    Scott, N. S.; Jézéquel, F.; Denis, C.; Chesneaux, J.-M.

    2007-04-01

    Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation which originates from the use of finite precision arithmetic. We propose that there is a need to perform regular numerical 'health checks' on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important in scientific codes that are built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes that are of interest to the Computer Physics Communications readership. In doing so we hope to stimulate a greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
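
    CADNA itself is a library based on Discrete Stochastic Arithmetic and its API is not reproduced here; a crude stand-in for a numerical 'health check', assuming nothing about CADNA's interface, is to re-run a cancellation-prone expression at two precisions and count the digits on which the results agree.

        import numpy as np

        def agreed_digits(x, y):
            """Rough count of decimal digits on which two computed values agree."""
            if x == y:
                return 16
            return max(0, int(-np.log10(abs(x - y) / max(abs(x), abs(y), 1e-300))))

        def cancellation_demo(dtype):
            """(1 + small perturbation) - 1, a classic victim of round-off."""
            a = dtype(1.0) + dtype(1.0e-8)
            return float(a - dtype(1.0))

        if __name__ == "__main__":
            single = cancellation_demo(np.float32)
            double = cancellation_demo(np.float64)
            print("float32:", single, " float64:", double,
                  " agreed digits:", agreed_digits(single, double))
            # A large disagreement between the two runs flags the expression as a
            # candidate for the kind of round-off propagation CADNA diagnoses rigorously.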

  5. Consistency and convergence for numerical radiation conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1990-01-01

    The problem of imposing radiation conditions at artificial boundaries for the numerical simulation of wave propagation is considered. Emphasis is on the behavior and analysis of the error which results from the restriction of the domain. The theory of error estimation is briefly outlined for boundary conditions. Use is made of the asymptotic analysis of propagating wave groups to derive and analyze boundary operators. For dissipative problems this leads to local, accurate conditions, but falls short in the hyperbolic case. A numerical experiment on the solution of the wave equation with cylindrical symmetry is described. A unified presentation of a number of conditions which have been proposed in the literature is given and the time dependence of the error which results from their use is displayed. The results are in qualitative agreement with theoretical considerations. It was found, however, that for this model problem it is particularly difficult to force the error to decay rapidly in time.

  6. Optimum employment of satellite indirect soundings as numerical model input

    NASA Technical Reports Server (NTRS)

    Horn, L. H.; Derber, J. C.; Koehler, T. L.; Schmidt, B. D.

    1981-01-01

    The characteristics of satellite-derived temperature soundings that would significantly affect their use as input for numerical weather prediction models were examined. Independent evaluations of satellite soundings were emphasized to better define error characteristics. Results of a Nimbus-6 sounding study reveal an underestimation of the strength of synoptic scale troughs and ridges, and associated gradients in isobaric height and temperature fields. The most significant errors occurred near the Earth's surface and the tropopause. Soundings from the TIROS-N and NOAA-6 satellites were also evaluated. Results again showed an underestimation of upper level trough amplitudes leading to weaker thermal gradient depictions in satellite-only fields. These errors show a definite correlation to the synoptic flow patterns. In a satellite-only analysis used to initialize a numerical model forecast, it was found that these synoptically correlated errors were retained in the forecast sequence.

  7. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE PAGES

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-07-14

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.

  8. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.

  9. Numerical calculation of thermo-mechanical problems at large strains based on complex step derivative approximation of tangent stiffness matrices

    NASA Astrophysics Data System (ADS)

    Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg

    2015-05-01

    In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems, and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
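
    The finite-element machinery of the paper is not reproduced here, but the complex-step derivative approximation it builds on is easy to illustrate: for a real-analytic f, Im(f(x + ih))/h approximates f'(x) without subtractive cancellation, so the perturbation h can be made arbitrarily small. The test function below is illustrative.

        import numpy as np

        def complex_step_derivative(f, x, h=1e-30):
            """First derivative via the complex-step approximation Im(f(x+ih))/h."""
            return np.imag(f(x + 1j * h)) / h

        def forward_difference(f, x, h):
            """Classical forward difference, subject to round-off for small h."""
            return (f(x + h) - f(x)) / h

        if __name__ == "__main__":
            f = lambda x: np.exp(x) * np.sin(x)
            dfdx = lambda x: np.exp(x) * (np.sin(x) + np.cos(x))
            x0 = 0.7
            exact = dfdx(x0)
            print("complex step error:", abs(complex_step_derivative(f, x0) - exact))
            print("forward diff error:", abs(forward_difference(f, x0, 1e-8) - exact))
            # The complex-step result stays accurate to machine precision even with
            # h = 1e-30, because no subtraction of nearly equal numbers occurs.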

  10. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  11. Analysis of the Los Angeles Basin ground subsidence with InSAR data by independent component analysis approach

    NASA Astrophysics Data System (ADS)

    Xu, B.

    2017-12-01

    Interferometric Synthetic Aperture Radar (InSAR) has the advantage of high spatial resolution, which enables measurement of line-of-sight (LOS) surface displacements with nearly complete spatial continuity, and a satellite's perspective that permits viewing large areas of Earth's surface quickly and efficiently. However, using InSAR to observe long-wavelength and small-magnitude deformation signals is still significantly limited by various unmodeled error sources, i.e. atmospheric delays, orbit-induced errors, and Digital Elevation Model (DEM) errors. Independent component analysis (ICA) is a probabilistic method for separating linear mixed signals generated by different underlying physical processes. The signal sources which form the interferograms are statistically independent both in space and in time; thus, they can be separated by the ICA approach. The seismic behavior in the Los Angeles Basin is active, and the basin has experienced numerous moderate to large earthquakes since the early Pliocene. Hence, understanding the seismotectonic deformation in the Los Angeles Basin is important for analyzing seismic behavior. Compared with the tectonic deformations, nontectonic deformations due to groundwater and oil extraction may be mainly responsible for the surface deformation in the Los Angeles basin. Using the small baseline subset (SBAS) InSAR method, we extracted the surface deformation time series in the Los Angeles basin with a time span of 7 years (September 27, 2003 - September 25, 2010). We then successfully separate the atmospheric noise from the InSAR time series and detect different processes caused by different mechanisms.

  12. Optical aberrations induced by subclinical decentrations of the ablation pattern

    NASA Astrophysics Data System (ADS)

    Mrochen, Michael; Kaemmerer, Maik; Riedel, Peter; Mierdel, Peter; Krinke, Hans-Eberhard; Seiler, Theo

    2000-06-01

    Purpose: The aim of this work was to study the effect of currently used ablation profiles, along with eccentric ablations, on the increase of higher order aberrations observed after PRK. Material and Methods: The optical aberrations of 10 eyes were tested before and after PRK. Refractive surgery was performed using an ArF excimer laser system. In all cases, the ablation zone was 6 mm or larger. The spherical equivalent of the correction ranged from -2.5 D to -6.0 D. The measured wavefront error was compared to numerical simulations done with the reduced eye model and currently used ablation profiles, as well as to experimental results obtained from ablation on PMMA balls. Results: The aberration measurements reveal a considerable change of the spherical- and coma-like wavefront errors. This result was in good correlation with the numerical simulations and the experimental results. Furthermore, it was found that the major contribution to the induced higher order aberrations is a result of the small decentration (less than 1.0 mm) of the ablation zone. Conclusions: Higher order spherical- and coma-like aberrations after PRK are mainly determined by the decentration of the ablation zone during laser refractive surgery. Future laser systems should therefore use efficient eye-tracking systems and aspherical ablation profiles to overcome this problem.

  13. Parametric electrical impedance tomography for measuring bone mineral density in the pelvis using a computational model.

    PubMed

    Kimel-Naor, Shani; Abboud, Shimon; Arad, Marina

    2016-08-01

    Osteoporosis is defined as bone microstructure deterioration resulting in a decrease of bone strength. Measured bone mineral density (BMD) constitutes the main tool for osteoporosis diagnosis and management, and defines the patient's fracture risk. In the present study, a parametric electrical impedance tomography (pEIT) method was examined for monitoring BMD, using a computerized simulation model and preliminary real measurements. A numerical solver was developed to simulate surface potentials measured over a 3D computerized pelvis model. Varying cortical and cancellous BMD were simulated by changing bone conductivity and permittivity. Up to 35% and 16% change was found in the real and imaginary parts of the calculated potential, respectively, while BMD changed from 100% (normal) to 60% (osteoporosis). Negligible BMD relative error was obtained with SNR > 60 dB. Position change errors indicate that for long-term monitoring, measurements should be taken in the same geometrical configuration with great accuracy. The numerical simulations were compared to actual measurements acquired from a healthy male subject using a five-electrode belt bioimpedance device. The results suggest that pEIT may provide an inexpensive, easy-to-use tool for frequent monitoring of BMD in small clinics during pharmacological treatment, as a complementary method to the DEXA test. Copyright © 2016. Published by Elsevier Ltd.

  14. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  15. Comment on 'Shang S. 2012. Calculating actual crop evapotranspiration under soil water stress conditions with appropriate numerical methods and time step. Hydrological Processes 26: 3338-3343. DOI: 10.1002/hyp.8405'

    NASA Technical Reports Server (NTRS)

    Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James

    2014-01-01

    A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ET(sub a)) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ET(sub a) over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate usage of numerical methods under different values of the time step and the maximum crop evapotranspiration (ET(sub c)). This comment reformulates those ET(sub a) equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ET(sub a) on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, a consideration of the numerical error in the time-cumulative value of ET(sub a) is discussed besides the existing consideration of that error over individual time steps as done in the previous study. This cumulative ET(sub a) is more relevant to the final crop yield.

  16. Ranging error analysis of single photon satellite laser altimetry under different terrain conditions

    NASA Astrophysics Data System (ADS)

    Huang, Jiapeng; Li, Guoyuan; Gao, Xiaoming; Wang, Jianmin; Fan, Wenfeng; Zhou, Shihong

    2018-02-01

    The single photon satellite laser altimeter is based on the Geiger mode, which has the characteristics of a small spot, a high repetition rate, etc. In this paper, the ranging error formula for sloped terrain is derived and evaluated numerically. The Monte Carlo method is used to simulate measurements over different terrains. The experimental results show that ranging accuracy is not affected by the spot size over flat terrain, but inclined terrain can influence the ranging error dramatically: when the satellite pointing angle is 0.001° and the terrain slope is about 12°, the ranging error can reach 0.5 m, while the accuracy cannot meet the requirement when the slope is more than 70°. Monte Carlo simulation results show that a single photon laser altimeter satellite with a high repetition rate can improve the ranging accuracy over complex terrain. In order to ensure repeated observation of the same point 25 times, according to the parameters of ICESat-2, we deduce the quantitative relation between the footprint size, the footprint, and the repetition frequency. The related conclusions can provide a reference for the design and demonstration of a domestic single photon laser altimetry satellite.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudiarta, I. Wayan; Angraini, Lily Maysari, E-mail: lilyangraini@unram.ac.id

    We have applied the finite difference time domain (FDTD) method with the supersymmetric quantum mechanics (SUSY-QM) procedure to determine excited energies of one-dimensional quantum systems. The theoretical basis of FDTD and SUSY-QM, a numerical algorithm, and an illustrative example for a particle in a one-dimensional square-well potential were given in this paper. It was shown that the numerical results were in excellent agreement with theoretical results. Numerical errors produced by the SUSY-QM procedure were due to errors in the estimation of superpotentials and supersymmetric partner potentials.

  18. A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zhang, Xubin; Tan, Zhe-Min

    2017-04-01

    The construction of background error covariances is a key component of three-dimensional variational data assimilation. There are background errors at different scales, and interactions among them, in numerical weather prediction. However, the influence of these errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances influenced by multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Information about errors whose scales are larger and smaller than the given ones is then introduced, using different nesting techniques, to estimate the corresponding covariances. The comparison of the three background error covariance statistics influenced by information about errors at different scales reveals that the background error variances increase, particularly at large scales and higher levels, when the information of larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when the information of smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information about larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when the information of larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above are used in a data assimilation and model forecast system, and analysis-forecast cycles for a period of 1 month are conducted. Through the comparison of both analyses and forecasts from this system, it is found that the trends in analysis increments with information of different scale errors introduced are consistent with the trends in the variances and correlations of background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets, and analysis increments for both temperature and humidity are greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are significantly enhanced at large scales at lower levels, moistening the southern part of the analyses. This humidification helps to correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales, due to the amplification (diminution) of intensity and area in the precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.

  19. Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media

    USGS Publications Warehouse

    Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.

    2000-01-01

    To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values, this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.

  20. A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.

    2016-02-01

    The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.

  1. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix- and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured solution benchmarks, ensuring at all stages physical fidelity while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.

  2. The Evolution and Discharge of Electric Fields within a Thunderstorm

    NASA Astrophysics Data System (ADS)

    Hager, William W.; Nisbet, John S.; Kasha, John R.

    1989-05-01

    A 3-dimensional electrical model for a thunderstorm is developed and finite difference approximations to the model are analyzed. If the spatial derivatives are approximated by a method akin to the box scheme and if the temporal derivative is approximated by either a backward difference or the Crank-Nicholson scheme, we show that the resulting discretization is unconditionally stable. The forward difference approximation to the time derivative is stable when the time step is sufficiently small relative to the ratio between the permittivity and the conductivity. Max-norm error estimates for the discrete approximations are established. To handle the propagation of lightning, special numerical techniques are devised based on the Inverse Matrix Modification Formula and Cholesky updates. Numerical comparisons between the model and theoretical results of Wilson and Holzer-Saxon are presented. We also apply our model to a storm observed at the Kennedy Space Center on July 11, 1978.
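    The stability statement about the forward difference can be seen in miniature on the simplest relaxation model implied by charge conservation, dE/dt = -(sigma/eps) E: backward Euler is unconditionally stable, while forward Euler requires a time step below roughly 2*eps/sigma. The sketch below, with assumed parameter values, is only a toy illustration of that trade-off, not the paper's 3-D discretization.

        import numpy as np

        # Toy relaxation model dE/dt = -(sigma/eps) * E with assumed constants.
        eps = 8.854e-12      # permittivity, F/m
        sigma = 1.0e-13      # conductivity, S/m
        tau = eps / sigma    # relaxation time scale, s

        def forward_euler(E0, dt, nsteps):
            E = E0
            for _ in range(nsteps):
                E = E - dt * (sigma / eps) * E   # grows without bound if dt > 2*eps/sigma
            return E

        def backward_euler(E0, dt, nsteps):
            E = E0
            for _ in range(nsteps):
                E = E / (1.0 + dt * (sigma / eps))  # unconditionally stable
            return E

        for dt in (0.1 * tau, 3.0 * tau):
            print(f"dt/tau = {dt / tau:.1f}: forward = {forward_euler(1.0, dt, 200):.3e}, "
                  f"backward = {backward_euler(1.0, dt, 200):.3e}")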

  3. A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, G.; Siegel, M.; Tanveer, S.

    1995-09-01

    We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. The situation is disastrous for numerical computation, as small roundoff errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out. 47 refs., 10 figs., 1 tab.

  4. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
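    A minimal sketch of the kind of error summary described here: given matched GPS and ground-truth coordinates, compute the per-fix positional error, the per-step length error, and the signal-to-noise ratio of step length to error. The array contents below are made-up illustrative numbers, not data from the study.

        import numpy as np

        # Hypothetical matched coordinates (metres) for a short trackway;
        # the values are illustrative only, not the study's data.
        truth = np.array([[0.0, 0.0], [0.7, 0.1], [1.6, 0.4], [4.5, 1.0]])
        gps   = np.array([[0.1, 0.2], [0.8, 0.0], [1.4, 0.6], [4.6, 0.8]])

        # Per-fix positional error and per-step lengths.
        fix_error = np.linalg.norm(gps - truth, axis=1)
        true_steps = np.linalg.norm(np.diff(truth, axis=0), axis=1)
        gps_steps  = np.linalg.norm(np.diff(gps, axis=0), axis=1)
        step_error = np.abs(gps_steps - true_steps)

        print("median fix error (m):", np.median(fix_error))
        print("median step-length error (m):", np.median(step_error))
        print("step signal-to-noise:", true_steps / np.maximum(step_error, 1e-9))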

  5. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    PubMed

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/grid approaches.

  6. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two-stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two-stage computations, then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates variations in the two-stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
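    Since the Parareal Algorithm is one of the named examples, here is a minimal sketch of its coarse/fine two-stage iteration for the scalar linear ODE y' = lam*y, using forward Euler with different step sizes as both propagators. It illustrates the two-stage structure only and omits the adjoint-based a posteriori error estimation developed in the paper.

        import numpy as np

        # Minimal parareal iteration for y' = lam * y on [0, T] (illustrative only).
        lam, T, y0 = -1.0, 2.0, 1.0
        N = 10                      # number of coarse time slices
        dT = T / N

        def propagate(y, t0, t1, nsub):
            """Forward-Euler propagator over [t0, t1] with nsub substeps."""
            dt = (t1 - t0) / nsub
            for _ in range(nsub):
                y = y + dt * lam * y
            return y

        coarse = lambda y, t0, t1: propagate(y, t0, t1, 1)    # cheap propagator G
        fine   = lambda y, t0, t1: propagate(y, t0, t1, 100)  # expensive propagator F

        # Initial coarse sweep.
        U = np.empty(N + 1); U[0] = y0
        for n in range(N):
            U[n + 1] = coarse(U[n], n * dT, (n + 1) * dT)

        # Parareal corrections: U^{k+1}_{n+1} = G(U^{k+1}_n) + F(U^k_n) - G(U^k_n).
        for k in range(3):
            F_old = np.array([fine(U[n], n * dT, (n + 1) * dT) for n in range(N)])
            G_old = np.array([coarse(U[n], n * dT, (n + 1) * dT) for n in range(N)])
            for n in range(N):
                U[n + 1] = coarse(U[n], n * dT, (n + 1) * dT) + F_old[n] - G_old[n]
            err = abs(U[-1] - y0 * np.exp(lam * T))
            print(f"iteration {k + 1}: error at T = {err:.2e}")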

  7. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

  8. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  9. Improving the Numerical Stability of Fast Matrix Multiplication

    DOE PAGES

    Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...

    2016-10-04

    Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
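    To make the accuracy discussion concrete, the sketch below compares one level of Strassen's algorithm with the classical product on random single-precision matrices and reports the relative error of each against a double-precision reference. It is a generic illustration of the error behaviour discussed, not the authors' benchmarking code or their scaling techniques.

        import numpy as np

        def strassen_one_level(A, B):
            """One level of Strassen recursion; classical multiply for the blocks."""
            n = A.shape[0] // 2
            A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
            B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
            M1 = (A11 + A22) @ (B11 + B22)
            M2 = (A21 + A22) @ B11
            M3 = A11 @ (B12 - B22)
            M4 = A22 @ (B21 - B11)
            M5 = (A11 + A12) @ B22
            M6 = (A21 - A11) @ (B11 + B12)
            M7 = (A12 - A22) @ (B21 + B22)
            C = np.empty_like(A)
            C[:n, :n] = M1 + M4 - M5 + M7
            C[:n, n:] = M3 + M5
            C[n:, :n] = M2 + M4
            C[n:, n:] = M1 - M2 + M3 + M6
            return C

        rng = np.random.default_rng(0)
        n = 256
        A = rng.standard_normal((n, n)).astype(np.float32)
        B = rng.standard_normal((n, n)).astype(np.float32)

        reference = A.astype(np.float64) @ B.astype(np.float64)
        for name, C in [("classical", A @ B), ("strassen", strassen_one_level(A, B))]:
            rel_err = np.linalg.norm(C - reference) / np.linalg.norm(reference)
            print(f"{name:9s} relative error: {rel_err:.2e}")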

  10. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
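    A minimal sketch of the row/column-checksum idea underlying such schemes: encode A with an extra row of column sums and B with an extra column of row sums, multiply, and check that the checksums of the product still balance to within a numerical tolerance. The plain-sum encoding used here is the classical checksum code, shown only to illustrate why roundoff forces a tolerance into the fault check; it is not the specific set of generalized linear codes the paper identifies.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 6
        A = rng.standard_normal((n, n))
        B = rng.standard_normal((n, n))

        # Column-checksum encoding of A and row-checksum encoding of B.
        A_c = np.vstack([A, A.sum(axis=0)])
        B_r = np.hstack([B, B.sum(axis=1, keepdims=True)])

        C_full = A_c @ B_r          # (n+1) x (n+1) full-checksum product
        C = C_full[:n, :n]

        # Checksum residuals: exactly zero in exact arithmetic, small but nonzero
        # in floating point, hence the need for a tolerance when flagging faults.
        row_residual = np.abs(C_full[n, :n] - C.sum(axis=0)).max()
        col_residual = np.abs(C_full[:n, n] - C.sum(axis=1)).max()
        tol = 1e-10 * np.abs(C).max()
        print("max checksum residual:", max(row_residual, col_residual))
        print("fault detected:", max(row_residual, col_residual) > tol)

        # Inject a fault in one entry and re-check.
        C_full[2, 3] += 1.0
        row_residual = np.abs(C_full[n, :n] - C_full[:n, :n].sum(axis=0)).max()
        print("residual after injected fault:", row_residual)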

  11. Creating Weather System Ensembles Through Synergistic Process Modeling and Machine Learning

    NASA Astrophysics Data System (ADS)

    Chen, B.; Posselt, D. J.; Nguyen, H.; Wu, L.; Su, H.; Braverman, A. J.

    2017-12-01

    Earth's weather and climate are sensitive to a variety of control factors (e.g., initial state, forcing functions, etc.). Characterizing the response of the atmosphere to a change in initial conditions or model forcing is critical for weather forecasting (ensemble prediction) and climate change assessment. Input-response relationships can be quantified by generating an ensemble of multiple (100s to 1000s) realistic realizations of weather and climate states. Atmospheric numerical models generate simulated data through discretized numerical approximation of the partial differential equations (PDEs) governing the underlying physics. However, the computational expense of running high resolution atmospheric state models makes generation of more than a few simulations infeasible. Here, we discuss an experiment wherein we approximate the numerical PDE solver within the Weather Research and Forecasting (WRF) Model using neural networks trained on a subset of model run outputs. Once trained, these neural nets can produce a large number of realizations of weather states from a small number of deterministic simulations at speeds that are orders of magnitude faster than the underlying PDE solver. Our neural network architecture is inspired by the governing partial differential equations. These equations are location-invariant and involve first and second derivatives. As such, we use a 3x3 lon-lat grid of atmospheric profiles as the predictor in the neural net to provide the network the information necessary to compute the first and second moments. Results indicate that the neural network algorithm can approximate the PDE outputs with a high degree of accuracy (less than 1% error), and that this error increases as a function of the prediction time lag.

  12. Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2001-01-01

    A method of minimizing numerical errors and improving nonlinear stability and accuracy associated with low Mach number computational aeroacoustics (CAA) is proposed. The method consists of two levels. From the governing equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity of acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form with the new unknowns as the small changes of the conservative variables with respect to their large stagnation values. From the numerical scheme level, a stable sixth-order central interior scheme with third-order boundary schemes that satisfies the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (summation-by-parts property) is employed.

  13. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    PubMed

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li–F, and Na–Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.

  14. On the remote sensing of cloud properties from satellite infrared sounder data

    NASA Technical Reports Server (NTRS)

    Yeh, H. Y. M.

    1984-01-01

    A method for remote sensing of cloud parameters by using infrared sounder data has been developed on the basis of the parameterized infrared transfer equation applicable to cloudy atmospheres. The method is utilized for the retrieval of the cloud height, amount, and emissivity in the 11 μm region. Numerical analyses and retrieval experiments have been carried out by utilizing the synthetic sounder data for the theoretical study. The sensitivity of the numerical procedures to the measurement and instrument errors is also examined. The retrieved results are physically discussed and numerically compared with the model atmospheres. Comparisons reveal that the recovered cloud parameters agree reasonably well with the pre-assumed values. However, for cases when relatively thin clouds and/or small cloud fractional cover within a field of view are present, the recovered cloud parameters show considerable fluctuations. Experiments on the proposed algorithm are carried out utilizing High Resolution Infrared Sounder (HIRS/2) data of NOAA 6 and TIROS-N. Results of experiments show reasonably good comparisons with the surface reports and GOES satellite images.

  15. Vacuum Stress in Schwarzschild Spacetime

    NASA Astrophysics Data System (ADS)

    Howard, Kenneth Webster

    Vacuum stress in the conformally invariant scalar field in the region exterior to the horizon of a Schwarzschild black hole is examined. In the Hartle-Hawking vacuum state, ⟨φ²⟩ and ⟨T_μν⟩ are calculated. Covariant point-splitting renormalization is used, as is a mode sum expression for the Hartle-Hawking propagator. It is found that ⟨φ²⟩ separates naturally into two parts, a part that has a simple analytic form coinciding with the approximate expression of Whiting and Page, and a small remainder. The results of our numerical evaluation of the remainder agree with, but are more accurate than, those previously given by Fawcett. We find that ⟨T_μν⟩ also separates into two terms. The first coincides with the approximate expression obtained by Page with a Gaussian approximation to the proper time Green function. The second term, composed of sums over mode functions, is evaluated numerically. It is found that the total expression is in good qualitative agreement with Page's approximation. Our results disagree with previous numerical results given by Fawcett. The error in Fawcett's calculation is explained.

  16. Theoretical and numerical evaluation of polarimeter using counter-circularly-polarized-probing-laser under the coupling between Faraday and Cotton-Mouton effect.

    PubMed

    Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi

    2016-04-01

    This study evaluated the effect of the coupling between the Faraday and Cotton-Mouton effects on the measurement signal of the Dodel-Kunz method, which uses a counter-circularly-polarized probing laser for measuring the Faraday effect. When the coupling is small (the Faraday effect is dominant and the characteristic eigenmodes are approximately circularly polarized), the measurement signal can be expressed algebraically, and it is shown that the finite effect of the coupling is still significant. When the Faraday effect is not dominant, a numerical calculation is necessary. The numerical calculation under an ITER-like condition (Bt = 5.3 T, Ip = 15 MA, a = 2 m, ne = 10^20 m^-3 and λ = 119 μm) showed that the difference between the pure Faraday rotation and the measurement signal of the Dodel-Kunz method was of the order of one degree, which exceeds the allowable error of the ITER poloidal polarimeter. In conclusion, similar to other polarimeter techniques, the Dodel-Kunz method is not free from the coupling between the Faraday and Cotton-Mouton effects.

  17. The Chiral Separation Effect in quenched finite-density QCD

    NASA Astrophysics Data System (ADS)

    Puhr, Matthias; Buividovich, Pavel

    2018-03-01

    We present results of a study of the Chiral Separation Effect (CSE) in quenched finite-density QCD. Using a recently developed numerical method we calculate the conserved axial current for exactly chiral overlap fermions at finite density for the first time. We compute the anomalous transport coefficient for the CSE in the confining and deconfining phase and investigate possible deviations from the universal value. In both phases we find that non-perturbative corrections to the CSE are absent, and we reproduce the universal value for the transport coefficient within small statistical errors. Our results suggest that the CSE can be used to determine the renormalisation factor of the axial current.

  18. Scattering of point particles by black holes: Gravitational radiation

    NASA Astrophysics Data System (ADS)

    Hopper, Seth; Cardoso, Vitor

    2018-02-01

    Gravitational waves can teach us not only about sources and the environment where they were generated, but also about the gravitational interaction itself. Here we study the features of gravitational radiation produced during the scattering of a pointlike mass by a black hole. Our results are exact (to numerical error) at any order in a velocity expansion, and are compared against various approximations. At large impact parameter and relatively small velocities our results agree to within percent level with various post-Newtonian and weak-field results. Further, we find good agreement with scaling predictions in the weak-field/high-energy regime. Lastly, we achieve striking agreement with zero-frequency estimates.

  19. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  20. Extrapolation of rotating sound fields.

    PubMed

    Carley, Michael

    2018-03-01

    A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.

  1. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    PubMed

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
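    To illustrate the kind of reduction described (constrained problem to unconstrained least squares), the sketch below solves min ||Ax - b|| subject to Cx = d with the standard null-space method built from a QR factorization of C^T. It shows the reduction step only and does not reproduce the authors' R-updating algorithm or its backward-error analysis; the matrices are random placeholders.

        import numpy as np

        def lse_nullspace(A, b, C, d):
            """Solve min ||A x - b||_2 subject to C x = d via the null-space method."""
            p, n = C.shape
            # QR of C^T: C^T = Q R, with Q = [Q1 Q2], Q2 spanning the null space of C.
            Q, R = np.linalg.qr(C.T, mode="complete")
            Q1, Q2 = Q[:, :p], Q[:, p:]
            # Particular solution satisfying C x = d.
            y1 = np.linalg.solve(R[:p, :].T, d)
            x_part = Q1 @ y1
            # Unconstrained least squares for the null-space component.
            y2, *_ = np.linalg.lstsq(A @ Q2, b - A @ x_part, rcond=None)
            return x_part + Q2 @ y2

        rng = np.random.default_rng(2)
        A = rng.standard_normal((8, 5))
        b = rng.standard_normal(8)
        C = rng.standard_normal((2, 5))
        d = rng.standard_normal(2)

        x = lse_nullspace(A, b, C, d)
        print("constraint residual ||C x - d||:", np.linalg.norm(C @ x - d))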

  2. Investigation of Error Patterns in Geographical Databases

    NASA Technical Reports Server (NTRS)

    Dryer, David; Jacobs, Derya A.; Karayaz, Gamze; Gronbech, Chris; Jones, Denise R. (Technical Monitor)

    2002-01-01

    The objective of the research conducted in this project is to develop a methodology to investigate the accuracy of Airport Safety Modeling Data (ASMD) using statistical, visualization, and Artificial Neural Network (ANN) techniques. Such a methodology can contribute to answering the following research questions: Over a representative sampling of ASMD databases, can statistical error analysis techniques be accurately learned and replicated by ANN modeling techniques? This representative ASMD sample should include numerous airports and a variety of terrain characterizations. Is it possible to identify and automate the recognition of patterns of error related to geographical features? Do such patterns of error relate to specific geographical features, such as elevation or terrain slope? Is it possible to combine the errors in small regions into an error prediction for a larger region? What are the data density reduction implications of this work? ASMD may be used as the source of terrain data for a synthetic visual system to be used in the cockpit of aircraft when visual reference to ground features is not possible during conditions of marginal weather or reduced visibility. In this research, United States Geologic Survey (USGS) digital elevation model (DEM) data has been selected as the benchmark. Artificial Neural Networks (ANNs) have been used and tested as alternate methods in place of the statistical methods in similar problems. They often perform better in pattern recognition, prediction, classification, and categorization problems. Many studies show that when the data is complex and noisy, the accuracy of ANN models is generally higher than those of comparable traditional methods.

  3. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  4. Errors induced by the neglect of polarization in radiance calculations for Rayleigh-scattering atmospheres

    NASA Technical Reports Server (NTRS)

    Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.

    1994-01-01

    Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no physical background, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximum at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend avoiding geometries involving phase angles equal or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (except first-order) light scattering paths involving right scattering angles and right-angle rotations of the scattering plane.

  5. The impact of 14-nm photomask uncertainties on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-04-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and changes in the other variables are speculated, highlighting the need for improved metrology and awareness.

  6. Station Keeping of Small Outboard-Powered Boats

    NASA Technical Reports Server (NTRS)

    Fisher, A. D.; VanZwieten, J. H., Jr.; VanZwieten, T. S.

    2010-01-01

    Three station keeping controllers have been developed which work to minimize displacement of a small outboard-powered vessel from a desired location. Each of these three controllers has a common initial layer that uses fixed-gain feedback control to calculate the desired heading of the vessel. A second control layer uses a common fixed-gain feedback controller to calculate the net forward thrust, one of two algorithms for controlling engine angle (Fixed-Gain Proportional-integral-derivative (PID) or PID with Adaptively Augmented Gains), and one of two algorithms for differential throttle control (Fixed-Gain PID and PID with Adaptive Differential Throttle gains), which work together to eliminate heading error. The three selected controllers are evaluated using a numerical simulation of a 33-foot center console vessel with twin outboards that is subject to wave, wind, and current disturbances. Each controller is tested for its ability to maintain position in the presence of three sets of environmental disturbances. These algorithms were tested with current velocity of 1.5 m/s, significant wave height of 0.5 m, and wind speeds of 2, 5, and 10 m/s. These values were chosen to model conditions a small vessel may experience in the Gulf Stream off of Fort Lauderdale. The Fixed-gain PID controller progressively got worse as wind speeds increased, while the controllers using adaptive methodologies showed consistent performance over all weather conditions and decreased heading error by as much as 20%. Thus, enhanced robustness to environmental changes has been gained by using an adaptive algorithm.

  7. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.

  8. A quasi-spectral method for Cauchy problem of 2/D Laplace equation on an annulus

    NASA Astrophysics Data System (ADS)

    Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei

    2005-01-01

    Real numbers are usually represented in the computer as hexadecimal floating-point numbers with a finite number of digits. Accordingly, numerical analysis often suffers from rounding errors. These rounding errors particularly deteriorate the precision of numerical solutions in inverse and ill-posed problems. We attempt to use multi-precision arithmetic to mitigate rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we try to show the effectiveness of multi-precision arithmetic by taking two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well for resolving those numerical solutions when it is combined with the high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
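    The benefit of multi-precision arithmetic for ill-conditioned problems can be seen in miniature by solving a badly conditioned linear system in double precision and again in 50-digit arithmetic with the mpmath library. The example below uses a small Hilbert matrix as a generic ill-conditioned stand-in, not the paper's Cauchy or inverse-scattering problems.

        import numpy as np
        from mpmath import mp, matrix, lu_solve

        n = 12
        mp.dps = 50  # 50 significant digits

        # Hilbert matrix: a classic ill-conditioned test case (not the paper's problems).
        H64 = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
        x_true = np.ones(n)
        b64 = H64 @ x_true

        # Double-precision solve.
        x_double = np.linalg.solve(H64, b64)

        # 50-digit solve of the same system with mpmath.
        H_mp = matrix([[mp.mpf(1) / (i + j + 1) for j in range(n)] for i in range(n)])
        b_mp = H_mp * matrix([mp.mpf(1)] * n)
        x_mp = lu_solve(H_mp, b_mp)

        print("double precision max error:", np.abs(x_double - x_true).max())
        print("50-digit max error:        ", max(abs(x_mp[i] - 1) for i in range(n)))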

  9. A dispersion minimizing scheme for the 3-D Helmholtz equation based on ray theory

    NASA Astrophysics Data System (ADS)

    Stolk, Christiaan C.

    2016-06-01

    We develop a new dispersion minimizing compact finite difference scheme for the Helmholtz equation in 2 and 3 dimensions. The scheme is based on a newly developed ray theory for difference equations. A discrete Helmholtz operator and a discrete operator to be applied to the source and the wavefields are constructed. Their coefficients are piecewise polynomial functions of hk, chosen such that phase and amplitude errors are minimal. The phase errors of the scheme are very small, approximately as small as those of the 2-D quasi-stabilized FEM method and substantially smaller than those of alternatives in 3-D, assuming the same number of gridpoints per wavelength is used. In numerical experiments, accurate solutions are obtained in constant and smoothly varying media using meshes with only five to six points per wavelength and wave propagation over hundreds of wavelengths. When used as a coarse level discretization in a multigrid method the scheme can even be used with down to three points per wavelength. Tests on 3-D examples with up to 10^8 degrees of freedom show that with a recently developed hybrid solver, the use of coarser meshes can lead to corresponding savings in computation time, resulting in good simulation times compared to the literature.
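    The role of points per wavelength in controlling phase error can be illustrated with the standard second-order stencil in 1-D, whose discrete dispersion relation has a closed form: cos(k_num h) = 1 - (kh)^2/2. The sketch below evaluates the resulting relative phase error; it shows why plain schemes need many points per wavelength and is not the compact dispersion-minimizing scheme constructed in the paper.

        import numpy as np

        def phase_error(points_per_wavelength):
            """Relative phase error of the 1-D second-order Helmholtz stencil."""
            kh = 2.0 * np.pi / points_per_wavelength          # k * h
            k_num_h = np.arccos(1.0 - 0.5 * kh**2)            # numerical wavenumber * h
            return k_num_h / kh - 1.0

        for g in (4, 6, 10, 20):
            print(f"{g:2d} points/wavelength: relative phase error = {phase_error(g):+.3e}")

        # After propagating N wavelengths the accumulated phase error is roughly
        # 2*pi*N times this relative error, which is why low-dispersion schemes
        # matter for propagation over hundreds of wavelengths.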

  10. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.

  11. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

    PubMed

    McLaughlin, Douglas B

    2012-01-01

    The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes-that is assumed to be causal in nature-forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a 3rd error rate, E(ni) , referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors. Copyright © 2011 SETAC.
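    A sketch of how such attainment decision error rates can be estimated by simulation under an assumed log-linear TP-chlorophyll relationship: draw lake-specific geometric means, classify them against a candidate TP criterion and against the 20 µg/L chlorophyll a threshold, and tabulate the disagreement rates. The regression coefficients, noise level, and candidate criterion below are placeholders, not the fitted values used in the article.

        import numpy as np

        rng = np.random.default_rng(3)

        # Assumed (placeholder) log-linear model: log10(chl) = a + b*log10(TP) + noise.
        a, b, sigma = -0.4, 0.9, 0.25
        chl_threshold = 20.0        # ug/L chlorophyll a, designated-use threshold
        tp_criterion = 30.0         # ug/L total P, candidate numeric criterion

        # Simulated lake geometric means of TP (ug/L).
        tp = 10 ** rng.uniform(0.5, 2.2, size=100_000)
        chl = 10 ** (a + b * np.log10(tp) + rng.normal(0.0, sigma, size=tp.size))

        impaired = chl > chl_threshold          # "true" designated-use status
        exceeds = tp > tp_criterion             # status indicated by the TP criterion

        type1 = np.mean(exceeds & ~impaired)    # criterion flags a lake that attains its use
        type2 = np.mean(~exceeds & impaired)    # criterion misses a lake that is impaired
        print(f"Type I-like rate:  {type1:.3f}")
        print(f"Type II-like rate: {type2:.3f}")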

  12. Simulation of wave propagation in three-dimensional random media

    NASA Technical Reports Server (NTRS)

    Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1993-01-01

    Quantitative error analysis for simulation of wave propagation in three-dimensional random media assuming narrow angular scattering is presented for the plane wave and spherical wave geometry. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.

  13. Prediction of Unsteady Flows in Turbomachinery Using the Linearized Euler Equations on Deforming Grids

    NASA Technical Reports Server (NTRS)

    Clark, William S.; Hall, Kenneth C.

    1994-01-01

    A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.

  14. Control torque generation of a CMG-based small satellite with MTGAC system: a trade-off study

    NASA Astrophysics Data System (ADS)

    Salleh, M. B.; Suhadis, N. M.; Rajendran, P.; Mazlan, N. M.

    2018-05-01

    In this paper, a gimbal angle compensation method using a magnetic control law is adopted for a small satellite operating in low earth orbit under the influence of disturbance torques. Three lightweight magnetic torquers are used to generate the magnetic compensation torque needed to bring diverged gimbals back to a preferable angle. The magnetic control torque required to compensate the gimbal angle is based on the gimbal error rate, which depends on the gimbal angle convergence time. A simulation study has been performed without and with the MTGAC system to investigate the amount of generated control torque as a trade-off between power consumption, attitude control performance and CMG dynamic performance. Numerical simulations show that the satellite with the MTGAC system generates more control torque, which leads to an additional power requirement but in return yields favorable attitude control performance and gimbal angle management.

  15. Faster and more accurate transport procedures for HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.

    2010-12-01

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle ( A ⩽ 4) and heavy ion ( A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm 2 in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm 2 of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  16. Faster and more accurate transport procedures for HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, T.C., E-mail: Tony.C.Slaba@nasa.go; Blattnig, S.R., E-mail: Steve.R.Blattnig@nasa.go; Badavi, F.F., E-mail: Francis.F.Badavi@nasa.go

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ≤ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm^2 in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm^2 of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  17. A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface

    NASA Astrophysics Data System (ADS)

    Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo

    2016-09-01

    The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from the staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied to the curved free surface to remove the staircasing error; in the meantime, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve the solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid, and has much higher accuracy than the conventional FDTD method.

  18. Quantitative neuroanatomy for connectomics in Drosophila

    PubMed Central

    Schneider-Mizell, Casey M; Gerhard, Stephan; Longair, Mark; Kazimiers, Tom; Li, Feng; Zwart, Maarten F; Champion, Andrew; Midgley, Frank M; Fetter, Richard D; Saalfeld, Stephan; Cardona, Albert

    2016-01-01

    Neuronal circuit mapping using electron microscopy demands laborious proofreading or reconciliation of multiple independent reconstructions. Here, we describe new methods to apply quantitative arbor and network context to iteratively proofread and reconstruct circuits and create anatomically enriched wiring diagrams. We measured the morphological underpinnings of connectivity in new and existing reconstructions of Drosophila sensorimotor (larva) and visual (adult) systems. Synaptic inputs were preferentially located on numerous small, microtubule-free 'twigs' which branch off a single microtubule-containing 'backbone'. Omission of individual twigs accounted for 96% of errors. However, the synapses of highly connected neurons were distributed across multiple twigs. Thus, the robustness of a strong connection to detailed twig anatomy was associated with robustness to reconstruction error. By comparing iterative reconstruction to the consensus of multiple reconstructions, we show that our method overcomes the need for redundant effort through the discovery and application of relationships between cellular neuroanatomy and synaptic connectivity. DOI: http://dx.doi.org/10.7554/eLife.12059.001 PMID:26990779

  19. Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms

    PubMed Central

    Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun

    2011-01-01

    This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures such as good static-dynamic performance specifications and the smooth process of control. A model of nonlinear thermodynamic laws between numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance through step responses such as small overshoot, fast settling time, and less rise time and steady state error. Besides, it can be applied to tuning the system with different properties, such as strong interactions among variables, nonlinearities and conflicting performance criteria. The results implicate that it is a quite effective and promising tuning method using multi-objective optimization algorithms in the complex greenhouse production. PMID:22163927
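    The ITSE measure used for tuning can be written out directly for a discretized closed loop. The sketch below simulates a generic first-order plant (a stand-in, not the paper's nonlinear greenhouse model) under PID control and accumulates the integrated time-square error that an evolutionary algorithm would then minimize over the gains.

        import numpy as np

        def itse_for_gains(kp, ki, kd, dt=0.01, t_end=20.0):
            """Integrated time-square error for a unit step on a first-order plant.

            Plant: tau * dy/dt = -y + u (generic stand-in, not the greenhouse model).
            """
            tau = 2.0
            y, integ, prev_err = 0.0, 0.0, 1.0
            itse = 0.0
            for k in range(int(t_end / dt)):
                t = k * dt
                err = 1.0 - y                      # unit step setpoint
                integ += err * dt
                deriv = (err - prev_err) / dt
                u = kp * err + ki * integ + kd * deriv
                prev_err = err
                y += dt * (-y + u) / tau           # explicit Euler plant update
                itse += t * err**2 * dt
            return itse

        # An evolutionary algorithm would search this landscape over (kp, ki, kd);
        # here we simply compare two candidate gain sets.
        for gains in [(2.0, 0.5, 0.1), (6.0, 2.0, 0.3)]:
            print("gains", gains, "-> ITSE =", round(itse_for_gains(*gains), 4))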

  20. Estimation of the auto frequency response function at unexcited points using dummy masses

    NASA Astrophysics Data System (ADS)

    Hosoya, Naoki; Yaginuma, Shinji; Onodera, Hiroshi; Yoshimura, Takuya

    2015-02-01

    If structures with complex shapes have space limitations, vibration tests using an exciter or impact hammer for the excitation are difficult. Although measuring the auto frequency response function at an unexcited point may not be practical via a vibration test, it can be obtained by assuming that the inertial force acting on a dummy mass is an external force on the target structure when a different point is excited. We propose a method to estimate the auto frequency response functions at unexcited points by attaching a small mass (dummy mass), which is comparable to the accelerometer mass. The validity of the proposed method is demonstrated by comparing the auto frequency response functions estimated at unexcited points in a beam structure to those obtained from numerical simulations. We also consider random measurement errors by finite element analysis and vibration tests, but not bias errors. Additionally, the applicability of the proposed method is demonstrated by applying it to estimate the auto frequency response function of the lower arm of a car suspension.

  1. The impact of satellite temperature soundings on the forecasts of a small national meteorological service

    NASA Technical Reports Server (NTRS)

    Wolfson, N.; Thomasell, A.; Alperson, Z.; Brodrick, H.; Chang, J. T.; Gruber, A.; Ohring, G.

    1984-01-01

    The impact of introducing satellite temperature sounding data on a numerical weather prediction model of a national weather service is evaluated. A dry, five-level, primitive-equation model, which covers most of the Northern Hemisphere, is used for these experiments. Series of parallel forecast runs out to 48 hours are made with three different sets of initial conditions: (1) NOSAT runs, in which only conventional surface and upper air observations are used; (2) SAT runs, in which satellite soundings are added to the conventional data over oceanic regions and North Africa; and (3) ALLSAT runs, in which the conventional upper air observations are replaced by satellite soundings over the entire model domain. The impact on the forecasts is evaluated by three verification methods: the RMS errors in sea level pressure forecasts, systematic errors in sea level pressure forecasts, and errors in subjective forecasts of significant weather elements for a selected portion of the model domain. For the relatively short range of the present forecasts, the major beneficial impacts on the sea level pressure forecasts are found precisely in those areas where the satellite soundings are inserted and where conventional upper air observations are sparse. The RMS and systematic errors are reduced in these regions. The subjective forecasts of significant weather elements are improved with the use of the satellite data. It is found that the ALLSAT forecasts are of a quality comparable to the SAT forecasts.

  2. On nonstationarity-related errors in modal combination rules of the response spectrum method

    NASA Astrophysics Data System (ADS)

    Pathak, Shashank; Gupta, Vinay K.

    2017-10-01

    Characterization of seismic hazard via (elastic) design spectra and the estimation of linear peak response of a given structure from this characterization continue to form the basis of earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is a preferred option for practicing engineers, modal combination rules play a central role in the peak response estimation. Most of the available modal combination rules are, however, based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for estimating the extent to which nonstationarity affects the modal and total system responses when the ground acceleration process is assumed to be stationary. It is shown through numerical examples in the context of the complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant when the strong-motion duration of the excitation is small compared to the period of the system and/or the response is distributed comparably in several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
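
    For reference, the sketch below evaluates the standard CQC combination of modal peak responses using the usual equal-damping correlation coefficient; the modal peaks and frequencies are illustrative numbers, and the nonstationarity factors discussed in the abstract are not included.

      import numpy as np

      def cqc_peak(modal_peaks, omegas, zeta=0.05):
          """Complete-quadratic-combination estimate of the total peak response
          from modal peak responses, natural frequencies and equal damping."""
          R = np.asarray(modal_peaks, float)
          w = np.asarray(omegas, float)
          r = w[None, :] / w[:, None]          # frequency ratios omega_j / omega_i
          rho = (8.0 * zeta**2 * (1.0 + r) * r**1.5 /
                 ((1.0 - r**2)**2 + 4.0 * zeta**2 * r * (1.0 + r)**2))
          return np.sqrt(R @ rho @ R)

      # example: three modes, two of them closely spaced in frequency (rad/s)
      print(cqc_peak([1.0, 0.6, 0.2], [10.0, 11.0, 25.0]))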

  3. WISC-R Examiner Errors: Cause for Concern.

    ERIC Educational Resources Information Center

    Slate, John R.; Chick, David

    1989-01-01

    Clinical psychology graduate students (N=14) administered Wechsler Intelligence Scale for Children-Revised. Found numerous scoring and mechanical errors that influenced full-scale intelligence quotient scores on two-thirds of protocols. Particularly prone to error were Verbal subtests of Vocabulary, Comprehension, and Similarities. Noted specific…

  4. Wavefront-aberration measurement and systematic-error analysis of a high numerical-aperture objective

    NASA Astrophysics Data System (ADS)

    Liu, Zhixiang; Xing, Tingwen; Jiang, Yadong; Lv, Baobin

    2018-02-01

    A two-dimensional (2-D) shearing interferometer based on an amplitude chessboard grating was designed to measure the wavefront aberration of a high numerical-aperture (NA) objective. Chessboard gratings offer better diffraction efficiencies and fewer disturbing diffraction orders than traditional cross gratings. The wavefront aberration of the tested objective was retrieved from the shearing interferogram using the Fourier transform and differential Zernike polynomial-fitting methods. Grating manufacturing errors, including the duty-cycle and pattern-deviation errors, were analyzed with the Fourier transform method. Then, according to the relation between the spherical pupil and planar detector coordinates, the influence of the distortion of the pupil coordinates was simulated. Finally, the systematic error attributable to grating alignment errors was deduced through the geometrical ray-tracing method. Experimental results indicate that the measuring repeatability (3σ) of the wavefront aberration of an objective with NA 0.4 was 3.4 mλ. The systematic-error results were consistent with previous analyses. Thus, the correct wavefront aberration can be obtained after calibration.

  5. Spectral characteristics of background error covariance and multiscale data assimilation

    DOE PAGES

    Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...

    2016-05-17

    The spatial resolutions of numerical atmospheric and oceanic circulation models have steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and is larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.

  6. Advanced Computational Aeroacoustics Methods for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane (Technical Monitor); Tam, Christopher

    2003-01-01

    Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative. Dispersive and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed, constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated. Stability can be improved by adopting the upwinding strategy.
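
    The dispersion error that DRP-type optimization targets can be visualized from the modified wavenumber of a finite-difference stencil. The sketch below compares standard second- and sixth-order central stencils; a DRP scheme would instead choose the coefficients to minimize this error over a chosen wavenumber band.

      import numpy as np

      kh = np.linspace(0.01, np.pi, 400)

      def modified_wavenumber(coeffs, kh):
          # central antisymmetric stencil: du/dx ~ (1/h) * sum_j c_j (u_{i+j} - u_{i-j})
          return 2.0 * sum(c * np.sin(j * kh) for j, c in enumerate(coeffs, start=1))

      second_order = [0.5]
      sixth_order  = [3.0 / 4.0, -3.0 / 20.0, 1.0 / 60.0]

      for name, c in [("2nd order", second_order), ("6th order", sixth_order)]:
          err = np.abs(modified_wavenumber(c, kh) - kh) / kh
          print(name, "max relative dispersion error for kh <= 1:", err[kh <= 1.0].max())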

  7. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.; ,

    1985-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

  8. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in trust region literature. The conditions concerning allowable error are remarkably relaxed: for example, the relative-error condition on the gradient is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.

  9. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a quadrature order q of 60 yields numerical error negligible with respect to the parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
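
    A minimal sketch of the idea: the log-likelihood of a continuous-time (inhomogeneous Poisson) point process needs the integral of the intensity, which can be evaluated with a q-point Gauss-Legendre rule instead of a fine time discretization. The log-linear intensity and the toy spike times below are assumptions for illustration only.

      import numpy as np
      from numpy.polynomial.legendre import leggauss

      def log_likelihood(theta, spikes, T, q=60):
          """Continuous-time Poisson-process log-likelihood with the intensity
          integral evaluated by Gauss-Legendre quadrature of order q."""
          lam = lambda t: np.exp(theta[0] + theta[1] * np.sin(2.0 * np.pi * t / T))
          x, w = leggauss(q)                       # nodes/weights on [-1, 1]
          t_q = 0.5 * T * (x + 1.0)                # map to [0, T]
          integral = 0.5 * T * np.sum(w * lam(t_q))
          return np.sum(np.log(lam(np.asarray(spikes)))) - integral

      # toy spike train on a 10 s recording
      print(log_likelihood(np.array([0.0, 0.3]), [1.2, 2.7, 5.5, 8.9], 10.0))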

  10. Why noise is useful in functional and neural mechanisms of interval timing?

    PubMed Central

    2013-01-01

    Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. The implementation of the oscillators block as simplified phase (cosine) oscillators has the additional advantage that it is analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function could be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions Noise is ubiquitous in the form of small fluctuations of the intrinsic frequencies of the neural oscillators, errors in recording/retrieving stored information related to the criterion time, fluctuations in neurotransmitters' concentrations, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing. PMID:23924391

  11. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  12. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model that relates these quantities is used for the solution. A variational method of solving the inverse problem of voice source recovery for a new parametric class of sources, namely piecewise-linear sources (PWL-sources), is proposed. Also, a technique for a posteriori numerical error estimation of the obtained solutions is presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problems for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments for speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a criterion of the quality of the obtained voice source pulses in application to speaker recognition.

  13. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method, which guarantees great accuracy at small wavenumbers, and at the same time retains the property of the MA method, which keeps the numerical errors within a limited bound. Thus, it leads to great accuracy for the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.

  14. Jacobi-Gauss-Lobatto collocation method for the numerical solution of 1+1 nonlinear Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Abdelkawy, M. A.; Van Gorder, Robert A.

    2014-03-01

    A Jacobi-Gauss-Lobatto collocation (J-GL-C) method, used in combination with the implicit Runge-Kutta method of fourth order, is proposed as a numerical algorithm for the approximation of solutions to nonlinear Schrödinger equations (NLSE) with initial-boundary data in 1+1 dimensions. Our procedure is implemented in two successive steps. In the first one, the J-GL-C is employed for approximating the functional dependence on the spatial variable, using (N-1) nodes of the Jacobi-Gauss-Lobatto interpolation which depends upon two general Jacobi parameters. The resulting equations together with the two-point boundary conditions induce a system of 2(N-1) first-order ordinary differential equations (ODEs) in time. In the second step, the implicit Runge-Kutta method of fourth order is applied to solve this temporal system. The proposed J-GL-C method, used in combination with the implicit Runge-Kutta method of fourth order, is employed to obtain highly accurate numerical approximations to four types of NLSE, including the attractive and repulsive NLSE and a Gross-Pitaevskii equation with space-periodic potential. The numerical results obtained by this algorithm have been compared with various exact solutions in order to demonstrate the accuracy and efficiency of the proposed method. Indeed, for relatively few nodes used, the absolute error in our numerical solutions is sufficiently small.

  15. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. In special cases, the Jacobi polynomials reduce to well-known polynomials, namely (1) the Legendre polynomial, (2) the Chebyshev polynomial of the second kind, (3) the Chebyshev polynomial of the third kind and (4) the Chebyshev polynomial of the fourth kind. The maximum absolute error and root mean square error are calculated for the illustrated examples and presented in tables for comparison purposes. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.

  16. Importance of elastic finite-size effects: Neutral defects in ionic compounds

    DOE PAGES

    Burr, P. A.; Cooper, M. W. D.

    2017-09-15

    Small system sizes are a well known source of error in DFT calculations, yet computational constraints frequently dictate the use of small supercells, often as small as 96 atoms in oxides and compound semiconductors. In ionic compounds, electrostatic finite size effects have been well characterised, but self-interaction of charge neutral defects is often discounted or assumed to follow an asymptotic behaviour and thus easily corrected with linear elastic theory. Here we show that elastic effects are also important in the description of defects in ionic compounds and can lead to qualitatively incorrect conclusions if inadequately small supercells are used; moreover, the spurious self-interaction does not follow the behaviour predicted by linear elastic theory. Considering the exemplar cases of metal oxides with fluorite structure, we show that numerous previous studies, employing 96-atom supercells, misidentify the ground state structure of (charge neutral) Schottky defects. We show that the error is eliminated by employing larger cells (324, 768 and 1500 atoms), and careful analysis determines that elastic effects, not electrostatic, are responsible. The spurious self-interaction was also observed in non-oxide ionic compounds, irrespective of the computational method used, thereby resolving long standing discrepancies between DFT and force-field methods, previously attributed to the level of theory. The surprising magnitude of the elastic effects is a cautionary tale for defect calculations in ionic materials, particularly when employing computationally expensive methods (e.g. hybrid functionals) or when modelling large defect clusters. We propose two computationally practicable methods to test the magnitude of the elastic self-interaction in any ionic system. In commonly studied oxides, where electrostatic effects would be expected to be dominant, it is the elastic effects that dictate the need for larger supercells: greater than 96 atoms.

  18. Importance of elastic finite-size effects: Neutral defects in ionic compounds

    NASA Astrophysics Data System (ADS)

    Burr, P. A.; Cooper, M. W. D.

    2017-09-01

    Small system sizes are a well-known source of error in density functional theory (DFT) calculations, yet computational constraints frequently dictate the use of small supercells, often as small as 96 atoms in oxides and compound semiconductors. In ionic compounds, electrostatic finite-size effects have been well characterized, but self-interaction of charge-neutral defects is often discounted or assumed to follow an asymptotic behavior and thus easily corrected with linear elastic theory. Here we show that elastic effects are also important in the description of defects in ionic compounds and can lead to qualitatively incorrect conclusions if inadequately small supercells are used; moreover, the spurious self-interaction does not follow the behavior predicted by linear elastic theory. Considering the exemplar cases of metal oxides with fluorite structure, we show that numerous previous studies, employing 96-atom supercells, misidentify the ground-state structure of (charge-neutral) Schottky defects. We show that the error is eliminated by employing larger cells (324, 768, and 1500 atoms), and careful analysis determines that elastic, not electrostatic, effects are responsible. The spurious self-interaction was also observed in nonoxide ionic compounds irrespective of the computational method used, thereby resolving long-standing discrepancies between DFT and force-field methods, previously attributed to the level of theory. The surprising magnitude of the elastic effects is a cautionary tale for defect calculations in ionic materials, particularly when employing computationally expensive methods (e.g., hybrid functionals) or when modeling large defect clusters. We propose two computationally practicable methods to test the magnitude of the elastic self-interaction in any ionic system. In commonly studied oxides, where electrostatic effects would be expected to be dominant, it is the elastic effects that dictate the need for larger supercells: greater than 96 atoms.
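
    A simple way to act on this caution is to recompute the defect energy in successively larger supercells and extrapolate. The sketch below fits a naive 1/N correction to hypothetical formation energies; note that the abstract warns the true self-interaction may not follow such simple elastic scaling, so the fit is only a convergence check, not a rigorous correction.

      import numpy as np

      # hypothetical Schottky-defect formation energies (eV) vs supercell size
      n_atoms = np.array([96.0, 324.0, 768.0, 1500.0])
      e_form  = np.array([7.90, 7.55, 7.42, 7.36])        # illustrative values only

      # assume (naively) that the spurious self-interaction decays roughly as 1/N
      slope, e_infinity = np.polyfit(1.0 / n_atoms, e_form, 1)
      print("extrapolated dilute-limit energy:", e_infinity, "eV")
      print("96-atom finite-size error estimate:", e_form[0] - e_infinity, "eV")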

  19. Network problem threshold

    NASA Technical Reports Server (NTRS)

    Gejji, Raghvendra, R.

    1992-01-01

    Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
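
    One hedged way to set such a threshold is to model error counts in a monitoring window as Poisson under healthy operation and alarm when the observed count would be very unlikely; the nominal rate, window length, and false-alarm probability below are placeholders, not values from the project.

      import scipy.stats as st

      def error_threshold(nominal_rate_per_min, window_min=10.0, alpha=1e-3):
          """Smallest error count in a window that a healthy network (Poisson model)
          would reach with probability less than alpha."""
          mu = nominal_rate_per_min * window_min
          return int(st.poisson.ppf(1.0 - alpha, mu)) + 1

      # e.g. 2 collisions/min is normal; flag a problem if a 10-minute window exceeds this:
      print(error_threshold(2.0))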

  20. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    NASA Astrophysics Data System (ADS)

    Greenough, J. A.; Rider, W. J.

    2004-05-01

    A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory numerical (WENO5) method to a modern piecewise-linear, second-order, version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported for each method as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error as compared to an exact solution or, when an analytic solution is not available, to a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids, but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is, run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO5 for the fixed computation cost on the test problems considered here.
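
    The self-convergence rates quoted above can be reproduced from error norms on two grids with the usual observed-order formula; the sketch below shows the calculation with hypothetical L1 errors and a refinement ratio of 2.

      import numpy as np

      def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
          """Self-convergence rate estimated from error norms on two grids."""
          return np.log(err_coarse / err_fine) / np.log(refinement_ratio)

      # hypothetical L1 density errors on 200- and 400-cell grids for a shock problem
      print(observed_order(4.1e-3, 2.0e-3))   # ~1, i.e. first order, as expected at shocks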

  1. Numerical simulation of KdV equation by finite difference method

    NASA Astrophysics Data System (ADS)

    Yokus, A.; Bulut, H.

    2018-05-01

    In this study, numerical solutions to the KdV equation with dual power nonlinearity are obtained using the finite difference method. The discretized equation is presented in the form of finite difference operators. The numerical solutions are benchmarked against the analytical solution to the KdV equation with dual power nonlinearity that is available in the literature. Through the Fourier-von Neumann technique and a linear stability analysis, we show that the FDM is stable. The accuracy of the method is analyzed via the L2 and L_{∞} norm errors. The numerical solutions, the exact values, and the absolute errors are presented in tables. We compare the numerical solutions with the exact solutions, and this comparison is supported with graphic plots. Under the choice of suitable values of parameters, the 2D and 3D surfaces for the used analytical solution are plotted.
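
    The reported L2 and L_{∞} error norms follow the usual discrete definitions, sketched below; the single-soliton solution of the classical KdV equation is used here only as a stand-in for the dual-power-nonlinearity solution referenced in the abstract.

      import numpy as np

      def error_norms(u_numerical, u_exact, dx):
          diff = u_numerical - u_exact
          l2   = np.sqrt(dx * np.sum(diff**2))   # discrete L2 norm
          linf = np.max(np.abs(diff))            # L-infinity norm
          return l2, linf

      # toy check against the classical soliton u = (c/2) sech^2(sqrt(c)/2 * (x - c t))
      x = np.linspace(-20.0, 20.0, 401); dx = x[1] - x[0]; c, t = 1.0, 0.0
      exact = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t))**2
      noisy = exact + 1e-4 * np.random.default_rng(1).standard_normal(x.size)
      print(error_norms(noisy, exact, dx))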

  2. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.

  3. Error-analysis and comparison to analytical models of numerical waveforms produced by the NRAR Collaboration

    NASA Astrophysics Data System (ADS)

    Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef

    2013-01-01

    The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ˜100-200M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing on the binary parameters.
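
    As a rough illustration of the overlap statistic, the sketch below computes a match between two waveforms maximized over time shifts only, assuming a flat noise spectrum; production analyses such as those in the paper also maximize over phase and weight by the detector noise power spectral density.

      import numpy as np

      def match(h1, h2):
          """Normalized overlap between two waveforms, maximised over time shifts
          only, assuming white noise (flat power spectral density)."""
          H1, H2 = np.fft.fft(h1), np.fft.fft(h2)
          corr = np.fft.ifft(H1 * np.conj(H2))          # correlation at every shift
          norm = np.sqrt(np.sum(np.abs(h1)**2) * np.sum(np.abs(h2)**2))
          return np.max(np.abs(corr)) / norm

      t = np.linspace(0.0, 1.0, 4096, endpoint=False)
      chirp = np.sin(2.0 * np.pi * (30.0 + 40.0 * t) * t)
      print(match(chirp, np.roll(chirp, 100)))           # ~1 for a pure time shift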

  4. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
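
    The assimilated prediction described above can be illustrated with inverse-variance weighting of the two estimates, which by construction has an error variance no larger than either input; the runup values and variances below are hypothetical.

      import numpy as np

      def blend(pred_param, var_param, pred_numeric, var_numeric):
          """Inverse-variance weighted average of two runup predictions; the
          blended error variance is never larger than either input variance."""
          w = var_numeric / (var_param + var_numeric)     # weight on the parameterized model
          blended = w * pred_param + (1.0 - w) * pred_numeric
          blended_var = var_param * var_numeric / (var_param + var_numeric)
          return blended, blended_var

      # hypothetical runup estimates (m) and error variances (m^2)
      print(blend(2.4, 0.09, 2.9, 0.16))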

  5. Reduction of numerical diffusion in three-dimensional vortical flows using a coupled Eulerian/Lagrangian solution procedure

    NASA Technical Reports Server (NTRS)

    Felici, Helene M.; Drela, Mark

    1993-01-01

    A new approach based on the coupling of an Eulerian and a Lagrangian solver, aimed at reducing the numerical diffusion errors of standard Eulerian time-marching finite-volume solvers, is presented. The approach is applied to the computation of the secondary flow in two bent pipes and the flow around a 3D wing. Using convective point markers the Lagrangian approach provides a correction of the basic Eulerian solution. The Eulerian flow in turn integrates in time the Lagrangian state-vector. A comparison of coarse and fine grid Eulerian solutions makes it possible to identify numerical diffusion. It is shown that the Eulerian/Lagrangian approach is an effective method for reducing numerical diffusion errors.

  6. 2D granular flows with the μ(I) rheology and side walls friction: A well-balanced multilayer discretization

    NASA Astrophysics Data System (ADS)

    Fernández-Nieto, E. D.; Garres-Díaz, J.; Mangeney, A.; Narbona-Reina, G.

    2018-03-01

    We present here numerical modelling of granular flows with the μ (I) rheology in confined channels. The contribution is twofold: (i) a model to approximate the Navier-Stokes equations with the μ (I) rheology through an asymptotic analysis; under the hypothesis of a one-dimensional flow, this model takes into account side walls friction; (ii) a multilayer discretization following Fernández-Nieto et al. (2016) [20]. In this new numerical scheme, we propose an appropriate treatment of the rheological terms through a hydrostatic reconstruction which allows this scheme to be well-balanced and therefore to deal with dry areas. Based on academic tests, we first evaluate the influence of the width of the channel on the normal profiles of the downslope velocity thanks to the multilayer approach that is intrinsically able to describe changes from Bagnold to S-shaped (and vice versa) velocity profiles. We also check the well-balanced property of the proposed numerical scheme. We show that approximating side walls friction using single-layer models may lead to strong errors. Secondly, we compare the numerical results with experimental data on granular collapses. We show that the proposed scheme allows us to qualitatively reproduce the deposit in the case of a rigid bed (i.e. dry area) and that the error made by replacing the dry area by a small layer of material may be large if this layer is not thin enough. The proposed model is also able to reproduce the time evolution of the free surface and of the flow/no-flow interface. In addition, it reproduces the effect of erosion for granular flows over initially static material lying on the bed. This is possible when using a variable friction coefficient μ (I) but not with a constant friction coefficient.
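
    For readers unfamiliar with the rheology, the sketch below evaluates the standard mu(I) friction law from the inertial number; the material constants are typical glass-bead values quoted in the literature and are not taken from this paper.

      import numpy as np

      def mu_of_I(shear_rate, pressure, d, rho_s,
                  mu_s=0.38, mu_2=0.64, I_0=0.279):
          """mu(I) friction law: mu = mu_s + (mu_2 - mu_s) / (1 + I_0 / I),
          with inertial number I = shear_rate * d / sqrt(pressure / rho_s)."""
          I = shear_rate * d / np.sqrt(pressure / rho_s)
          return mu_s + (mu_2 - mu_s) / (1.0 + I_0 / I)

      # e.g. 0.5 mm grains, density 2500 kg/m^3, 100 Pa confinement, shear rate 10 /s
      print(mu_of_I(10.0, 100.0, 5e-4, 2500.0))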

  7. Investigation of the Dynamic Contact Angle Using a Direct Numerical Simulation Method.

    PubMed

    Zhu, Guangpu; Yao, Jun; Zhang, Lei; Sun, Hai; Li, Aifen; Shams, Bilal

    2016-11-15

    A large amount of residual oil, which exists as isolated oil slugs, remains trapped in reservoirs after water flooding. Numerous numerical studies have been performed to investigate the fundamental flow mechanism of oil slugs to improve flooding efficiency. Dynamic contact angle models are usually introduced to simulate an accurate contact angle and meniscus displacement of oil slugs under a high capillary number. Nevertheless, in the oil slug flow simulation process, it is unnecessary to introduce the dynamic contact angle model when the capillary number is small, because the change in the meniscus displacement after using the dynamic contact angle model is then negligible. Therefore, a critical capillary number should be introduced to judge whether the dynamic contact angle model should be incorporated into simulations. In this study, a direct numerical simulation method is employed to simulate the oil slug flow in a capillary tube at the pore scale. The position of the interface between water and the oil slug is determined using the phase-field method. The capacity and accuracy of the model are validated using a classical benchmark: a dynamic capillary filling process. Then, different dynamic contact angle models and the factors that affect the dynamic contact angle are analyzed. The meniscus displacements of oil slugs with a dynamic contact angle and a static contact angle (SCA) are obtained during simulations, and the relative error between them is calculated automatically. The relative error limit has been defined to be 5%, beyond which the dynamic contact angle model needs to be incorporated into the simulation to approach the realistic displacement. Thus, the desired critical capillary number can be determined. A three-dimensional universal chart of the critical capillary number, as a function of the static contact angle and viscosity ratio, is given to provide a guideline for oil slug simulation. Also, a fitting formula is presented for ease of use.
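
    The decision procedure implied by the abstract reduces to comparing the capillary number with a critical value read from the proposed chart; in the sketch below the critical value is a placeholder, since the actual value depends on the static contact angle and viscosity ratio.

      def needs_dynamic_contact_angle(viscosity, velocity, surface_tension,
                                      ca_critical=1e-3):
          """Check whether the capillary number Ca = mu*V/sigma exceeds a critical
          value (placeholder; in practice read from the paper's chart)."""
          ca = viscosity * velocity / surface_tension
          return ca, ca > ca_critical

      # water-like phase: mu = 1e-3 Pa.s, V = 0.01 m/s, sigma = 0.03 N/m
      print(needs_dynamic_contact_angle(1e-3, 0.01, 0.03))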

  8. A Sensitivity Analysis of Circular Error Probable Approximation Techniques

    DTIC Science & Technology

    1992-03-01

    SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES THESIS Presented to the Faculty of the School of Engineering of the Air Force...programming skills. Major Paul Auclair patiently advised me in this endeavor, and Major Andy Howell added numerous insightful contributions. I thank my...techniques. The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. Some
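
    For context, a CEP can be estimated either by Monte Carlo sampling of the impact distribution or by a simple closed-form approximation; the sketch below compares the two for an illustrative bivariate normal dispersion (the specific approximation techniques studied in the thesis are not reproduced here).

      import numpy as np

      rng = np.random.default_rng(0)
      sx, sy = 8.0, 6.0                            # impact dispersion (m), illustrative
      x = rng.normal(0.0, sx, 200_000); y = rng.normal(0.0, sy, 200_000)

      cep_mc     = np.median(np.hypot(x, y))       # radius containing 50% of impacts
      cep_approx = 0.5887 * (sx + sy)              # common closed-form approximation
      print(cep_mc, cep_approx)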

  9. Hierarchical Boltzmann simulations and model error estimation

    NASA Astrophysics Data System (ADS)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, but a subsequent refinement allows one to successively improve the result towards the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation for a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows us to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  10. An Improved Neutron Transport Algorithm for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.

    2010-01-01

    Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.
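
    The resolution issue described above (narrow spectral distributions under-resolved by a coarse energy grid) can be illustrated with a simple quadrature experiment: integrating a narrow peak whose true area is 1 on grids of increasing density. The peak width and grid sizes are illustrative only.

      import numpy as np

      def integrate_peak(n_points, width=0.02, center=10.5):
          """Trapezoidal integral of a narrow spectral peak on an n-point energy grid."""
          e = np.linspace(0.0, 100.0, n_points)
          f = np.exp(-0.5 * ((e - center) / width)**2) / (width * np.sqrt(2.0 * np.pi))
          return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(e))

      for n in (100, 1000, 100_000):               # exact value is 1.0
          print(n, integrate_peak(n))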

  11. Beyond Error Patterns: A Sociocultural View of Fraction Comparison Errors in Students with Mathematical Learning Disabilities

    ERIC Educational Resources Information Center

    Lewis, Katherine E.

    2016-01-01

    Although many students struggle with fractions, students with mathematical learning disabilities (MLDs) experience pervasive difficulties because of neurological differences in how they process numerical information. These students make errors that are qualitatively different than their typically achieving and low-achieving peers. This study…

  12. Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun

    2018-07-01

    People are increasingly becoming accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces huge technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selecting reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographical information of the crowd-sourced pictures, and the proposed method improves the median error from 256.7 m to 69.0 m and raises the percentage of geo-localized query pictures within a 50 m error from 17.2% to 43.2% compared with the previous method. Regarding the causes of reconstruction error, shorter distances from the cameras to the main objects in the query pictures tend to produce lower errors, and the component of error parallel to the road contributes more to the total error. The proposed method is not limited to small areas, and could be expanded to cities and larger areas owing to its flexible parameters.

  13. The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.

    The role of moist processes and the possibility of error cascade from cloud scale processes affecting the intrinsic predictable time scale of a high resolution convection permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to the larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in convection permitting simulations at 3.3 km resolution compared to simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.

  14. Multiplate Radiation Shields: Investigating Radiational Heating Errors

    NASA Astrophysics Data System (ADS)

    Richardson, Scott James

    1995-01-01

    Multiplate radiation shield errors are examined using the following techniques: (1) analytic heat transfer analysis, (2) optical ray tracing, (3) numerical fluid flow modeling, (4) laboratory testing, (5) wind tunnel testing, and (6) field testing. Guidelines for reducing radiational heating errors are given that are based on knowledge of the temperature sensor to be used, with the shield being chosen to match the sensor design. Small, reflective sensors that are exposed directly to the air stream (not inside a filter as is the case for many temperature and relative humidity probes) should be housed in a shield that provides ample mechanical and rain protection while impeding the air flow as little as possible; protection from radiation sources is of secondary importance. If a sensor does not meet the above criteria (i.e., is large or absorbing), then a standard Gill shield performs reasonably well. A new class of shields, called part-time aspirated multiplate radiation shields, is introduced. This type of shield consists of a multiplate design usually operated in a passive manner but equipped with a fan-forced aspiration capability to be used when necessary (e.g., at low wind speed). The fans used here are 12 V DC units that can be operated with a small dedicated solar panel. This feature allows the fan to operate when global solar radiation is high, which is when the largest radiational heating errors usually occur. A prototype shield was constructed and field tested, and an example is given in which radiational heating errors were reduced from 2 °C to 1.2 °C. The fan was run continuously to investigate night-time low wind speed errors, and the prototype shield reduced errors from 1.6 °C to 0.3 °C. Part-time aspirated shields are an inexpensive alternative to fully aspirated shields and represent a good compromise between cost, power consumption, reliability (because they should be no worse than a standard multiplate shield if the fan fails), and accuracy. In addition, it is possible to modify existing passive shields to incorporate part-time aspiration, thus making them even more cost-effective. Finally, a new shield is described that incorporates a large diameter top plate that is designed to shade the lower portion of the shield. This shield increases flow through it by 60% compared to the Gill design, and it is likely to reduce radiational heating errors, although it has not been tested.

  15. Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals

    NASA Astrophysics Data System (ADS)

    Huerta, E. A.; Gair, Jonathan R.

    2009-04-01

    We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10 M⊙ compact object captured by a 10^6 M⊙ SMBH at a signal to noise ratio of 30) we expect to determine the two masses to within a fractional error of ~10^-4, measure the spin parameter q to ~10^-4.5, and determine the location of the source on the sky and the spin orientation to within 10^-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R<1 for all ten parameters in the model. For the few systems with larger errors typically R<3 and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.

  16. For how long can we predict the weather? - Insights into atmospheric predictability from global convection-allowing simulations

    NASA Astrophysics Data System (ADS)

    Judt, Falko

    2017-04-01

    A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day long "nature run" and a simulation that was perturbed with small-amplitude noise, but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days, once they contaminate the baroclinic zones. After 16 days, the globally averaged error saturates, suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, which is in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.
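
    As a minimal illustration of the "identical twin" methodology described above (a toy sketch, not the MPAS experiment itself), the following Python snippet integrates the Lorenz-96 model twice, once from a control state and once from a state perturbed by small-amplitude noise, and tracks the growth and eventual saturation of the RMS difference; the model, forcing value, perturbation size, and integration length are illustrative assumptions.

      import numpy as np

      def lorenz96_rhs(x, forcing=8.0):
          # Lorenz-96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

      def step_rk4(x, dt):
          k1 = lorenz96_rhs(x)
          k2 = lorenz96_rhs(x + 0.5 * dt * k1)
          k3 = lorenz96_rhs(x + 0.5 * dt * k2)
          k4 = lorenz96_rhs(x + dt * k3)
          return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

      rng = np.random.default_rng(0)
      n, dt = 40, 0.01
      x_nature = 8.0 + rng.standard_normal(n)
      for _ in range(1000):                               # spin-up toward the attractor
          x_nature = step_rk4(x_nature, dt)

      x_twin = x_nature + 1e-6 * rng.standard_normal(n)   # "identical twin" with tiny noise
      rms_error = []
      for _ in range(3000):
          x_nature = step_rk4(x_nature, dt)
          x_twin = step_rk4(x_twin, dt)
          rms_error.append(np.sqrt(np.mean((x_twin - x_nature) ** 2)))

      # Early steps show rapid (near-exponential) growth; the error later saturates
      # at the climatological spread, analogous to the ~2-week limit discussed above.
      print(rms_error[0], rms_error[1500], rms_error[-1])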

  17. Tracing the source of numerical climate model uncertainties in precipitation simulations using a feature-oriented statistical model

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Jones, A. D.; Rhoades, A.

    2017-12-01

    Precipitation is a key component in hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite the continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. But a numerical model cannot exactly replicate the true world. Therefore, the other type of model uncertainty is related to errors in the model physics, such as the parameterization of sub-grid scale processes, i.e., given precise input conditions, how much error could be generated by the imprecise model. Here, we build two statistical models based on a neural network algorithm to predict long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as the predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical model derived from true world information with the one derived from modeled world information, we assess the errors residing in the physics of the numerical models. This work provides unique insight for assessing the performance of numerical climate models, and can be used to guide improvement of precipitation prediction.
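
    The predictor-perturbation step described above can be sketched in a few lines; the example below uses synthetic data and scikit-learn's MLPRegressor, and the predictor names, data, and perturbation size are hypothetical placeholders rather than the NA CORDEX setup.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)

      # Hypothetical predictors (circulation index, moisture flux, topography)
      # and a synthetic precipitation-like response.
      X = rng.standard_normal((500, 3))
      y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2] + 0.1 * rng.standard_normal(500)

      model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
      model.fit(X, y)

      # Perturb one predictor at a time and measure the spread of the response,
      # i.e., how much of the output uncertainty each predictor accounts for.
      baseline = model.predict(X)
      for j, name in enumerate(["circulation_index", "moisture_flux", "topography"]):
          X_pert = X.copy()
          X_pert[:, j] += 0.1 * X[:, j].std() * rng.standard_normal(len(X))
          spread = np.std(model.predict(X_pert) - baseline)
          print(f"{name}: output spread from a 10% perturbation = {spread:.3f}")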

  18. Higgs production via gluon fusion in k_T factorisation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hautmann, F.; Jung, H.; Pandis, V.

    2011-07-15

    Theoretical studies of Higgs production via gluon fusion are frequently carried out in the limit where the top quark mass is much larger than the Higgs mass, an approximation which reduces the top quark loop to an effective vertex. We present a numerical analysis of the error thus introduced by performing a Monte Carlo calculation for gg → h in k_T factorisation, using the parton shower generator CASCADE. By examining both inclusive and exclusive quantities, we find that retaining the top-mass dependence results in only a small enhancement of the cross-section. We then proceed to compare CASCADE to the collinear Monte Carlos PYTHIA, MC@NLO and POWHEG.

  19. A new adaptive estimation method of spacecraft thermal mathematical model with an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Akita, T.; Takaki, R.; Shima, E.

    2012-04-01

    An adaptive estimation method for the spacecraft thermal mathematical model is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state space equations of the thermal mathematical model are derived, where both the temperature and the uncertain thermal characteristic parameters are considered as state variables. In the method, the thermal characteristic parameters are automatically estimated as outputs of the filtered state variables, whereas, in the usual thermal model correlation, they are manually identified by experienced engineers using a trial-and-error approach. A numerical experiment of a simple small satellite is provided to verify the effectiveness of the presented method.
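
    A minimal sketch of the underlying idea, assuming a single-node thermal model with one uncertain conductance rather than the paper's spacecraft model, is given below: an ensemble Kalman filter propagates an augmented state [temperature, conductance], and the parameter is estimated automatically from temperature observations; all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)

      def thermal_step(T, k, dt=10.0, T_env=250.0, q=0.5):
          # Single-node thermal model (unit heat capacity): dT/dt = q - k * (T - T_env)
          return T + dt * (q - k * (T - T_env))

      k_true, T_true = 0.02, 300.0                        # "truth": unknown conductance
      n_ens, obs_std = 50, 0.5

      # Augmented ensemble state: column 0 = temperature, column 1 = conductance.
      ens = np.column_stack([300.0 + 5.0 * rng.standard_normal(n_ens),
                             0.05 + 0.015 * rng.standard_normal(n_ens)])
      ens[:, 1] = np.clip(ens[:, 1], 1e-4, None)          # keep the parameter physical

      for _ in range(200):
          T_true = thermal_step(T_true, k_true)
          obs = T_true + obs_std * rng.standard_normal()

          # Forecast: propagate each member; the parameter is modeled as persistent.
          ens[:, 0] = thermal_step(ens[:, 0], ens[:, 1])

          # Analysis: stochastic EnKF update of the augmented state (only T is observed).
          P = np.cov(ens, rowvar=False)
          gain = P[:, 0] / (P[0, 0] + obs_std ** 2)       # Kalman gain, shape (2,)
          perturbed_obs = obs + obs_std * rng.standard_normal(n_ens)
          innovation = perturbed_obs - ens[:, 0]
          ens += innovation[:, None] * gain[None, :]
          ens[:, 1] = np.clip(ens[:, 1], 1e-4, None)

      print("estimated conductance:", ens[:, 1].mean(), " true value:", k_true)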

  20. Asymptotic boundary conditions for dissipative waves: General theory

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1990-01-01

    An outstanding issue in the computational analysis of time dependent problems is the imposition of appropriate radiation boundary conditions at artificial boundaries. Accurate conditions are developed which are based on the asymptotic analysis of wave propagation over long ranges. Employing the method of steepest descents, dominant wave groups are identified and simple approximations to the dispersion relation are considered in order to derive local boundary operators. The existence of a small number of dominant wave groups may be expected for systems with dissipation. Estimates of the error as a function of domain size are derived under general hypotheses, leading to convergence results. Some practical aspects of the numerical construction of the asymptotic boundary operators are also discussed.

  1. Semiclassical Dynamics with Exponentially Small Error Estimates

    NASA Astrophysics Data System (ADS)

    Hagedorn, George A.; Joye, Alain

    We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses, these solutions agree with exact solutions up to errors whose norms are exponentially small, bounded in terms of constants C and γ>0. Under more restrictive hypotheses, we prove that for sufficiently small T' the norms of the errors obey analogous exponentially small bounds in terms of constants C', γ'>0, and σ>0.

  2. Long-time stability effects of quadrature and artificial viscosity on nodal discontinuous Galerkin methods for gas dynamics

    NASA Astrophysics Data System (ADS)

    Durant, Bradford; Hackl, Jason; Balachandar, Sivaramakrishnan

    2017-11-01

    Nodal discontinuous Galerkin schemes present an attractive approach to robust high-order solution of the equations of fluid mechanics, but remain accompanied by subtle challenges in their consistent stabilization. The effects of quadrature choice (full mass matrix vs. spectral elements), over-integration to manage aliasing errors, and explicit artificial viscosity on the numerical solution of a steady homentropic vortex are assessed over a wide range of resolutions and polynomial orders using quadrilateral elements. In both stagnant and advected vortices, in periodic and non-periodic domains, the need arises for explicit stabilization beyond the numerical surface fluxes of discontinuous Galerkin spectral elements. Artificial viscosity via the entropy viscosity method is assessed as a stabilizing mechanism. It is shown that the regularity of the artificial viscosity field is essential to its use for long-time stabilization of small-scale features in nodal discontinuous Galerkin solutions of the Euler equations of gas dynamics. Supported by the Department of Energy Predictive Science Academic Alliance Program Contract DE-NA0002378.

  3. Outstanding performance of configuration interaction singles and doubles using exact exchange Kohn-Sham orbitals in real-space numerical grid method

    NASA Astrophysics Data System (ADS)

    Lim, Jaechang; Choi, Sunghwan; Kim, Jaewook; Kim, Woo Youn

    2016-12-01

    To assess the performance of multi-configuration methods using exact exchange Kohn-Sham (KS) orbitals, we implemented configuration interaction singles and doubles (CISD) in a real-space numerical grid code. We obtained KS orbitals with the exchange-only optimized effective potential under the Krieger-Li-Iafrate (KLI) approximation. Thanks to the distinctive features of KLI orbitals compared with Hartree-Fock (HF) orbitals, such as bound virtual orbitals with compact shapes and orbital energy gaps similar to excitation energies, KLI-CISD for small molecules shows much faster convergence as a function of simulation box size and active space (i.e., the number of virtual orbitals) than HF-CISD. The former also gives more accurate excitation energies with only a few dominant configurations than the latter does with many more configurations. The systematic control of basis set errors is straightforward in grid bases. Therefore, grid-based multi-configuration methods using exact exchange KS orbitals provide a promising new way to make accurate electronic structure calculations.

  4. Fast calculation of low altitude disturbing gravity for ballistics

    NASA Astrophysics Data System (ADS)

    Wang, Jianqiang; Wang, Fanghao; Tian, Shasha

    2018-03-01

    Fast calculation of disturbing gravity is a key technology in ballistics, and spherical cap harmonic (SCH) theory can be used to solve this problem. By using adjusted spherical cap harmonic (ASCH) methods, the spherical cap coordinates are projected into global coordinates, and the non-integer associated Legendre functions (ALF) of SCH are replaced by the integer ALF of spherical harmonics (SH). This new method is called virtual spherical harmonics (VSH), and numerical experiments were done to test its effect. The results of an Earth gravity model were set as the theoretical observation, and the model of the regional gravity field was constructed with the new method. Simulation results show that the approximation errors are less than 5 mGal in the low-altitude range of the central region. In addition, numerical experiments were conducted to compare the calculation speed of the SH, SCH and VSH models, and the results show that the calculation speed of the VSH model is raised by one order of magnitude within a small region.

  5. Efficient implicit LES method for the simulation of turbulent cavitating flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan

    2016-07-01

    We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test-cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.

  6. Refined numerical solution of the transonic flow past a wedge

    NASA Technical Reports Server (NTRS)

    Liang, S.-M.; Fung, K.-Y.

    1985-01-01

    A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.

  7. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  8. Discussion: Numerical study on the entrainment of bed material into rapid landslides

    USGS Publications Warehouse

    Iverson, Richard M.

    2013-01-01

    A paper recently published in this journal (Pirulli & Pastor, 2012) uses numerical modelling to study the important problem of entrainment of bed material by landslides. Unfortunately, some of the basic equations employed in the study are flawed, because they violate the principle of linear momentum conservation. Similar errors exist in some other studies of entrainment, and the errors appear to stem from confusion about the role of bed-sediment inertia in differing frames of reference.

  9. Lateral charge transport from heavy-ion tracks in integrated circuit chips

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Schwartz, H. R.; Nevill, L. R.

    1988-01-01

    A 256K DRAM has been used to study the lateral transport of charge (electron-hole pairs) induced by direct ionization from heavy-ion tracks in an IC. The qualitative charge transport has been simulated using a two-dimensional numerical code in cylindrical coordinates. The experimental bit-map data clearly show the manifestation of lateral charge transport in the creation of adjacent multiple-bit errors from a single heavy-ion track. The heavy-ion data further demonstrate the occurrence of multiple-bit errors from single ion tracks with sufficient stopping power. The qualitative numerical simulation results suggest that electric-field-funnel-aided (drift) collection accounts for single errors generated by an ion passing through a charge-collecting junction, while multiple errors from a single ion track are due to lateral diffusion of ion-generated charge.

  10. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Bulirsch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step size h_{n+1} = h(t_n; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the step size to change if the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimates in a few problems that have been solved with the code Gauss2 [S. Gonzalez-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems, ], based on an adaptive two-stage Runge-Kutta-Gauss method with this discontinuous step-size policy.
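
    The tolerance proportionality property is easy to observe experimentally. The sketch below (using SciPy's Dormand-Prince RK45 integrator on a linear test problem with a known solution, not the Gauss2 code discussed above) measures the global error at the endpoint for several tolerances and fits the exponent of the power law error ≈ C·δ^p; the test equation and tolerances are illustrative choices.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Test IVP with a known solution: y' = -y + sin(t), y(0) = 0,
      # exact y(t) = (sin(t) - cos(t) + exp(-t)) / 2.
      def rhs(t, y):
          return -y + np.sin(t)

      def exact(t):
          return 0.5 * (np.sin(t) - np.cos(t) + np.exp(-t))

      t_end = 10.0
      tols = [1e-3, 1e-5, 1e-7, 1e-9]
      errors = []
      for tol in tols:
          sol = solve_ivp(rhs, (0.0, t_end), [0.0], method="RK45", rtol=tol, atol=tol)
          errors.append(abs(sol.y[0, -1] - exact(t_end)))

      # Slope of log(error) versus log(tolerance) = observed proportionality exponent.
      p = np.polyfit(np.log(tols), np.log(errors), 1)[0]
      print("global errors:", errors, " observed exponent:", round(p, 2))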

  11. Preliminary evidence for performance enhancement following parietal lobe stimulation in Developmental Dyscalculia.

    PubMed

    Iuculano, Teresa; Cohen Kadosh, Roi

    2014-01-01

    Nearly 7% of the population exhibit difficulties in dealing with numbers and performing arithmetic, a condition named Developmental Dyscalculia (DD), which significantly affects the educational and professional outcomes of these individuals, as it often persists into adulthood. Research has mainly focused on behavioral rehabilitation, while little is known about performance changes and neuroplasticity induced by the concurrent application of brain-behavioral approaches. It has been shown that numerical proficiency can be enhanced by applying a small yet constant current through the brain, a non-invasive technique named transcranial electrical stimulation (tES). Here we combined a numerical learning paradigm with transcranial direct current stimulation (tDCS) in two adults with DD to assess the potential benefits of this methodology for remediating their numerical difficulties. Subjects learned to associate artificial symbols with numerical quantities within the context of a trial-and-error paradigm, while tDCS was applied to the posterior parietal cortex (PPC). The first subject (DD1) received anodal stimulation to the right PPC and cathodal stimulation to the left PPC, which has been associated with improvements in numerical performance in healthy subjects. The second subject (DD2) received anodal stimulation to the left PPC and cathodal stimulation to the right PPC, which has been shown to impair numerical performance in healthy subjects. We examined two indices of numerical proficiency: (i) automaticity of number processing; and (ii) mapping of numbers onto space. Our results are opposite to previous findings with non-dyscalculic subjects. Only anodal stimulation to the left PPC improved both indices of numerical proficiency. These initial results represent an important step to inform the rehabilitation of developmental learning disabilities, and have relevant applications for basic and applied research in cognitive neuroscience, rehabilitation, and education.

  12. Some Insights of Spectral Optimization in Ocean Color Inversion

    NASA Technical Reports Server (NTRS)

    Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert

    2011-01-01

    In the past decades, various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of the approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; or, in other words, the equations are solved numerically. The applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulation and field measurements, we show the shape of the error surface, in order to assess the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, impacts of such differences on the error surface as well as on the retrievals are also presented.
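
    To make the error-function idea concrete, the sketch below inverts a deliberately simplified two-band reflectance model Rrs = g·bb/(a+bb) by numerical minimization; the spectral shapes, constants, and the two retrieved variables (phytoplankton absorption aph and particle backscattering bbp) are illustrative assumptions, not the operational ocean-colour model.

      import numpy as np
      from scipy.optimize import minimize

      wl = np.array([443.0, 555.0])                    # two wavelengths [nm]
      aw  = np.array([0.0070, 0.0600])                 # water absorption (illustrative)
      bbw = np.array([0.0024, 0.0009])                 # water backscattering (illustrative)
      aph_shape = np.array([1.0, 0.3])                 # normalized phytoplankton absorption
      bbp_shape = (555.0 / wl) ** 1.0                  # particle backscattering spectral slope
      g = 0.09

      def forward(aph, bbp):
          a = aw + aph * aph_shape
          bb = bbw + bbp * bbp_shape
          return g * bb / (a + bb)

      # "Measured" reflectance generated from known properties, then recovered by
      # minimizing a squared-error function over (aph, bbp).
      rrs_meas = forward(0.05, 0.004)

      def error_fn(p):
          return np.sum((forward(p[0], p[1]) - rrs_meas) ** 2)

      result = minimize(error_fn, x0=[0.01, 0.001], method="Nelder-Mead")
      print("retrieved (aph, bbp):", result.x)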

  13. Assessment of numerical techniques for unsteady flow calculations

    NASA Technical Reports Server (NTRS)

    Hsieh, Kwang-Chung

    1989-01-01

    The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems appear to be feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods selected by the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagations, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is more dissipative than the sixth-order central difference scheme. Among the various numerical approaches tested in this paper, the best performing one is the Runge-Kutta method for time integration combined with sixth-order central differencing for spatial discretization.

  14. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.

    PubMed

    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong

    2011-12-01

    In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

  15. Small-scale Scheimpflug lidar for aerosol extinction coefficient and vertical atmospheric transmittance detection.

    PubMed

    Sun, Guodong; Qin, Laian; Hou, Zaihong; Jing, Xu; He, Feng; Tan, Fengfu; Zhang, Silong

    2018-03-19

    In this paper, a new prototypical Scheimpflug lidar capable of detecting the aerosol extinction coefficient and vertical atmospheric transmittance up to 1 km above the ground is described. The lidar system operates at 532 nm and can be used to detect aerosol extinction coefficients throughout an entire day. The vertical atmospheric transmittance can then be determined from the extinction coefficients by numerical integration over this range. CCD flat fielding of the image data is used to mitigate the effects of pixel sensitivity variation. An efficient two-dimensional wavelet transform method with a local threshold value has been proposed to reduce the Gaussian white noise in the lidar signal. Furthermore, a new iteration method for the backscattering ratio based on a genetic algorithm is presented to calculate the aerosol extinction coefficient and vertical atmospheric transmittance. Simulations are performed in which different levels of noise in the simulated signal are reduced in order to test the precision of the de-noising method and inversion algorithm. The simulation result shows that the root-mean-square errors of the extinction coefficients are all less than 0.02 km^-1, and that the relative errors of the atmospheric transmittance between the model and inversion data are below 0.56% for all cases. The feasibility of the instrument and the inversion algorithm has also been verified by an optical experiment. The average relative errors of aerosol extinction coefficients between the Scheimpflug lidar and the conventional backscattering elastic lidar are 3.54% and 2.79% within the full-overlap heights at two time points, respectively. This work opens up new possibilities of using a small-scale Scheimpflug lidar system for the remote sensing of atmospheric aerosols.

  16. Bounded Error Schemes for the Wave Equation on Complex Domains

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi; Yefet, Amir

    1998-01-01

    This paper considers the application of the method of boundary penalty terms ("SAT") to the numerical solution of the wave equation on complex shapes with Dirichlet boundary conditions. A theory is developed, in a semi-discrete setting, that allows the use of a Cartesian grid on complex geometries, yet maintains the order of accuracy with only a linear temporal error-bound. A numerical example, involving the solution of Maxwell's equations inside a 2-D circular wave-guide demonstrates the efficacy of this method in comparison to others (e.g. the staggered Yee scheme) - we achieve a decrease of two orders of magnitude in the level of the L2-error.

  17. MP estimation applied to platykurtic sets of geodetic observations

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Zbigniew

    2017-06-01

    MP estimation is a method for estimating location parameters when the probabilistic models of observations differ from the normal distribution in kurtosis or asymmetry. The system of Pearson's distributions is the probabilistic basis for the method. So far, this method has been applied and analyzed mostly for leptokurtic or mesokurtic distributions (Pearson's distributions of types IV or VII), which predominate in practical cases. The analyses of geodetic or astronomical observations show that we may also deal with sets which have moderate asymmetry or small negative excess kurtosis. Asymmetry might result from the influence of many small systematic errors that were not eliminated during preprocessing of the data. The excess kurtosis can be related to a bigger or smaller (in relation to the Hagen hypothesis) frequency of occurrence of elementary errors that are close to zero. Considering that fact, this paper focuses on estimation with application of the Pearson platykurtic distributions of types I or II. The paper presents the solution of the corresponding optimization problem and its basic properties. Although platykurtic distributions are rare in practice, it was an interesting issue to find out what results can be provided by MP estimation in the case of such observation distributions. The numerical tests presented in the paper are rather limited; however, they allow us to draw some general conclusions.

  18. CADNA: a library for estimating round-off error propagation

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie

    2008-06-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
    Program summary -- Program title: CADNA. Catalogue identifier: AEAT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 53 420. No. of bytes in distributed program, including test data, etc.: 566 495. Distribution format: tar.gz. Programming language: Fortran. Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM. Operating system: LINUX, UNIX. Classification: 4.14, 6.5, 20.
    Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
    Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
    Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
    Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
    References: [1] The CADNA library, URL address: http://www.lip6.fr/cadna. [2] J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. [3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. [4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
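
    CADNA itself works at the level of the floating-point types, but the random-rounding idea behind Discrete Stochastic Arithmetic can be mimicked in a few lines: repeat an ill-conditioned computation several times while perturbing each partial result by one unit roundoff with a random sign, then count the digits on which the runs agree. The sketch below is only an illustration of the principle, not a substitute for the library.

      import numpy as np

      def perturbed_sum(values, rng, eps=2.0 ** -53):
          # Crude stand-in for a random rounding mode: after each addition,
          # perturb the partial sum by +/- one relative unit roundoff.
          total = 0.0
          for v in values:
              total += v
              total *= 1.0 + eps * rng.choice([-1.0, 1.0])
          return total

      rng = np.random.default_rng(3)
      # Ill-conditioned sum with massive cancellation between the large terms.
      values = np.array([1e16, 3.14159, -1e16, 2.71828, 1.41421])

      samples = np.array([perturbed_sum(values, rng) for _ in range(8)])
      mean, spread = samples.mean(), samples.std()

      # Rough estimate of the number of significant decimal digits common to the runs.
      digits = np.log10(abs(mean) / spread) if spread > 0 else 15.0
      print("samples:", samples)
      print("estimated significant digits:", max(0.0, digits))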

  19. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for the time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.

  20. Error Estimates for Numerical Integration Rules

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2005-01-01

    The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.
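
    For reference, the classical worst-case bounds that such Fourier-based arguments sharpen are the standard composite-rule estimates, stated here in their textbook form rather than in the improved form discussed in the article:

      \[ \left| \int_a^b f(x)\,dx - T_n(f) \right| \le \frac{(b-a)\,h^2}{12}\,\max_{a\le x\le b}\lvert f''(x)\rvert, \qquad \left| \int_a^b f(x)\,dx - S_n(f) \right| \le \frac{(b-a)\,h^4}{180}\,\max_{a\le x\le b}\lvert f^{(4)}(x)\rvert, \qquad h = \frac{b-a}{n}, \]

    where T_n and S_n are the composite trapezoidal and Simpson rules on n subintervals. For integrands whose Fourier coefficients decay rapidly, such as smooth periodic functions, the trapezoidal rule in particular performs far better than this worst-case bound suggests, which is the kind of observation the improved estimates exploit.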

  1. Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  2. Experience gained in testing a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  3. Simulation of an automatically-controlled STOL aircraft in a microwave landing system multipath environment

    NASA Technical Reports Server (NTRS)

    Toda, M.; Brown, S. C.; Burrous, C. N.

    1976-01-01

    The simulated response of a STOL aircraft to Microwave Landing System (MLS) multipath errors during final approach and touchdown is described. The MLS azimuth, elevation, and DME multipath errors were computed for a relatively severe multipath environment at Crissy Field, California, utilizing an MLS multipath simulation at MIT Lincoln Laboratory. A NASA/Ames six-degree-of-freedom simulation of an automatically-controlled deHavilland C-8A STOL aircraft was used to determine the response to these errors. The results show that the aircraft response to all of the Crissy Field MLS multipath errors was small. The small MLS azimuth and elevation multipath errors did not result in any discernible aircraft motion, and the aircraft response to the relatively large (200-ft (61-m) peak) DME multipath was noticeable but small.

  4. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effects of noise and bias error on the use of CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that: (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene mixture gas spectra as measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases, but not the lowest bias and RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
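
    The CLS/WLS contrast discussed above can be reproduced with a small numerical experiment; the sketch below builds two synthetic component spectra, adds heteroscedastic noise that grows with absorbance, and compares ordinary (CLS-style) and inverse-variance weighted (WLS-style) least-squares concentration estimates. The spectra, noise model, and concentrations are invented for illustration, and the per-wavenumber selection rule of SWLS is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(4)

      # Two synthetic pure-component spectra over 200 "wavenumbers".
      nu = np.linspace(0.0, 1.0, 200)
      K = np.column_stack([np.exp(-((nu - 0.3) / 0.05) ** 2),
                           np.exp(-((nu - 0.7) / 0.08) ** 2)])
      c_true = np.array([0.8, 0.4])
      absorbance = K @ c_true

      # Heteroscedastic noise: larger where the absorbance is high (illustrative model).
      sigma = 0.002 + 0.02 * absorbance
      measured = absorbance + sigma * rng.standard_normal(nu.size)

      def fit(weights):
          # Weighted least squares for the concentrations: minimize sum w_i*(y_i - (K c)_i)^2.
          W = np.diag(weights)
          return np.linalg.solve(K.T @ W @ K, K.T @ W @ measured)

      c_cls = fit(np.ones(nu.size))        # CLS: uniform weights
      c_wls = fit(1.0 / sigma ** 2)        # WLS: inverse-variance weights
      print("CLS estimate:", c_cls, " error:", np.abs(c_cls - c_true))
      print("WLS estimate:", c_wls, " error:", np.abs(c_wls - c_true))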

  5. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.

  6. Photogrammetric discharge monitoring of small tropical mountain rivers - A case study at Rivière des Pluies, Réunion island

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Augereau, Emmanuel; Delacourt, Christophe; Bonnier, Julien

    2016-04-01

    Reliable discharge measurements are indispensable for the effective management of natural water resources and floods. Limitations of classical current meter profiling and stage-discharge ratings have stimulated the development of more accurate and efficient gauging techniques. While new discharge measurement technologies such as acoustic Doppler current profilers and large-scale particle image velocimetry (LSPIV) have been developed and tested in numerous studies, the continuous monitoring of small mountain rivers and of discharge dynamics during strong meteorological events remains challenging. More specifically, LSPIV studies are often focused on short-term measurements during flood events, and there are still very few studies that address its use for long-term monitoring of small mountain rivers. To fill this gap, this study targets the development and testing of a largely autonomous photogrammetric discharge measurement system, with a special focus on application to a small mountain river with high discharge variability and a mobile riverbed in the tropics. It proposes several enhancements over previous LSPIV methods regarding camera calibration, more efficient processing in image geometry, the automatic detection of the water level, and the statistical calibration and estimation of the discharge from multiple profiles. To account for changes in the bed topography, the riverbed is surveyed repeatedly during the dry seasons using multi-view photogrammetry or terrestrial laser scanners. The presented case study comprises the analysis of several thousand videos spanning two and a half years (2013-2015) to test the robustness and accuracy of the different processing steps. An analysis of the obtained results suggests that the camera calibration reaches sub-pixel accuracy. The median accuracy of the water-mask detections is F1=0.82, whereas the precision is systematically higher than the recall. The resulting underestimation of the water surface area and level leads to a systematic underestimation of the discharge, and error rates of up to 25%. However, the bias can be effectively removed using a least-squares cross-calibration, which reduces the error to an MAE of 6.39% and a maximum error of 16.18%. Those error rates are significantly lower than the uncertainties among multiple profiles (30%) and illustrate the importance of spatial averaging over multiple measurements. The study suggests that LSPIV can already be considered a valuable tool for the monitoring of torrential flows, whereas further research is still needed to fully integrate night-time observation and stereo-photogrammetric capabilities.
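
    The bias-removal step mentioned above amounts to a one-parameter least-squares cross-calibration; the toy example below fits a single gain between synthetic LSPIV discharges (biased about 20% low) and reference discharges and reports the mean absolute error before and after. The discharge values and bias level are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic reference discharges [m^3/s] and LSPIV estimates biased ~20% low.
      q_ref = rng.uniform(2.0, 40.0, size=60)
      q_lspiv = 0.8 * q_ref * (1.0 + 0.05 * rng.standard_normal(q_ref.size))

      def mae_percent(estimate):
          return 100.0 * np.mean(np.abs(estimate - q_ref) / q_ref)

      # One-parameter least-squares cross-calibration: q_ref ~ gain * q_lspiv.
      gain = np.sum(q_lspiv * q_ref) / np.sum(q_lspiv ** 2)
      q_calibrated = gain * q_lspiv

      print(f"gain = {gain:.3f}")
      print(f"MAE before: {mae_percent(q_lspiv):.1f}%   after: {mae_percent(q_calibrated):.1f}%")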

  7. Vortical and acoustical mode coupling inside a porous tube with uniform wall suction.

    PubMed

    Jankowski, T. A.; Majdalani, J.

    2005-06-01

    This paper considers the oscillatory motion of gases inside a long porous tube of the closed-open type. In particular, the focus is placed on describing an analytical solution for the internal acoustico-vortical coupling that arises in the presence of appreciable wall suction. This unsteady field is driven by longitudinal oscillatory waves that are triggered by small unavoidable fluctuations in the wall suction speed. Under the assumption of small amplitude oscillations, the time-dependent governing equations are linearized through a regular perturbation of the dependent variables. Further application of the Helmholtz vector decomposition theorem enables us to discriminate between acoustical and vortical equations. After solving the wave equation for the acoustical contribution, the boundary-driven vortical field is considered. The method of matched-asymptotic expansions is then used to obtain a closed-form solution for the unsteady momentum equation developing from flow decomposition. An exact series expansion is also derived and shown to coincide with the numerical solution for the problem. The numerically verified end results suggest that the asymptotic scheme is capable of providing a sufficiently accurate solution. This is due to the error associated with the matched-asymptotic expansion being smaller than the error introduced in the Navier-Stokes linearization. A basis for comparison is established by examining the evolution of the oscillatory field in both space and time. The corresponding boundary-layer behavior is also characterized over a range of oscillation frequencies and wall suction velocities. In general, the current solution is found to exhibit features that are consistent with the laminar theory of periodic flows. By comparison to the Sexl profile in nonporous tubes, the critically damped solution obtained here exhibits a slightly smaller overshoot and depth of penetration. These features may be attributed to the suction effect that tends to attract the shear layers closer to the wall.

  8. Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults.

    PubMed

    Li, Yongming; Tong, Shaocheng

    The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.

  9. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small-molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small-molecules was reduced using scaling methods with a correction of maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
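
    Single- or multiple-species simple allometry reduces to fitting CL = a·BW^b in log-log space and extrapolating to a 70 kg human; the sketch below shows the calculation with invented preclinical values, so the numbers carry no pharmacological meaning.

      import numpy as np

      # Hypothetical preclinical clearance data (mL/min) versus body weight (kg).
      body_weight = np.array([0.02, 0.25, 2.5, 12.0])   # mouse, rat, rabbit, monkey
      clearance   = np.array([1.2, 9.0, 55.0, 190.0])

      # Simple allometry: CL = a * BW^b, i.e., a straight line in log-log space.
      b, log_a = np.polyfit(np.log(body_weight), np.log(clearance), 1)
      a = np.exp(log_a)

      human_bw = 70.0
      cl_human = a * human_bw ** b
      print(f"fitted exponent b = {b:.2f}, predicted human CL = {cl_human:.0f} mL/min")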

  10. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    NASA Astrophysics Data System (ADS)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly damped forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results show good agreement with the corresponding numerical solution (considered to be exact) and also provide better results than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error (of the first-order approximate external frequency) in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  11. Problems Associated with Grid Convergence of Functionals

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.; Atkins, Harld L.

    2008-01-01

    The current use of functionals to evaluate order-of-convergence of a numerical scheme can lead to incorrect values. The problem comes about because of interplay between the errors from the evaluation of the functional, e.g., quadrature error, and from the numerical scheme discretization. Alternative procedures for deducing the order-property of a scheme are presented. The problem is studied within the context of the inviscid supersonic flow over a blunt body; however, the problem and solutions presented are not unique to this example.

  12. On Problems Associated with Grid Convergence of Functionals

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.; Atkins, Harold L.

    2009-01-01

    The current use of functionals to evaluate order-of-convergence of a numerical scheme can lead to incorrect values. The problem comes about because of interplay between the errors from the evaluation of the functional, e.g., quadrature error, and from the numerical scheme discretization. Alternative procedures for deducing the order property of a scheme are presented. The problems are studied within the context of the inviscid supersonic flow over a blunt body; however, the problems and solutions presented are not unique to this example.

  13. A Modeling Framework for Optimal Computational Resource Allocation Estimation: Considering the Trade-offs between Physical Resolutions, Uncertainty and Computational Costs

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.; Rajagopal, R.

    2014-12-01

    Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions, and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally extensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions that minimize the error for a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
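
    A schematic version of the trade-off described above (an illustrative form, not the authors' exact error model) can be written as

      \[ E(h, N) \;\approx\; C_1\, h^{p} \;+\; \frac{C_2}{\sqrt{N}} \qquad \text{subject to} \qquad N\, c(h) \;\le\; B, \]

    where h is the grid spacing, p the order of the spatial discretization, N the number of Monte Carlo realizations, c(h) the cost of a single realization (growing as h decreases, e.g. c(h) ∝ h^{-d} on a d-dimensional grid), and B the available computational budget; minimizing E(h, N) under the cost constraint yields the optimal pair (h, N).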

  14. Error in the determination of the deformed shape of prismatic beams using the double integration of curvature

    NASA Astrophysics Data System (ADS)

    Sigurdardottir, Dorotea H.; Stearns, Jett; Glisic, Branko

    2017-07-01

    The deformed shape is a consequence of loading the structure and it is defined by the shape of the centroid line of the beam after deformation. The deformed shape is a universal parameter of beam-like structures. It is correlated with the curvature of the cross-section; therefore, any unusual behavior that affects the curvature is reflected through the deformed shape. Excessive deformations cause user discomfort, damage to adjacent structural members, and may ultimately lead to issues in structural safety. However, direct long-term monitoring of the deformed shape in real-life settings is challenging, and an alternative is indirect determination of the deformed shape based on curvature monitoring. The challenge of the latter is an accurate evaluation of error in the deformed shape determination, which is directly correlated with the number of sensors needed to achieve the desired accuracy. The aim of this paper is to study the deformed shape evaluated by numerical double integration of the monitored curvature distribution along the beam, and to create a method to predict the associated errors and suggest the number of sensors needed to achieve the desired accuracy. The error due to the accuracy in the curvature measurement is evaluated within the scope of this work. Additionally, the error due to the numerical integration is evaluated. This error depends on the load case (i.e., the shape of the curvature diagram), the magnitude of curvature, and the density of the sensor network. The method is tested on a laboratory specimen and a real structure. In a laboratory setting, the double integration is in excellent agreement with the beam theory solution, with differences within the predicted error limits of the numerical integration. Consistent results are also achieved on a real structure—Streicker Bridge on Princeton University campus.
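
    The double-integration step itself can be sketched in a few lines. The snippet below is a minimal illustration under assumed conditions (a simply supported beam with zero deflection at both ends, evenly spaced curvature sensors, and an invented curvature profile), not the paper's full error-prediction method: curvature is integrated twice with the trapezoidal rule, and a linear correction enforces the support conditions.

      import numpy as np

      def deflection_from_curvature(x, kappa):
          # Integrate curvature once to get rotation, once more to get deflection,
          # then remove the linear term so both end deflections vanish (simply supported).
          theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
          w = np.concatenate(([0.0], np.cumsum(0.5 * (theta[1:] + theta[:-1]) * np.diff(x))))
          w -= w[-1] * x / x[-1]
          return w

      # Example: uniformly loaded, simply supported beam; curvature M/(EI) is parabolic.
      L, n_sensors = 10.0, 9                      # hypothetical span and sensor count
      x = np.linspace(0.0, L, n_sensors)
      kappa = -0.001 * x * (L - x)                # illustrative curvature "measurements"
      w = deflection_from_curvature(x, kappa)
      print("midspan deflection estimate:", w[n_sensors // 2])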

  15. Impact of spot charge inaccuracies in IMPT treatments.

    PubMed

    Kraan, Aafke C; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M

    2017-08-01

    Spot charge is one parameter of the pencil-beam scanning dose delivery system whose accuracy is typically high but whose required accuracy has not been investigated. In this work we quantify the dose impact of spot charge inaccuracies on the dose distribution in patients. Knowing the effect of charge errors is relevant for conventional proton machines, as well as for new-generation proton machines, where ensuring accurate charge may be challenging. Through perturbation of spot charge in treatment plans for seven patients and a phantom, we evaluated the dose impact of absolute (up to 5 × 10⁶ protons) and relative (up to 30%) charge errors. We investigated the dependence on beam width by studying scenarios with small, medium and large beam sizes. Treatment plan statistics included the Γ passing rate, dose-volume histograms and dose differences. The allowable absolute charge error for small spot plans was about 2 × 10⁶ protons; larger limits would be allowed if larger spots were used. For relative errors, the maximum allowable error size was about 13%, 8% and 6% for small, medium and large spots, respectively. Dose distributions turned out to be surprisingly robust against random spot charge perturbation. Our study suggests that ensuring spot charge errors as small as 1-2%, as is commonly aimed at in conventional proton therapy machines, is not strictly needed clinically. © 2017 American Association of Physicists in Medicine.
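
    A toy version of the perturbation experiment can be written down directly, assuming a linear dose model D = A·w with a hypothetical influence matrix A and invented nominal spot charges; it only illustrates how relative charge errors propagate to dose differences, not the clinical planning system or the Γ analysis used in the study.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical linear dose model: dose at each voxel is a weighted sum of spot charges.
      n_voxels, n_spots = 500, 200
      A = rng.random((n_voxels, n_spots)) * 1e-7      # Gy per proton, illustrative
      w = rng.uniform(1e6, 5e7, n_spots)              # nominal spot charges (protons)

      dose_nominal = A @ w

      # Apply independent relative charge errors of a given size to every spot.
      for rel_err in (0.01, 0.06, 0.13):
          w_perturbed = w * (1.0 + rng.uniform(-rel_err, rel_err, n_spots))
          dose = A @ w_perturbed
          max_diff = np.max(np.abs(dose - dose_nominal)) / dose_nominal.max()
          print(f"relative error {rel_err:.0%}: max dose difference {max_diff:.2%} of max dose")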

  16. 3-D direct current resistivity anisotropic modelling by goal-oriented adaptive finite element methods

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong

    2018-01-01

    Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available even for complicated models with topography, reliable numerical solvers for the anisotropic case are still an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic case. Combining goal-oriented and non-goal-oriented mesh refinements with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions verifies the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density shows better performance than the gradient-recovery-based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators working together with goal-oriented concepts can offer optimal mesh density distributions and highly accurate solutions.
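
    The idea behind the second estimator can be illustrated in one dimension. The sketch below is illustrative only, not the 3-D finite-element code: it computes the element-wise current density J = -σ du/dx from nodal potentials and uses the jump of J across interior nodes as a refinement indicator; the mesh, conductivities, and potential values are invented.

      import numpy as np

      def current_jump_indicator(x, u, sigma):
          du = np.diff(u) / np.diff(x)          # element-wise potential gradient
          J = -sigma * du                       # element-wise current density
          return np.abs(np.diff(J))             # jump of J across each interior node

      x = np.linspace(0.0, 1.0, 11)
      sigma = np.where(x[:-1] < 0.5, 1.0, 10.0)     # conductivity contrast at x = 0.5
      u = np.linspace(1.0, 0.0, 11)                 # crude potential guess (not the true solution)
      eta = current_jump_indicator(x, u, sigma)
      print("refine near node", np.argmax(eta) + 1) # largest jump marks where to refine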

  17. New Numerical Approaches To thermal Convection In A Compositionally Stratified Fluid

    NASA Astrophysics Data System (ADS)

    Puckett, E. G.; Turcotte, D. L.; Kellogg, L. H.; Lokavarapu, H. V.; He, Y.; Robey, J.

    2016-12-01

    Seismic imaging of the mantle has revealed large- and small-scale heterogeneities in the lower mantle; specifically, structures known as large low shear velocity provinces (LLSVPs) below Africa and the South Pacific. Most interpretations propose that the heterogeneities are compositional in nature, differing from the overlying mantle, an interpretation that would be consistent with chemical geodynamic models. The LLSVPs are thought to be very old, meaning they have persisted throughout much of Earth's history. Numerical modeling of persistent compositional interfaces presents challenges to even state-of-the-art numerical methodology. It is extremely difficult to maintain sharp composition boundaries, which migrate and distort with time-dependent fingering, without compositional diffusion and/or artificial diffusion. The compositional boundary must persist indefinitely. In this work we present computations of an initially compositionally stratified fluid that is subject to a thermal gradient ΔT = T1 - T0 across the height D of a rectangular domain over a range of buoyancy numbers B and Rayleigh numbers Ra. In these computations we compare three numerical approaches to modeling the movement of two distinct, thermally driven, compositional fields; namely, a high-order Finite Element Method (FEM) that employs artificial viscosity to preserve the maximum and minimum values of the compositional field, a Discontinuous Galerkin (DG) method with a Bound Preserving (BP) limiter, and a Volume-of-Fluid (VOF) interface tracking algorithm. Our computations demonstrate that the FEM approach has far too much numerical diffusion to yield meaningful results, the DGBP method yields much better results but with small amounts of each compositional field being (numerically) entrained within the other compositional field, while the VOF method maintains a sharp interface between the two compositions throughout the computation. In the figure we show a comparison between the three methods for a computation made with B = 1.111 and Ra = 10,000 after the flow has reached 'steady state': (R) the images computed with the standard FEM method (with artificial viscosity), (C) the images computed with the DGBP method (with no artificial viscosity or diffusion due to discretization errors) and (L) the images computed with the VOF algorithm.

  18. State estimation for autopilot control of small unmanned aerial vehicles in windy conditions

    NASA Astrophysics Data System (ADS)

    Poorman, David Paul

    The use of small unmanned aerial vehicles (UAVs) in both the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large-scale aircraft have been well known and understood for decades, and usually involve a complex array of expensive high-accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than for larger UAVs because small UAVs employ limited sensor suites to reduce cost and are more susceptible to wind than large aircraft. The purpose of this research is to evaluate the ability of existing methods of state estimation for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control of the aircraft in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings-level ascent. It is shown that in zero wind, the first method produces significant steady-state attitude errors in both a coordinated turn and a wings-level ascent. In Dryden wind, it produces large noise on the estimates of its attitude states and has a non-zero mean error that increases when gyro bias is increased. The second method is shown not to exhibit any steady-state error inherent to its design in the tested scenarios. It can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from a lack of attitude error observability. The attitude errors are shown to be more observable in wind, but increased integration error in wind outweighs the increase in attitude corrections that such increased observability brings, resulting in larger attitude errors in wind. Overall, this work highlights technical deficiencies of both of these methods of state estimation that could be improved upon in the future to enhance state estimation for small UAVs in windy conditions.

  19. Numerical modeling of the thermoelectric cooler with a complementary equation for heat circulation in air gaps

    NASA Astrophysics Data System (ADS)

    Fang, En; Wu, Xiaojie; Yu, Yuesen; Xiu, Junrui

    2017-03-01

    In this paper, a numerical model is developed by combining thermodynamics with heat transfer theory. Taking internal and external multi-irreversibility into account, the model includes a complementary equation for heat circulation in the air gaps of a steady cooling system with commercial thermoelectric modules operating in refrigeration mode. With the two modes concerned, the equation describes the heat flowing through the air gaps, which forms heat circulations between the two sides of the thermoelectric coolers (TECs). In the numerical modelling, a TEC is represented as two temperature-controlled constant heat flux reservoirs in a thermal resistance network. In order to obtain the parameter values, an experimental apparatus with a commercial thermoelectric cooler was built to characterize the performance of a TEC with a heat source and sink assembly. At constant power dissipation, steady temperatures of the heat source and of both sides of the thermoelectric cooler were compared with those from a standard numerical model. The method showed that the relationship between Φ_f and the ratio Φ′_c/Φ_c was linear, as expected. Then, to verify the accuracy of the proposed numerical model, data from another system were recorded. It is evident that the experimental results are in good agreement with the simulation (proposed model) data at different heat transfer rates. The error is small and results mainly from the instabilities of the thermal resistances with temperature change and heat flux, heat loss from the vertical surfaces of the device, and measurement uncertainty.

  20. Detecting genotyping errors and describing black bear movement in northern Idaho

    Treesearch

    Michael K. Schwartz; Samuel A. Cushman; Kevin S. McKelvey; Jim Hayden; Cory Engkjer

    2006-01-01

    Non-invasive genetic sampling has become a favored tool to enumerate wildlife. Genetic errors, caused by poor quality samples, can lead to substantial biases in numerical estimates of individuals. We demonstrate how the computer program DROPOUT can detect amplification errors (false alleles and allelic dropout) in a black bear (Ursus americanus) dataset collected in...

  1. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…

  2. Estimates of fetch-induced errors in Bowen-ratio energy-budget measurements of evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.

    2004-01-01

    Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.

  3. A Fast and Efficient Version of the TwO-Moment Aerosol Sectional (TOMAS) Global Aerosol Microphysics Model

    NASA Technical Reports Server (NTRS)

    Lee, Yunha; Adams, P. J.

    2012-01-01

    This study develops more computationally efficient versions of the TwO-Moment Aerosol Sectional (TOMAS) microphysics algorithms, collectively called Fast TOMAS. Several methods for speeding up the algorithm were attempted, but only reducing the number of size sections was adopted. Fast TOMAS models, coupled to the GISS GCM II-prime, require a new coagulation algorithm with less restrictive size resolution assumptions but only minor changes in other processes. Fast TOMAS models have been evaluated in a box model against analytical solutions of coagulation and condensation and in a 3-D model against the original TOMAS (TOMAS-30) model. Condensation and coagulation in the Fast TOMAS models agree well with the analytical solution but show slightly more bias than the TOMAS-30 box model. In the 3-D model, errors resulting from decreased size resolution in each process (i.e., emissions, cloud processing/wet deposition, microphysics) are quantified in a series of model sensitivity simulations. Errors resulting from lower size resolution in condensation and coagulation, defined as the microphysics error, affect number and mass concentrations by only a few percent. The microphysics error in CN70/CN100 (number concentrations of particles larger than 70/100 nm diameter), proxies for cloud condensation nuclei, ranges from -5 to 5% in most regions. The largest errors are associated with decreasing the size resolution in the cloud processing/wet deposition calculations, defined as the cloud-processing error, and range from -20 to 15% in most regions for CN70/CN100 concentrations. Overall, the Fast TOMAS models increase the computational speed by 2 to 3 times with only small numerical errors stemming from condensation and coagulation calculations when compared to TOMAS-30. The faster versions of the TOMAS model allow for the longer, multi-year simulations required to assess aerosol effects on cloud lifetime and precipitation.

  4. Computation of aerodynamic interference effects on oscillating airfoils with controls in ventilated subsonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Golberg, M. A.

    1979-01-01

    Lift interference effects are discussed based on Bland's (1968) integral equation. A mathematical existence theory is utilized for which convergence of the numerical method has been proved for general (square-integrable) downwashes. Airloads are computed using orthogonal airfoil polynomial pairs in conjunction with a collocation method which is numerically equivalent to Galerkin's method and complex least squares. Convergence exhibits exponentially decreasing error with the number n of collocation points for smooth downwashes, whereas errors are proportional to 1/n for discontinuous downwashes. The latter can be reduced to 1/n^(m+1) with mth-order Richardson extrapolation (by using m = 2, hundredfold error reductions were obtained with only a 13% increase of computer time). Numerical results are presented showing acoustic resonance, as well as the effect of Mach number, ventilation, height-to-chord ratio, and mode shape on wind-tunnel interference. Excellent agreement with experiment is obtained in steady flow, and good agreement is obtained for unsteady flow.

  5. Finite-difference time-domain modelling of through-the-Earth radio signal propagation

    NASA Astrophysics Data System (ADS)

    Ralchenko, M.; Svilans, M.; Samson, C.; Roper, M.

    2015-12-01

    This research seeks to extend the knowledge of how a very low frequency (VLF) through-the-Earth (TTE) radio signal behaves as it propagates underground, by calculating and visualizing the strength of the electric and magnetic fields for an arbitrary geology through numeric modelling. To achieve this objective, a new software tool has been developed using the finite-difference time-domain method. This technique is particularly well suited to visualizing the distribution of electromagnetic fields in an arbitrary geology. The frequency range of TTE radio (400-9000 Hz) and geometrical scales involved (1 m resolution for domains a few hundred metres in size) involves processing a grid composed of millions of cells for thousands of time steps, which is computationally expensive. Graphics processing unit acceleration was used to reduce execution time from days and weeks, to minutes and hours. Results from the new modelling tool were compared to three cases for which an analytic solution is known. Two more case studies were done featuring complex geologic environments relevant to TTE communications that cannot be solved analytically. There was good agreement between numeric and analytic results. Deviations were likely caused by numeric artifacts from the model boundaries; however, in a TTE application in field conditions, the uncertainty in the conductivity of the various geologic formations will greatly outweigh these small numeric errors.

  6. High Order Numerical Methods for the Investigation of the Two Dimensional Richtmyer-Meshkov Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Don, W-S; Gotllieb, D; Shu, C-W

    2001-11-26

    For flows that contain significant structure, high order schemes offer large advantages over low order schemes. Fundamentally, the reason comes from the truncation error of the differencing operators. If one examines the expression for the truncation error carefully, one will see that for a fixed computational cost the error can be made much smaller by increasing the numerical order than by increasing the number of grid points. One can readily derive the following expression, which holds for systems dominated by hyperbolic effects and advanced explicitly in time: flops = const * p^2 * k^((d+1)(p+1)/p) / E^((d+1)/p), where flops denotes floating point operations, p denotes numerical order, d denotes spatial dimension, E denotes the truncation error of the difference operator, and k denotes the Fourier wavenumber. For flows that contain structure, such as turbulent flows or any calculation where, say, vortices are present, there will be significant energy in the high values of k. Thus, one can see that the rate of growth of the flops is very different for different values of p. Further, the constant in front of the expression is also very different. With a low order scheme, one quickly reaches the limit of the computer. With the high order scheme, one can obtain far more modes before the limit of the computer is reached. Here we examine the application of spectral methods and the Weighted Essentially Non-Oscillatory (WENO) scheme to the Richtmyer-Meshkov Instability. We show the intricate structure that these high order schemes can calculate and we show that the two methods, though very different, converge to the same numerical solution, indicating that the numerical solution is very likely physically correct.
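
    The cost expression quoted above can be evaluated directly to see how quickly a low-order scheme becomes uncompetitive. In the sketch below the constant and the target truncation error E are arbitrary, so only the relative growth with wavenumber k is meaningful.

      def flops(p, k, E=1e-6, d=3, const=1.0):
          # flops = const * p^2 * k^((d+1)(p+1)/p) / E^((d+1)/p), as given in the abstract.
          return const * p**2 * k ** ((d + 1) * (p + 1) / p) / E ** ((d + 1) / p)

      for k in (8, 32, 128):
          ratio = flops(2, k) / flops(8, k)
          print(f"k = {k:4d}: cost of 2nd order relative to 8th order ~ {ratio:.2e}")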

  7. The problem of complex eigensystems in the semianalytical solution for advancement of time in solute transport simulations: a new method using real arithmetic

    USGS Publications Warehouse

    Umari, Amjad M.J.; Gorelick, Steven M.

    1986-01-01

    In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
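
    As an illustration of the underlying point (not the paper's algorithm), the snippet below advances dc/dt = A c to an arbitrary time for a nonsymmetric, advection-like coefficient matrix in two ways: through the (generally complex) eigensystem, and entirely in real arithmetic through SciPy's matrix exponential; the matrix, initial state, and time are invented.

      import numpy as np
      from scipy.linalg import expm

      rng = np.random.default_rng(1)

      # Nonsymmetric (advection-dominated) coefficient matrix whose eigensystem may be complex.
      n = 6
      A = -np.eye(n) + 0.8 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
      c0 = rng.random(n)
      t = 2.5

      # Complex-arithmetic route: eigendecomposition of A.
      lam, V = np.linalg.eig(A)
      c_eig = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ c0).real

      # Real-arithmetic route: matrix exponential, no complex eigensystem needed.
      c_real = expm(A * t) @ c0

      print("max difference between the two routes:", np.max(np.abs(c_eig - c_real)))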

  8. Robust estimation-free prescribed performance back-stepping control of air-breathing hypersonic vehicles without affine models

    NASA Astrophysics Data System (ADS)

    Bu, Xiangwei; Wu, Xiaoyan; Huang, Jiaqi; Wei, Daozhi

    2016-11-01

    This paper investigates the design of a novel estimation-free prescribed performance non-affine control strategy for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV) via back-stepping. The proposed control scheme is capable of guaranteeing tracking errors of velocity, altitude, flight-path angle, pitch angle and pitch rate with prescribed performance. By prescribed performance, we mean that the tracking error is limited to a predefined arbitrarily small residual set, with convergence rate no less than a certain constant, exhibiting maximum overshoot less than a given value. Unlike traditional back-stepping designs, there is no need of an affine model in this paper. Moreover, both the tedious analytic and numerical computations of time derivatives of virtual control laws are completely avoided. In contrast to estimation-based strategies, the presented estimation-free controller possesses much lower computational costs, while successfully eliminating the potential problem of parameter drifting. Owing to its independence on an accurate AHV model, the studied methodology exhibits excellent robustness against system uncertainties. Finally, simulation results from a fully nonlinear model clarify and verify the design.

  9. Mixed finite-difference scheme for analysis of simply supported thick plates.

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1973-01-01

    A mixed finite-difference scheme is presented for the stress and free vibration analysis of simply supported nonhomogeneous and layered orthotropic thick plates. The analytical formulation is based on the linear, three-dimensional theory of orthotropic elasticity and a Fourier approach is used to reduce the governing equations to six first-order ordinary differential equations in the thickness coordinate. The governing equations possess a symmetric coefficient matrix and are free of derivatives of the elastic characteristics of the plate. In the finite difference discretization two interlacing grids are used for the different fundamental unknowns in such a way as to reduce both the local discretization error and the bandwidth of the resulting finite-difference field equations. Numerical studies are presented for the effects of reducing the interior and boundary discretization errors and of mesh refinement on the accuracy and convergence of solutions. It is shown that the proposed scheme, in addition to a number of other advantages, leads to highly accurate results, even when a small number of finite difference intervals is used.

  10. The Kaon B-parameter in mixed action chiral perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aubin, C.; /Columbia U.; Laiho, Jack

    2006-09-01

    We calculate the kaon B-parameter, B_K, in chiral perturbation theory for a partially quenched, mixed action theory with Ginsparg-Wilson valence quarks and staggered sea quarks. We find that the resulting expression is similar to that in the continuum, and in fact has only two additional unknown parameters. At one-loop order, taste-symmetry violations in the staggered sea sector only contribute to flavor-disconnected diagrams by generating an O(a²) shift to the masses of taste-singlet sea-sea mesons. Lattice discretization errors also give rise to an analytic term which shifts the tree-level value of B_K by an amount of O(a²). This term, however, is not strictly due to taste-breaking, and is therefore also present in the expression for B_K for pure G-W lattice fermions. We also present a numerical study of the mixed B_K expression in order to demonstrate that both discretization errors and finite volume effects are small and under control on the MILC improved staggered lattices.

  11. Kaon B-parameter in mixed action chiral perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aubin, C.; Laiho, Jack; Water, Ruth S. van de

    2007-02-01

    We calculate the kaon B-parameter, B_K, in chiral perturbation theory for a partially quenched, mixed-action theory with Ginsparg-Wilson valence quarks and staggered sea quarks. We find that the resulting expression is similar to that in the continuum, and in fact has only two additional unknown parameters. At 1-loop order, taste-symmetry violations in the staggered sea sector only contribute to flavor-disconnected diagrams by generating an O(a²) shift to the masses of taste-singlet sea-sea mesons. Lattice discretization errors also give rise to an analytic term which shifts the tree-level value of B_K by an amount of O(a²). This term, however, is not strictly due to taste breaking, and is therefore also present in the expression for B_K for pure Ginsparg-Wilson lattice fermions. We also present a numerical study of the mixed B_K expression in order to demonstrate that both discretization errors and finite volume effects are small and under control on the MILC improved staggered lattices.

  12. Why does MP2 work?

    PubMed

    Fink, Reinhold F

    2016-11-14

    We show analytically and numerically that the performance of second order Møller-Plesset (MP) perturbation theory (PT), coupled-cluster (CC) theory, and other perturbation theory approaches can be rationalized by analyzing the wavefunctions of these methods. While rather large deviations for the individual contributions of configurations to the electron correlation energy are found for MP wavefunctions, they profit from an advantageous and robust error cancellation: The absolute contribution to the correlation energy is generally underestimated for the critical excitations with small energy denominators and all other doubly excited configurations where the two excited electrons are coupled to a singlet. This is balanced by an overestimation of the contribution of triplet-coupled double excitations to the correlation energy. The even better performance of spin-component-scaled-MP2 theory is explained by a similar error compensation effect. The wavefunction analysis for the lowest singlet states of H₂O, CH₂, CO, and Cu⁺ shows the predicted trends for MP methods, rapid but biased convergence of CC theory as well as the substantial potential of linearized CC, or retaining the excitation-degree (RE)-PT.

  13. Multipath analysis diffraction calculations

    NASA Technical Reports Server (NTRS)

    Statham, Richard B.

    1996-01-01

    This report describes extensions of the Kirchhoff diffraction equation to higher edge terms and discusses their suitability to model diffraction multipath effects of a small satellite structure. When receiving signals, at a satellite, from the Global Positioning System (GPS), reflected signals from the satellite structure result in multipath errors in the determination of the satellite position. Multipath error can be caused by diffraction of the reflected signals and a method of calculating this diffraction is required when using a facet model of the satellite. Several aspects of the Kirchhoff equation are discussed and numerical examples, in the near and far fields, are shown. The vector form of the extended Kirchhoff equation, by adding the Larmor-Tedone and Kottler edge terms, is given as a mathematical model in an appendix. The Kirchhoff equation was investigated as being easily implemented and of good accuracy in the basic form, especially in phase determination. The basic Kirchhoff can be extended for higher accuracy if desired. A brief discussion of the method of moments and the geometric theory of diffraction is included, but seems to offer no clear advantage in implementation over the Kirchhoff for facet models.

  14. Quantum computation with realistic magic-state factories

    NASA Astrophysics Data System (ADS)

    O'Gorman, Joe; Campbell, Earl T.

    2017-03-01

    Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10⁻⁴ error gates and the availability of long-range interactions.

  15. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    NASA Astrophysics Data System (ADS)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2018-04-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height, at real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 × 50 km²) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of synthetic waveforms corresponding to several seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitude greater than 1 m: in the case of the Arica tide station an error (in the maximum height of the observed versus simulated waveform) of 3.5% was obtained, for the Callao station the error was 12%, and the largest error was at Chimbote with 53.5%; however, due to the low amplitude of the Chimbote wave (<1 m), the overestimated error in this case is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.
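
    The superposition step can be sketched as follows; the two "unit-source" waveforms, the slip weights, and the detection threshold are hypothetical stand-ins for the pre-computed database, included only to show how a forecast waveform and its arrival time and maximum height would be assembled.

      import numpy as np

      def forecast_waveform(green_functions, slips):
          # green_functions: array (n_sources, n_times) of pre-simulated unit-source waveforms
          # slips: array (n_sources,) of slip amounts relative to the unit source
          return np.tensordot(slips, green_functions, axes=1)

      t = np.linspace(0.0, 7200.0, 721)                               # two hours, 10 s sampling
      g1 = 0.3 * np.sin(2 * np.pi * (t - 1800) / 1200) * (t > 1800)   # hypothetical unit source 1
      g2 = 0.2 * np.sin(2 * np.pi * (t - 2400) / 1500) * (t > 2400)   # hypothetical unit source 2
      eta = forecast_waveform(np.vstack([g1, g2]), np.array([2.0, 1.5]))

      print("arrival time (s):", t[np.argmax(np.abs(eta) > 0.05)])
      print("maximum wave height (m):", eta.max())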

  16. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    NASA Astrophysics Data System (ADS)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2017-09-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height, at real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 × 50 km²) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of synthetic waveforms corresponding to several seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitude greater than 1 m: in the case of the Arica tide station an error (in the maximum height of the observed versus simulated waveform) of 3.5% was obtained, for the Callao station the error was 12%, and the largest error was at Chimbote with 53.5%; however, due to the low amplitude of the Chimbote wave (<1 m), the overestimated error in this case is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.

  17. Effects of small variations of speed of sound in optoacoustic tomographic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deán-Ben, X. Luís; Ntziachristos, Vasilis; Razansky, Daniel, E-mail: dr@tum.de

    2014-07-15

    Purpose: Speed of sound difference in the imaged object and surrounding coupling medium may reduce the resolution and overall quality of optoacoustic tomographic reconstructions obtained by assuming a uniform acoustic medium. In this work, the authors investigate the effects of acoustic heterogeneities and discuss potential benefits of accounting for those during the reconstruction procedure. Methods: The time shift of optoacoustic signals in an acoustically heterogeneous medium is studied theoretically by comparing different continuous and discrete wave propagation models. A modification of filtered back-projection reconstruction is subsequently implemented by considering a straight acoustic rays model for ultrasound propagation. The results obtained with this reconstruction procedure are compared numerically and experimentally to those obtained assuming a heuristically fitted uniform speed of sound in both full-view and limited-view optoacoustic tomography scenarios. Results: The theoretical analysis showcases that the errors in the time-of-flight of the signals predicted by considering the straight acoustic rays model tend to be generally small. When using this model for reconstructing simulated data, the resulting images accurately represent the theoretical ones. On the other hand, significant deviations in the location of the absorbing structures are found when using a uniform speed of sound assumption. The experimental results obtained with tissue-mimicking phantoms and a mouse postmortem are found to be consistent with the numerical simulations. Conclusions: Accurate analysis of effects of small speed of sound variations demonstrates that accounting for differences in the speed of sound allows improving optoacoustic reconstruction results in realistic imaging scenarios involving acoustic heterogeneities in tissues and surrounding media.

  18. Designing an algorithm to preserve privacy for medical record linkage with error-prone data.

    PubMed

    Pal, Doyel; Chen, Tingting; Zhong, Sheng; Khethavath, Praveen

    2014-01-20

    Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In records linkage, protecting the patients' privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are only restricted to textual data. To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, that allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulation of Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other's database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other's database.
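
    The matching rule at the heart of the Error-Tolerant Linking Algorithm, stripped of the privacy-preserving machinery that is the paper's actual contribution, reduces to a distance threshold on numerical attributes. The records, attributes, and threshold in the sketch below are hypothetical.

      import numpy as np

      def error_tolerant_match(record_a, record_b, threshold):
          # Declare a match when the distance between the numerical attributes is below a threshold.
          distance = np.linalg.norm(np.asarray(record_a, float) - np.asarray(record_b, float))
          return distance <= threshold

      # Hypothetical records: (age, systolic blood pressure, weight in kg); the second copy
      # contains small data-entry errors but should still link to the first.
      alice_provider_1 = (54, 128, 81.0)
      alice_provider_2 = (54, 127, 81.4)
      bob_provider_2 = (37, 141, 95.2)

      print(error_tolerant_match(alice_provider_1, alice_provider_2, threshold=2.0))  # True
      print(error_tolerant_match(alice_provider_1, bob_provider_2, threshold=2.0))    # False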

  19. Advanced technology development multi-color holography

    NASA Technical Reports Server (NTRS)

    Vikram, Chandra S.

    1994-01-01

    Several key aspects of multi-color holography and some non-conventional ways to study the holographic reconstructions are considered. The error analysis of three-color holography is considered in detail with the particular example of a typical triglycine sulfate crystal growth situation. For the numerical analysis of the fringe patterns, a new algorithm is introduced with experimental verification using sugar-water solution. The role of the phase difference among component holograms is also critically considered with examples of several two- and three-color situations. The status of experimentation on two-color holography and fabrication of a small breadboard system is also reported. Finally, some successful demonstrations of unconventional ways to study holographic reconstructions are described. These methods are deflectometry and confocal optical processing using some Spacelab III holograms.

  20. Properties of Augmented Kohn-Sham Potential for Energy as Simple Sum of Orbital Energies.

    PubMed

    Zahariev, Federico; Levy, Mel

    2017-01-12

    A recent modification to the traditional Kohn-Sham method (Levy, M.; Zahariev, F. Phys. Rev. Lett. 2014, 113, 113002; Levy, M.; Zahariev, F. Mol. Phys. 2016, 114, 1162-1164), which gives the ground-state energy as a direct sum of the occupied orbital energies, is discussed and its properties are numerically illustrated on representative atoms and ions. It is observed that current approximate density functionals tend to give surprisingly small errors for the highest occupied orbital energies that are obtained with the augmented potential. The appropriately shifted Kohn-Sham potential is the basic object within this direct-energy Kohn-Sham method and needs to be approximated. To facilitate approximations, several constraints on the augmented Kohn-Sham potential are presented.

  1. Optical tomography for flow visualization of the density field around a revolving helicopter rotor blade

    NASA Technical Reports Server (NTRS)

    Snyder, R.; Hesselink, L.

    1984-01-01

    In this paper, a tomographic procedure for reconstructing the density field around a helicopter rotor blade tip from remote optical line-of-sight measurements is discussed. Numerical model studies have been carried out to investigate the influence of the number of available views, limited width viewing, and ray bending on the reconstruction. Performance is measured in terms of the mean-square error. It is found that very good reconstructions can be obtained using only a small number of views even when the width of view is smaller than the spatial extent of the object. An iterative procedure is used to correct for ray bending due to refraction associated with the sharp density gradients (shocks).

  2. A proportional integral estimator-based clock synchronization protocol for wireless sensor networks.

    PubMed

    Yang, Wenlun; Fu, Minyue

    2017-11-01

    Clock synchronization is an issue of vital importance in applications of WSNs. This paper proposes a proportional integral estimator-based protocol (EBP) to achieve clock synchronization for wireless sensor networks. As each local clock skew gradually drifts, synchronization accuracy will decline over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, since a fully synchronous protocol is unrealistic in practice. Numerical simulations are shown to illustrate the performance of the proposed protocol. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
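
    A single-node toy version of the proportional integral idea is sketched below; the gains, drift magnitude, and measurement noise are invented, and the actual protocol operates in a distributed, consensus-like fashion that this sketch does not reproduce.

      import random

      random.seed(0)
      kp, ki = 0.5, 0.1                      # illustrative proportional and integral gains
      skew_true, skew_est, integral = 1.0001, 1.0, 0.0

      for step in range(200):
          skew_true += 1e-7 * random.gauss(0.0, 1.0)                     # time-varying skew drift
          error = skew_true - skew_est + 1e-6 * random.gauss(0.0, 1.0)   # noisy synchronization error
          integral += error
          skew_est += kp * error + ki * integral                         # PI correction of the estimate

      print(f"final skew estimation error: {abs(skew_true - skew_est):.2e}")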

  3. Fractional-order gradient descent learning of BP neural networks with Caputo derivative.

    PubMed

    Wang, Jian; Wen, Yanqing; Gou, Yida; Ye, Zhenyun; Chen, Hua

    2017-05-01

    Fractional calculus has been found to be a promising area of research for information processing and modeling of some physical systems. In this paper, we propose a fractional gradient descent method for the backpropagation (BP) training of neural networks. In particular, the Caputo derivative is employed to evaluate the fractional-order gradient of the error defined as the traditional quadratic energy function. The monotonicity and weak (strong) convergence of the proposed approach are proved in detail. Two simulations have been implemented to illustrate the performance of presented fractional-order BP algorithm on three small datasets and one large dataset. The numerical simulations effectively verify the theoretical observations of this paper as well. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Comparison of MRI segmentation techniques for measuring liver cyst volumes in autosomal dominant polycystic kidney disease.

    PubMed

    Farooq, Zerwa; Behzadi, Ashkan Heshmatzadeh; Blumenfeld, Jon D; Zhao, Yize; Prince, Martin R

    To compare MRI segmentation methods for measuring liver cyst volumes in autosomal dominant polycystic kidney disease (ADPKD), liver cyst volumes in 42 ADPKD patients were measured using region growing, thresholding and cyst diameter techniques. Manual segmentation was the reference standard. Root mean square deviation was 113, 155, and 500 for cyst diameter, thresholding and region growing, respectively. The thresholding error for cyst volumes below 500 ml was 550% vs 17% for cyst volumes above 500 ml (p<0.001). For measuring the volume of a small number of cysts, the cyst diameter and manual segmentation methods are recommended. For severe disease with numerous, large hepatic cysts, thresholding is an acceptable alternative. Copyright © 2017 Elsevier Inc. All rights reserved.
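
    The comparison metric itself is straightforward; the sketch below computes the root mean square deviation of each technique against manual segmentation using hypothetical volumes, not the study's patient data.

      import numpy as np

      def rmsd(estimates, reference):
          # Root mean square deviation of a technique's volumes against the reference standard.
          estimates, reference = np.asarray(estimates, float), np.asarray(reference, float)
          return np.sqrt(np.mean((estimates - reference) ** 2))

      manual = [120.0, 850.0, 40.0, 2300.0]          # reference volumes (ml), illustrative
      cyst_diameter = [135.0, 790.0, 55.0, 2180.0]
      thresholding = [310.0, 820.0, 95.0, 2350.0]

      print("cyst diameter RMSD:", round(rmsd(cyst_diameter, manual), 1))
      print("thresholding RMSD:", round(rmsd(thresholding, manual), 1))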

  5. Boundary control for a constrained two-link rigid-flexible manipulator with prescribed performance

    NASA Astrophysics Data System (ADS)

    Cao, Fangfei; Liu, Jinkun

    2018-05-01

    In this paper, we consider a boundary control problem for a constrained two-link rigid-flexible manipulator. The nonlinear system is described by hybrid ordinary differential equation-partial differential equation (ODE-PDE) dynamic model. Based on the coupled ODE-PDE model, boundary control is proposed to regulate the joint positions and eliminate the elastic vibration simultaneously. With the help of prescribed performance functions, the tracking error can converge to an arbitrarily small residual set and the convergence rate is no less than a certain pre-specified value. Asymptotic stability of the closed-loop system is rigorously proved by the LaSalle's Invariance Principle extended to infinite-dimensional system. Numerical simulations are provided to demonstrate the effectiveness of the proposed controller.

  6. How long will asteroids on retrograde orbits survive?

    NASA Astrophysics Data System (ADS)

    Kankiewicz, Paweł; Włodarczyk, Ireneusz

    2018-05-01

    Generally, a common scenario for the origin of minor planets with high orbital inclinations does not exist. This applies especially to objects whose orbital inclinations are much greater than 90° (retrograde asteroids). Since the discovery of Dioretsa in 1999, approximately 100 small bodies have been classified as retrograde asteroids. A small number of them were reclassified as comets, due to cometary activity. There are only 25 multi-opposition retrograde asteroids, with a relatively large number of observations and well-determined orbits. We studied the orbital evolution of numbered and multi-opposition retrograde asteroids by numerical integration up to 1 Gy forward and backward in time. Additionally, we analyzed the propagation of orbital elements with the observational errors, determined dynamical lifetimes and studied their chaotic properties. In conclusion, we obtained quantitative parameters describing the long-term stability of the orbits relating to the past and the future. In turn, we were able to estimate their lifetimes and how long these objects will survive in the Solar System.

  7. Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.

    2004-01-01

    The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.

  8. Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal

    ERIC Educational Resources Information Center

    Tian, Wei; Cai, Li; Thissen, David; Xin, Tao

    2013-01-01

    In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…

  9. Effects of data selection on the assimilation of AIRS data

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Brin, E.; Treadon, R.; Derber, J.; VanDelst, P.; DeSilva, A.; Marshall, J. Le; Poli, P.; Atlas, R.; Cruz, C.

    2006-01-01

    The Atmospheric InfraRed Sounder (AIRS), flying aboard NASA's Earth Observing System (EOS) Aqua satellite with the Advanced Microwave Sounding Unit-A (AMSU-A), has been providing data for use in numerical weather prediction (NWP) and data assimilation systems (DAS) for over three years. The full AIRS data set is currently not transmitted in near-real-time (NRT) to the NWP centers. Instead, data sets with reduced spatial and spectral information are produced and made available in NRT. In this paper, we evaluate the use of different channel selections and error specifications. We achieved significant positive impact from the Aqua AIRS/AMSU-A combination in both hemispheres during our experimental time period of January 2003. The best results were obtained using a set of 156 channels that did not include any in the 6.7micron water vapor band. The latter have a large influence on both temperature and humidity analyses. If observation and background errors are not properly specified, the partitioning of temperature and humidity information from these channels will not be correct, and this can lead to a degradation in forecast skill. We found that changing the specified channel errors had a significant effect on the amount of data that entered into the analysis as a result of quality control thresholds that are related to the errors. However, changing the channel errors within a relatively small window did not significantly impact forecast skill with the 155 channel set. We also examined the effects of different types of spatial data reduction on assimilated data sets and NWP forecast skill. Whether we picked the center or the warmest AIRS pixel in a 3x3 array affected the amount of data ingested by the analysis but had a negligible impact on the forecast skill.

  10. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  11. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  12. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  13. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  14. Errors in finite-difference computations on curvilinear coordinate systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. W.; Thompson, J. F.

    1980-01-01

    Curvilinear coordinate systems were used extensively to solve partial differential equations on arbitrary regions. An analysis of truncation error in the computation of derivatives revealed why numerical results may be erroneous. A more accurate method of computing derivatives is presented.
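    The kind of error discussed in this record can be illustrated with a short sketch (mine, not the authors' analysis): a naive central difference applied directly on a stretched physical grid loses accuracy relative to the same stencil on a uniform grid, because the variable spacing is not accounted for. The test function sin(x) and the grid stretching below are arbitrary illustrative choices.

```python
# Illustrative sketch (not the paper's analysis): error of a naive central
# difference for d/dx sin(x) on a uniform grid versus a stretched grid where
# the variable spacing is ignored by the stencil's accuracy considerations.
import numpy as np

def naive_central(f_vals, x):
    # (f[i+1] - f[i-1]) / (x[i+1] - x[i-1]) at interior points
    return (f_vals[2:] - f_vals[:-2]) / (x[2:] - x[:-2])

n = 101
xi = np.linspace(0.0, 1.0, n)       # uniform computational coordinate
grids = {
    "uniform":   np.pi * xi,        # evenly spaced physical points
    "stretched": np.pi * xi**2,     # points clustered near x = 0
}
for name, x in grids.items():
    err = np.max(np.abs(naive_central(np.sin(x), x) - np.cos(x[1:-1])))
    print(f"{name:9s} grid, max derivative error: {err:.2e}")
```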

  15. Evaluation of SMART sensor displays for multidimensional precision control of Space Shuttle remote manipulator

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.; Brown, J. W.; Lewis, J. L.

    1982-01-01

    An enhanced proximity sensor and display system was developed at the Jet Propulsion Laboratory (JPL) and tested on the full scale Space Shuttle Remote Manipulator at the Johnson Space Center (JSC) Manipulator Development Facility (MDF). The sensor system, integrated with a four-claw end effector, measures range error up to 6 inches, and pitch and yaw alignment errors within ±15 deg, and displays error data on both graphic and numeric displays. The errors are referenced to the end effector control axes through appropriate data processing by a dedicated microcomputer acting on the sensor data in real time. Both display boxes contain a green lamp which indicates whether the combination of range, pitch and yaw errors will assure a successful grapple. More than 200 test runs were completed in early 1980 by three operators at JSC for grasping static and capturing slowly moving targets. The tests have indicated that the use of graphic/numeric displays of proximity sensor information improves precision control of grasp/capture range by more than a factor of two for both static and dynamic grapple conditions.

  16. Error analysis of numerical gravitational waveforms from coalescing binary black holes

    NASA Astrophysics Data System (ADS)

    Fong, Heather; Chu, Tony; Kumar, Prayush; Pfeiffer, Harald; Boyle, Michael; Hemberger, Daniel; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela; SXS Collaboration

    2016-03-01

    The Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO) has finished a successful first observation run and will commence its second run this summer. Detection of compact object binaries utilizes matched-filtering, which requires a vast collection of highly accurate gravitational waveforms. This talk will present a set of about 100 new aligned-spin binary black hole simulations. I will discuss their properties, including a detailed error analysis, which demonstrates that the numerical waveforms are sufficiently accurate for gravitational wave detection purposes, as well as for parameter estimation purposes.

  17. A Comparison of Three PML Treatments for CAA (and CFD)

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    2008-01-01

    In this paper we compare three Perfectly Matched Layer (PML) treatments by means of a series of numerical experiments, using common numerical algorithms, computational grids, and code implementations. These comparisons are with the Linearized Euler Equations, for a uniform base flow. We see that there are two very good PML candidates that can both control the introduced error. Furthermore, we also show that corners can be handled with essentially no increase in the introduced error, and that with a good PML, the outer boundary is the most significant source of error.

  18. Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark

    2017-06-01

    This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.

  19. On the robustness of bucket brigade quantum RAM

    NASA Astrophysics Data System (ADS)

    Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa

    2015-12-01

    We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Due to a result of Regev and Schiff (ICALP ’08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^(-n/2)) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion Harrow et al (2009 Phys. Rev. Lett. 103 150502) or quantum machine learning Rebentrost et al (2014 Phys. Rev. Lett. 113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of ‘active’ gates, since all components have to be actively error corrected.

  20. The BREAST-V: a unifying predictive formula for volume assessment in small, medium, and large breasts.

    PubMed

    Longo, Benedetto; Farcomeni, Alessio; Ferri, Germano; Campanale, Antonella; Sorotos, Micheal; Santanelli, Fabio

    2013-07-01

    Breast volume assessment enhances preoperative planning of both aesthetic and reconstructive procedures, helping the surgeon in the decision-making process of shaping the breast. Numerous methods of breast size determination are currently reported but are limited by methodologic flaws and variable estimations. The authors aimed to develop a unifying predictive formula for volume assessment in small to large breasts based on anthropomorphic values. Ten anthropomorphic breast measurements and direct volumes of 108 mastectomy specimens from 88 women were collected prospectively. The authors performed a multivariate regression to build the optimal model for development of the predictive formula. The final model was then internally validated. A previously published formula was used as a reference. Mean (±SD) breast weight was 527.9 ± 227.6 g (range, 150 to 1250 g). After model selection, sternal notch-to-nipple, inframammary fold-to-nipple, and inframammary fold-to-fold projection distances emerged as the most important predictors. The resulting formula (the BREAST-V) showed an adjusted R^2 of 0.73. The estimated expected absolute error on new breasts is 89.7 g (95 percent CI, 62.4 to 119.1 g) and the expected relative error is 18.4 percent (95 percent CI, 12.9 to 24.3 percent). Application of the reference formula on the sample yielded worse predictions than those derived by the BREAST-V, showing an R^2 of 0.55. The BREAST-V is a reliable tool for predicting small to large breast volumes accurately for use as a complementary device in surgeon evaluation. An app entitled BREAST-V for both iOS and Android devices is currently available for free download in the Apple App Store and Google Play Store. Diagnostic, II.

  1. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  2. DEM corrections on series of wrapped interferograms as a tool to improve deformation monitoring around Siling Co lake in Tibet.

    NASA Astrophysics Data System (ADS)

    Ducret, Gabriel; Doin, Marie-Pierre; Lasserre, Cécile; Guillaso, Stéphane; Twardzik, Cedric

    2010-05-01

    In order to increase our knowledge of the lithosphere's rheological structure under the Tibetan plateau, we study the loading response due to Siling Co lake water level changes. The challenge here is to measure the deformation with an accuracy good enough to obtain a correct sensitivity to model parameters. The InSAR method in theory allows observation of the spatio-temporal pattern of deformation; however, its exploitation is limited by unwrapping difficulties linked with temporal decorrelation and DEM errors in sloping and partially incoherent areas. This lake is a large endorheic lake at 4500 m elevation located north of the right-lateral strike-slip Gyaring Co fault, and just to the south of the Bangong-Nujiang suture zone, on which numerous left-lateral strike-slip faults are branching. The Siling Co lake water level has changed strongly in the past, as testified by numerous traces of palaeo-shorelines, clearly marked up to 60 m above the present-day level. In the last years, the water level in this lake has increased by about 1 m/yr, a remarkably fast rate given the large lake surface (1600 km2). The present-day ground subsidence associated with the water level increase is studied by InSAR using all ERS and Envisat archived data on track 219, obtained through the Dragon cooperation program. We chose to compute 750 km long differential interferograms centered on the lake to provide a good constraint on the reference. A redundant network of small baseline interferograms is computed with perpendicular baselines smaller than 500 m. Coherence is quickly lost with time (over one year), particularly to the north of the lake, because of freeze-thaw cycles. Unwrapping thus becomes hazardous in this configuration, and fails on phase jumps created by DEM contrasts. The first task is to improve the simulated elevation field in radar geometry from the Digital Elevation Model (here SRTM) in order to exploit the interferometric phase in layover areas. Then, to estimate the DEM error, we mix the Permanent Scatterers and Small Baseline methods. The aim is to improve spatial and temporal coherence. We use as references strong and stable amplitude points or spatially coherent areas, scattered within the SAR scene. We calculate the relative elevation error of every point in the neighbourhood of the reference points. A global inversion then performs the spatial integration of local errors at the radar image scale. Finally, we evaluate how the DEM correction of wrapped interferograms improves the unwrapping step. Furthermore, to help unwrapping, we also compute and then remove from the wrapped interferograms the residual orbital trend and the phase-elevation relationship due to variations in atmospheric stratification. A stack of unwrapped small baseline interferograms clearly shows an average subsidence rate around the lake of about 4 mm/yr associated with the present-day water level increase. To compare the observed deformation to the water level changes, we extract the water level changes from satellite images over the period 1972 to 2009. The deformation signal is discussed in terms of end-member visco-elastic models of the lithosphere and uppermost mantle.

  3. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  4. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any computational method in particular, but valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
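    A minimal sketch of the underlying statistical point, under the standard assumption that event counting in such simulations is Poissonian (the paper's specific error expression is not reproduced here): the relative error of an estimated event rate scales as one over the square root of the number of observed events.

```python
# Monte Carlo illustration (assumptions mine, not the paper's expression):
# relative statistical error of a Poisson rate estimate versus 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
true_rate = 2.5                      # events per unit time (hypothetical value)
for t_sim in (10.0, 100.0, 1000.0):
    counts = rng.poisson(true_rate * t_sim, size=10000)   # many independent runs
    rate_estimates = counts / t_sim
    rel_err = rate_estimates.std() / true_rate
    n_mean = true_rate * t_sim
    print(f"T={t_sim:7.1f}  mean N={n_mean:8.1f}  "
          f"relative error={rel_err:.4f}  1/sqrt(N)={1/np.sqrt(n_mean):.4f}")
```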

  5. Nonlinear grid error effects on numerical solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Dey, S. K.

    1980-01-01

    Finite difference solutions of nonlinear partial differential equations require discretizations and consequently grid errors are generated. These errors strongly affect stability and convergence properties of difference models. Previously such errors were analyzed by linearizing the difference equations for solutions. Properties of mappings of decadence were used to analyze nonlinear instabilities. Such an analysis is directly affected by initial/boundary conditions. An algorithm was developed, applied to nonlinear Burgers equations, and verified computationally. A preliminary test shows that Navier-Stokes equations may be treated similarly.
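    As a hedged illustration of the setting (not Dey's algorithm), the following sketch discretizes the inviscid Burgers equation with a first-order upwind scheme and shows the grid-dependent error shrinking under refinement; the initial data, resolution levels, and fine-grid reference comparison are all illustrative choices.

```python
# Minimal sketch (not Dey's algorithm): first-order upwind discretization of
# the inviscid Burgers equation u_t + u u_x = 0 with u > 0, showing how the
# grid-dependent error shrinks as the mesh is refined.
import numpy as np

def solve_burgers(nx, t_end=0.3, cfl=0.4):
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    u = 1.0 + 0.5 * np.sin(2 * np.pi * x)      # smooth, strictly positive initial data
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / u.max(), t_end - t)
        u[1:] = u[1:] - dt / dx * u[1:] * (u[1:] - u[:-1])   # upwind (valid for u > 0)
        u[0] = u[-1]                                          # periodic closure
        t += dt
    return u

u_ref = solve_burgers(401)                     # fine-grid reference solution
for nx in (51, 101, 201):
    u = solve_burgers(nx)
    stride = 400 // (nx - 1)
    dev = np.max(np.abs(u - u_ref[::stride]))
    print(f"nx={nx:4d}  max deviation from reference: {dev:.4e}")
```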

  6. Magnetic field errors tolerances of Nuclotron booster

    NASA Astrophysics Data System (ADS)

    Butenko, Andrey; Kazinova, Olha; Kostromin, Sergey; Mikhaylov, Vladimir; Tuzikov, Alexey; Khodzhibagiyan, Hamlet

    2018-04-01

    Generation of the magnetic field in the units of the booster synchrotron for the NICA project is one of the most important conditions for obtaining the required parameters and high-quality accelerator operation. Research on the linear and nonlinear dynamics of the 197Au31+ ion beam in the booster has been carried out with the MADX program. Analytical estimation of the magnetic field error tolerances and numerical computation of the dynamic aperture of the booster DFO magnetic lattice are presented. Closed orbit distortion with random errors of the magnetic fields and errors in the layout of the booster units was evaluated.

  7. Quantified Choice of Root-Mean-Square Errors of Approximation for Evaluation and Power Analysis of Small Differences between Structural Equation Models

    ERIC Educational Resources Information Center

    Li, Libo; Bentler, Peter M.

    2011-01-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…

  8. Influence of hypo- and hyperthermia on death time estimation - A simulation study.

    PubMed

    Muggenthaler, H; Hubig, M; Schenkl, S; Mall, G

    2017-09-01

    Numerous physiological and pathological mechanisms can cause elevated or lowered body core temperatures. Deviations from the physiological level of about 37°C can influence temperature based death time estimations. However, it has not been investigated by means of thermodynamics to what extent hypo- and hyperthermia bias death time estimates. Using numerical simulation, the present study investigates the errors inherent in temperature based death time estimation in case of elevated or lowered body core temperatures before death. The most considerable errors with regard to the normothermic model occur in the first few hours post-mortem. With decreasing body core temperature and increasing post-mortem time the error diminishes and stagnates at a nearly constant level. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. A variational regularization of Abel transform for GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. A noteworthy finding is that in the heights and areas that the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity deserting the first guess. In the lowest few kilometers that AI produces large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded based on the results presented in this study that VR offers a definite advantage over AI in the quality of refractivity.

  10. On beyond the standard model for high explosives: challenges & obstacles to surmount

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph Ds

    2009-01-01

    Plastic-bonded explosives (PBX) are heterogeneous materials. Nevertheless, current explosive models treat them as homogeneous materials. To compensate, an empirically determined effective burn rate is used in place of a chemical reaction rate. A significant limitation of these models is that different burn parameters are needed for applications in different regimes; for example, shock initiation of a PBX at different initial temperatures or different initial densities. This is due to temperature fluctuations generated when a heterogeneous material is shock compressed. Localized regions of high temperatures are called hot spots. They dominate the reaction for shock initiation. The understanding of hot spot generation and their subsequent evolution has been limited by the inability to measure transients on small spatial (~1 μm) and small temporal (~1 ns) scales in the harsh environment of a detonation. With the advances in computing power, it is natural to try and gain an understanding of hot-spot initiation with numerical experiments based on meso-scale simulations that resolve material heterogeneities and utilize realistic chemical reaction rates. However, to capture the underlying physics correctly, such high resolution simulations will require more than fast computers with a large amount of memory. Here we discuss some of the issues that need to be addressed. These include dissipative mechanisms that generate hot spots, accurate thermal properties for the equations of state of the reactants and products, and controlling numerical entropy error from shock impedance mismatches at material interfaces. The latter can generate artificial hot spots and lead to premature reaction. Eliminating numerical hot spots is critical for shock initiation simulations due to the positive feedback between the energy release from reaction and the hydrodynamic flow.

  11. Long-term dynamic modeling of tethered spacecraft using nodal position finite element method and symplectic integration

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Zhu, Z. H.

    2015-12-01

    Dynamic modeling of tethered spacecraft with the consideration of elasticity of tether is prone to the numerical instability and error accumulation over long-term numerical integration. This paper addresses the challenges by proposing a globally stable numerical approach with the nodal position finite element method (NPFEM) and the implicit, symplectic, 2-stage and 4th order Gaussian-Legendre Runge-Kutta time integration. The NPFEM eliminates the numerical error accumulation by using the position instead of displacement of tether as the state variable, while the symplectic integration enforces the energy and momentum conservation of the discretized finite element model to ensure the global stability of numerical solution. The effectiveness and robustness of the proposed approach is assessed by an elastic pendulum problem, whose dynamic response resembles that of tethered spacecraft, in comparison with the commonly used time integrators such as the classical 4th order Runge-Kutta schemes and other families of non-symplectic Runge-Kutta schemes. Numerical results show that the proposed approach is accurate and the energy of the corresponding numerical model is conservative over the long-term numerical integration. Finally, the proposed approach is applied to the dynamic modeling of deorbiting process of tethered spacecraft over a long period.
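    The benefit of symplectic time integration described above can be illustrated with a much simpler stand-in (a simple pendulum and the one-stage Gauss-Legendre method, i.e. the implicit midpoint rule, rather than the paper's NPFEM model and two-stage scheme); the step size and integration length below are arbitrary.

```python
# Hedged stand-in for the paper's setup: long-time energy behaviour of the
# implicit midpoint rule (one-stage Gauss-Legendre, symplectic) versus the
# non-symplectic explicit Euler method on a simple pendulum.
import numpy as np

def energy(q, p):
    return 0.5 * p**2 + (1.0 - np.cos(q))

def explicit_euler(q, p, dt):
    return q + dt * p, p - dt * np.sin(q)

def implicit_midpoint(q, p, dt, iters=8):
    qm, pm = q, p                       # fixed-point iteration for the midpoint stage
    for _ in range(iters):
        qm = q + 0.5 * dt * pm
        pm = p - 0.5 * dt * np.sin(qm)
    return 2 * qm - q, 2 * pm - p

q0, p0, dt, nsteps = 1.0, 0.0, 0.05, 20000
for name, step in (("explicit Euler", explicit_euler), ("implicit midpoint", implicit_midpoint)):
    q, p = q0, p0
    for _ in range(nsteps):
        q, p = step(q, p, dt)
    drift = abs(energy(q, p) - energy(q0, p0))
    print(f"{name:18s} |energy drift| after {nsteps} steps: {drift:.2e}")
```

    The bounded energy error of the symplectic scheme, in contrast to the secular growth seen with explicit Euler, is the property the abstract relies on for stability over long-term integration.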

  12. Comparison of bias-corrected covariance estimators for MMRM analysis in longitudinal data with dropouts.

    PubMed

    Gosho, Masahiko; Hirakawa, Akihiro; Noma, Hisashi; Maruo, Kazushi; Sato, Yasunori

    2017-10-01

    In longitudinal clinical trials, some subjects will drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analysis with "unstructured" (UN) covariance structure are increasingly common as a primary analysis for group comparisons in these trials. Furthermore, model-based covariance estimators have been routinely used for testing the group difference and estimating confidence intervals of the difference in the MMRM analysis using the UN covariance. However, using the MMRM analysis with the UN covariance could lead to convergence problems for numerical optimization, especially in trials with a small-sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in settings with small-sample size. We investigated the performance of the sandwich covariance estimator and covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134) fitting simpler covariance structures through a simulation study. In terms of the type 1 error rate and coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in the scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure. The model-based covariance estimator with the UN structure under unadjustment of the degrees of freedom, which is frequently used in applications, resulted in substantial inflation of the type 1 error rate. We recommend the use of Mancl and DeRouen's estimator in MMRM analysis if the number of subjects completing is (n + 5) or less, where n is the number of planned visits. Otherwise, the use of Kenward and Roger's method with UN structure should be the best way.
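    A stripped-down illustration of the sandwich idea in a far simpler setting than MMRM (ordinary least squares with independent, heteroscedastic errors; no repeated measures and no small-sample correction): the robust covariance wraps a model-based "bread" around an empirical "meat" built from squared residuals. All data below are simulated for illustration.

```python
# Simplified illustration (OLS rather than MMRM): model-based versus robust
# "sandwich" covariance of regression coefficients under heteroscedastic errors.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5 + np.abs(X[:, 1]), size=n)  # non-constant variance

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

sigma2 = resid @ resid / (n - X.shape[1])
cov_model = sigma2 * XtX_inv                            # model-based (assumes constant variance)
meat = X.T @ (X * resid[:, None] ** 2)                  # sum of e_i^2 * x_i x_i^T
cov_sandwich = XtX_inv @ meat @ XtX_inv                 # heteroscedasticity-robust

print("model-based SEs:", np.sqrt(np.diag(cov_model)))
print("sandwich SEs:   ", np.sqrt(np.diag(cov_sandwich)))
```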

  13. 3-D decoupled inversion of complex conductivity data in the real number domain

    NASA Astrophysics Data System (ADS)

    Johnson, Timothy C.; Thomle, Jonathan

    2018-01-01

    Complex conductivity imaging (also called induced polarization imaging or spectral induced polarization imaging when conducted at multiple frequencies) involves estimating the frequency-dependent complex electrical conductivity distribution of the subsurface. The superior diagnostic capabilities provided by complex conductivity spectra have driven advancements in mechanistic understanding of complex conductivity as well as modelling and inversion approaches over the past several decades. In this work, we demonstrate the theory and application for an approach to 3-D modelling and inversion of complex conductivity data in the real number domain. Beginning from first principles, we demonstrate how the equations for the real and imaginary components of the complex potential may be decoupled. This leads to a description of the real and imaginary source current terms, and a corresponding assessment of error arising from an assumption necessary to complete the decoupled modelling. We show that for most earth materials, which exhibit relatively small phases (e.g. less than 0.2 radians) in complex conductivity, these errors become insignificant. For higher phase materials, the errors may be quantified and corrected through an iterative procedure. We demonstrate the accuracy of numerical forward solutions by direct comparison to corresponding analytic solutions. We demonstrate the inversion using both synthetic and field examples with data collected over a waste infiltration trench, at frequencies ranging from 0.5 to 7.5 Hz.

  14. Small Atomic Orbital Basis Set First‐Principles Quantum Chemical Methods for Large Molecular and Periodic Systems: A Critical Analysis of Error Sources

    PubMed Central

    Sure, Rebecca; Brandenburg, Jan Gerit

    2015-01-01

    Abstract In quantum chemical computations the combination of Hartree–Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double‐zeta quality is still widely used, for example, in the popular B3LYP/6‐31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean‐field methods. PMID:27308221

  15. Assessment of Spectral Doppler in Preclinical Ultrasound Using a Small-Size Rotating Phantom

    PubMed Central

    Yang, Xin; Sun, Chao; Anderson, Tom; Moran, Carmel M.; Hadoke, Patrick W.F.; Gray, Gillian A.; Hoskins, Peter R.

    2013-01-01

    Preclinical ultrasound scanners are used to measure blood flow in small animals, but the potential errors in blood velocity measurements have not been quantified. This investigation rectifies this omission through the design and use of phantoms and evaluation of measurement errors for a preclinical ultrasound system (Vevo 770, Visualsonics, Toronto, ON, Canada). A ray model of geometric spectral broadening was used to predict velocity errors. A small-scale rotating phantom, made from tissue-mimicking material, was developed. True and Doppler-measured maximum velocities of the moving targets were compared over a range of angles from 10° to 80°. Results indicate that the maximum velocity was overestimated by up to 158% by spectral Doppler. There was good agreement (<10%) between theoretical velocity errors and measured errors for beam-target angles of 50°–80°. However, for angles of 10°–40°, the agreement was not as good (>50%). The phantom is capable of validating the performance of blood velocity measurement in preclinical ultrasound. PMID:23711503

  16. Performance-limiting factors for x-ray free electron laser oscillator as a highly coherent, high spectral purity x-ray source

    NASA Astrophysics Data System (ADS)

    Park, Gunn Tae

    The X-ray free electron laser (XFEL) is a light source for coherent X-rays that uses the radiation from relativistic electrons and the interaction between the two. In particular, the XFEL oscillator (XFELO) uses an optical cavity to repeatedly bring the radiation back to the electron beam for this interaction. Its optimal performance, maximum single-pass gain and minimum round-trip loss, critically depends on the cavity optics. In the ideal case, optimal performance would be achieved with the periodic radiation mode maximally overlapping the electron beam, the radiation impinging on the curved focusing mirror below the critical angle, and the angular divergence kept small enough at each crystal for Bragg scattering, which is used for near-normal reflection. In reality, various performance-degrading factors exist in the cavity, such as heat load on the crystal surface, misalignments of crystals and mirrors, and mirror surface errors. In this thesis, we study via both analytic computation and numerical simulation the optimal design and performance of the XFELO cavity in the presence of these factors. In the optimal design, we implement asymmetric crystals in the cavity to enhance the performance. In general, this has the undesirable effect of pulse dilation; we present a configuration that avoids pulse length dilation. The effects of misalignments, focal length errors, and mirror surface errors are then evaluated and their tolerances estimated. In particular, the simulations demonstrate that the effect of mirror surface errors on gain and round-trip loss is well within the desired performance of the XFELO.

  17. Nondimensional parameter for conformal grinding: combining machine and process parameters

    NASA Astrophysics Data System (ADS)

    Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.

    1999-11-01

    Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise due to the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since the grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper a simple model which permits combination of machine stiffness and process parameters into a single non-dimensional parameter is adapted for a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasouli, C.; Abbasi Davani, F.; Rokrok, B.

    Plasma confinement using an external magnetic field is one of the successful ways leading to controlled nuclear fusion. Development and validation of the solution process for plasma equilibrium in experimental toroidal fusion devices is the main subject of this work. Solution of the nonlinear 2D stationary problem as posed by the Grad-Shafranov equation gives quantitative information about plasma equilibrium inside the vacuum chamber of hot fusion devices. This study suggests solving the plasma equilibrium equation, which is essential in toroidal nuclear fusion devices, using a mesh-free method under the condition that the plasma boundary is unknown. The Grad-Shafranov equation has been solved numerically by the point interpolation collocation mesh-free method. Important features of this approach include a truly mesh-free formulation, simple mathematical relationships between points, and acceptable precision in comparison with the parametric results. The calculation process has been carried out using regular and irregular nodal distributions and support domains with different numbers of points. The relative error between the numerical and analytical solutions is discussed for several test examples such as the small size Damavand tokamak, an ITER-like equilibrium, an NSTX-like equilibrium, and a typical Spheromak.

  19. Numerical and Experimental Studies of the Natural Convection Flow Within a Horizontal Cylinder Subjected to a Uniformly Cold Wall Boundary Condition. Ph.D. Thesis - Va. Poly. Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Stewart, R. B.

    1972-01-01

    Numerical solutions are obtained for the quasi-compressible Navier-Stokes equations governing the time dependent natural convection flow within a horizontal cylinder. The early time flow development and wall heat transfer is obtained after imposing a uniformly cold wall boundary condition on the cylinder. Solutions are also obtained for the case of a time varying cold wall boundary condition. Windward explicit differencing is used for the numerical solutions. The viscous truncation error associated with this scheme is controlled so that first order accuracy is maintained in time and space. The results encompass a range of Grashof numbers from 8.34 times 10 to the 4th power to 7 times 10 to the 7th power, which is within the laminar flow regime for gravitationally driven fluid flows. Experiments within a small scale instrumented horizontal cylinder revealed the time development of the temperature distribution across the boundary layer and also the decay of wall heat transfer with time.

  20. An analytical model with flexible accuracy for deep submicron DCVSL cells

    NASA Astrophysics Data System (ADS)

    Valiollahi, Sepideh; Ardeshir, Gholamreza

    2018-07-01

    Differential cascoded voltage switch logic (DCVSL) cells are among the best candidates of circuit designers for a wide range of applications due to advantages such as low input capacitance, high switching speed, small area and noise-immunity; nevertheless, a proper model has not yet been developed to analyse them. This paper analyses deep submicron DCVSL cells based on a flexible accuracy-simplicity trade-off including the following key features: (1) the model is capable of producing closed-form expressions with an acceptable accuracy; (2) model equations can be solved numerically to offer higher accuracy; (3) the short-circuit currents occurring in high-low/low-high transitions are accounted in analysis and (4) the changes in the operating modes of transistors during transitions together with an efficient submicron I-V model, which incorporates the most important non-ideal short-channel effects, are considered. The accuracy of the proposed model is validated in IBM 0.13 µm CMOS technology through comparisons with the accurate physically based BSIM3 model. The maximum error caused by analytical solutions is below 10%, while this amount is below 7% for numerical solutions.

  1. An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base

    NASA Astrophysics Data System (ADS)

    Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi

    The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximate solutions have been guaranteed by error bounds. However, computing such a bound numerically is very time-consuming compared with solving the HB equation. This paper proposes an algebraic representation of the error bound using a Gröbner basis. The algebraic representation enables the computational cost of the error bound to be decreased considerably. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound by collisions.

  2. A numerical procedure for recovering true scattering coefficients from measurements with wide-beam antennas

    NASA Technical Reports Server (NTRS)

    Wang, Qinglin; Gogineni, S. P.

    1991-01-01

    A numerical procedure is presented for estimating the true scattering coefficient, sigma(sup 0), from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of sigma(sup 0) if the narrow-beam approximation is used in the retrieval process for sigma(sup 0). To reduce this error, a correction procedure was proposed that estimates the error resulting from the narrow-beam approximation and uses the error to obtain a more accurate estimate of sigma(sup 0). An exponential model was assumed to take into account the variation of sigma(sup 0) with incidence angles, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of sigma(sup 0) obtained with wide-beam antennas. The proposed procedure is also shown to be insensitive to the assumed sigma(sup 0) model.

  3. Verification results for the Spectral Ocean Wave Model (SOWM) by means of significant wave height measurements made by the GEOS-3 spacecraft

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.; Salfi, R. E.

    1978-01-01

    Significant wave heights estimated from the shape of the return pulse wave form of the altimeter on GEOS-3 for forty-four orbit segments obtained during 1975 and 1976 are compared with the significant wave heights specified by the spectral ocean wave model (SOWM), which is the presently operational numerical wave forecasting model at the Fleet Numerical Weather Central. Except for a number of orbit segments with poor agreement and larger errors, the SOWM specifications tended to be biased from 0.5 to 1.0 meters too low and to have RMS errors of 1.0 to 1.4 meters. The less frequent larger errors can be attributed to poor wind data for some parts of the Northern Hemisphere oceans. The bias can be attributed to the somewhat too light winds used to generate the waves in the model. Other sources of error are identified in the equatorial and trade wind areas.
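    The verification statistics quoted above (bias and RMS error of significant wave height) amount to the following simple computation; the numbers used here are invented for illustration and are not the GEOS-3/SOWM values.

```python
# Minimal sketch of the verification statistics: bias and RMS error of model
# significant wave heights against altimeter estimates (values are made up).
import numpy as np

hs_altimeter = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 5.2])   # "observed" Hs, metres
hs_model     = np.array([1.5, 2.8, 1.4, 3.3, 2.2, 4.6])   # model-specified Hs, metres

diff = hs_model - hs_altimeter
print(f"bias      = {diff.mean():+.2f} m")
print(f"RMS error = {np.sqrt(np.mean(diff**2)):.2f} m")
```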

  4. Numerical Analysis of an H^1-Galerkin Mixed Finite Element Method for Time Fractional Telegraph Equation

    PubMed Central

    Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

    2014-01-01

    We discuss and analyze an H^1-Galerkin mixed finite element (H^1-GMFE) method to look for the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H^1-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using the finite difference methods and approximate the spatial direction by applying the H^1-GMFE method. Based on the discussion on the theoretical error analysis in the L^2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H^1-norm. Moreover, we derive and analyze the stability of the H^1-GMFE scheme and give the results of a priori error estimates in two- or three-dimensional cases. In order to verify our theoretical analysis, we give some results of numerical calculation by using the Matlab procedure. PMID:25184148
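    A common way to check the "optimal order of convergence" claims made above is to compute the observed order from errors on successively refined meshes; the sketch below uses hypothetical mesh sizes and errors, not results from the paper.

```python
# Sketch (not the authors' code): observed order of convergence estimated from
# errors on successively refined meshes; ratios near 2 indicate second order.
import numpy as np

h   = np.array([1/8, 1/16, 1/32, 1/64])           # mesh sizes (hypothetical)
err = np.array([2.1e-2, 5.4e-3, 1.4e-3, 3.4e-4])  # corresponding L^2 errors (hypothetical)

orders = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print("observed convergence orders:", np.round(orders, 2))
```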

  5. Coarse-graining errors and numerical optimization using a relative entropy framework

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2011-03-01

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
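    The relative entropy functional S_rel can be illustrated with a toy one-parameter example (not the authors' coarse-graining machinery): a Gaussian "coarse" model is fitted to a Gaussian "reference" distribution by scanning the model width for the minimum of S_rel. The distributions and the brute-force scan are illustrative simplifications.

```python
# Toy illustration of the relative entropy S_rel = integral p ln(p/q) between a
# reference distribution p and a one-parameter coarse model q(sigma), minimized
# here by a simple parameter scan (all choices are illustrative).
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
p = np.exp(-0.5 * (x / 1.3) ** 2)        # reference ("fine-grained") distribution
p /= p.sum() * dx

def s_rel(sigma):
    q = np.exp(-0.5 * (x / sigma) ** 2)  # coarse-grained trial distribution
    q /= q.sum() * dx
    return np.sum(p * np.log(p / q)) * dx

sigmas = np.linspace(0.5, 3.0, 251)
best = sigmas[np.argmin([s_rel(s) for s in sigmas])]
print(f"sigma minimizing S_rel: {best:.3f} (reference width 1.3)")
```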

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  7. Numerical study of time domain analogy applied to noise prediction from rotating blades

    NASA Astrophysics Data System (ADS)

    Fedala, D.; Kouidri, S.; Rey, R.

    2009-04-01

    Aeroacoustic formulations in time domain are frequently used to model the aerodynamic sound of airfoils, the time data being more accessible. The formulation 1A developed by Farassat, an integral solution of the Ffowcs Williams and Hawkings equation, holds great interest because of its ability to handle surfaces in arbitrary motion. The aim of this work is to study the numerical sensitivity of this model to specified parameters used in the calculation. The numerical algorithms, spatial and time discretizations, and approximations used for far-field acoustic simulation are presented. An approach to quantifying the numerical errors resulting from implementation of formulation 1A is carried out based on Isom's and Tam's test cases. A helicopter blade airfoil, as defined by Farassat to investigate Isom's case, is used in this work. According to Isom, the acoustic response of a dipole source with a constant aerodynamic load, ρ0c0^2, is equal to the thickness noise contribution. Discrepancies are observed when the two contributions are computed numerically. In this work, variations of these errors, which depend on the temporal resolution, Mach number, source-observer distance, and interpolation algorithm type, are investigated. The results show that the spline interpolating algorithm gives the minimum error. The analysis is then extended to Tam's test case. Tam's test case has the advantage of providing an analytical solution for the first harmonic of the noise produced by a specific force distribution.

  8. The Forced Soft Spring Equation

    ERIC Educational Resources Information Center

    Fay, T. H.

    2006-01-01

    Through numerical investigations, this paper studies examples of the forced Duffing type spring equation with [epsilon] negative. By performing trial-and-error numerical experiments, the existence is demonstrated of stability boundaries in the phase plane indicating initial conditions yielding bounded solutions. Subharmonic boundaries are…
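    A hedged numerical sketch of the kind of experiment described (a forced Duffing equation with a negative cubic coefficient, integrated from a single initial condition to see whether the solution stays bounded); the damping, forcing, and nonlinearity values below are illustrative and are not taken from the article.

```python
# Hedged sketch of a forced soft-spring (Duffing) oscillator
# x'' + c x' + x + eps x^3 = F cos(w t) with eps < 0; all parameter values and
# the initial condition are illustrative choices, not those used in the article.
import numpy as np
from scipy.integrate import solve_ivp

c, eps, F, w = 0.2, -0.1, 0.25, 1.0

def rhs(t, y):
    x, v = y
    return [v, -c * v - x - eps * x**3 + F * np.cos(w * t)]

sol = solve_ivp(rhs, (0.0, 200.0), [0.5, 0.0], max_step=0.05)
x = sol.y[0]
print(f"max |x| over the run: {np.max(np.abs(x)):.3f}")
print("all values finite (no escape):", bool(np.all(np.isfinite(x))))
```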

  9. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers leads to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.

  10. Designing an Algorithm to Preserve Privacy for Medical Record Linkage With Error-Prone Data

    PubMed Central

    Pal, Doyel; Chen, Tingting; Khethavath, Praveen

    2014-01-01

    Background Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In records linkage, protecting the patients’ privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are only restricted to textual data. Objective To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. Methods To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, that allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. Results We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulation of Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other’s database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Conclusions Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other’s database. PMID:25600786
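    Leaving aside the cryptographic, privacy-preserving part of the system, the error-tolerant matching rule itself can be sketched as a simple distance threshold on numerical attributes; the attribute names, values, and threshold below are made up for illustration.

```python
# Plain (non-private) sketch of the matching rule: two numerical records are
# declared a match when their distance falls below a threshold, so small
# data-entry errors do not break the link. The privacy-preserving machinery of
# the actual system is deliberately omitted here.
import numpy as np

def is_match(record_a, record_b, threshold=2.0):
    dist = np.linalg.norm(np.asarray(record_a, float) - np.asarray(record_b, float))
    return dist < threshold

clean = [1975, 72.5, 181.0]     # e.g. birth year, weight, height (hypothetical)
typo  = [1975, 72.5, 180.0]     # small transcription error
other = [1962, 90.0, 170.0]     # a different person

print(is_match(clean, typo))    # True  -> linked despite the error
print(is_match(clean, other))   # False
```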

  11. Most Common Formal Grammatical Errors Committed by Authors

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.

    2017-01-01

    Empirical evidence has been provided about the importance of avoiding American Psychological Association (APA) errors in the abstract, body, reference list, and table sections of empirical research articles. Specifically, authors are significantly more likely to have their manuscripts rejected for publication if they commit numerous APA…

  12. QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES

    EPA Science Inventory

    The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...
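    The moment calculation referred to above reduces to trapezoidal-rule integrals of the breakthrough curve; the sketch below uses a synthetic Gaussian-shaped curve rather than any data from the report.

```python
# Sketch of the moment calculation: zeroth moment, mean arrival time and
# temporal variance of a breakthrough curve via the trapezoidal rule
# (the curve below is synthetic, not from the report).
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = np.linspace(0.0, 48.0, 97)                      # time, hours
c = np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)          # synthetic concentration curve

m0 = trapz(c, t)                                    # zeroth moment (area)
mean_t = trapz(t * c, t) / m0                       # first normalized moment
var_t = trapz((t - mean_t) ** 2 * c, t) / m0        # second central moment
print(f"area={m0:.3f}, mean arrival={mean_t:.2f} h, variance={var_t:.2f} h^2")
```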

  13. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
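    The core numerical point can be reproduced on a toy problem (a single linear reservoir, far simpler than the hydrological models studied): a fixed-step explicit Euler scheme with a coarse step gives a visibly different answer from a fine-step run, and it is this step-size-dependent error that contaminates sensitivity analysis and calibration. Parameter values below are arbitrary.

```python
# Toy reproduction of the time-stepping issue (not the authors' models): a
# single linear reservoir dS/dt = P - k*S integrated with fixed-step explicit
# Euler at several step sizes, compared with the exact solution.
import numpy as np

P, k, S0, T = 2.0, 1.5, 0.0, 1.0
exact = (P / k) * (1.0 - np.exp(-k * T)) + S0 * np.exp(-k * T)

for dt in (1.0, 0.1, 0.01):
    S = S0
    for _ in range(int(round(T / dt))):
        S += dt * (P - k * S)          # fixed-step explicit Euler update
    print(f"dt={dt:5.2f}  S(T)={S:.6f}  abs error={abs(S - exact):.2e}")
```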

  14. Error behavior of multistep methods applied to unstable differential systems

    NASA Technical Reports Server (NTRS)

    Brown, R. L.

    1977-01-01

    The problem of modeling a dynamic system described by a system of ordinary differential equations which has unstable components for limited periods of time is discussed. It is shown that the global error in a multistep numerical method is the solution to a difference equation initial value problem, and the approximate solution is given for several popular multistep integration formulas. Inspection of the solution leads to the formulation of four criteria for integrators appropriate to unstable problems. A sample problem is solved numerically using three popular formulas and two different stepsizes to illustrate the appropriateness of the criteria.

  15. A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations

    PubMed Central

    Thalhammer, Mechthild; Abhau, Jochen

    2012-01-01

    As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both, the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate to determine the time stepsizes sufficiently small in order that the numerical approximation captures correctly the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
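
    The splitting approach mentioned in this abstract can be sketched compactly. The snippet below implements one Strang time-splitting step for a 1D Gross–Pitaevskii equation using FFTs, together with a crude step-doubling indicator of the local time error; the harmonic trap, defocusing strength, grid, and initial state are assumptions made for illustration, and this is not the authors' adaptive controller or their embedded splitting pairs.

      import numpy as np

      def strang_step(psi, dt, k, V, g):
          # one Strang splitting step for i psi_t = -0.5 psi_xx + V psi + g |psi|^2 psi
          psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))        # half nonlinear/potential step
          psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))   # full kinetic step in Fourier space
          return psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))       # second half step

      def step_doubling_error(psi, dt, k, V, g):
          # crude local error indicator: one step of size dt vs. two steps of size dt/2
          one = strang_step(psi, dt, k, V, g)
          two = strang_step(strang_step(psi, dt / 2, k, V, g), dt / 2, k, V, g)
          return np.max(np.abs(one - two))

      # assumed setup: harmonic trap, defocusing nonlinearity, normalized Gaussian initial state
      L, N = 16.0, 256
      x = np.linspace(-L / 2, L / 2, N, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
      V, g = 0.5 * x ** 2, 1.0
      psi = np.exp(-x ** 2)
      psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))
      print(step_doubling_error(psi, 1e-2, k, V, g))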

  16. A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.

    PubMed

    Thalhammer, Mechthild; Abhau, Jochen

    2012-08-15

    As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both, the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate to determine the time stepsizes sufficiently small in order that the numerical approximation captures correctly the behaviour of the analytical solution. Further illustrations for Gross-Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study.

  17. Quantum chemical approach for positron annihilation spectra of atoms and molecules beyond plane-wave approximation

    NASA Astrophysics Data System (ADS)

    Ikabata, Yasuhiro; Aiba, Risa; Iwanade, Toru; Nishizawa, Hiroaki; Wang, Feng; Nakai, Hiromi

    2018-05-01

    We report theoretical calculations of positron-electron annihilation spectra of noble gas atoms and small molecules using the nuclear orbital plus molecular orbital method. Instead of a nuclear wavefunction, the positronic wavefunction is obtained as the solution of the coupled Hartree-Fock or Kohn-Sham equation for a positron and the electrons. The molecular field is included in the positronic Fock operator, which allows an appropriate treatment of the positron-molecule repulsion. The present treatment succeeds in reproducing the Doppler shift, i.e., full width at half maximum (FWHM) of experimentally measured annihilation (γ-ray) spectra for molecules with a mean absolute error less than 10%. The numerical results indicate that the interpretation of the FWHM in terms of a specific molecular orbital is not appropriate.

  18. Control system to reduce the effects of friction in drive trains of continuous-path-positioning systems. [Patent application

    DOEpatents

    Green, W.L.

    1980-12-01

    An improved continuous-path-positioning servo-control system is provided for reducing the effects of friction arising at very low cutting speeds in the drive trains of numerically controlled cutting machines, and the like. The improvement comprises a feed forward network for altering the gain of the servo-control loop at low positioning velocities to prevent stick-slip movement of the cutting tool holder being positioned by the control system. The feed forward network shunts conventional lag-compensators in the control loop, or loops, so that the error signal used for positioning varies linearly when its value is small but is limited for larger values. Thus, at higher positioning speeds there is little effect of the added component upon the control being achieved.

  19. Modification of Classical SPM for Slightly Rough Surface Scattering with Low Grazing Angle Incidence

    NASA Astrophysics Data System (ADS)

    Guo, Li-Xin; Wei, Guo-Hui; Kim, Cheyoung; Wu, Zhen-Sen

    2005-11-01

    Based on the impedance/admittance rough boundaries, the reflection coefficients and the scattering cross section with low grazing angle incidence are obtained for both VV and HH polarizations. The error of the classical perturbation method at grazing angle is overcome for the vertical polarization at a rough Neumann boundary of infinite extent. The derivation of the formulae and the numerical results show that the backscattering cross section depends on the grazing angle to the fourth power for both Neumann and Dirichlet boundary conditions with low grazing angle incidence. Our results reduce to those of the classical small perturbation method by neglecting the Neumann and Dirichlet boundary conditions. This project was supported by the National Natural Science Foundation of China under Grant No. 60101001 and the National Defense Foundation of China.

  20. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials excited states properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials.

  1. Advanced Numerical-Algebraic Thinking: Constructing the Concept of Covariation as a Prelude to the Concept of Function

    ERIC Educational Resources Information Center

    Hitt, Fernando; Morasse, Christian

    2009-01-01

    Introduction: In this document we stress the importance of developing in children a structure for advanced numerical-algebraic thinking that can provide an element of control when solving mathematical situations. We analyze pupils' conceptions that induce errors in algebra due to a lack of control in connection with their numerical thinking. We…

  2. On vertical advection truncation errors in terrain-following numerical models: Comparison to a laboratory model for upwelling over submarine canyons

    NASA Astrophysics Data System (ADS)

    Allen, S. E.; Dinniman, M. S.; Klinck, J. M.; Gorby, D. D.; Hewett, A. J.; Hickey, B. M.

    2003-01-01

    Submarine canyons which indent the continental shelf are frequently regions of steep (up to 45°), three-dimensional topography. Recent observations have delineated the flow over several submarine canyons during 2-4 day long upwelling episodes. Thus upwelling episodes over submarine canyons provide an excellent flow regime for evaluating numerical and physical models. Here we compare a physical and numerical model simulation of an upwelling event over a simplified submarine canyon. The numerical model being evaluated is a version of the S-Coordinate Rutgers University Model (SCRUM). Careful matching between the models is necessary for a stringent comparison. Results show a poor comparison for the homogeneous case due to nonhydrostatic effects in the laboratory model. Results for the stratified case are better but show a systematic difference between the numerical results and laboratory results. This difference is shown not to be due to nonhydrostatic effects. Rather, the difference is due to truncation errors in the calculation of the vertical advection of density in the numerical model. The calculation is inaccurate due to the terrain-following coordinates combined with a strong vertical gradient in density, vertical shear in the horizontal velocity and topography with strong curvature.

  3. Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks

    NASA Astrophysics Data System (ADS)

    Johnson, Joseph

    2016-03-01

    We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called "metanumbers") support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on conversion of scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a "supernet" of all numerical information supporting new initiatives in AI.

  4. A Quadratic Spring Equation

    ERIC Educational Resources Information Center

    Fay, Temple H.

    2010-01-01

    Through numerical investigations, we study examples of the forced quadratic spring equation [image omitted]. By performing trial-and-error numerical experiments, we demonstrate the existence of stability boundaries in the phase plane indicating initial conditions yielding bounded solutions, investigate the resonance boundary in the [omega]…

  5. A mass-energy preserving Galerkin FEM for the coupled nonlinear fractional Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Zhang, Guoyu; Huang, Chengming; Li, Meng

    2018-04-01

    We consider the numerical simulation of the coupled nonlinear space fractional Schrödinger equations. Based on the Galerkin finite element method in space and the Crank-Nicolson (CN) difference method in time, a fully discrete scheme is constructed. Firstly, we focus on a rigorous analysis of conservation laws for the discrete system. The definitions of discrete mass and energy here correspond with the original ones in physics. Then, we prove that the fully discrete system is uniquely solvable. Moreover, we consider the unconditionally convergent properties (that is to say, we complete the error estimates without any mesh ratio restriction). We derive L2-norm error estimates for the nonlinear equations and L^{∞}-norm error estimates for the linear equations. Finally, some numerical experiments are included showing results in agreement with the theoretical predictions.

  6. Expert system for automatically correcting OCR output

    NASA Astrophysics Data System (ADS)

    Taghva, Kazem; Borsack, Julie; Condit, Allen

    1994-03-01

    This paper describes a new expert system for automatically correcting errors made by optical character recognition (OCR) devices. The system, which we call the post-processing system, is designed to improve the quality of text produced by an OCR device in preparation for subsequent retrieval from an information system. The system is composed of numerous parts: an information retrieval system, an English dictionary, a domain-specific dictionary, and a collection of algorithms and heuristics designed to correct as many OCR errors as possible. For the remaining errors that cannot be corrected, the system passes them on to a user-level editing program. This post-processing system can be viewed as part of a larger system that would streamline the steps of taking a document from its hard copy form to its usable electronic form, or it can be considered a stand alone system for OCR error correction. An earlier version of this system has been used to process approximately 10,000 pages of OCR generated text. Among the OCR errors discovered by this version, about 87% were corrected. We implement numerous new parts of the system, test this new version, and present the results.
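
    As a toy illustration of the dictionary-lookup idea only (the paper's post-processing system combines many more algorithms and heuristics), the following snippet corrects an OCR token against a small word list using a string-similarity cutoff; the word list and threshold are invented for the example.

      import difflib

      def correct_token(token, word_list, cutoff=0.8):
          # keep the token if it is already a known word, otherwise take the closest match
          if token.lower() in word_list:
              return token
          candidates = difflib.get_close_matches(token.lower(), word_list, n=1, cutoff=cutoff)
          return candidates[0] if candidates else token

      word_list = ["information", "retrieval", "system", "optical", "character", "recognition"]
      print(correct_token("rnformation", word_list))   # -> "information"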

  7. Systematic study of error sources in supersonic skin-friction balance measurements

    NASA Technical Reports Server (NTRS)

    Allen, J. M.

    1976-01-01

    An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.

  8. Error analysis of analytic solutions for self-excited near-symmetric rigid bodies - A numerical study

    NASA Technical Reports Server (NTRS)

    Kia, T.; Longuski, J. M.

    1984-01-01

    Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.

  9. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

    Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice experimental errors lead to an ill-condition problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors are shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302

  10. Adaptive Numerical Dissipation Control in High Order Schemes for Multi-D Non-Ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.

    2005-01-01

    The required type and amount of numerical dissipation/filter to accurately resolve all relevant multiscales of complex MHD unsteady high-speed shock/shear/turbulence/combustion problems are not only physical problem dependent, but also vary from one flow region to another. In addition, proper and efficient control of the divergence of the magnetic field (Div(B)) numerical error for high order shock-capturing methods poses extra requirements for the considered type of CPU intensive computations. The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multiresolution wavelets (WAV) (for the above types of flow feature). These filters also provide a natural and efficient way for the minimization of Div(B) numerical error.

  11. Fully implicit moving mesh adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Serazio, C.; Chacon, L.; Lapenta, G.

    2006-10-01

    In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best dealt with by fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and considerably difficult to treat numerically. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving mesh approach is competitive vs. uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys. 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006).

  12. Modeling Morphogenesis with Reaction-Diffusion Equations Using Galerkin Spectral Methods

    DTIC Science & Technology

    2002-05-06

    reaction-diffusion equation is a difficult problem in analysis that will not be addressed here. Errors will also arise from numerically approx solutions to...the ODEs. When comparing the approximate solution to actual reaction-diffusion systems found in nature, we must also take into account errors that...

  13. Lexical and Semantic Binding in Verbal Short-Term Memory

    ERIC Educational Resources Information Center

    Jefferies, Elizabeth; Frankish, Clive R.; Ralph, Matthew A. Lambon

    2006-01-01

    Semantic dementia patients make numerous phoneme migration errors in their immediate serial recall of poorly comprehended words. In this study, similar errors were induced in the word recall of healthy participants by presenting unpredictable mixed lists of words and nonwords. This technique revealed that lexicality, word frequency, imageability,…

  14. Proposed Interventions to Decrease the Frequency of Missed Test Results

    ERIC Educational Resources Information Center

    Wahls, Terry L.; Cram, Peter

    2009-01-01

    Numerous studies have identified that delays in diagnosis related to the mishandling of abnormal test results are an import contributor to diagnostic errors. Factors contributing to missed results included organizational factors, provider factors and patient-related factors. At the diagnosis error conference continuing medical education conference…

  15. Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei

    2010-01-01

    This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…

  16. Testing Intelligently Includes Double-Checking Wechsler IQ Scores

    ERIC Educational Resources Information Center

    Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas

    2011-01-01

    The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…

  17. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  18. Evidence of universal inverse-third power law for the shielding-induced fractional decrease in apex field enhancement factor at large spacings: a response via accurate Laplace-type calculations

    NASA Astrophysics Data System (ADS)

    de Assis, Thiago A.; Dall’Agnol, Fernando F.

    2018-05-01

    Numerical simulations are important when assessing the many characteristics of field emission related phenomena. In small simulation domains, the electrostatic effect from the boundaries is known to influence the calculated apex field enhancement factor (FEF) of the emitter, but no established dependence has been reported at present. In this work, we report the dependence of the lateral size, L, and the height, H, of the simulation domain on the apex-FEF of a single conducting ellipsoidal emitter. Firstly, we analyze the error, ε, in the calculation of the apex-FEF as a function of H and L. Importantly, our results show that the effects of H and L on ε are scale invariant, allowing one to predict ε for ratios L/h and H/h, where h is the height of the emitter. Next, we analyze the fractional change of the apex-FEF, δ, between a single emitter and a pair. We show that small relative errors in the computed apex-FEF, due to the finite domain size, are sufficient to alter the functional dependence δ(c), where c is the distance between the emitters in the pair. We show that δ obeys a recently proposed power-law decay (Forbes 2016 J. Appl. Phys. 120 054302) at sufficiently large distances in the limit of infinite domain size, which is not observed when using a long-established exponential decay (Bonard et al 2001 Adv. Mater. 13 184) or a more sophisticated fitting formula proposed recently by Harris et al (2015 AIP Adv. 5 087182). We show that the inverse-third power law dependence is respected for various systems, such as infinite arrays and small clusters of emitters with different shapes. Thus, a power-law decay with exponent m = 3 is suggested to be a universal signature of the charge-blunting effect in small clusters or arrays, at sufficiently large distances between emitters of any shape. These results improve the physical understanding of field electron emission theory, allowing emitters in small clusters or arrays to be characterized accurately.

  19. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    PubMed Central

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
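
    For orientation, a generic scalar bootstrap particle filter (propagate, weight by the likelihood, resample) is sketched below; the dynamics, measurement model, and noise levels are placeholders, and this is not the paper's nonlinear error model or its PF/KF comparison.

      import numpy as np

      rng = np.random.default_rng(1)

      def bootstrap_pf(y_obs, f, h, Q, R, particles):
          # bootstrap particle filter for a scalar state
          X = particles.copy()
          estimates = []
          for y in y_obs:
              X = f(X) + rng.normal(0.0, np.sqrt(Q), X.shape)      # propagate through the dynamics
              w = np.exp(-0.5 * (y - h(X)) ** 2 / R) + 1e-300      # measurement likelihood weights
              w /= w.sum()
              X = X[rng.choice(X.size, size=X.size, p=w)]          # multinomial resampling
              estimates.append(X.mean())
          return np.array(estimates)

      # assumed toy problem: a random-walk state observed directly with noise
      truth = np.cumsum(rng.normal(0.0, 0.1, 100))
      y_obs = truth + rng.normal(0.0, 0.2, truth.size)
      xhat = bootstrap_pf(y_obs, f=lambda x: x, h=lambda x: x, Q=0.01, R=0.04,
                          particles=rng.normal(0.0, 1.0, 500))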

  20. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    PubMed

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  1. Limitations of the paraxial Debye approximation.

    PubMed

    Sheppard, Colin J R

    2013-04-01

    In the paraxial form of the Debye integral for focusing, higher order defocus terms are ignored, which can result in errors when dealing with aberrations, even for low numerical aperture. These errors can be avoided by using a different integration variable. The aberrations of a glass slab, such as a coverslip, are expanded in terms of the new variable and expressed in terms of Zernike polynomials to assist with aberration balancing. Tube length error is also discussed.

  2. An Astronomical Test of CCD Photometric Precision

    NASA Technical Reports Server (NTRS)

    Koch, David; Dunham, Edward; Borucki, William; Jenkins, Jon; DeVingenzi, D. (Technical Monitor)

    1998-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques. we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  3. Ion beam machining error control and correction for small scale optics.

    PubMed

    Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi

    2011-09-20

    Ion beam figuring (IBF) technology for small-scale optical components is discussed. Since a small removal function can be obtained in IBF, computer-controlled optical surfacing technology becomes capable of machining precision centimeter- or millimeter-scale optical components deterministically. When a small ion beam is used to machine small optical components, some key problems must be considered seriously, such as positioning the small ion beam on the optical surface, the material removal rate, and controlling the ion beam scanning pitch on the optical surface. A small ion beam is more sensitive to these problems than a large one because of its small beam diameter and lower material removal rate. In this paper, we discuss these problems and their influence on machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is deduced for correcting the positioning error of the ion beam, with the material removal rate estimated using a selected optimal scanning pitch. Experiments on ϕ10 mm Zerodur planar and spherical samples were performed, and the final surface errors, measured with a Zygo GPI interferometer, are both smaller than λ/100.

  4. Comparison of MLC error sensitivity of various commercial devices for VMAT pre-treatment quality assurance.

    PubMed

    Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi

    2018-05-01

    The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (Scandidos Delta4, PTW 2D-array, iRT systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used, and nine MLC error plans were generated for all nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1 mm, 2%/2 mm, and 3%/3 mm) and dose difference (DD; 1%, 2%, and 3%) were computed between the baseline and error-induced measurements. Some deviations of the MLC error sensitivity across the evaluation metrics and MLC error ranges were observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices showed a good linear correlation between the cumulative signal difference and the magnitude of MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both the Delta4 and the 2D-array. However, small MLC errors for small aperture sizes, such as for lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC errors because of its large sensitive volume, while the other devices could not detect this error in some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
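
    The evaluation metrics mentioned above are straightforward to sketch for a 1D dose profile. The snippet below computes a brute-force global gamma index and a simple global dose-difference pass rate; the 3%/3 mm criteria mirror the loosest criteria cited, and the input profiles, grid spacing, and test data are assumptions for the example.

      import numpy as np

      def gamma_1d(ref, ev, dx, dose_tol=0.03, dist_tol=3.0):
          # brute-force global 1D gamma: for each reference point, minimize the combined
          # dose-difference / distance-to-agreement metric over all evaluated points
          x = np.arange(ref.size) * dx
          norm = dose_tol * ref.max()
          g = np.empty(ref.size)
          for i in range(ref.size):
              g[i] = np.sqrt(((x - x[i]) / dist_tol) ** 2 + ((ev - ref[i]) / norm) ** 2).min()
          return g

      def pass_rates(ref, ev, dx):
          g = gamma_1d(ref, ev, dx)
          dd = np.abs(ev - ref) / ref.max()                    # simple global dose difference
          return (g <= 1).mean(), (dd <= 0.03).mean()          # gamma and 3% DD pass rates

      xgrid = np.linspace(0.0, 50.0, 251)                      # 0.2 mm spacing (assumed)
      ref = np.exp(-((xgrid - 25.0) / 8.0) ** 2)
      ev = 1.02 * np.exp(-((xgrid - 25.5) / 8.0) ** 2)         # slightly shifted and scaled profile
      print(pass_rates(ref, ev, dx=0.2))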

  5. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    NASA Technical Reports Server (NTRS)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

    This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.

  6. Passive quantum error correction of linear optics networks through error averaging

    NASA Astrophysics Data System (ADS)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  7. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in standard as well as in dynamic version. The main strength of being able to correctly calibrate numerical dissipation is the possibility to regularise the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  8. URANS simulations of the tip-leakage cavitating flow with verification and validation procedures

    NASA Astrophysics Data System (ADS)

    Cheng, Huai-yu; Long, Xin-ping; Liang, Yun-zhi; Long, Yun; Ji, Bin

    2018-04-01

    In the present paper, the Vortex Identified Zwart-Gerber-Belamri (VIZGB) cavitation model coupled with the SST-CC turbulence model is used to investigate the unsteady tip-leakage cavitating flow induced by a NACA0009 hydrofoil. A qualitative comparison between the numerical and experimental results is made. In order to quantitatively evaluate the reliability of the numerical data, the verification and validation (V&V) procedures are used in the present paper. Errors of numerical results are estimated with seven error estimators based on the Richardson extrapolation method. It is shown that though a strict validation cannot be achieved, a reasonable prediction of the gross characteristics of the tip-leakage cavitating flow can be obtained. Based on the numerical results, the influence of the cavitation on the tip-leakage vortex (TLV) is discussed, which indicates that the cavitation accelerates the fusion of the TLV and the tip-separation vortex (TSV). Moreover, the trajectory of the TLV, when the cavitation occurs, is close to the side wall.
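
    The error estimation referred to above is commonly built on Richardson extrapolation. A minimal sketch (generic V&V practice, not necessarily one of the paper's seven estimators) computes the observed order of accuracy and a grid convergence index from a scalar quantity obtained on three systematically refined grids; the refinement ratio, safety factor, and test values are assumed.

      import numpy as np

      def observed_order(f_coarse, f_medium, f_fine, r):
          # observed order of accuracy from three solutions with constant refinement ratio r
          return np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

      def gci_fine(f_medium, f_fine, r, p, Fs=1.25):
          # grid convergence index on the fine grid (Fs is an assumed safety factor)
          eps = abs((f_medium - f_fine) / f_fine)
          return Fs * eps / (r ** p - 1.0)

      p = observed_order(1.16, 1.04, 1.01, r=2.0)      # -> 2.0 for this synthetic O(h^2) data
      print(p, gci_fine(1.04, 1.01, r=2.0, p=p))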

  9. Optimization of a hot-air anti-icing system for an airplane wing based on the dual kriging method

    NASA Astrophysics Data System (ADS)

    Hannat, Ridha

    The aim of this thesis is to apply a new optimization methodology based on the dual kriging method to a hot-air anti-icing system for airplane wings. The anti-icing system consists of a piccolo tube placed along the span of the wing, in the leading edge area. The hot air is injected through small nozzles and impinges on the inner wall of the wing. The objective function targeted by the optimization is the heat transfer effectiveness of the anti-icing system, defined as the ratio of the heat flux at the inner wall of the wing to the sum of the heat flows from all the nozzles of the anti-icing system. The methodology adopted to optimize an anti-icing system consists of three steps. The first step is to build a database according to the Box-Behnken design of experiments. The objective function is then modeled by the dual kriging method, and finally the SQP optimization method is applied. One advantage of dual kriging is that the model passes exactly through all measurement points, but it can also take numerical errors into account and deviate from these points. Moreover, the kriged model can be updated with each new numerical simulation. These features make dual kriging a good tool for building the response surfaces needed for the anti-icing system optimization. The first chapter presents a literature review and the optimization problem related to the anti-icing system. Chapters two, three and four present the three articles submitted. Chapter two is devoted to the validation of the CFD codes used to perform the numerical simulations of an anti-icing system and to compute the conjugate heat transfer (CHT). The CHT is calculated by taking into account the external flow around the airfoil, the internal flow in the anti-icing system, and the conduction in the wing. The heat transfer coefficient at the external skin of the airfoil is almost the same whether or not the external flow is taken into account. Therefore, only the internal flow is considered in the following articles. Chapter three concerns the design of experiments (DoE) matrix and the construction of a second-order parametric model. The objective function model is based on the Box-Behnken DoE. The parametric model that results from the numerical simulations serves as a basis of comparison for the kriged model of the third article. Chapter four applies the dual kriging method to model the heat transfer effectiveness of the anti-icing system and uses the model for optimization. The possibility of including the numerical error in the results is explored. For the test cases studied, introducing the numerical error in the optimization process does not improve the results. The dual kriging method is also used to model the distribution of the local heat flux and to interpolate the local heat flux corresponding to the optimal design of the anti-icing system.

  10. Uncertainty of InSAR velocity fields for measuring long-wavelength displacement

    NASA Astrophysics Data System (ADS)

    Fattahi, H.; Amelung, F.

    2014-12-01

    Long-wavelength artifacts in InSAR data are the main limitation to measure long-wavelength displacement; they are traditionally attributed mainly to the inaccuracy of the satellite orbits (orbital errors). However, most satellites are precisely tracked resulting in uncertainties of orbits of 2-10 cm. Orbits of these satellites are thus precise enough to obtain precise velocity fields with uncertainties better than 1 mm/yr/100 km for older satellites (e.g. Envisat) and better than 0.2 mm/yr/100 km for modern satellites (e.g. TerraSAR-X and Sentinel-1) [Fattahi & Amelung, 2014]. Such accurate velocity fields are achievable if long-wavelength artifacts from sources other than orbital errors are identified and corrected for. We present a modified Small Baseline approach to measure long-wavelength deformation and evaluate the uncertainty of these measurements. We use a redundant network of interferograms for detection and correction of unwrapping errors to ensure the unbiased estimation of phase history. We distinguish between different sources of long-wavelength artifacts and correct those introduced by atmospheric delay, topographic residuals, timing errors, processing approximations and hardware issues. We evaluate the uncertainty of the velocity fields using a covariance matrix with the contributions from orbital errors and residual atmospheric delay. For contributions from the orbital errors we consider the standard deviation of velocity gradients in range and azimuth directions as a function of orbital uncertainty. For contributions from the residual atmospheric delay we use several approaches including the structure functions of InSAR time-series epochs, the predicted delay from numerical weather models and estimated wet delay from optical imagery. We validate this InSAR approach for measuring long-wavelength deformation by comparing InSAR velocity fields over ~500 km long swath across the southern San Andreas fault system with independent GPS velocities and examine the estimated uncertainties in several non-deforming areas. We show the efficiency of the approach to study the continental deformation across the Chaman fault system at the western Indian plate boundary. Ref: Fattahi, H., & Amelung, F., (2014), InSAR uncertainty due to orbital errors, Geophys, J. Int (in press).

  11. The development of estimated methodology for interfacial adhesion of semiconductor coatings having an enormous mismatch extent

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Chun; Huang, Pei-Chen

    2018-05-01

    The long-term reliability of multi-stacked coatings subjected to bending or rolling loads is a severe challenge for extending the lifespan of such structures. In addition, the adhesive strength between dissimilar materials is regarded as the major mechanical reliability concern in multi-stacked films. However, the significant scale mismatch among the multi-stacked coatings, from several nanometers to micrometers, causes numerical accuracy and convergence issues in fracture-based simulation approaches. For these reasons, this study proposed FEA-based multi-level submodeling and multi-point constraint (MPC) techniques to overcome the scale-mismatch issue. The results indicated that first- and second-order submodeling over a suitably chosen region can achieve a small error of 1.27% compared with the experimental result while significantly reducing the mesh density and computing time. Moreover, the MPC method adopted in the FEA simulation showed only a 0.54% error when the boundary of the selected local region was placed away from the critical region of concern, following the Saint-Venant principle. In this investigation, two FEA-based approaches were used to overcome the evident scale-mismatch issue when the adhesive strengths of micro- and nano-scale multi-stacked coatings were taken into account.

  12. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
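
    A minimal sketch of an SDC step and a residual-based stopping rule in the spirit described above is given below for a scalar test equation, using explicit-Euler correction sweeps on three Lobatto nodes; the test problem, tolerance, and the use of the sweep-to-sweep change as the residual measure are illustrative assumptions rather than the paper's implementation.

      import numpy as np

      lam = -4.0                                  # assumed scalar test problem y' = lam*y
      f = lambda y: lam * y

      # three Lobatto nodes on [0, 1] and the node-to-node spectral integration matrix
      tau = np.array([0.0, 0.5, 1.0])
      S = np.array([[5/24,  1/3, -1/24],
                    [-1/24, 1/3,  5/24]])

      def sdc_step(y0, dt, max_sweeps=6, rtol=1e-10):
          y = np.full(3, y0)                      # provisional (zeroth-order) solution at the nodes
          res0 = None
          for _ in range(max_sweeps):
              F = f(y)
              quad = dt * (S @ F)                 # integrals of the interpolant of f over subintervals
              y_new = y.copy()
              for m in range(2):                  # explicit-Euler correction sweep
                  dtm = dt * (tau[m + 1] - tau[m])
                  y_new[m + 1] = y_new[m] + dtm * (f(y_new[m]) - F[m]) + quad[m]
              res = np.max(np.abs(y_new - y))     # sweep-to-sweep change used as the residual
              y = y_new
              if res0 is None:
                  res0 = res
              elif res < rtol * res0:             # stop once small relative to the first sweep
                  break
          return y[-1]

      dt, T, y, t = 0.1, 1.0, 1.0, 0.0
      while t < T - 1e-12:
          y = sdc_step(y, dt)
          t += dt
      print(y, np.exp(lam * T))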

  13. Performance Analysis of Local Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, Xin T.

    2018-03-01

    Ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the sample size exceeds the logarithmic of state dimension and a constant that depends only on the local radius; (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter error will be accurate in long time even if the initialization is not. The analysis also reveals an intrinsic inconsistency caused by the localization technique, and a stable localized structure is necessary to control this inconsistency. While this structure is usually taken for granted for the operation of LEnKF, it can also be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are also validated by numerical implementation of LEnKF on a simple stochastic turbulence in two dynamical regimes.
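
    A schematic stochastic EnKF analysis step with covariance localization by an element-wise taper is sketched below for a periodic 1D state; a Gaussian taper is used in place of the Gaspari-Cohn function common in practice, and the operators, dimensions, and noise statistics are assumptions, not the paper's setting.

      import numpy as np

      def localized_enkf_analysis(X, y, H, R, loc_radius):
          # X: (n, N) ensemble, y: (m,) observations, H: (m, n) observation operator, R: (m, m)
          n, N = X.shape
          A = X - X.mean(axis=1, keepdims=True)
          P = A @ A.T / (N - 1)                     # sample forecast covariance
          idx = np.arange(n)
          dist = np.abs(idx[:, None] - idx[None, :])
          dist = np.minimum(dist, n - dist)         # periodic distance
          C = np.exp(-(dist / loc_radius) ** 2)     # localization taper (Gaussian, assumed)
          P_loc = C * P                             # Schur (element-wise) product
          K = P_loc @ H.T @ np.linalg.inv(H @ P_loc @ H.T + R)
          rng = np.random.default_rng(0)
          Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, N).T  # perturbed obs
          return X + K @ (Y - H @ X)

      # tiny usage with an identity observation of every fourth grid point (assumed setup)
      n, N, m = 40, 20, 10
      rng = np.random.default_rng(1)
      H = np.zeros((m, n)); H[np.arange(m), np.arange(0, n, 4)] = 1.0
      Xa = localized_enkf_analysis(rng.normal(size=(n, N)), rng.normal(size=m),
                                   H, 0.25 * np.eye(m), loc_radius=5.0)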

  14. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  15. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  16. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. Special is that these realisations are binned conditioned on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz 96' model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.

  17. Adaptive Control of Small Outboard-Powered Boats for Survey Applications

    NASA Technical Reports Server (NTRS)

    VanZwieten, T.S.; VanZwieten, J.H.; Fisher, A.D.

    2009-01-01

    Four autopilot controllers have been developed in this work that can both hold a desired heading and follow a straight line. These PID, adaptive PID, neuro-adaptive, and adaptive augmenting control algorithms have all been implemented in a numerical simulation of a 33-foot center console vessel with wind, waves, and current disturbances acting in the perpendicular (across-track) direction of the boat's desired trajectory. Each controller is tested for its ability to follow a desired heading in the presence of these disturbances and then to follow a straight line at two different throttle settings for the same disturbances. These controllers were tuned for an input thrust of 2000 N, and all four controllers showed good performance, with none of the controllers significantly outperforming the others when holding a constant heading and following a straight line at this engine thrust. Each controller was then tested for a reduced engine thrust of 1200 N per engine, where each of the three adaptive controllers reduced heading error and across-track error by approximately 50% after a 300-second tuning period when compared to the fixed-gain PID, showing that significant robustness to changes in throttle setting was gained by using an adaptive algorithm.
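
    Only the fixed-gain PID piece of such an autopilot is sketched below, with heading-error wrapping; the gains, time step, and signal names are placeholders, and the adaptive augmentation studied in the paper is not shown.

      import numpy as np

      def wrap(angle):
          # wrap an angle error to [-pi, pi)
          return (angle + np.pi) % (2 * np.pi) - np.pi

      class HeadingPID:
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.int_err, self.prev_err = 0.0, 0.0

          def command(self, heading, heading_ref):
              err = wrap(heading_ref - heading)
              self.int_err += err * self.dt
              derr = (err - self.prev_err) / self.dt
              self.prev_err = err
              return self.kp * err + self.ki * self.int_err + self.kd * derr

      ctrl = HeadingPID(kp=1.5, ki=0.05, kd=4.0, dt=0.1)                 # placeholder gains
      rudder = ctrl.command(heading=np.deg2rad(10.0), heading_ref=np.deg2rad(45.0))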

  18. Coherent control of molecular alignment of homonuclear diatomic molecules by analytically designed laser pulses.

    PubMed

    Zou, Shiyang; Sanz, Cristina; Balint-Kurti, Gabriel G

    2008-09-28

    We present an analytic scheme for designing laser pulses to manipulate the field-free molecular alignment of a homonuclear diatomic molecule. The scheme is based on the use of a generalized pulse-area theorem and makes use of pulses constructed around two-photon resonant frequencies. In the proposed scheme, the populations and relative phases of the rovibrational states of the molecule are independently controlled utilizing changes in the laser intensity and in the carrier-envelope phase difference, respectively. This allows us to create the correct coherent superposition of rovibrational states needed to achieve optimal molecular alignment. The validity and efficiency of the scheme are demonstrated by explicit application to the H2 molecule. The analytically designed laser pulses are tested by exact numerical solutions of the time-dependent Schrödinger equation including laser-molecule interactions to all orders of the field strength. The design of a sequence of pulses to further enhance molecular alignment is also discussed and tested. It is found that the rotating wave approximation used in the analytic design of the laser pulses leads to small errors in the prediction of the relative phase of the rotational states. It is further shown how these errors may be easily corrected.

  19. Numerical relativity waveform surrogate model for generically precessing binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla

    2017-07-01

    A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ˜50 ms . We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.

  20. Attitude control with realization of linear error dynamics

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Bach, Ralph E.

    1993-01-01

    An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.

  1. Achieving High Reliability in Histology:  An Improvement Series to Reduce Errors.

    PubMed

    Heher, Yael K; Chen, Yigu; Pyatibrat, Sergey; Yoon, Edward; Goldsmith, Jeffrey D; Sands, Kenneth E

    2016-11-01

    Despite sweeping medical advances in other fields, histology processes have by and large remained constant over the past 175 years. Patient label identification errors are a known liability in the laboratory and can be devastating, resulting in incorrect diagnoses and inappropriate treatment. The objective of this study was to identify vulnerable steps in the histology workflow and reduce the frequency of labeling errors (LEs). In this 36-month study period, a numerical step key (SK) was developed to capture LEs. The two most prevalent root causes were targeted for Lean workflow redesign: manual slide printing and microtome cutting. The numbers and rates of LEs before and after interventions were compared to evaluate the effectiveness of interventions. Following the adoption of a barcode-enabled laboratory information system, the error rate decreased from a baseline of 1.03% (794 errors in 76,958 cases) to 0.28% (107 errors in 37,880 cases). After the implementation of an innovative ice tool box, allowing single-piece workflow for histology microtome cutting, the rate came down to 0.22% (119 errors in 54,342 cases). The study pointed out the importance of tracking and understanding LEs by using a simple numerical SK and quantified the effectiveness of two customized Lean interventions. Overall, a 78.64% reduction in LEs and a 35.28% reduction in time spent on rework have been observed since the study began. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  2. Experiments with explicit filtering for LES using a finite-difference method

    NASA Technical Reports Server (NTRS)

    Lund, T. S.; Kaltenbach, H. J.

    1995-01-01

    The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and these errors will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES has repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh. The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, the following interesting question arises. Since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from an LES perspective. Very little is known about this tradeoff and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
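
    As a concrete illustration of what an explicit filter does to a discretized field, here is a minimal periodic top-hat (box) filter applied to a 1-D signal. The filter width and the test signal are illustrative assumptions, not the filter actually used in the channel-flow simulations discussed above.

        import numpy as np

        def box_filter(u, width=3):
            # Top-hat (box) filter of odd width applied along one periodic direction.
            kernel = np.ones(width) / width
            pad = width // 2
            up = np.concatenate([u[-pad:], u, u[:pad]])          # periodic extension
            return np.convolve(up, kernel, mode='valid')

        x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        u = np.sin(x) + 0.2 * np.sin(20 * x)                      # resolved plus small-scale content
        print(np.std(u - box_filter(u)))                          # energy removed by the filter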

  3. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  4. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  5. Evaluate error correction ability of magnetorheological finishing by smoothing spectral function

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin

    2014-08-01

    Power Spectral Density (PSD) has been entrenched in optics design and manufacturing as a characterization of mid-high spatial frequency (MHSF) errors. Smoothing Spectral Function (SSF) is a newly proposed parameter, based on PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic and sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) leads to MHSF errors inevitably. SSF is employed to study the error correction ability of the MRF process at different spatial frequencies. The surface figures and PSD curves of the work-piece machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for different spatial frequency errors is indicated as a normalized numerical value.
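
    A rough sketch of the PSD-ratio computation such an evaluation rests on: the ratio of the error PSD after a polishing pass to the PSD before it shows, frequency by frequency, how much error the process removed. The exact SSF definition in the paper may differ, and the profile, sampling and attenuation below are made-up.

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(0)
        dx = 1e-3                                   # sample spacing in mm (illustrative)
        x = np.arange(4096) * dx
        figure_error = 5e-3 * np.sin(2 * np.pi * 2.0 * x)      # low-frequency form error
        roughness = 1e-3 * rng.standard_normal(x.size)         # mid-high frequency content
        before = figure_error + roughness
        after = 0.2 * figure_error + roughness      # pretend the pass removed low frequencies only

        f_b, p_b = welch(before, fs=1.0 / dx, nperseg=1024)
        f_a, p_a = welch(after, fs=1.0 / dx, nperseg=1024)
        smoothing = p_a / p_b                       # < 1 where the process corrects that frequency
        print(f_b[np.argmin(smoothing)], smoothing.min())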

  6. Analysis of the Hessian for Inverse Scattering Problems. Part 3. Inverse Medium Scattering of Electromagnetic Waves in Three Dimensions

    DTIC Science & Technology

    2012-08-01

    An implication of the compactness of the Hessian is that for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately... This... probability distribution is given by the inverse of the Hessian of the negative log likelihood function. For Gaussian data noise and model error, this...

  7. Real-time orbit estimation for ATS-6 from redundant attitude sensors

    NASA Technical Reports Server (NTRS)

    Englar, T. S., Jr.

    1975-01-01

    A program installed in the ATSOCC on-line computer operates with attitude sensor data to produce a smoothed real-time orbit estimate. This estimate is obtained from a Kalman filter which enables the estimate to be maintained in the absence of T/M data. The results are described of analytical and numerical investigations into the sensitivity of Control Center output to the position errors resulting from the real-time estimation. The results of the numerical investigation, which used several segments of ATS-6 data gathered during the Sensor Data Acquisition run on August 19, 1974, show that the implemented system can achieve absolute position determination with an error of about 100 km, implying pointing errors of less than 0.2 deg in latitude and longitude. This compares very favorably with ATS-6 specifications of approximately 0.5 deg in latitude-longitude.

  8. Performance of some numerical Laplace inversion methods on American put option formula

    NASA Astrophysics Data System (ADS)

    Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.

    2018-03-01

    Numerical inversion approaches to the Laplace transform are used to obtain a semianalytic solution. Some of the mathematical inversion methods, such as Durbin-Crump, Widder, and Papoulis, can be used to calculate American put options through the optimal exercise price in the Laplace space. The comparison of the methods on some simple functions is intended to establish the accuracy and the parameters used in the calculation of American put options. The result obtained is the performance of each method regarding accuracy and computational speed. The Durbin-Crump method has an average relative error of 2.006e-004 with a computational speed of 0.04871 seconds, the Widder method has an average relative error of 0.0048 with a computational speed of 3.100181 seconds, and the Papoulis method has an average relative error of 9.8558e-004 with a computational speed of 0.020793 seconds.
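
    For orientation, a bare-bones Fourier-series (Durbin-type) inversion of a Laplace transform, checked on a transform with a known inverse. The contour shift a, the period parameter T and the number of terms N are illustrative tuning assumptions, and the production Durbin-Crump method adds convergence acceleration on top of this plain series.

        import numpy as np

        def laplace_invert_fourier(F, t, a=2.0, N=4000):
            # Trapezoidal (Fourier-series) approximation of the Bromwich integral:
            # f(t) ~ (e^{a t}/T) [ Re F(a)/2
            #        + sum_k ( Re F(a+ik*pi/T) cos(k*pi*t/T) - Im F(a+ik*pi/T) sin(k*pi*t/T) ) ]
            T = 2.0 * t                              # series is usable for roughly 0 < t < 2T
            k = np.arange(1, N + 1)
            Fk = F(a + 1j * k * np.pi / T)
            series = (np.real(F(a)) / 2.0
                      + np.sum(Fk.real * np.cos(k * np.pi * t / T)
                               - Fk.imag * np.sin(k * np.pi * t / T)))
            return np.exp(a * t) / T * series

        # sanity check on F(s) = 1/(s + 1), whose inverse transform is exp(-t)
        t = 1.0
        approx = laplace_invert_fourier(lambda s: 1.0 / (s + 1.0), t)
        print(approx, abs(approx - np.exp(-t)))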

  9. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive, and if the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.

  10. Stress Recovery and Error Estimation for Shell Structures

    NASA Technical Reports Server (NTRS)

    Yazdani, A. A.; Riggs, H. R.; Tessler, A.

    2000-01-01

    The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

  11. A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions

    NASA Astrophysics Data System (ADS)

    Exl, Lukas

    2017-12-01

    An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
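
    A stripped-down illustration of the FFT kernel at the heart of such a solver, here for the periodic problem -∇²u = ρ on a cube. The cited method adds the finite-Fourier-series machinery needed for genuinely unbounded, free-space boundary conditions; the grid size and test density below are arbitrary.

        import numpy as np

        def poisson_fft_periodic(rho, L=1.0):
            # Solve -lap(u) = rho on a periodic cube of side L via FFT, O(N log N).
            n = rho.shape[0]
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
            kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
            k2 = kx**2 + ky**2 + kz**2
            k2[0, 0, 0] = 1.0                       # avoid division by zero for the mean mode
            u_hat = np.fft.fftn(rho) / k2
            u_hat[0, 0, 0] = 0.0                    # fix the arbitrary constant (zero mean)
            return np.real(np.fft.ifftn(u_hat))

        # sanity check with rho chosen so that u = sin(2*pi*x/L) is the exact solution
        n, L = 32, 1.0
        x = np.arange(n) * L / n
        X = np.meshgrid(x, x, x, indexing='ij')[0]
        rho = (2 * np.pi / L) ** 2 * np.sin(2 * np.pi * X / L)
        print(np.max(np.abs(poisson_fft_periodic(rho, L) - np.sin(2 * np.pi * X / L))))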

  12. Metric Identification and Protocol Development for Characterizing DNAPL Source Zone Architecture and Associated Plume Response

    DTIC Science & Technology

    2013-09-01

    [Figure-caption fragments from the report: Figure M.4.1, two-dimensional domains cropped out of three-dimensional numerically generated realizations, with 3D PCE-NAPL realizations generated by UTCHEM; Figure R.3.2 and related figures, absolute error versus relative error scatter plots of pM and gM from the SGS and TP/MC data sets using multi-task manifold regression.]

  13. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's Extrapolation. This method is based off of progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty method to approximate errors.
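
    For context, the basic three-grid Richardson calculation that such an interpolation scheme has to feed: given solutions on three systematically refined grids, one recovers the observed order of accuracy and an extrapolated estimate. The numbers below are contrived illustrative values for a second-order quantity whose exact value is 1.0.

        import numpy as np

        def richardson(f1, f2, f3, r):
            # f1 (fine), f2 (medium), f3 (coarse) solutions on grids refined by a constant ratio r.
            p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)   # observed order of accuracy
            f_exact = f1 + (f1 - f2) / (r**p - 1.0)         # Richardson-extrapolated estimate
            return p, f_exact

        print(richardson(1.01, 1.04, 1.16, r=2.0))          # expect order ~2 and estimate ~1.0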

  14. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.

  15. A study of potential sources of linguistic ambiguity in written work instructions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.

    This report describes the results of a small experimental study that investigated potential sources of ambiguity in written work instructions (WIs). The English language can be highly ambiguous because words with different meanings can share the same spelling. Previous studies in the nuclear weapons complex have shown that ambiguous WIs can lead to human error, which is a major cause for concern. To study possible sources of ambiguity in WIs, we determined which of the recommended action verbs in the DOE and BWXT writer's manuals have numerous meanings to their intended audience, making them potentially ambiguous. We used cognitive psychology techniques to conduct a survey in which technicians who use WIs in their jobs indicated the first meaning that came to mind for each of the words. Although the findings of this study are limited by the small number of respondents, we identified words that had many different meanings even within this limited sample. WI writers should pay particular attention to these words and to their most frequent meanings so that they can avoid ambiguity in their writing.

  16. Performance evaluation of receive-diversity free-space optical communications over correlated Gamma-Gamma fading channels.

    PubMed

    Yang, Guowei; Khalighi, Mohammad-Ali; Ghassemlooy, Zabih; Bourennane, Salah

    2013-08-20

    The efficacy of spatial diversity in practical free-space optical communication systems is impaired by the fading correlation among the underlying subchannels. We consider in this paper the generation of correlated Gamma-Gamma random variables in view of evaluating the system outage probability and bit-error-rate under the condition of correlated fading. Considering the case of receive-diversity systems with intensity modulation and direct detection, we propose a set of criteria for setting the correlation coefficients on the small- and large-scale fading components based on scintillation theory. We verify these criteria using wave-optics simulations and further show through Monte Carlo simulations that we can effectively neglect the correlation corresponding to the small-scale turbulence in most practical systems, irrespective of the specific turbulence conditions. This has not been clarified before, to the best of our knowledge. We then present some numerical results to illustrate the effect of fading correlation on the system performance. Our conclusions can be generalized to the cases of multiple-beam and multiple-beam multiple-aperture systems.

  17. An embedded mesh method using piecewise constant multipliers with stabilization: mathematical and numerical aspects

    DOE PAGES

    Puso, M. A.; Kokko, E.; Settgast, R.; ...

    2014-10-22

    An embedded mesh method using piecewise constant multipliers originally proposed by Puso et al. (CMAME, 2012) is analyzed here to determine effects of the pressure stabilization term and small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Furthermore, we show that the constraints and the stabilization terms can be recast as non-proportional damping such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning and is used to recommend an optimal value for the stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, we show several demanding problems highlighting the robustness of the proposed approach.

  18. Numerical Computation of Homogeneous Slope Stability

    PubMed Central

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

    To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the solution for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and particle swarm optimization algorithm (PSO) to this problem. In simple slope examples, the computational results using an EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different than the critical slip surface (CSS). PMID:25784927
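
    A generic particle-swarm minimizer of the kind referred to above; in the slope-stability setting the objective f would be the factor of safety as a function of the slip-surface parameters, whereas here it is only a toy quadratic bowl. The swarm size, inertia and acceleration coefficients are conventional illustrative choices.

        import numpy as np

        def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)]
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([f(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)]
            return gbest, pbest_val.min()

        # sanity check on a simple bowl with minimum at (1, 2)
        print(pso_minimize(lambda p: (p[0] - 1)**2 + (p[1] - 2)**2, [(-5, 5), (-5, 5)]))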

  19. Numerical computation of homogeneous slope stability.

    PubMed

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

    To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the solution for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and particle swarm optimization algorithm (PSO) to this problem. In simple slope examples, the computational results using an EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different than the critical slip surface (CSS).

  20. Density functional theory and chromium: Insights from the dimers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Würdemann, Rolf; Kristoffersen, Henrik H.; Moseler, Michael

    2015-03-28

    The binding in small Cr clusters is re-investigated, where the correct description of the dimer in three charge states is used as criterion to assign the most suitable density functional theory approximation. The difficulty in chromium arises from the subtle interplay between energy gain from hybridization and energetic cost due to exchange between s and d based molecular orbitals. Variations in published bond lengths and binding energies are shown to arise from insufficient numerical representation of electron density and Kohn-Sham wave-functions. The best functional performance is found for gradient corrected (GGA) functionals and meta-GGAs, where we find severe differences between functionals from the same family due to the importance of exchange. Only the “best fit” from Bayesian error estimation is able to predict the correct energetics for all three charge states unambiguously. With this knowledge, we predict small bond-lengths to be exclusively present in Cr₂ and Cr₂⁻. Already for the dimer cation, solely long bond-lengths appear, similar to what is found in the trimer and in chromium bulk.

  1. Inverse free steering law for small satellite attitude control and power tracking with VSCMGs

    NASA Astrophysics Data System (ADS)

    Malik, M. S. I.; Asghar, Sajjad

    2014-01-01

    Recent developments in integrated power and attitude control systems (IPACSs) for small satellite, has opened a new dimension to more complex and demanding space missions. This paper presents a new inverse free steering approach for integrated power and attitude control systems using variable-speed single gimbal control moment gyroscope. The proposed inverse free steering law computes the VSCMG steering commands (gimbal rates and wheel accelerations) such that error signal (difference in command and output) in feedback loop is driven to zero. H∞ norm optimization approach is employed to synthesize the static matrix elements of steering law for a static state of VSCMG. Later these matrix elements are suitably made dynamic in order for the adaptation. In order to improve the performance of proposed steering law while passing through a singular state of CMG cluster (no torque output), the matrix element of steering law is suitably modified. Therefore, this steering law is capable of escaping internal singularities and using the full momentum capacity of CMG cluster. Finally, two numerical examples for a satellite in a low earth orbit are simulated to test the proposed steering law.

  2. Determination by Small-angle X-ray Scattering of Pore Size Distribution in Nanoporous Track-etched Polycarbonate Membranes

    NASA Astrophysics Data System (ADS)

    Jonas, A. M.; Legras, R.; Ferain, E.

    1998-03-01

    Nanoporous track-etched membranes with narrow pore size distributions and average pore diameters tunable from 100 to 1000 Å are produced by the chemical etching of latent tracks in polymer films after irradiation by a beam of accelerated heavy ions. Nanoporous membranes are used for highly demanding filtration purposes, or as templates to obtain metallic or polymeric nanowires (L. Piraux et al., Nucl. Instr. Meth. Phys. Res. 1997, B131, 357). Such applications call for developments in nanopore size characterization techniques. In this respect, we report on the characterization by small-angle X-ray scattering (SAXS) of the nanopore size distribution (nPSD) in polycarbonate track-etched membranes. Obtaining the nPSD requires inverting an ill-conditioned inhomogeneous equation. We present different numerical routes to overcome the amplification of experimental errors in the resulting solutions, including a regularization technique that allows the nPSD to be obtained without a priori knowledge of its shape. The effect of deviations from cylindrical pore shape on the resulting distributions is analyzed. Finally, SAXS results are compared to results obtained by electron microscopy and conductometry.

  3. Time-symmetric integration in astrophysics

    NASA Astrophysics Data System (ADS)

    Hernandez, David M.; Bertschinger, Edmund

    2018-04-01

    Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives secular energy error increase for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are properties that are distinct for an integrator.
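
    A small reproduction-style sketch of the kind of experiment described: the implicit (time-reversible, non-symplectic) trapezoidal rule applied to a pendulum, with the implicit stage solved by fixed-point iteration and the energy error tracked along the way. The step size, duration and iteration count are arbitrary illustrative choices.

        import numpy as np

        def trapezoid_pendulum(q0=1.0, p0=0.0, h=0.1, steps=10000, iters=5):
            # Implicit trapezoidal integration of q' = p, p' = -sin(q).
            def f(q, p):
                return p, -np.sin(q)
            energy = lambda q, p: 0.5 * p * p - np.cos(q)
            q, p = q0, p0
            e0 = energy(q, p)
            drift = 0.0
            for _ in range(steps):
                fq, fp = f(q, p)
                qn, pn = q + h * fq, p + h * fp            # explicit predictor
                for _ in range(iters):                     # fixed-point corrector
                    gq, gp = f(qn, pn)
                    qn = q + 0.5 * h * (fq + gq)
                    pn = p + 0.5 * h * (fp + gp)
                q, p = qn, pn
                drift = max(drift, abs(energy(q, p) - e0))
            return drift                                   # largest energy error seen

        print(trapezoid_pendulum())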

  4. Approaches to Treating Student Written Errors

    ERIC Educational Resources Information Center

    Tran, Thu H.

    2013-01-01

    Second language writing teachers face numerous challenges when providing feedback on student writing. There may be so many problems in the writing that it is almost impossible for them to focus on, or they may constantly seek a better method of giving feedback on student written errors. This paper attempts to provide second language writing teachers…

  5. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.

  6. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

    Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two methods in a single optimization by the use of a constrained nonlinear optimization model, which regards both the two-norm of the surface residual error and the dwell-time gradient as an objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the requirement of velocity and the limitations of accelerations or decelerations. Indeed, the model and algorithm can also apply to other computer-controlled subaperture methods.
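
    To give a feel for the classical dwell-time step that the paper's constrained model builds on, here is a toy 1-D deconvolution posed as nonnegative least squares, removal = A·dwell, with a made-up Gaussian influence function. The dwell-time-gradient (machine-dynamics) penalty that is the point of the cited model is deliberately omitted here.

        import numpy as np
        from scipy.optimize import nnls

        n = 80
        x = np.linspace(0.0, 1.0, n)
        influence = lambda d: np.exp(-0.5 * (d / 0.03) ** 2)        # removal per unit dwell (made up)
        A = influence(x[:, None] - x[None, :])                      # column j: tool centred at x[j]
        error_map = 1.0 + 0.3 * np.sin(6 * np.pi * x)               # surface error to remove
        dwell, _ = nnls(A, error_map)                               # nonnegative dwell times
        print(np.max(np.abs(A @ dwell - error_map)))                # residual form error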

  7. Numerical investigations of potential systematic uncertainties in iron opacity measurements at solar interior temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagayama, T.; Bailey, J. E.; Loisel, G. P.

    Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10²² cm⁻³, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.

  8. Numerical investigations of potential systematic uncertainties in iron opacity measurements at solar interior temperatures

    DOE PAGES

    Nagayama, T.; Bailey, J. E.; Loisel, G. P.; ...

    2017-06-26

    Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10²² cm⁻³, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.

  9. Numerical Algorithm for Delta of Asian Option

    PubMed Central

    Zhang, Boxiang; Yu, Yang; Wang, Weiguo

    2015-01-01

    We study the numerical solution of the Greeks of Asian options. In particular, we derive a close form solution of Δ of Asian geometric option and use this analytical form as a control to numerically calculate Δ of Asian arithmetic option, which is known to have no explicit close form solution. We implement our proposed numerical method and compare the standard error with other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271
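
    The variance-reduction idea here is generic: subtract a correlated control whose expectation is known exactly. Below is a minimal sketch on a toy integrand, standing in for the arithmetic-Asian delta, with a control of known mean playing the role that the geometric-Asian closed form plays in the paper; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        z = rng.standard_normal(n)

        y = np.exp(z)                             # quantity whose mean we want; E[e^Z] = e^0.5
        x = z                                     # control variate with known mean E[Z] = 0
        c = np.cov(y, x)[0, 1] / np.var(x)        # (near-)optimal control coefficient
        y_cv = y - c * (x - 0.0)                  # control-variate adjusted samples

        for name, s in [("plain", y), ("control variate", y_cv)]:
            print(name, s.mean(), s.std(ddof=1) / np.sqrt(n))     # estimate and standard error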

  10. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.

  11. A study of complex scaling transformation using the Wigner representation of wavefunctions.

    PubMed

    Kaprálová-Ždánská, Petra Ruth

    2011-05-28

    The complex scaling operator exp(-θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO²⁺ vibronic resonances. © 2011 American Institute of Physics

  12. Design of algorithms for a dispersive hyperbolic problem

    NASA Technical Reports Server (NTRS)

    Roe, Philip L.; Arora, Mohit

    1991-01-01

    In order to develop numerical schemes for stiff problems, a model of relaxing heat flow is studied. To isolate those errors unavoidably associated with discretization, a method of characteristics is developed, containing three free parameters depending on the stiffness ratio. It is shown that such 'decoupled' schemes do not take into account the interaction between the wave families, and hence result in incorrect wavespeeds. Schemes can differ by up to two orders of magnitude in their rms errors, even while maintaining second-order accuracy. 'Coupled' schemes which account for the interactions are developed to obtain two additional free parameters. Numerical results are given for several decoupled and coupled schemes.

  13. Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods.

    PubMed

    Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G

    2014-01-27

    Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase lost by detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.

  14. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  15. Coarse-graining errors and numerical optimization using a relative entropy framework.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2011-03-07

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.

  16. Accuracy of finite-difference modeling of seismic waves : Simulation versus laboratory measurements

    NASA Astrophysics Data System (ADS)

    Arntsen, B.

    2017-12-01

    The finite-difference technique for numerical modeling of seismic waves is still important and for some areas extensively used. For exploration purposes, finite-difference simulation is at the core of both traditional imaging techniques such as reverse-time migration and more elaborate Full-Waveform Inversion techniques. The accuracy and fidelity of finite-difference simulation of seismic waves are hard to quantify, and meaningful error analysis is really only easily available for simplistic media. A possible alternative to theoretical error analysis is provided by comparing finite-difference simulated data with laboratory data created using a scale model. The advantage of this approach is the accurate knowledge of the model, within measurement precision, and the location of sources and receivers. We use a model made of PVC immersed in water and containing horizontal and tilted interfaces together with several spherical objects to generate ultrasonic pressure reflection measurements. The physical dimensions of the model are of the order of a meter, which after scaling represents a model with dimensions of the order of 10 kilometers and frequencies in the range of one to thirty hertz. We find that for plane horizontal interfaces the laboratory data can be reproduced by the finite-difference scheme with relatively small error, but for steeply tilted interfaces the error increases. For spherical interfaces the discrepancy between laboratory data and simulated data is sometimes much more severe, to the extent that it is not possible to simulate reflections from parts of highly curved bodies. The results are important in view of the fact that finite-difference modeling is often at the core of imaging and inversion algorithms tackling complicated geological areas with highly curved interfaces.

  17. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    NASA Astrophysics Data System (ADS)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is very essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are the European Centre for Medium-Range Weather Forecasts (ECMWF), National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, however with a bias in spatial distribution and intensity. The statistical parameters like mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts are under-predicting. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 hr forecast to the 48 hr forecast in all the three models.

  18. Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.

    2014-12-01

    Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.

  19. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

  20. New developments in spatial interpolation methods of Sea-Level Anomalies in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Troupin, Charles; Barth, Alexander; Beckers, Jean-Marie; Pascual, Ananda

    2014-05-01

    The gridding of along-track Sea-Level Anomalies (SLA) measured by a constellation of satellites has numerous applications in oceanography, such as model validation, data assimilation or eddy tracking. Optimal Interpolation (OI) is often the preferred method for this task, as it leads to the lowest expected error and provides an error field associated with the analysed field. However, the numerical cost of the method may limit its utilization in situations where the number of data points is significant. Furthermore, the separation of non-adjacent regions with OI requires adaptation of the code, leading to a further increase of the numerical cost. To solve these issues, the Data-Interpolating Variational Analysis (DIVA), a technique designed to produce gridded fields from sparse in situ measurements, is applied to SLA data in the Mediterranean Sea. DIVA and OI have been shown to be equivalent (provided some assumptions on the covariances are made). The main difference lies in the covariance function, which is not explicitly formulated in DIVA. The particular spatial and temporal distributions of measurements required adaptations of the software tool (data format, parameter determinations, ...). These adaptations are presented in the poster. The daily analysed and error fields obtained with this technique are compared with available products such as the gridded field from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) data server. The comparison reveals an overall good agreement between the products. The time evolution of the mean error field evidences the need for a large number of simultaneous altimetry satellites: in periods during which 4 satellites are available, the mean error is on the order of 17.5%, while when only 2 satellites are available, the error exceeds 25%. Finally, we propose the use of sea currents to improve the results of the interpolation, especially in the coastal area. These currents can be constructed from the bathymetry or extracted from a HF radar located in the Balearic Sea.
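
    For readers unfamiliar with the OI baseline being compared against, a minimal 1-D optimal-interpolation sketch with a Gaussian background covariance and uncorrelated observation noise. The length scale, noise level and test observations are made-up, and the cited work of course operates on 2-D along-track altimetry rather than this toy profile.

        import numpy as np

        def optimal_interpolation(xg, xo, yo, L=0.2, noise=0.05):
            # Analyse anomalies yo observed at xo onto grid xg, zero background mean,
            # unit background variance and Gaussian covariance of length scale L.
            cov = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)
            B_oo = cov(xo, xo) + noise**2 * np.eye(xo.size)      # obs-obs covariance + noise
            B_go = cov(xg, xo)                                   # grid-obs covariance
            analysis = B_go @ np.linalg.solve(B_oo, yo)
            # expected analysis-error variance at the grid points
            err_var = 1.0 - np.einsum('ij,ji->i', B_go, np.linalg.solve(B_oo, B_go.T))
            return analysis, err_var

        xo = np.array([0.1, 0.35, 0.4, 0.8])
        yo = np.sin(2 * np.pi * xo)
        xg = np.linspace(0.0, 1.0, 101)
        ana, var = optimal_interpolation(xg, xo, yo)
        print(ana[50], var[50])                                  # analysed value and error variance at x = 0.5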

  1. A contrastive study on the influences of radial and three-dimensional satellite gravity gradiometry on the accuracy of the Earth's gravitational field recovery

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Hou-Tse; Zhong, Min; Yun, Mei-Juan

    2012-10-01

    The accuracy of the Earth's gravitational field recovered from the gravity field and steady-state ocean circulation explorer (GOCE) up to degree 250, as influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij from satellite gravity gradiometry (SGG), is contrastively demonstrated based on an analytical error model and on numerical simulation, respectively. Firstly, new analytical error models of the cumulative geoid height, influenced by the radial gravity gradient Vzz and by the three-dimensional gravity gradient Vij, are established, respectively. Up to degree 250, the GOCE cumulative geoid height error measured by the radial gravity gradient Vzz is about 2½ times higher than that measured by the three-dimensional gravity gradient Vij. Secondly, the Earth's gravitational field from GOCE, complete up to degree 250, is recovered using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij by numerical simulation, respectively. The results show that when the measurement error of the gravity gradient is 3 × 10⁻¹²/s², the cumulative geoid height errors using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij are 12.319 cm and 9.295 cm at degree 250, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient Vij is improved by 30%-40% on average, compared with that using the radial gravity gradient Vzz, up to degree 250. Finally, mutual verification of the analytical error model and the numerical simulation shows that the orders of magnitude of the accuracy of the Earth's gravitational field recovery do not differ substantially between the radial and the three-dimensional gravity gradients. Therefore, it is feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10⁻¹³/s² to 10⁻¹⁵/s² for precisely producing the next-generation GOCE Follow-On Earth gravity field model with a high spatial resolution.

  2. Programmable Numerical Function Generators: Architectures and Synthesis Method

    DTIC Science & Technology

    2005-08-01

    generates HDL (Hardware Description Language) code from the design specification described by Scilab [14], a MATLAB-like numerical calculation soft...cad.com/Error-NFG/. [14] Scilab 3.0, INRIA-ENPC, France, http://scilabsoft.inria.fr/ [15] M. J. Schulte and J. E. Stine, "Approximating elementary functions

  3. Applicability of the Effective-Medium Approximation to Heterogeneous Aerosol Particles.

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Liu, Li

    2016-01-01

    The effective-medium approximation (EMA) is based on the assumption that a heterogeneous particle can have a homogeneous counterpart possessing similar scattering and absorption properties. We analyze the numerical accuracy of the EMA by comparing superposition T-matrix computations for spherical aerosol particles filled with numerous randomly distributed small inclusions and Lorenz-Mie computations based on the Maxwell-Garnett mixing rule. We verify numerically that the EMA can indeed be realized for inclusion size parameters smaller than a threshold value. The threshold size parameter depends on the refractive-index contrast between the host and inclusion materials and quite often does not exceed several tenths, especially in calculations of the scattering matrix and the absorption cross section. As the inclusion size parameter approaches the threshold value, the scattering-matrix errors of the EMA start to grow with increasing host size parameter and/or number of inclusions. We confirm, in particular, the existence of the effective-medium regime in the important case of dust aerosols with hematite or air-bubble inclusions, although in that case the large refractive-index contrast necessitates inclusion size parameters of the order of a few tenths. Despite the highly restricted conditions of applicability of the EMA, our results provide further evidence that the effective-medium regime must be a direct corollary of the macroscopic Maxwell equations under specific assumptions.
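
    The Maxwell-Garnett mixing rule referred to above has a standard closed form; the sketch below evaluates it for a host with a small volume fraction of inclusions. The refractive-index values and the volume fraction are invented for illustration and are not taken from the study.

      # Maxwell-Garnett effective refractive index for a host containing a volume
      # fraction f of small inclusions (standard textbook form; values are made up).
      import numpy as np

      def maxwell_garnett(m_host, m_incl, f):
          """Effective complex refractive index of host + inclusions (volume fraction f)."""
          e_h, e_i = m_host**2, m_incl**2            # permittivities from refractive indices
          num = e_i + 2.0 * e_h + 2.0 * f * (e_i - e_h)
          den = e_i + 2.0 * e_h - f * (e_i - e_h)
          return np.sqrt(e_h * num / den)

      # Dust-like host with hematite-like inclusions (illustrative indices only)
      print(maxwell_garnett(1.55 + 0.001j, 2.8 + 0.5j, 0.05))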

  4. Use of qualitative and quantitative information in neural networks for assessing agricultural chemical contamination of domestic wells

    USGS Publications Warehouse

    Mishra, A.; Ray, C.; Kolpin, D.W.

    2004-01-01

    A neural network analysis of agrichemical occurrence in groundwater was conducted using data from a pilot study of 192 small-diameter drilled and driven wells and 115 dug and bored wells in Illinois, a regional reconnaissance network of 303 wells across 12 Midwestern states, and a study of 687 domestic wells across Iowa. Potential factors contributing to well contamination (e.g., depth to aquifer material, well depth, and distance to cropland) were investigated. These contributing factors were available in either numeric (actual or categorical) or descriptive (yes or no) format. A method was devised to use the numeric and descriptive values simultaneously. Training of the network was conducted using a standard backpropagation algorithm. Approximately 15% of the data was used for testing. Analysis indicated that training error was quite low for most data. Testing results indicated that it was possible to predict the contamination potential of a well with pesticides. However, predicting the actual level of contamination was more difficult. For pesticide occurrence in drilled and driven wells, the network predictions were good. The performance of the network was poorer for predicting nitrate occurrence in dug and bored wells. Although the data set for Iowa was large, the prediction ability of the trained network was poor, due to descriptive or categorical input parameters, compared with smaller data sets such as that for Illinois, which contained more numeric information.
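
    The idea of feeding numeric and descriptive (yes/no) attributes to one network simultaneously can be illustrated with a tiny backpropagation example. The layer sizes, learning rate, feature names and synthetic data below are assumptions made purely for illustration; they do not reproduce the study's network or data.

      # Sketch: combining numeric and descriptive (yes/no) well attributes in one
      # feature vector for a small backpropagation network (all values synthetic).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 300
      depth = rng.uniform(5, 60, n)              # numeric: well depth (m)
      dist_crop = rng.uniform(0, 500, n)         # numeric: distance to cropland (m)
      cased = rng.integers(0, 2, n)              # descriptive: cased well? yes=1 / no=0
      X = np.column_stack([(depth - depth.mean()) / depth.std(),
                           (dist_crop - dist_crop.mean()) / dist_crop.std(),
                           cased])
      y = ((depth < 20) & (dist_crop < 200) & (cased == 0)).astype(float)  # synthetic label

      W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
      W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
      sig = lambda z: 1.0 / (1.0 + np.exp(-z))

      for _ in range(2000):                      # plain batch gradient descent
          h = sig(X @ W1 + b1)
          p = sig(h @ W2 + b2).ravel()
          g_out = (p - y)[:, None] / n           # gradient of mean cross-entropy w.r.t. output logit
          g_h = (g_out @ W2.T) * h * (1 - h)     # backpropagated gradient at the hidden layer
          W2 -= 0.5 * h.T @ g_out;  b2 -= 0.5 * g_out.sum(0)
          W1 -= 0.5 * X.T @ g_h;    b1 -= 0.5 * g_h.sum(0)

      print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())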

  5. Weighted Non-linear Compact Schemes for the Direct Numerical Simulation of Compressible, Turbulent Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Debojyoti; Baeder, James D.

    2014-01-21

    A new class of compact-reconstruction weighted essentially non-oscillatory (CRWENO) schemes was introduced (Ghosh and Baeder in SIAM J Sci Comput 34(3): A1678–A1706, 2012) with high spectral resolution and essentially non-oscillatory behavior across discontinuities. The CRWENO schemes use solution-dependent weights to combine lower-order compact interpolation schemes and yield a high-order compact scheme for smooth solutions and a non-oscillatory compact scheme near discontinuities. The new schemes result in lower absolute errors, and improved resolution of discontinuities and smaller length scales, compared to the weighted essentially non-oscillatory (WENO) scheme of the same order of convergence. Several improvements to the smoothness-dependent weights, proposed in the literature in the context of the WENO schemes, address the drawbacks of the original formulation. This paper explores these improvements in the context of the CRWENO schemes and compares the different formulations of the non-linear weights for flow problems with small length scales as well as discontinuities. Simplified one- and two-dimensional inviscid flow problems are solved to demonstrate the numerical properties of the CRWENO schemes and their different formulations. Canonical turbulent flow problems, namely the decay of isotropic turbulence and the shock-turbulence interaction, are solved to assess the performance of the schemes for the direct numerical simulation of compressible, turbulent flows.
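
    The solution-dependent weights mentioned above are easiest to see in the classic (non-compact) fifth-order WENO scheme of Jiang and Shu, which is the baseline that CRWENO-type schemes blend with compact interpolation. The sketch below evaluates those standard weights and the face reconstruction; it is not the CRWENO compact operator itself.

      # Classic fifth-order WENO (Jiang-Shu) nonlinear weights and reconstruction
      # at the cell face i+1/2; shown only as an illustrative baseline.
      import numpy as np

      def weno5_face(f, eps=1e-6):
          """f = (f_{i-2}, f_{i-1}, f_i, f_{i+1}, f_{i+2}); returns f at i+1/2."""
          fm2, fm1, f0, fp1, fp2 = f
          # smoothness indicators of the three candidate stencils
          b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 1/4*(fm2 - 4*fm1 + 3*f0)**2
          b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 1/4*(fm1 - fp1)**2
          b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 1/4*(3*f0 - 4*fp1 + fp2)**2
          d = np.array([0.1, 0.6, 0.3])                    # optimal (linear) weights
          a = d / (eps + np.array([b0, b1, b2]))**2        # solution-dependent weights
          w = a / a.sum()
          # third-order reconstructions on each candidate stencil
          q0 = (2*fm2 - 7*fm1 + 11*f0) / 6
          q1 = (-fm1 + 5*f0 + 2*fp1) / 6
          q2 = (2*f0 + 5*fp1 - fp2) / 6
          return w @ np.array([q0, q1, q2])

      print(weno5_face(np.array([1.0, 1.0, 1.0, 0.0, 0.0])))  # stencil containing a discontinuity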

  6. Numeric model to predict the location of market demand and economic order quantity for retailers of supply chain

    NASA Astrophysics Data System (ADS)

    Fradinata, Edy; Marli Kesuma, Zurnila

    2018-05-01

    Polynomial and spline regression are the numeric models used here to build distance-relationship models for cement retailers in Banda Aceh, to predict the market area for retailers, and to determine the economic order quantity (EOQ). These numeric models differ in accuracy as measured by the mean square error (MSE). The distance relationships between retailers are used to identify the density of retailers in the town. The dataset is collected from the sales of cement retailers located with a global positioning system (GPS). The sales dataset is plotted to assess the goodness of fit of the quadratic, cubic, and fourth-order polynomial methods. On the real sales dataset, the polynomials are fitted to the relationship between the x-abscissa and the y-ordinate to obtain the models. This research offers several advantages: the four models are useful for predicting the market area of a retailer in a competitive setting, for comparing the performance of the methods, for quantifying the distance relationships between retailers, and finally for setting an inventory policy based on the economic order quantity. The results show that high-density retailer areas coincide with a growing population and construction projects. The spline performs better than the quadratic, cubic, and fourth-order polynomials in predicting the data points, as indicated by its smaller MSE. The inventory policy uses the periodic review type.
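
    The economic order quantity used in the inventory policy follows the standard square-root relation Q* = sqrt(2DS/H); the sketch below evaluates it with invented demand and cost figures, not values from the cement-retailer data set.

      # Economic order quantity (EOQ): Q* = sqrt(2 D S / H), illustrative numbers only.
      from math import sqrt

      def eoq(annual_demand, order_cost, holding_cost_per_unit):
          return sqrt(2.0 * annual_demand * order_cost / holding_cost_per_unit)

      Q = eoq(annual_demand=12_000, order_cost=50.0, holding_cost_per_unit=2.5)
      print(f"optimal order quantity: {Q:.0f} units per order")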

  7. Multipole moments in the effective fragment potential method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Colleen; Slipchenko, Lyudmila V.; Misquitta, Alston J.

    In the effective fragment potential (EFP) method the Coulomb potential is represented using a set of multipole moments generated by the distributed multipole analysis (DMA) method. Misquitta, Stone, and Fazeli recently developed a basis space-iterated stockholder atom (BS-ISA) method to generate multipole moments. This study assesses the accuracy of the EFP interaction energies using sets of multipole moments generated from the BS-ISA method, and from several versions of the DMA method (such as analytic and numeric grid-based), with varying basis sets. Both methods lead to reasonable results, although using certain implementations of the DMA method can result in large errors. With respect to the CCSD(T)/CBS interaction energies, the mean unsigned error (MUE) of the EFP method for the S22 data set using BS-ISA–generated multipole moments and DMA-generated multipole moments (using a small basis set and the analytic DMA procedure) is 0.78 and 0.72 kcal/mol, respectively. Here, the MUE accuracy is on the same order as MP2 and SCS-MP2. The MUEs are lower than in a previous study benchmarking the EFP method without the EFP charge transfer term, demonstrating that the charge transfer term increases the accuracy of the EFP method. Regardless of the multipole moment method used, it is likely that much of the error is due to an insufficient short-range electrostatic term (i.e., charge penetration term), as shown by comparisons with symmetry-adapted perturbation theory.

  8. Laser damage metrology in biaxial nonlinear crystals using different test beams

    NASA Astrophysics Data System (ADS)

    Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille

    2008-01-01

    Laser damage measurements in nonlinear optical crystals, in particular in biaxial crystals, may be influenced by several effects specific to these materials or greatly enhanced in them. Before discussion of these effects, we address the topic of error bar determination for probability measurements. Error bars for the damage probabilities are important because nonlinear crystals are often small and expensive, thus only a few sites are used for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that possibly modify the maximum intensity in a biaxial nonlinear crystal are: focusing aberration, walk-off and self-focusing. Depending on focusing conditions, propagation direction, polarization of the light and the position of the focus point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing are briefly reviewed for the sake of completeness of this article. Finally, parasitic second harmonic generation may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
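
    One common way to attach a confidence interval to a damage probability estimated from only a few test sites is the exact (Clopper-Pearson) binomial interval sketched below. This is offered as an assumption about a standard statistical choice, not as the specific flow-diagram procedure described in the paper.

      # Exact (Clopper-Pearson) confidence interval for a damage probability
      # estimated from k damaged sites out of n tested; illustrative sketch only.
      from scipy.stats import beta

      def damage_probability_interval(k, n, confidence=0.95):
          a = 1.0 - confidence
          lo = 0.0 if k == 0 else beta.ppf(a / 2, k, n - k + 1)
          hi = 1.0 if k == n else beta.ppf(1 - a / 2, k + 1, n - k)
          return lo, hi

      # e.g. 3 damaged sites out of 10 irradiated sites on a small crystal
      print(damage_probability_interval(3, 10))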

  9. Improving Barotropic Tides by Two-way Nesting High and Low Resolution Domains

    NASA Astrophysics Data System (ADS)

    Jeon, C. H.; Buijsman, M. C.; Wallcraft, A. J.; Shriver, J. F.; Hogan, P. J.; Arbic, B. K.; Richman, J. G.

    2017-12-01

    In a realistically forced global ocean model, relatively large sea-surface-height root-mean-square (RMS) errors are observed in the North Atlantic near the Hudson Strait. These may be associated with large tidal resonances interacting with coastal bathymetry that are not correctly represented with a low resolution grid. This issue can be overcome by using high resolution grids, but at a high computational cost. In this paper we apply two-way nesting as an alternative solution. This approach applies high resolution to the area with large RMS errors and a lower resolution to the rest. It is expected to improve the tidal solution as well as reduce the computational cost. To minimize modification of the original source codes of the ocean circulation model (HYCOM), we apply the coupler OASIS3-MCT. This coupler is used to exchange barotropic pressures and velocity fields through its APIs (Application Programming Interface) between the parent and the child components. The developed two-way nesting framework has been validated with an idealized test case where the parent and the child domains have identical grid resolutions. The result of the idealized case shows very small RMS errors between the child and parent solutions. We plan to show results for a case with realistic tidal forcing in which the resolution of the child grid is three times that of the parent grid. The numerical results of this realistic case are compared to TPXO data.

  10. Multipole moments in the effective fragment potential method

    DOE PAGES

    Bertoni, Colleen; Slipchenko, Lyudmila V.; Misquitta, Alston J.; ...

    2017-02-17

    In the effective fragment potential (EFP) method the Coulomb potential is represented using a set of multipole moments generated by the distributed multipole analysis (DMA) method. Misquitta, Stone, and Fazeli recently developed a basis space-iterated stockholder atom (BS-ISA) method to generate multipole moments. This study assesses the accuracy of the EFP interaction energies using sets of multipole moments generated from the BS-ISA method, and from several versions of the DMA method (such as analytic and numeric grid-based), with varying basis sets. Both methods lead to reasonable results, although using certain implementations of the DMA method can result in large errors. With respect to the CCSD(T)/CBS interaction energies, the mean unsigned error (MUE) of the EFP method for the S22 data set using BS-ISA–generated multipole moments and DMA-generated multipole moments (using a small basis set and the analytic DMA procedure) is 0.78 and 0.72 kcal/mol, respectively. Here, the MUE accuracy is on the same order as MP2 and SCS-MP2. The MUEs are lower than in a previous study benchmarking the EFP method without the EFP charge transfer term, demonstrating that the charge transfer term increases the accuracy of the EFP method. Regardless of the multipole moment method used, it is likely that much of the error is due to an insufficient short-range electrostatic term (i.e., charge penetration term), as shown by comparisons with symmetry-adapted perturbation theory.

  11. Diagnosing Cognitive Errors: Statistical Pattern Classification and Recognition Approach

    DTIC Science & Technology

    1985-01-01

    often produces several different erroneous rules. For example, when adding two fractions with different denominators, many students add the numerators ...common denominator and add the numerators. As listed in Tatsuoka (1984a), there are eleven different erroneous rules which result from a misconception...the score of five. These patterns correspond to different values of 42 (Tatsuoka, 1985). The numerator of 42 is divided into two parts in Equation (5

  12. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  13. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
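
    The exponential growth of small errors and their later saturation can be reproduced in a twin experiment with any chaotic model. The sketch below uses the three-variable Lorenz-63 system as a stand-in for the paper's 28-variable two-layer quasi-geostrophic model (an assumption made purely for illustration); the slope of log-error versus time over the small-error phase approximates the largest Lyapunov exponent.

      # Twin-experiment sketch of small-error growth and saturation (Lorenz-63 stand-in).
      import numpy as np

      def lorenz63(x, s=10.0, r=28.0, b=8.0/3.0):
          return np.array([s*(x[1]-x[0]), x[0]*(r-x[2])-x[1], x[0]*x[1]-b*x[2]])

      def rk4(x, dt, f):
          k1 = f(x); k2 = f(x+0.5*dt*k1); k3 = f(x+0.5*dt*k2); k4 = f(x+dt*k3)
          return x + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

      dt, nsteps = 0.01, 3000
      x = np.array([1.0, 1.0, 20.0])
      for _ in range(1000):                      # spin up onto the attractor
          x = rk4(x, dt, lorenz63)
      y = x + 1e-8 * np.random.default_rng(0).standard_normal(3)   # tiny initial error

      err = []
      for _ in range(nsteps):
          x = rk4(x, dt, lorenz63); y = rk4(y, dt, lorenz63)
          err.append(np.linalg.norm(x - y))
      err = np.array(err)

      # fit the growth rate over the early, exponential (small-error) phase
      t = dt * np.arange(1, nsteps + 1)
      n_lin = int(np.argmax(err > 1e-3))         # end of the small-error phase
      lam = np.polyfit(t[:n_lin], np.log(err[:n_lin]), 1)[0]
      print(f"estimated exponential growth rate: {lam:.2f} per time unit")
      print(f"saturation error level: {err[-500:].mean():.2f}")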

  14. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
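
    The standard bookkeeping behind such grid refinement studies is the observed order of accuracy computed from three systematically refined grids, optionally followed by Richardson extrapolation. The sketch below shows that calculation; the sample solution values are invented for illustration.

      # Observed order of accuracy p and Richardson extrapolation from three grid
      # levels with refinement ratio r (sample values are made up).
      from math import log

      def observed_order(f_coarse, f_medium, f_fine, r=2.0):
          """p such that the discretization error behaves like C*h^p."""
          return log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / log(r)

      def richardson_extrapolate(f_medium, f_fine, p, r=2.0):
          """Estimate of the grid-converged value from the two finest solutions."""
          return f_fine + (f_fine - f_medium) / (r**p - 1.0)

      p = observed_order(0.9700, 0.9925, 0.9981)
      print(f"observed order: {p:.2f}, "
            f"extrapolated value: {richardson_extrapolate(0.9925, 0.9981, p):.4f}")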

  15. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  16. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method provides a unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  17. An Improved Neutron Transport Algorithm for HZETRN2006

    NASA Astrophysics Data System (ADS)

    Slaba, Tony

    NASA's new space exploration initiative includes plans for long term human presence in space thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced due to a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points will render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach that is developed numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of the efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.

  18. Improved thermal lattice Boltzmann model for simulation of liquid-vapor phase change

    NASA Astrophysics Data System (ADS)

    Li, Qing; Zhou, P.; Yan, H. J.

    2017-12-01

    In this paper, an improved thermal lattice Boltzmann (LB) model is proposed for simulating liquid-vapor phase change, which is aimed at improving an existing thermal LB model for liquid-vapor phase change [S. Gong and P. Cheng, Int. J. Heat Mass Transfer 55, 4923 (2012), 10.1016/j.ijheatmasstransfer.2012.04.037]. First, we emphasize that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) is an inappropriate treatment for diffuse interface modeling of liquid-vapor phase change. Furthermore, the error terms ∂_t0(Tv) + ∇·(Tvv), which exist in the macroscopic temperature equation recovered from the previous model, are eliminated in the present model through a way that is consistent with the philosophy of the LB method. Moreover, the discrete effect of the source term is also eliminated in the present model. Numerical simulations are performed for droplet evaporation and bubble nucleation to validate the capability of the model for simulating liquid-vapor phase change. It is shown that the numerical results of the improved model agree well with those of a finite-difference scheme. Meanwhile, it is found that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) leads to significant numerical errors, and the error terms in the recovered macroscopic temperature equation also result in considerable errors.

  19. Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane

    The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients are derived from the derived equations and numerical results. The matrix of sensitivity coefficients seems diagonally dominant. However, it is not always satisfied in a detailed structure. The detailed structure of the matrix and the characteristics of coefficients are given. By using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty of nuclear data and of the change of one-group cross-section input caused by fuel design changes through the neutron spectrum are investigated. These calculations show that the sensitivity coefficient is useful when evaluating error or uncertainty of nuclide importance caused by the cross-section data error or uncertainty and when checking effectiveness of fuel cell or core design change for improving neutron economy.

  20. A multiphysical ensemble system of numerical snow modelling

    NASA Astrophysics Data System (ADS)

    Lafaysse, Matthieu; Cluzet, Bertrand; Dumont, Marie; Lejeune, Yves; Vionnet, Vincent; Morin, Samuel

    2017-05-01

    Physically based multilayer snowpack models suffer from various modelling errors. To represent these errors, we built the new multiphysical ensemble system ESCROC (Ensemble System Crocus) by implementing new representations of different physical processes in the deterministic coupled multilayer ground/snowpack model SURFEX/ISBA/Crocus. This ensemble was driven and evaluated at Col de Porte (1325 m a.s.l., French Alps) over 18 years with a high-quality meteorological and snow data set. A total of 7776 simulations were evaluated separately, accounting for the uncertainties of the evaluation data. The ability of the ensemble to capture the uncertainty associated with modelling errors is assessed for snow depth, snow water equivalent, bulk density, albedo and surface temperature. Different sub-ensembles of the ESCROC system were studied with probabilistic tools to compare their performance. Results show that optimal members of the ESCROC system are able to explain more than half of the total simulation errors. Integrating members with biases exceeding the range corresponding to observational uncertainty is necessary to obtain an optimal dispersion, but this issue can also be a consequence of the fact that meteorological forcing uncertainties were not accounted for. The ESCROC system promises the integration of numerical snow-modelling errors in ensemble forecasting and ensemble assimilation systems in support of avalanche hazard forecasting and other snowpack-modelling applications.

  1. Random element method for numerical modeling of diffusional processes

    NASA Technical Reports Server (NTRS)

    Ghoniem, A. F.; Oppenheim, A. K.

    1982-01-01

    The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions except for a statistical error introduced by using a finite number of elements; this error can be reduced by increasing the number of elements or by using ensemble averaging over a number of solutions.
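
    The random-walk view of diffusion described above can be illustrated in a few lines: elements released from a point source take Gaussian steps whose variance matches the diffusion coefficient, and the spread of the ensemble reproduces the exact solution up to the statistical error of the finite number of elements. The parameters below are invented for illustration and are not those of the cited study.

      # Grid-free random-walk sketch of 1D diffusion from a point source.
      import numpy as np

      alpha, dt, nsteps, n_elem = 0.1, 0.01, 200, 50_000
      rng = np.random.default_rng(0)

      x = np.zeros(n_elem)                       # all elements start at the source
      for _ in range(nsteps):
          x += rng.normal(0.0, np.sqrt(2.0 * alpha * dt), n_elem)  # Brownian steps

      t = nsteps * dt
      # exact solution of u_t = alpha*u_xx for a unit point source is a Gaussian
      # with variance 2*alpha*t; compare its standard deviation with the ensemble's
      print(f"ensemble std: {x.std():.4f}   exact std: {np.sqrt(2*alpha*t):.4f}")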

  2. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions for large position errors since it requires the joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are firstly estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independent of position errors. Finally, position errors are estimated based on the Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verified the effectiveness of the modified method.

  3. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance comparable to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790

  4. Multisynchronization of Coupled Heterogeneous Genetic Oscillator Networks via Partial Impulsive Control.

    PubMed

    He, Ding-Xin; Ling, Guang; Guan, Zhi-Hong; Hu, Bin; Liao, Rui-Quan

    2018-02-01

    This paper focuses on the collective dynamics of multisynchronization among heterogeneous genetic oscillators under a partial impulsive control strategy. The coupled nonidentical genetic oscillators are modeled by differential equations with uncertainties. The definition of multisynchronization is proposed to describe some more general synchronization behaviors in real systems. Considering that each genetic oscillator consists of a large number of biochemical molecules, we design a more manageable impulsive strategy for dynamic networks to achieve multisynchronization. Not all the molecules but only a small fraction of them in each genetic oscillator are controlled at each impulsive instant. Theoretical analysis of multisynchronization is carried out by the control theory approach, and a sufficient condition on the partial impulsive controller for multisynchronization with given error bounds is established. Finally, numerical simulations are employed to demonstrate the effectiveness of our results.

  5. Dynamic particle refinement in SPH: application to free surface flow and non-cohesive soil simulations

    NASA Astrophysics Data System (ADS)

    Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos

    2013-05-01

    In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that using the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.

  6. Light diffusion in N-layered turbid media: steady-state domain.

    PubMed

    Liemert, André; Kienle, Alwin

    2010-01-01

    We deal with light diffusion in N-layered turbid media. The steady-state diffusion equation is solved for N-layered turbid media having a finite or an infinitely thick N'th layer. Different refractive indices are considered in the layers. The Fourier transform formalism is applied to derive analytical solutions of the fluence rate in Fourier space. The inverse Fourier transform is calculated using four different methods to test their performance and accuracy. Further, to avoid numerical errors, approximate formulas in Fourier space are derived. Fast solutions for calculation of the spatially resolved reflectance and transmittance from the N-layered turbid media (approximately 10 ms) with small relative differences (<10⁻⁷) are found. Additionally, the solutions of the diffusion equation are compared to Monte Carlo simulations for turbid media having up to 20 layers.

  7. Finite grid radius and thickness effects on retarding potential analyzer measured suprathermal electron density and temperature

    NASA Technical Reports Server (NTRS)

    Knudsen, William C.

    1992-01-01

    The effect of finite grid radius and thickness on the electron current measured by planar retarding potential analyzers (RPAs) is analyzed numerically. Depending on the plasma environment, the current is significantly reduced below that which is calculated using a theoretical equation derived for an idealized RPA having grids with infinite radius and vanishingly small thickness. A correction factor to the idealized theoretical equation is derived for the Pioneer Venus (PV) orbiter RPA (ORPA) for electron gases consisting of one or more components obeying Maxwell statistics. The error in density and temperature of Maxwellian electron distributions previously derived from ORPA data using the theoretical expression for the idealized ORPA is evaluated by comparing the densities and temperatures derived from a sample of PV ORPA data using the theoretical expression with and without the correction factor.

  8. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials excited states properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials. PMID:27833140

  9. Resistance foil strain gage technology as applied to composite materials

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1985-01-01

    Existing strain gage technologies as applied to orthotropic composite materials are reviewed. The bonding procedures, transverse sensitivity effects, errors due to gage misalignment, and temperature compensation methods are addressed. Numerical examples are included where appropriate. It is shown that the orthotropic behavior of composites can result in experimental error which would not be expected based on practical experience with isotropic materials. In certain cases, the transverse sensitivity of strain gages and/or slight gage misalignment can result in strain measurement errors.
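
    The transverse sensitivity correction mentioned above has a widely used closed form for a pair of orthogonal gages; the sketch below applies it. The formula (with nu0 the Poisson's ratio of the gage-factor calibration material, commonly 0.285) is the standard textbook correction, offered as an assumption rather than as the exact procedure of the cited report.

      # Standard transverse-sensitivity correction for two orthogonal gages.
      # ex_ind, ey_ind: indicated strains; Kt: transverse sensitivity; nu0: Poisson's
      # ratio of the calibration material (values below are illustrative).
      def correct_transverse(ex_ind, ey_ind, Kt, nu0=0.285):
          den = 1.0 - Kt**2
          ex = (1.0 - nu0 * Kt) * (ex_ind - Kt * ey_ind) / den
          ey = (1.0 - nu0 * Kt) * (ey_ind - Kt * ex_ind) / den
          return ex, ey

      # On an orthotropic laminate the transverse strain can be large, so the
      # correction matters more than experience with isotropic metals suggests.
      print(correct_transverse(1500e-6, -900e-6, Kt=0.03))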

  10. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  11. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  12. Error Characterization of Flight Trajectories Reconstructed Using Structure from Motion

    DTIC Science & Technology

    2015-03-27

    adjustment using IMU rotation information, the accuracy of the yaw, pitch and roll is limited and numerical errors can be as high as 1e-4 depending on...due to either zero mean, Gaussian noise and/or bias in the IMU-measured yaw, pitch and roll angles. It is possible that when errors in these...requires both the information on how the camera is mounted to the IMU/aircraft and the measured yaw, pitch and roll at the time of the first image

  13. Numerical tilting compensation in microscopy based on wavefront sensing using transport of intensity equation method

    NASA Astrophysics Data System (ADS)

    Hu, Junbao; Meng, Xin; Wei, Qi; Kong, Yan; Jiang, Zhilong; Xue, Liang; Liu, Fei; Liu, Cheng; Wang, Shouyu

    2018-03-01

    Wide-field microscopy is commonly used for sample observations in biological research and medical diagnosis. However, the tilting error induced by the oblique location of the image recorder or the sample, as well as the inclination of the optical path, often deteriorates the imaging quality. In order to eliminate the tilting in microscopy, a numerical tilting compensation technique based on wavefront sensing using the transport of intensity equation method is proposed in this paper. Both the provided numerical simulations and practical experiments prove that the proposed technique not only accurately determines the tilting angle with a simple setup and procedure, but also compensates the tilting error for imaging quality improvement even in large tilting cases. Considering its simple setup and operation, as well as its image quality improvement capability, it is believed the proposed method can be applied for tilting compensation in optical microscopy.

  14. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
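
    The non-local character of the Grünwald-Letnikov derivative is visible in its discrete form, D^a f(t_n) ≈ h^(-a) Σ_k w_k f(t_{n-k}), where every past sample enters through a recursively built binomial-type weight. The sketch below evaluates this full-memory baseline and a crude fixed-length truncation for comparison; it is not the adaptive time step memory method of the paper, and the test function and step size are illustrative.

      # Grünwald-Letnikov fractional derivative: full memory vs. crude truncation.
      import numpy as np
      from math import gamma

      def gl_weights(alpha, n):
          w = np.empty(n + 1)
          w[0] = 1.0
          for k in range(1, n + 1):
              w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)   # recursive binomial weights
          return w

      def gl_derivative(f_hist, alpha, h, memory=None):
          """Fractional derivative at the latest time from the sampled history f_hist."""
          n = len(f_hist) - 1
          m = n if memory is None else min(memory, n)
          w = gl_weights(alpha, m)
          return h**(-alpha) * np.dot(w, f_hist[::-1][:m + 1])

      h = 0.01
      t = np.arange(0.0, 2.0 + h, h)
      f = t**2                                    # D^0.5 of t^2 has a closed form
      full = gl_derivative(f, 0.5, h)
      short = gl_derivative(f, 0.5, h, memory=50) # truncated memory: cheaper, less accurate
      exact = gamma(3) / gamma(2.5) * t[-1]**1.5  # Gamma(3)/Gamma(3-0.5) * t^(2-0.5)
      print(full, short, exact)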

  15. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
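
    A quick way to see why so few observed errors still carry statistical weight is the Poisson-approximation upper bound on the error probability when k errors are seen in N trials; for k = 0 it reduces to the familiar "rule of three", p ≲ 3/N at 95% confidence. The sketch below is a generic illustration, not the extended confidence-interval construction of the memo, and the trial count is invented.

      # Approximate one-sided upper bound on the error probability when no errors
      # are observed in N independent decoding trials (the "rule of three").
      from math import log

      def poisson_upper_bound_zero_errors(trials, confidence=0.95):
          """Upper bound on p when zero errors are observed in `trials` trials."""
          return -log(1.0 - confidence) / trials

      N = 10_000_000
      print(f"0 errors in {N} trials -> 95% upper bound {poisson_upper_bound_zero_errors(N):.1e}")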

  16. Multiresolution molecular mechanics: Surface effects in nanoscale materials

    NASA Astrophysics Data System (ADS)

    Yang, Qingcheng; To, Albert C.

    2017-05-01

    Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang, To (2015), [57]) is applied to capture surface effects for nanosized structures by designing a surface summation rule SRS within the framework of MMM. Combined with the previously proposed bulk summation rule SRB, the MMM summation rule SRMMM is completed. SRS and SRB are consistently formed within SRMMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key idea behind the good performance of SRMMM lies in the fact that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface area is different from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonding. As such, SRS and SRB are employed for surface and bulk domains, respectively. Two- and three-dimensional numerical examples using the respective 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SRMMM accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SRMMM with respect to high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the sampling error introduced with SRMMM, which is analogous to the numerical integration error with quadrature rules in FEM, is very small.

  17. Particle image velocimetry measurements of Mach 3 turbulent boundary layers at low Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Brooks, J. M.; Gupta, A. K.; Smith, M. S.; Marineau, E. C.

    2018-05-01

    Particle image velocimetry (PIV) measurements of Mach 3 turbulent boundary layers (TBL) have been performed under low Reynolds number conditions, Re_τ = 200-1000, typical of direct numerical simulations (DNS). Three reservoir pressures and three measurement locations create an overlap in parameter space at one research facility. This allows us to assess the effects of Reynolds number, particle response and boundary layer thickness separately from facility-specific experimental apparatus or methods. The Morkovin-scaled streamwise fluctuating velocity profiles agree well with published experimental and numerical data and show a small standard deviation among the nine test conditions. The wall-normal fluctuating velocity profiles show larger variations which appear to be due to particle lag. Prior to the current study, no detailed experimental study characterizing the effect of Stokes number on attenuating wall-normal fluctuating velocities has been performed. A linear variation is found between the Stokes number (St) and the relative error in wall-normal fluctuating velocity magnitude (compared to hot wire anemometry data from Klebanoff, Characteristics of Turbulence in a Boundary Layer with Zero Pressure Gradient. Tech. Rep. NACA-TR-1247, National Advisory Committee for Aeronautics, Springfield, Virginia, 1955). The relative error ranges from about 10% for St = 0.26 to over 50% for St = 1.06. Particle lag and spatial resolution are shown to act as low-pass filters on the fluctuating velocity power spectral densities which limit the measurable energy content. The wall-normal component appears more susceptible to these effects due to the flatter spectrum profile which indicates that there is additional energy at higher wave numbers not measured by PIV. The upstream inclination and spatial correlation extent of coherent turbulent structures agree well with published data including those using krypton tagging velocimetry (KTV) performed at the same facility.

  18. Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations

    USGS Publications Warehouse

    Casulli, V.; Cheng, R.T.

    1990-01-01

    In this paper stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) have shown that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM; at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support properties given by the stability and error analyses. © 1990.
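
    The single tridiagonal system solved at each semi-implicit time step is usually handled with the Thomas algorithm, which runs in O(n) per channel. The sketch below is a generic solver for such a system, not the specific discretization of the cited paper; the check matrix at the end is invented for validation.

      # Thomas algorithm for a tridiagonal linear system (sub/main/super diagonals).
      import numpy as np

      def thomas(lower, diag, upper, rhs):
          """Solve a tridiagonal system in O(n) by forward elimination and back substitution."""
          n = len(diag)
          b = np.array(diag, dtype=float)
          c = np.array(upper, dtype=float)
          d = np.array(rhs, dtype=float)
          for i in range(1, n):
              m = lower[i - 1] / b[i - 1]
              b[i] -= m * c[i - 1]
              d[i] -= m * d[i - 1]
          x = np.empty(n)
          x[-1] = d[-1] / b[-1]
          for i in range(n - 2, -1, -1):
              x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
          return x

      # small check against a dense solve
      A = np.diag([4.0]*5) + np.diag([-1.0]*4, 1) + np.diag([-1.0]*4, -1)
      rhs = np.arange(1.0, 6.0)
      print(np.allclose(thomas([-1.0]*4, [4.0]*5, [-1.0]*4, rhs), np.linalg.solve(A, rhs)))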

  19. First measurements of error fields on W7-X using flux surface mapping

    DOE PAGES

    Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...

    2016-08-03

    Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field iota-bar = 1/2 magnetic configuration (iota-bar = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ~0.04 m intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.

  20. Analysis of basic clustering algorithms for numerical estimation of statistical averages in biomolecules.

    PubMed

    Anandakrishnan, Ramu; Onufriev, Alexey

    2008-03-01

    In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between error bound and root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
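
    The basic clustering idea described above (treat intra-cluster interactions exactly, ignore inter-cluster ones) can be illustrated on a toy system of binary sites; the 2^N sum then factorizes into much smaller per-cluster sums. The binary-site model, couplings and cluster partition below are invented for illustration and have no connection to the protein application in the paper.

      # Toy exact enumeration vs. cluster-factorized approximation of a Boltzmann average.
      import numpy as np
      from itertools import product

      rng = np.random.default_rng(0)
      N = 12
      h = rng.normal(0, 1, N)                             # site energies
      J = np.triu(rng.normal(0, 0.2, (N, N)), 1)          # pairwise couplings (i < j)

      def average_occupancy(sites, J, h, beta=1.0):
          """Exact <s_i> over all microstates of the listed sites (interactions within them only)."""
          sites = list(sites)
          Z, acc = 0.0, np.zeros(len(sites))
          for state in product((0, 1), repeat=len(sites)):
              s = np.array(state)
              E = np.dot(h[sites], s) + s @ J[np.ix_(sites, sites)] @ s
              w = np.exp(-beta * E)
              Z += w; acc += w * s
          return acc / Z

      exact = average_occupancy(range(N), J, h)            # full 2^N enumeration
      clusters = [range(0, 4), range(4, 8), range(8, 12)]  # inter-cluster terms ignored
      approx = np.concatenate([average_occupancy(c, J, h) for c in clusters])
      print("max abs deviation of clustered average:", np.abs(exact - approx).max())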
