NASA Astrophysics Data System (ADS)
Takahashi, N.; Okei, K.; Nakatsuka, T.
Accuracies of numerical Fourier and Hankel transforms are examined with the Takahasi-Mori theory of error evaluation. The higher-order Molière terms, for both spatial and projected distributions, derived by these methods agree very well with those derived analytically. The methods should also prove valuable for solving other transport problems concerning fast charged particles.
Accuracy of numerically produced compensators.
Thompson, H; Evans, M D; Fallone, B G
1999-01-01
A feasibility study is performed to assess the utility of a computer numerically controlled (CNC) mill to produce compensating filters for conventional clinical use and for the delivery of intensity-modulated beams. Computer-aided machining (CAM) software is used to assist in the design and construction of such filters. Geometric measurements of stepped and wedged surfaces are made to examine the accuracy of surface milling. Molds are milled and filled with molten alloy to produce filters, and both the molds and filters are examined for consistency and accuracy. Results show that the deviation of the filter surfaces from design does not exceed 1.5%. The effective attenuation coefficient is measured for CadFree, a cadmium-free alloy, in a 6 MV photon beam. The effective attenuation coefficients at the depth of maximum dose (1.5 cm) and at 10 cm in a solid water phantom are found to be 0.546 cm-1 and 0.522 cm-1, respectively. Further attenuation measurements are made with Cerrobend to assess the variations of the effective attenuation coefficient with field size and source-surface distance. The ability of the CNC mill to accurately produce surfaces is verified with dose profile measurements in a 6 MV photon beam. The test phantom is composed of a 10 degrees polystyrene wedge and a 30 degrees polystyrene wedge, presenting both a sharp discontinuity and sloped surfaces. Dose profiles, measured at the depth of compensation (10 cm) beneath the test phantom and beneath a flat phantom, are compared to those produced by a commercial treatment planning system. Agreement between measured and predicted profiles is within 2%, indicating the viability of the system for filter production. PMID:10100166
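The effective attenuation coefficients quoted above follow from simple exponential transmission. As a minimal illustration (the function name and the numbers below are invented for this sketch, not taken from the study), the coefficient can be recovered from an open-field and a filtered dose reading:

```python
import math

def mu_eff(dose_open, dose_filtered, thickness_cm):
    """Effective attenuation coefficient from a transmission measurement:
    mu_eff = ln(D0 / D) / t for a filter slab of thickness t."""
    return math.log(dose_open / dose_filtered) / thickness_cm

# e.g. a hypothetical 1 cm slab transmitting 58% of the open-field dose:
mu = mu_eff(1.0, 0.58, 1.0)
```

Repeating such measurements at several depths and field sizes is what yields the depth- and geometry-dependent coefficients reported in the abstract.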
NASA Astrophysics Data System (ADS)
Chevallier, L.
2010-11-01
Tests are presented of the 1D Accelerated Lambda Iteration method, which is widely used for solving the radiative transfer equation for a stellar atmosphere. We use our ARTY code as a reference solution and tables for these tests are provided. We model a static idealized stellar atmosphere, which is illuminated on its inner face and where internal sources are distributed with weak or strong gradients. This is an extension of published tests for a slab without incident radiation and gradients. Typical physical conditions for the continuum radiation and spectral lines are used, as well as typical values for the numerical parameters in order to reach a 1% accuracy. It is shown that the method is able to reach such an accuracy for most cases, but the spatial discretization has to be refined for strong gradients and spectral lines, beyond the scope of realistic stellar atmosphere models. A discussion of faster methods is provided.
Schubert, Frank; Wiggenhauser, Herbert; Lausch, Regine
2004-04-01
In impact-echo testing of finite concrete structures, reflections of Rayleigh and body waves from lateral boundaries significantly affect time-domain signals and spectra. In the present paper we demonstrate by numerical simulations and experimental measurements at a concrete specimen that these reflections can lead to systematic errors in thickness determination. These effects depend not only on the dimensions of the specimen, but also on the location of the actual measuring point and on the duration of the detected time-domain signal. PMID:15047403
Numerical accuracy of mean-field calculations in coordinate space
NASA Astrophysics Data System (ADS)
Ryssens, W.; Heenen, P.-H.; Bender, M.
2015-12-01
Background: Mean-field methods based on an energy density functional (EDF) are powerful tools used to describe many properties of nuclei in the entirety of the nuclear chart. The accuracy required of energies for nuclear physics and astrophysics applications is of the order of 500 keV and much effort is undertaken to build EDFs that meet this requirement. Purpose: Mean-field calculations have to be accurate enough to preserve the accuracy of the EDF. We study this numerical accuracy in detail for a specific numerical choice of representation for mean-field equations that can accommodate any kind of symmetry breaking. Method: The method that we use is a particular implementation of three-dimensional mesh calculations. Its numerical accuracy is governed by three main factors: the size of the box in which the nucleus is confined, the way numerical derivatives are calculated, and the distance between the points on the mesh. Results: We examine the dependence of the results on these three factors for spherical doubly magic nuclei, neutron-rich 34Ne, the fission barrier of 240Pu, and isotopic chains around Z = 50. Conclusions: Mesh calculations offer the user extensive control over the numerical accuracy of the solution scheme. When appropriate choices for the numerical scheme are made, the achievable accuracy is well below the model uncertainties of mean-field methods.
On accuracy conditions for the numerical computation of waves
NASA Technical Reports Server (NTRS)
Bayliss, A.; Goldstein, C. I.; Turkel, E.
1984-01-01
The Helmholtz equation (Delta + K(2)n(2))u = f with a variable index of refraction n, and a suitable radiation condition at infinity serves as a model for a wide variety of wave propagation problems. Such problems can be solved numerically by first truncating the given unbounded domain and imposing a suitable outgoing radiation condition on an artificial boundary and then solving the resulting problem on the bounded domain by direct discretization (for example, using a finite element method). In practical applications, the mesh size h and the wave number K, are not independent but are constrained by the accuracy of the desired computation. It will be shown that the number of points per wavelength, measured by (Kh)(-1), is not sufficient to determine the accuracy of a given discretization. For example, the quantity K(3)h(2) is shown to determine the accuracy in the L(2) norm for a second-order discretization method applied to several propagation models.
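The claim that a fixed number of points per wavelength does not determine the accuracy (the error instead being governed by K(3)h(2)) can be reproduced in a small 1D experiment. The sketch below is a minimal illustration under stated assumptions, not the authors' setup: it solves u'' + K(2)u = 0 with a Dirichlet condition at x = 0 and a second-order discrete radiation condition at x = 1, and compares against the exact outgoing wave exp(iKx):

```python
import numpy as np

def helmholtz_error(K, pts_per_wavelength=20):
    """Relative L2 error of a second-order FD solution of u'' + K^2 u = 0
    on [0,1] with u(0) = 1 and the radiation condition u'(1) = iK u(1)."""
    h = (2 * np.pi / K) / pts_per_wavelength   # fixed resolution per wavelength
    N = int(round(1.0 / h))
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    A = np.zeros((N + 1, N + 1), dtype=complex)
    b = np.zeros(N + 1, dtype=complex)
    A[0, 0] = 1.0                              # Dirichlet: u(0) = 1
    b[0] = 1.0
    for j in range(1, N):                      # interior second-order stencil
        A[j, j - 1] = 1.0 / h**2
        A[j, j + 1] = 1.0 / h**2
        A[j, j] = -2.0 / h**2 + K**2
    # radiation condition imposed by ghost-point elimination at x = 1
    A[N, N - 1] = 2.0 / h**2
    A[N, N] = -2.0 / h**2 + K**2 + 2j * K / h
    u = np.linalg.solve(A, b)
    exact = np.exp(1j * K * x)
    return np.linalg.norm(u - exact) / np.linalg.norm(exact)
```

Holding pts_per_wavelength fixed while increasing K, the relative error grows roughly linearly with K, consistent with an error controlled by K(3)h(2) = K(Kh)(2) rather than by Kh alone.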
Numerical planetary and lunar ephemerides - Present status, precision and accuracies
NASA Technical Reports Server (NTRS)
Standish, E. Myles, Jr.
1986-01-01
Features of the ephemeris creation process are described with attention given to the equations of motion, the numerical integration, and the least-squares fitting process. Observational data are presented and ephemeris accuracies are estimated. It is believed that radio measurements, VLBI, occultations, and the Space Telescope and Hipparcos will improve ephemerides in the near future. Limitations to accuracy are considered as well as relativity features. The export procedure, by which an outside user may obtain and use the JPL ephemerides, is discussed.
Results from Numerical General Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.
2011-01-01
For several years numerical simulations have been revealing the details of general relativity's predictions for the dynamical interactions of merging black holes. I will review what has been learned of the rich phenomenology of these mergers and the resulting gravitational wave signatures. These waveforms provide a potentially observable record of the powerful astronomical events, a central target of gravitational wave astronomy. Asymmetric radiation can produce a thrust on the system which may accelerate the single black hole resulting from the merger to high relative velocity.
Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers
Thompson, Clarissa A.; Opfer, John E.
2016-01-01
Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688
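The logarithmic-to-linear shift is typically diagnosed by fitting both candidate functions to children's number-line estimates and comparing goodness of fit. A hedged sketch with made-up, idealized data (not the study's) shows the comparison:

```python
import numpy as np

# Hypothetical estimates on a 0-100 number line following the compressive,
# logarithmic pattern often reported for kindergartners (idealized data).
numbers = np.array([2, 4, 6, 12, 18, 25, 42, 67, 71, 86], dtype=float)
estimates = 100.0 * np.log(numbers) / np.log(100.0)

def r_squared(y, yhat):
    """Coefficient of determination for a set of fitted values."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Least-squares fit of a linear and a logarithmic representation
lin_coef = np.polyfit(numbers, estimates, 1)
r2_linear = r_squared(estimates, np.polyval(lin_coef, numbers))
log_coef = np.polyfit(np.log(numbers), estimates, 1)
r2_log = r_squared(estimates, np.polyval(log_coef, np.log(numbers)))
```

A child whose estimates are better described by the logarithmic fit would be classified as pre-shift; feedback as in Study 2 aims to move the better-fitting model to the linear one.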
Halo abundance matching: accuracy and conditions for numerical convergence
NASA Astrophysics Data System (ADS)
Klypin, Anatoly; Prada, Francisco; Yepes, Gustavo; Heß, Steffen; Gottlöber, Stefan
2015-03-01
Accurate predictions of the abundance and clustering of dark matter haloes play a key role in testing the standard cosmological model. Here, we investigate the accuracy of one of the leading methods of connecting the simulated dark matter haloes with observed galaxies: the halo abundance matching (HAM) technique. We show how to choose the optimal values of the mass and force resolution in large volume N-body simulations so that they provide accurate estimates for correlation functions and circular velocities for haloes and their subhaloes - crucial ingredients of the HAM method. At the 10 per cent accuracy level, results converge for ˜50 particles for haloes and ˜150 particles for progenitors of subhaloes. In order to achieve this level of accuracy a number of conditions should be satisfied. The force resolution for the smallest resolved (sub)haloes should be in the range (0.1-0.3)rs, where rs is the scale radius of (sub)haloes. The number of particles for progenitors of subhaloes should be ˜150. We also demonstrate that two-body scattering plays a minor role for the accuracy of N-body simulations, thanks to the relatively small number of crossing times of dark matter in haloes and the limited force resolution of cosmological simulations.
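The convergence conditions quoted above can be collected into a simple resolution check. The helper below merely restates the abstract's thresholds (the function name and interface are invented for illustration):

```python
def ham_resolved(n_particles, force_res, r_s, is_subhalo_progenitor=False):
    """Check the ~10 per cent convergence conditions quoted in the abstract:
    >= ~50 particles for haloes (~150 for progenitors of subhaloes) and a
    force resolution within (0.1-0.3) r_s, the (sub)halo scale radius."""
    n_min = 150 if is_subhalo_progenitor else 50
    return n_particles >= n_min and 0.1 * r_s <= force_res <= 0.3 * r_s
```

Screening a (sub)halo catalogue with such a predicate before abundance matching keeps circular velocities and correlation functions within the quoted accuracy.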
Accuracy of results with NASTRAN modal synthesis
NASA Technical Reports Server (NTRS)
Herting, D. N.
1978-01-01
A new method for component mode synthesis was developed for installation in NASTRAN level 17.5. Results obtained from the new method are presented, and these results are compared with existing modal synthesis methods.
NASA Astrophysics Data System (ADS)
Bailey, Brian N.
2016-07-01
When Lagrangian stochastic models for turbulent dispersion are applied to complex atmospheric flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behaviour in the numerical solution. Here we discuss numerical strategies for solving the non-linear Langevin-based particle velocity evolution equation that eliminate such unphysical behaviour in both Reynolds-averaged and large-eddy simulation applications. Extremely large or `rogue' particle velocities are caused when the numerical integration scheme becomes unstable. Such instabilities can be eliminated by using a sufficiently small integration timestep, or in cases where the required timestep is unrealistically small, an unconditionally stable implicit integration scheme can be used. When the generalized anisotropic turbulence model is used, it is critical that the input velocity covariance tensor be realizable, otherwise unphysical behaviour can become problematic regardless of the integration scheme or size of the timestep. A method is presented to ensure realizability, and thus eliminate such behaviour. It was also found that the numerical accuracy of the integration scheme determined the degree to which the second law of thermodynamics or `well-mixed condition' was satisfied. Perhaps more importantly, it also determined the degree to which modelled Eulerian particle velocity statistics matched the specified Eulerian distributions (which is the ultimate goal of the numerical solution). It is recommended that future models be verified by not only checking the well-mixed condition, but perhaps more importantly by checking that computed Eulerian statistics match the Eulerian statistics specified as inputs.
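The stability point can be seen in a one-dimensional toy version of the Langevin velocity equation du = -(u/tau) dt + sqrt(2 sigma^2/tau) dW. The sketch below (all parameters invented) integrates the same noise path with a forward-Euler and a backward-Euler drift term; beyond the explicit stability limit dt = 2*tau the explicit update produces exactly the "rogue" velocities described above, while the implicit one stays bounded:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, sigma = 0.1, 1.0        # velocity decorrelation time and std-dev (assumed)
dt = 3.0 * tau               # deliberately beyond the explicit limit of 2*tau
nsteps = 200

u_exp = 0.0                  # explicit (forward Euler) trajectory
u_imp = 0.0                  # implicit (backward Euler drift) trajectory
noise = rng.normal(0.0, np.sqrt(2.0 * sigma**2 * dt / tau), nsteps)
for n in range(nsteps):
    # explicit drift: amplification factor |1 - dt/tau| > 1 -> "rogue" velocities
    u_exp = u_exp * (1.0 - dt / tau) + noise[n]
    # implicit drift: factor 1/(1 + dt/tau) < 1 -> unconditionally stable
    u_imp = (u_imp + noise[n]) / (1.0 + dt / tau)
```

With dt/tau = 3 the explicit trajectory grows by a factor of 2 per step and overflows any physical velocity scale, while the implicit trajectory remains near the specified velocity variance, mirroring the recommendation above.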
Assessing Accuracy of Waveform Models against Numerical Relativity Waveforms
NASA Astrophysics Data System (ADS)
Pürrer, Michael; LVC Collaboration
2016-03-01
We compare currently available phenomenological and effective-one-body inspiral-merger-ringdown models for gravitational waves (GW) emitted from coalescing black hole binaries against a set of numerical relativity waveforms from the SXS collaboration. Simplifications are used in the construction of some waveform models, such as restriction to spins aligned with the orbital angular momentum, no inclusion of higher harmonics in the GW radiation, no modeling of eccentricity and the use of effective parameters to describe spin precession. In contrast, NR waveforms provide us with a high-fidelity representation of the "true" waveform modulo small numerical errors. To focus on systematics we inject NR waveforms into zero noise for early advanced LIGO detector sensitivity at a moderately optimistic signal-to-noise ratio. We discuss where in the parameter space the above modeling assumptions lead to noticeable biases in recovered parameters.
NASA Astrophysics Data System (ADS)
Kuramoto, Kiyoshi; Umemoto, Takafumi; Ishiwatari, Masaki
2013-08-01
Hydrodynamic escape of hydrogen driven by solar extreme ultraviolet (EUV) radiation heating is numerically simulated by using the constrained interpolation profile scheme, a high-accuracy scheme for solving the one-dimensional advection equation. For a wide range of hydrogen number densities at the lower boundary and solar EUV fluxes, more than half of the EUV heating energy is converted to mechanical energy of the escaping hydrogen. Less energy is lost by downward thermal conduction, even when a low temperature is imposed at the atmospheric base. This result differs from a previous numerical simulation study that yielded much lower escape rates by employing another scheme in which relatively strong numerical diffusion is implemented. Because solar EUV heating effectively induces hydrogen escape, the hydrogen mixing ratio was likely to have remained lower than 1 vol% in the anoxic Earth atmosphere during the Archean era.
NASA Technical Reports Server (NTRS)
VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.
2000-01-01
The tip clearance flows of transonic compressor rotors have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because, in high-speed machines, measurements acquired in sufficient detail to be useful are rare. Researchers at the NASA Glenn Research Center at Lewis Field compared measured tip clearance flow details (e.g., trajectory and radial extent) of the NASA Rotor 35 with results obtained from a numerical simulation. Previous investigations had focused on capturing the detailed development of the jetlike flow leaking through the clearance gap between the rotating blade tip and the stationary compressor shroud. However, we discovered that the simulation accuracy depends primarily on capturing the detailed development of a wall-bounded shear layer formed by the relative motion between the leakage jet and the shroud.
Numerical taxonomy on data: Experimental results
Cohen, J.; Farach, M.
1997-12-01
The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. The first positive result for numerical taxonomy showed that if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min over trees T of L∞(T - D), then it is possible to construct a tree T such that L∞(T - D) ≤ 3e; that is, a 3-approximation algorithm exists for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic operations, resulting in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
"Certified" Laboratory Practitioners and the Accuracy of Laboratory Test Results.
ERIC Educational Resources Information Center
Boe, Gerard P.; Fidler, James R.
1988-01-01
An attempt to replicate a study of the accuracy of test results of medical laboratories was unsuccessful. Limitations of the obtained data prevented the research from having satisfactory internal validity, so no formal report was published. External validity of the study was also limited because the systematic random sample of 78 licensed…
Sheet Hydroforming Process Numerical Model Improvement Through Experimental Results Analysis
NASA Astrophysics Data System (ADS)
Gabriele, Papadia; Antonio, Del Prete; Alfredo, Anglani
2010-06-01
The increasing application of numerical simulation in the metal forming field has helped engineers solve successive manufacturing problems and produce qualified formed products in less time [1]. Accurate simulation results are fundamental for the tooling and the product designs. The wide application of numerical simulation is encouraging the development of highly accurate simulation procedures to meet industrial requirements. Many factors can influence the final simulation results, and many studies have been carried out about materials [2], yield criteria [3] and plastic deformation [4,5], process parameters [6] and their optimization. In order to develop a reliable hydromechanical deep drawing (HDD) numerical model, the authors have worked out specific activities based on the evaluation of the effective stiffness of the blankholder structure [7]. In this paper, after an appropriate tuning phase of the blankholder force distribution, the experimental activity has been taken into account to improve the accuracy of the numerical model. In the first phase, the effective capability of the blankholder structure to transfer the applied load given by hydraulic actuators to the blank has been explored. This phase ended with the definition of an appropriate subdivision of the blankholder active surface in order to take into account the effective pressure map obtained for the given loads configuration. In the second phase, the numerical results obtained with the developed subdivision have been compared with the experimental data of the studied model. The numerical model has then been improved, finding the best solution for the blankholder force distribution.
Technology Transfer Automated Retrieval System (TEKTRAN)
When Lagrangian stochastic models for turbulent dispersion are applied to complex flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behavior in the numerical solution. This paper discusses numerical considerations when solving the Langevin-based particle velo...
Numerical simulations of catastrophic disruption: Recent results
NASA Technical Reports Server (NTRS)
Benz, W.; Asphaug, E.; Ryan, E. V.
1994-01-01
Numerical simulations have been used to study high velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.
Determination of Solution Accuracy of Numerical Schemes as Part of Code and Calculation Verification
Blottner, F.G.; Lopez, A.R.
1998-10-01
This investigation is concerned with the accuracy of numerical schemes for solving partial differential equations used in science and engineering simulation codes. Richardson extrapolation methods for steady and unsteady problems with structured meshes are presented as part of the verification procedure to determine code and calculation accuracy. The local truncation error determination of a numerical difference scheme is shown to be a significant component of the verification procedure as it determines the consistency of the numerical scheme, the order of the numerical scheme, and the restrictions on the mesh variation with a non-uniform mesh. Generation of a series of co-located, refined meshes with the appropriate variation of mesh cell size is investigated and is another important component of the verification procedure. The importance of mesh refinement studies is shown to be more significant than just a procedure to determine solution accuracy. It is suggested that mesh refinement techniques can be developed to determine consistency of numerical schemes and to determine if governing equations are well posed. The present investigation provides further insight into the conditions and procedures required to effectively use Richardson extrapolation with mesh refinement studies to achieve confidence that simulation codes are producing accurate numerical solutions.
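The core of the procedure, computing an observed order of accuracy from three systematically refined meshes and Richardson-extrapolating, can be sketched on a toy second-order scheme (composite trapezoid quadrature of sin on [0, pi], with exact value 2, standing in for a PDE solve):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule: a second-order accurate quadrature."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Solutions on three systematically refined meshes (refinement ratio r = 2)
f_coarse = trapezoid(math.sin, 0.0, math.pi, 16)
f_medium = trapezoid(math.sin, 0.0, math.pi, 32)
f_fine   = trapezoid(math.sin, 0.0, math.pi, 64)

r = 2.0
# Observed order of accuracy from the three grid levels
p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)
# Richardson-extrapolated estimate of the mesh-converged solution
f_extrap = f_fine + (f_fine - f_medium) / (r**p - 1.0)
```

An observed p matching the formal order of the scheme (here, 2) is the consistency check described above; a mismatch flags coding errors, insufficient refinement, or an ill-posed formulation.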
Accuracy assessment of contextual classification results for vegetation mapping
NASA Astrophysics Data System (ADS)
Thoonen, Guy; Hufkens, Koen; Borre, Jeroen Vanden; Spanhove, Toon; Scheunders, Paul
2012-04-01
A new procedure for quantitatively assessing the geometric accuracy of thematic maps, obtained from classifying hyperspectral remote sensing data, is presented. More specifically, the methodology is aimed at the comparison between results from any of the currently popular contextual classification strategies. The proposed procedure characterises the shapes of all objects in a classified image by defining an appropriate reference and a new quality measure. The results from the proposed procedure are represented in an intuitive way, by means of an error matrix, analogous to the confusion matrix used in traditional thematic accuracy representation. A suitable application for the methodology is vegetation mapping, where many closely related and spatially connected land cover types are to be distinguished. Consequently, the procedure is tested on a heathland vegetation mapping problem, related to Natura 2000 habitat monitoring. Object-based mapping and Markov Random Field classification results are compared, showing that the selected Markov Random Fields approach is more suitable for the fine-scale problem at hand, which is confirmed by the proposed procedure.
NASA Astrophysics Data System (ADS)
Zhao, Y.; Zimmermann, E.; Huisman, J. A.; Treichel, A.; Wolters, B.; van Waasen, S.; Kemna, A.
2013-08-01
Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest for accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now
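The scale of the capacitive-coupling phase error can be illustrated with a lumped model (the values below are assumptions for illustration, not the paper's): a stray capacitance C to ground in parallel with a resistive transfer impedance R makes the measured impedance R / (1 + jwRC), whose phase error grows with frequency:

```python
import numpy as np

R = 100.0                         # assumed resistive transfer impedance (ohm)
C = 1e-9                          # assumed effective stray capacitance (F)
freqs = np.array([1e1, 1e3, 1e4]) # Hz, spanning the band discussed above
w = 2.0 * np.pi * freqs

# Measured impedance of the parallel R-C combination and its phase error
z = R / (1.0 + 1j * w * R * C)
phase_err_mrad = np.abs(np.angle(z)) * 1e3
```

Even this toy model gives several mrad of spurious phase at 10 kHz, larger than the 0.8 mrad accuracy reported, which is why explicit correction of the coupling effects is needed at the upper end of the band.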
Efficiency and Accuracy Verification of the Explicit Numerical Manifold Method for Dynamic Problems
NASA Astrophysics Data System (ADS)
Qu, X. L.; Wang, Y.; Fu, G. Y.; Ma, G. W.
2015-05-01
The original numerical manifold method (NMM) employs an implicit time integration scheme to achieve higher computational accuracy, but its efficiency is relatively low, especially when the open-close iterations of contact are involved. To improve its computational efficiency, a modified version of the NMM based on an explicit time integration algorithm is proposed in this study. The lumped mass matrix, internal force and damping vectors are derived for the proposed explicit scheme. A calibration study on P-wave propagation along a rock bar is conducted to investigate the efficiency and accuracy of the developed explicit numerical manifold method (ENMM) for wave propagation problems. Various considerations in the numerical simulations are discussed, and parametric studies are carried out to obtain an insight into the influencing factors on the efficiency and accuracy of wave propagation. To further verify the capability of the proposed ENMM, dynamic stability assessment for a fractured rock slope under seismic effect is analysed. It is shown that, compared to the original NMM, the computational efficiency of the proposed ENMM can be significantly improved.
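The flavor of an explicit, lumped-mass update can be conveyed by a standard central-difference scheme for P-wave propagation along a 1D elastic bar, the paper's calibration case (all numbers below are illustrative, and this sketch is a plain finite-difference stand-in, not the NMM discretization itself):

```python
import numpy as np

c = 4000.0             # assumed P-wave speed in the rock bar (m/s)
L_bar, nx = 100.0, 401
dx = L_bar / (nx - 1)
dt = 0.9 * dx / c      # kept below the explicit CFL stability limit dx/c

x = np.linspace(0.0, L_bar, nx)
u_prev = np.exp(-((x - 20.0) / 2.0) ** 2)  # initial Gaussian displacement pulse
u_curr = u_prev.copy()                      # zero initial velocity

for _ in range(200):
    u_next = np.empty_like(u_curr)
    # explicit central-difference update (lumped mass: no system solve needed)
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * np.diff(u_curr, 2))
    u_next[0] = u_next[-1] = 0.0            # fixed ends
    u_prev, u_curr = u_curr, u_next
```

Each step costs only a vector update, which is the efficiency gain over an implicit solve; the price is the conditional stability (dt bounded by dx/c) that the original implicit NMM avoids.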
Maximizing the accuracy of field-derived numeric nutrient criteria in water quality regulations.
McLaughlin, Douglas B
2014-01-01
High levels of the nutrients nitrogen and phosphorus can cause unhealthy biological or ecological conditions in surface waters and prevent the attainment of their designated uses. Regulatory agencies are developing numeric criteria for these nutrients in an effort to ensure that the surface waters in their jurisdictions remain healthy and productive, and that water quality standards are met. These criteria are often derived using field measurements that relate nutrient concentrations and other water quality conditions to expected biological responses such as undesirable growth or changes in aquatic plant and animal communities. Ideally, these numeric criteria can be used to accurately "diagnose" ecosystem health and guide management decisions. However, the degree to which numeric nutrient criteria are useful for decision making depends on how accurately they reflect the status or risk of nutrient-related biological impairments. Numeric criteria that have little predictive value are not likely to be useful for managing nutrient concerns. This paper presents information on the role of numeric nutrient criteria as biological health indicators, and the potential benefits of sufficiently accurate criteria for nutrient management. In addition, it describes approaches being proposed or adopted in states such as Florida and Maine to improve the accuracy of numeric criteria and criteria-based decisions. This includes a preference for developing site-specific criteria in cases where sufficient data are available, and the use of nutrient concentration and biological response criteria together in a framework to support designated use attainment decisions. Together with systematic planning during criteria development, the accuracy of field-derived numeric nutrient criteria can be assessed and maximized as a part of an overall effort to manage nutrient water quality concerns. PMID:24123826
NASA Astrophysics Data System (ADS)
Lugaz, Noé; Roussev, Ilia I.; Gombosi, Tamas I.
2011-07-01
Transients in the heliosphere, including coronal mass ejections (CMEs) and corotating interaction regions, can be imaged to large heliocentric distances by heliospheric imagers (HIs), such as the HIs onboard STEREO and SMEI onboard Coriolis. These observations can be analyzed using different techniques to derive the CME speed and direction. In this paper, we use a three-dimensional (3-D) magneto-hydrodynamic (MHD) numerical simulation to investigate one of these methods, the fitting method of Sheeley et al. (1999) and Rouillard et al. (2008). Because we use a 3-D simulation, we can determine with great accuracy the CME initial speed, its speed at 1 AU and its average transit speed, as well as its size and direction of propagation. We are able to compare the results of the fitting method with the values from the simulation for different viewing angles between the CME direction of propagation and the Sun-spacecraft line. We focus on one simulation of a wide (120-140°) CME, whose initial speed is about 800 km s-1. For this case, we find that the best-fit speed is in good agreement with the speed of the CME at 1 AU, independently of the viewing angle. The fitted direction of propagation is not in good agreement with the viewing angle in the simulation, although smaller viewing angles result in smaller fitted directions. This is due to the extremely wide nature of the ejection. A new fitting method, proposed to take into account the CME width, results in better agreement between fitted and actual directions for directions close to the Sun-Earth line. For other directions, it gives results comparable to the fitting method of Sheeley et al. (1999). The CME deceleration has only a small effect on the fitted direction, resulting in fitted values about 1-4° higher than the actual values.
Poor Metacomprehension Accuracy as a Result of Inappropriate Cue Use
ERIC Educational Resources Information Center
Thiede, Keith W.; Griffin, Thomas D.; Wiley, Jennifer; Anderson, Mary C. M.
2010-01-01
Two studies attempt to determine the causes of poor metacomprehension accuracy and then, in turn, to identify interventions that circumvent these difficulties to support effective comprehension monitoring performance. The first study explored the cues that both at-risk and typical college readers use as a basis for their metacomprehension…
Saturn's North Polar Hexagon Numerical Modeling Results
NASA Astrophysics Data System (ADS)
Morales-Juberias, R.; Sayanagi, K. M.; Dowling, T. E.
2008-12-01
In 1980, Voyager images revealed the presence of a circumpolar wave at 78 degrees planetographic latitude in the northern hemisphere of Saturn. It was notable for having a dominant planetary wavenumber-six zonal mode, and for being stationary with respect to Saturn's Kilometric Radiation rotation rate measured by Voyager. The center of this hexagonal feature was coincident with the center of a sharp eastward jet with a peak speed of 100 m s^-1, and it had a meridional width of about 4 degrees. This hexagonal feature was confirmed in 1991 through ground-based observations, and it was observed again in 2006 with the Cassini VIMS instrument. The latest observations highlight the longevity of the hexagon and suggest that it extends at least several bars deep into the atmosphere. We use the Explicit Planetary Isentropic Code (EPIC) to perform high-resolution numerical simulations of this unique feature. We show that a wavenumber-six instability mode arises naturally from initially barotropic jets when seeded with weak random turbulence. We also discuss the dependence of the wave activity on the background vertical stability, zonal wind, planetary rotation rate, and adjacent vortices. Computational resources were provided by the New Mexico Computing Applications Center and New Mexico Institute of Mining and Technology and the Comparative Planetology Laboratory at the University of Louisville.
NASA Technical Reports Server (NTRS)
Baker, A. J.; Soliman, M. O.
1978-01-01
A study of accuracy and convergence of linear functional finite element solutions to linear parabolic and hyperbolic partial differential equations is presented. A variable-implicit integration procedure is employed for the resultant system of ordinary differential equations. Accuracy and convergence are compared for the consistent and two lumped assembly procedures for the identified initial-value matrix structure. Truncation error estimation is accomplished using Richardson extrapolation.
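Richardson extrapolation, as used above for truncation-error estimation, combines solutions at two step sizes of a method of known order to cancel the leading error term. A generic sketch (not the paper's code), demonstrated on a first-order forward difference:

```python
import numpy as np

def richardson(f_h, f_h2, p):
    """Richardson extrapolation: given approximations computed with step h
    and h/2 for a method of order p, cancel the leading error term and
    return the extrapolated value plus an estimate of the error in f_h2."""
    err = (f_h2 - f_h) / (2**p - 1)
    return f_h2 + err, err

# Example: forward-difference derivative (order p = 1) of sin at x = 1
x, h = 1.0, 0.1
d_h = (np.sin(x + h) - np.sin(x)) / h
d_h2 = (np.sin(x + h / 2) - np.sin(x)) / (h / 2)
d_extrap, err_est = richardson(d_h, d_h2, p=1)
```

The extrapolated value is second-order accurate, and `err_est` is a computable truncation-error estimate obtained without knowing the exact answer.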
On the use of Numerical Weather Models for improving SAR geolocation accuracy
NASA Astrophysics Data System (ADS)
Nitti, D. O.; Chiaradia, M.; Nutricato, R.; Bovenga, F.; Refice, A.; Bruno, M. F.; Petrillo, A. F.; Guerriero, L.
2013-12-01
Precise estimation and correction of the Atmospheric Path Delay (APD) is needed to ensure sub-pixel accuracy of geocoded Synthetic Aperture Radar (SAR) products, in particular for the new generation of high resolution side-looking SAR satellite sensors (TerraSAR-X, COSMO/SkyMED). The present work aims to assess the performances of operational Numerical Weather Prediction (NWP) Models as tools to routinely estimate the APD contribution, according to the specific acquisition beam of the SAR sensor for the selected scene on ground. The Regional Atmospheric Modeling System (RAMS) has been selected for this purpose. It is a finite-difference, primitive equation, three-dimensional non-hydrostatic mesoscale model, originally developed at Colorado State University [1]. In order to appreciate the improvement in target geolocation when accounting for APD, we need to rely on the SAR sensor orbital information. In particular, TerraSAR-X data are well-suited for this experiment, since recent studies have confirmed the few centimeter accuracy of their annotated orbital records (Science level data) [2]. A consistent dataset of TerraSAR-X stripmap images (Pol.:VV; Look side: Right; Pass Direction: Ascending; Incidence Angle: 34.0÷36.6 deg) acquired in Daunia in Southern Italy has been hence selected for this study, thanks also to the availability of six trihedral corner reflectors (CR) recently installed in the area covered by the imaged scenes and properly directed towards the TerraSAR-X satellite platform. The geolocation of CR phase centers is surveyed with cm-level accuracy using differential GPS (DGPS). The results of the analysis are shown and discussed. Moreover, the quality of the APD values estimated through NWP models will be further compared to those annotated in the geolocation grid (GEOREF.xml), in order to evaluate whether annotated corrections are sufficient for sub-pixel geolocation quality or not. Finally, the analysis will be extended to a limited number of
NASA Technical Reports Server (NTRS)
Olstad, W. B.
1979-01-01
A class of explicit numerical formulas which involve next nearest neighbor as well as nearest neighbor points are explored in this paper. These formulas are formal approximations to the linear parabolic partial-differential equation of first order in time and second order in distance. It was found that some of these formulas can employ time steps as much as four times that for the conventional explicit technique without becoming unstable. Others showed improved accuracy for a given time step and spatial grid spacing. One formula achieved a steady-state solution of specified accuracy for an example problem in less than 4 percent of the total computational time required by the conventional explicit technique.
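For context, the conventional explicit (FTCS) scheme referred to above is stable for the heat equation only when r = Δt/Δx² ≤ 1/2, which is exactly the limit the next-nearest-neighbor formulas relax. A sketch on a hypothetical 1-D rod with fixed ends, showing the stability boundary:

```python
import numpy as np

def ftcs(u0, r, steps):
    """Conventional explicit (FTCS) update for u_t = u_xx with
    r = dt/dx**2; the boundary values are held fixed."""
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

n = 41
x = np.linspace(0.0, 1.0, n)
# Smooth initial profile plus a tiny alternating perturbation that seeds
# the highest (least stable) grid mode
u0 = np.sin(np.pi * x) + 1e-6 * (-1.0) ** np.arange(n)

u_stable = ftcs(u0, r=0.5, steps=200)      # r <= 1/2: solution decays smoothly
u_unstable = ftcs(u0, r=0.6, steps=200)    # r > 1/2: sawtooth mode blows up
```

Just above the limit the sawtooth mode is amplified every step, which is why larger stable time steps require the modified stencils discussed in the abstract.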
NASA Astrophysics Data System (ADS)
Cannon, Kipp; Emberson, J. D.; Hanna, Chad; Keppel, Drew; Pfeiffer, Harald P.
2013-02-01
Matched filtering for the identification of compact object mergers in gravitational wave antenna data involves the comparison of the data stream to a bank of template gravitational waveforms. Typically the template bank is constructed from phenomenological waveform models, since these can be evaluated for an arbitrary choice of physical parameters. Recently it has been proposed that singular value decomposition (SVD) can be used to reduce the number of templates required for detection. As we show here, another benefit of SVD is its removal of biases from the phenomenological templates along with a corresponding improvement in their ability to represent waveform signals obtained from numerical relativity (NR) simulations. Using these ideas, we present a method that calibrates a reduced SVD basis of phenomenological waveforms against NR waveforms in order to construct a new waveform approximant with improved accuracy and faithfulness compared to the original phenomenological model. The new waveform family is given numerically through the interpolation of the projection coefficients of NR waveforms expanded onto the reduced basis and provides a generalized scheme for enhancing phenomenological models.
On the accuracy of numerical integration over the unit sphere applied to full network models
NASA Astrophysics Data System (ADS)
Itskov, Mikhail
2016-05-01
This paper is motivated by a recent study by Verron (Mech Mater 89:216-228, 2015) which revealed huge errors of the numerical integration over the unit sphere in application to large strain problems. For the verification of numerical integration schemes we apply here other analytical integrals over the unit sphere which demonstrate much more accurate results. Relative errors of these integrals with respect to the corresponding analytical solutions are also evaluated for a full network model of rubber elasticity based on a Padé approximation of the inverse Langevin function as the chain force. According to the results of our study, the numerical integration over the unit sphere can still be considered a reliable and accurate tool for full network models.
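Such analytical checks are easy to reproduce: a modest product quadrature integrates low-degree polynomials on the sphere exactly, e.g. ∫_{S²}(d·n)² dΩ = 4π/3 for any unit vector d. The grid sizes below are arbitrary choices, not the scheme examined in the paper:

```python
import numpy as np

def sphere_quadrature(n_theta=8, n_phi=16):
    """Product quadrature on the unit sphere: Gauss-Legendre nodes in
    cos(theta) combined with a uniform (trapezoidal) grid in phi."""
    x, w = np.polynomial.legendre.leggauss(n_theta)    # x = cos(theta)
    phi = 2 * np.pi * np.arange(n_phi) / n_phi
    ct, ph = np.meshgrid(x, phi, indexing="ij")
    st = np.sqrt(1.0 - ct**2)
    dirs = np.stack([st * np.cos(ph), st * np.sin(ph), ct], axis=-1).reshape(-1, 3)
    weights = np.outer(w, np.full(n_phi, 2 * np.pi / n_phi)).ravel()
    return dirs, weights

dirs, weights = sphere_quadrature()

# Analytical check: the integral of (d . n)^2 over the sphere is 4*pi/3
d = np.array([0.3, 0.5, np.sqrt(1.0 - 0.34)])          # unit vector
val = np.sum(weights * (dirs @ d) ** 2)
```

The hard cases discussed by Verron arise for strongly localized integrands (large stretches), where such fixed direction sets under-resolve the chain-force peak.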
Propagation of MHD disturbance in numerical modelling: Accuracy issues and condition
NASA Astrophysics Data System (ADS)
Kim, Kyung-Im; Lee, Dong-Hun; Jang, Jae-Jin; Kim, Jung-Hoon; Kim, Jaehun
2016-07-01
In space weather studies, MHD numerical models are often used for time-dependent simulations over relatively long time periods and large spatial domains, with many examples ranging from the solar origin to the Earth impact in the heliosphere. Questions have been raised as to whether the many different numerical codes are consistent with each other and how the validity of simulation results can be confirmed for a given event. In this study, we first introduce a class of exact analytic solutions of MHD for a boundary driven by certain impulsive impacts. Second, we test and compare MHD numerical models against this exact full MHD solution to check whether the simulations are sufficiently accurate. Our results show 1) that numerical errors are very significant in problems of MHD disturbance propagation in interplanetary space, 2) that typical spatial and temporal resolutions widely used in numerical modelling can easily produce errors of a few hours, up to 10 hours, in arrival time at near-Earth space, and 3) how serious errors can be avoided by optimizing the model parameters in advance through comparison with an exact solution.
Accuracy of endodontic microleakage results: autoradiographic vs. volumetric measurements.
Ximénez-Fyvie, L A; Ximénez-García, C; Carter-Bartlett, P M; Collado-Webber, F J
1996-06-01
The correlation between autoradiographic and volumetric leakage measurements was evaluated. Seventy-two anterior teeth with a single canal were selected and divided into three groups of 24. Group 1 served as control (no obturation), group 2 was obturated with gutta-percha only, and group 3 was obturated with gutta-percha and endodontic sealer. Samples were placed in a vertical position in 48-well cell culture plates and immersed in 1 ml of [14C]urea for 14 days. One-mm-thick horizontal serial sections were cut with a diamond disk cooled with liquid-nitrogen gas. Linear penetration was recorded by five independent evaluators from autoradiographs. Volumetric results were based on counts per minute registered in a liquid scintillation spectrometer. Pearson's correlation coefficient test was used to determine the linear correlation between the two methods of evaluation. No acceptable correlation values were found in any of the three groups (group 1, r = 0.34; group 2, r = 0.23; group 3, r = 0.20). Our results indicate that there is no correlation between linear and volumetric measurements of leakage. PMID:8934988
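The statistic used here is Pearson's r between the two leakage measures. A sketch on synthetic, deliberately uncorrelated data (the sample values are invented, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scores for 24 samples measured two ways: linear penetration
# (mm, from autoradiographs) and volumetric counts per minute. The two
# series are generated independently, so r should be near zero.
linear = rng.uniform(0.0, 10.0, 24)
volumetric = rng.uniform(0.0, 5000.0, 24)

r = np.corrcoef(linear, volumetric)[0, 1]   # Pearson's correlation coefficient
```

With n = 24 per group, an observed |r| of 0.2-0.34 is well within what uncorrelated measures produce, consistent with the abstract's conclusion.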
Improved Accuracy of the Gravity Probe B Science Results
NASA Astrophysics Data System (ADS)
Conklin, John; Adams, M.; Aljadaan, A.; Aljibreen, H.; Almeshari, M.; Alsuwaidan, B.; Bencze, W.; Buchman, S.; Clarke, B.; Debra, D. B.; Everitt, C. W. F.; Heifetz, M.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lipa, J.; Lockhart, J. M.; Muhlfelder, B.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Taber, M.; Turneaure, J. P.; Worden, P. W., Jr.
This paper presents the progress in the science data analysis for the Gravity Probe B (GP-B) experiment. GP-B, sponsored by NASA and launched in April of 2004, tests two fundamental predictions of general relativity, the geodetic effect and the frame-dragging effect. The GP-B spacecraft measures the non-Newtonian drift rates of four ultra-precise cryogenic gyroscopes placed in a circular polar Low Earth Orbit. Science data was collected from 28 August 2004 until cryogen depletion on 29 September 2005. The data analysis is complicated by two unexpected phenomena, a) a continually damping gyroscope polhode affecting the calibration of the gyro readout scale factor, and b) two larger than expected classes of Newtonian torque acting on the gyroscopes. Experimental evidence strongly suggests that both effects are caused by non-uniform electric potentials (i.e. the patch effect) on the surfaces of the gyroscope rotor and its housing. At the end of 2008, the data analysis team reported intermediate results showing that the two complications are well understood and are separable from the relativity signal. Since then we have developed the final GP-B data analysis code, the "2-second Filter", which provides the most accurate and precise determination of the non-Newtonian drifts attainable in the presence of the two Newtonian torques and the fundamental instrument noise. This limit is roughly 5
NASA Astrophysics Data System (ADS)
Hwang, Cheinway; Hsiao, Yu-Shen; Shih, Hsuan-Chang; Yang, Ming; Chen, Kwo-Hwa; Forsberg, Rene; Olesen, Arne V.
2007-04-01
An airborne gravity survey was conducted over Taiwan using a LaCoste and Romberg (LCR) System II air-sea gravimeter with gravity and global positioning system (GPS) data sampled at 1 Hz. The aircraft trajectories were determined using a GPS network kinematic adjustment relative to eight GPS tracking stations. Long-wavelength errors in position are reduced when doing numerical differentiations for velocity and acceleration. A procedure for computing the resolvable wavelength of error-free airborne gravimetry is derived. The accuracy requirements of position, velocity, and acceleration for a 1-mgal accuracy in gravity anomaly are derived. GPS will fulfill these requirements except for vertical acceleration. An iterative Gaussian filter is used to reduce errors in vertical acceleration. A compromise filter width balancing noise reduction and gravity detail is 150 s. The airborne gravity anomalies are compared with surface values, and large differences are found over high mountains where the gravity field is rough and surface data density is low. The root mean square (RMS) crossover differences before and after a bias-only adjustment are 4.92 and 2.88 mgal, the latter corresponding to a 2-mgal standard error in gravity anomaly. Repeatability analyses at two survey lines suggest that GPS is the dominating factor affecting the repeatability. Fourier transform and least-squares collocation are used for downward continuation, and the latter produces a better result. Two geoid models are computed, one using airborne and surface gravity data and the other using surface data only, and the former yields a better agreement with the GPS-derived geoidal heights. Bouguer anomalies derived from airborne gravity by a rigorous numerical integration reveal important tectonic features.
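Differentiating 1 Hz GPS heights twice amplifies short-period noise, which is why a wide low-pass filter is required before the vertical acceleration is usable. A sketch with a single-pass Gaussian filter (a stand-in for the iterative filter of the paper) and invented signal amplitudes:

```python
import numpy as np

def gaussian_smooth(y, dt, width_s):
    """Low-pass filtering with a Gaussian kernel of full width width_s
    seconds (here taken as ~6 sigma), applied by convolution."""
    sigma = width_s / 6.0
    m = int(3 * sigma / dt)
    tk = np.arange(-m, m + 1) * dt
    k = np.exp(-0.5 * (tk / sigma) ** 2)
    return np.convolve(y, k / k.sum(), mode="same")

dt = 1.0                                    # 1 Hz sampling
t = np.arange(0.0, 600.0, dt)
# Aircraft height: slow vertical motion plus short-period GPS noise
h = 5.0 * np.sin(2 * np.pi * t / 300.0) + 0.1 * np.sin(2 * np.pi * t / 4.0)

acc_raw = np.gradient(np.gradient(h, dt), dt)           # twice differentiated
acc_smooth = gaussian_smooth(acc_raw, dt, width_s=150.0)
```

The raw second derivative is dominated by the short-period term, while the 150 s filter removes it and keeps the slow acceleration, illustrating the noise-versus-detail compromise quoted in the abstract.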
Numerical results for the WFNDEC 2012 eddy current benchmark problem
NASA Astrophysics Data System (ADS)
Theodoulidis, T. P.; Martinos, J.; Poulakis, N.
2013-01-01
We present numerical results for the World Federation of NDE Centers (WFNDEC) 2012 eddy current benchmark problem obtained with a commercial FEM package (Comsol Multiphysics). The measurements of the benchmark problem consist of coil impedance values acquired when an inspection probe coil is moved inside an Inconel tube along an axial through-wall notch. The simulation runs smoothly with minimal user interference (default settings used for mesh and solver) and agreement between numerical and experimental results is excellent for all five inspection frequencies. Comments are made for the pros and cons of FEM and also some good practice rules are presented when using such numerical tools.
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracies, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and MACCC (maximum absolute value of cross correlation coefficient) derived from cross correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. In order to apply this proposed method to select an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types and the proper time step considering different time integration schemes are selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
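The MACCC index is the peak of the normalized cross-correlation between a simulated and a reference signal, and the lag of that peak measures the arrival-time (position) error from which a group-velocity error can be derived. A sketch on a synthetic Gaussian-windowed toneburst; the signal parameters and the 3 μs shift are invented:

```python
import numpy as np

def maccc_and_lag(sim, ref, dt):
    """Maximum absolute value of the normalized cross-correlation
    coefficient between simulated and reference waveforms, plus the
    time lag at which the peak occurs (positive = sim arrives late)."""
    sim0 = sim - sim.mean()
    ref0 = ref - ref.mean()
    cc = np.correlate(sim0, ref0, mode="full")
    cc = cc / np.sqrt(np.sum(sim0**2) * np.sum(ref0**2))
    i = np.argmax(np.abs(cc))
    lag = (i - (len(ref) - 1)) * dt
    return np.abs(cc[i]), lag

dt = 1e-7                                   # 10 MHz sampling
t = np.arange(0.0, 5e-5, dt)
# Reference: 200 kHz toneburst under a Gaussian envelope
ref = np.exp(-((t - 2e-5) / 5e-6) ** 2) * np.sin(2 * np.pi * 2e5 * t)
sim = np.roll(ref, 30)                      # "simulated" signal, 3 us late

maccc, lag = maccc_and_lag(sim, ref, dt)
```

A MACCC near 1 indicates the simulated shape is faithful, while the lag quantifies the position error; a lag over a known propagation distance converts directly into a group-velocity error.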
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
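The contrast between a first-order explicit fixed-step scheme and an accurate adaptive integration can be reproduced on a single linear reservoir. The one-parameter model below is a hypothetical illustration, not the paper's hydrologic model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def bucket(t, s, k=0.5):
    """Single linear reservoir: ds/dt = p(t) - k*s, with a rain pulse."""
    p = 10.0 if 1.0 <= t < 2.0 else 0.0
    return [p - k * s[0]]

# First-order explicit (Euler) fixed-step integration, as in many
# conceptual rainfall-runoff codes
def euler(f, s0, t_end, dt):
    t, s = 0.0, s0
    while t < t_end - 1e-12:
        s = s + dt * f(t, [s])[0]
        t += dt
    return s

s_euler = euler(bucket, 0.0, 10.0, dt=0.5)

# Adaptive, higher-order reference solution (max_step keeps the short
# rain pulse from being stepped over entirely)
sol = solve_ivp(bucket, (0.0, 10.0), [0.0], rtol=1e-8, atol=1e-10, max_step=0.25)
s_ref = sol.y[0, -1]
```

Even on this trivial model the fixed-step Euler storage is off by tens of percent at the end of the simulation; in calibration such errors translate into the distorted posterior surfaces described in the abstract.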
Gasmi, A.; Sprague, M. A.; Jonkman, J. M.; Jones, W. B.
2013-02-01
In this paper we examine the stability and accuracy of numerical algorithms for coupling time-dependent multi-physics modules relevant to computer-aided engineering (CAE) of wind turbines. This work is motivated by an in-progress major revision of FAST, the National Renewable Energy Laboratory's (NREL's) premier aero-elastic CAE simulation tool. We employ two simple examples as test systems, while algorithm descriptions are kept general. Coupled-system governing equations are framed in monolithic and partitioned representations as differential-algebraic equations. Explicit and implicit loose partition coupling is examined. In explicit coupling, partitions are advanced in time from known information. In implicit coupling, there is dependence on other-partition data at the next time step; coupling is accomplished through a predictor-corrector (PC) approach. Numerical time integration of coupled ordinary-differential equations (ODEs) is accomplished with one of three, fourth-order fixed-time-increment methods: Runge-Kutta (RK), Adams-Bashforth (AB), and Adams-Bashforth-Moulton (ABM). Through numerical experiments it is shown that explicit coupling can be dramatically less stable and less accurate than simulations performed with the monolithic system. However, PC implicit coupling restored stability and fourth-order accuracy for ABM; only second-order accuracy was achieved with RK integration. For systems without constraints, explicit time integration with AB and explicit loose coupling exhibited desired accuracy and stability.
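The stability loss from explicit loose coupling can be seen on a two-partition harmonic oscillator: a monolithic RK4 integration is compared with partitions that only see each other's data frozen at the old time level. This is a deliberately simple stand-in for the coupled aero-elastic modules, not the FAST code:

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # monolithic system z' = A z

def monolithic_rk4(z0, dt, steps):
    """Classic RK4 applied to the monolithic (fully coupled) system."""
    z = z0.copy()
    for _ in range(steps):
        k1 = A @ z
        k2 = A @ (z + 0.5 * dt * k1)
        k3 = A @ (z + 0.5 * dt * k2)
        k4 = A @ (z + dt * k3)
        z = z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

def loose_explicit(z0, dt, steps):
    """Explicit loose coupling: each partition advances one step using
    the other partition's value frozen at the old time level."""
    x, y = z0
    for _ in range(steps):
        x_new = x - dt * y        # partition 1: x' = -y, y held at old value
        y_new = y + dt * x        # partition 2: y' =  x, x held at old value
        x, y = x_new, y_new
    return np.array([x, y])

dt, steps = 0.05, 125
z0 = np.array([1.0, 0.0])
z_mono = monolithic_rk4(z0, dt, steps)
z_loose = loose_explicit(z0, dt, steps)

t_final = dt * steps
exact = np.array([np.cos(t_final), np.sin(t_final)])
err_mono = np.linalg.norm(z_mono - exact)
err_loose = np.linalg.norm(z_loose - exact)
```

The loosely coupled solution gains energy every step (its amplification factor exceeds one), while the monolithic RK4 solution stays near machine-level error; predictor-corrector iteration is one way to recover the monolithic behavior, as the paper shows for ABM.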
Forecasting Energy Market Contracts by Ambit Processes: Empirical Study and Numerical Results
Di Persio, Luca; Marchesan, Michele
2014-01-01
In the present paper we exploit the theory of ambit processes to develop a model which is able to effectively forecast prices of forward contracts written on the Italian energy market. Both short-term and medium-term scenarios are considered and proper calibration procedures as well as related numerical results are provided showing a high grade of accuracy in the obtained approximations when compared with empirical time series of interest. PMID:27437500
NASA Technical Reports Server (NTRS)
Ahmad, Jasim; Aiken, Edwin W. (Technical Monitor)
1998-01-01
Helicopter flowfields are highly unsteady, nonlinear, and three-dimensional. In forward flight and in hover, the rotor blades interact with the tip vortex and wake sheet developed either by themselves or by the other blades. This interaction, known as blade-vortex interaction (BVI), results in unsteady loading of the blades and can cause a distinctive acoustic signature. Accurate and cost-effective computational fluid dynamics solutions that capture blade-vortex interactions can help rotor designers and engineers to predict rotor performance and to develop designs with a low acoustic signature. Such a predictive method must preserve a blade's shed vortex for several blade revolutions before it is dissipated. A number of researchers have explored the requirements for this task. This paper will outline some new capabilities that have been added to the NASA Ames OVERFLOW code to improve its overall accuracy for both vortex capturing and unsteady flows. To highlight these improvements, a number of case studies will be presented. These case studies consist of free convection of a 2-dimensional vortex, a dynamically pitching 2-D airfoil including light stall, and a full 3-D unsteady viscous solution of a helicopter rotor in forward flight. In this study both central and upwind difference schemes are modified to be more accurate. A central difference scheme is chosen for this simulation because the flowfield is not dominated by strong shocks. The feature of shock-vortex interaction in such a flow is less important than the dominant blade-vortex interaction. The scheme is second-order accurate in time and solves the thin-layer Navier-Stokes equations in a fully implicit manner at each time step. The spatial accuracy is either second- or fourth-order central difference, or third-order upwind difference using the Roe flux and MUSCL scheme. This paper will highlight and demonstrate the methods for several sample cases and for a helicopter rotor. Preliminary computations on a rotor were performed
Hill, M.C.
1989-01-01
Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author
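The idea of checking approximate (linearized) confidence intervals with Monte Carlo runs can be sketched on an invented one-parameter nonlinear model h = q/k; this is an illustration of the procedure, not the groundwater computation of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Nonlinear "model": observed head h = q / k plus noise; estimate k from
# m noisy observations and attach a confidence interval to the estimate.
q, k_true, sigma, m = 10.0, 2.0, 0.3, 25
obs = q / k_true + rng.normal(0.0, sigma, m)

def estimate_k(y):
    return q / np.mean(y)                   # least-squares estimate for this model

k_hat = estimate_k(obs)

# Linearized (first-order) interval: propagate the standard error of the
# mean head through dk/dh = -q / h^2
se_h = sigma / np.sqrt(m)
se_k_lin = q / np.mean(obs) ** 2 * se_h
ci_lin = (k_hat - 1.96 * se_k_lin, k_hat + 1.96 * se_k_lin)

# Monte Carlo check of the approximate interval: re-simulate the noisy
# observations many times and take empirical quantiles of the estimate
k_mc = np.array([estimate_k(q / k_true + rng.normal(0.0, sigma, m))
                 for _ in range(5000)])
ci_mc = (np.quantile(k_mc, 0.025), np.quantile(k_mc, 0.975))
```

When the model is only mildly nonlinear over the interval, the linearized and Monte Carlo interval widths agree closely, which is the kind of accuracy evaluation the abstract describes.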
Cardoso, Ricardo Lopes; Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli
2016-01-01
Previous researches support that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Moreover, literature shows that different types of graphical information can help or harm the accuracy on decision making of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that compared to text, column graph enhanced accuracy on decision making, followed by line graphs. No difference was found between table and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample size (295) of financial analysts instead of a smaller sample size of students that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Second, it uses the text as a baseline comparison to test how different ways of information disclosure (line and column graphs, and tables) can enhance understandability of information. Third, it brings an internal factor to this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts' accuracy on decision making regarding numerical information presented in a graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters. PMID:27508519
Some theoretical and numerical results for delayed neural field equations
NASA Astrophysics Data System (ADS)
Faye, Grégory; Faugeras, Olivier
2010-05-01
In this paper we study neural field models with delays which define a useful framework for modeling macroscopic parts of the cortex involving several populations of neurons. Nonlinear delayed integro-differential equations describe the spatio-temporal behavior of these fields. Using methods from the theory of delay differential equations, we show the existence and uniqueness of a solution of these equations. A Lyapunov analysis gives us sufficient conditions for the solutions to be asymptotically stable. We also present a fairly detailed study of the numerical computation of these solutions. This is, to our knowledge, the first time that a serious analysis of the problem of the existence and uniqueness of a solution of these equations has been performed. Another original contribution of ours is the definition of a Lyapunov functional and the result of stability it implies. We illustrate our numerical schemes on a variety of examples that are relevant to modeling in neuroscience.
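Numerical schemes for delayed equations must carry the solution history over one delay interval. A minimal fixed-step Euler sketch for a scalar delayed-feedback equation, a much simpler stand-in for the integro-differential neural field equations of the paper:

```python
import numpy as np

def integrate_delayed(f, history, tau, dt, steps):
    """Fixed-step Euler scheme for u'(t) = f(u(t), u(t - tau)); the
    delayed value is read from the stored solution buffer."""
    lag = int(round(tau / dt))
    u = list(history)                 # history covers t in [-tau, 0]
    for _ in range(steps):
        u_now = u[-1]
        u_del = u[-1 - lag]           # value one delay interval back
        u.append(u_now + dt * f(u_now, u_del))
    return np.array(u)

# Delayed negative feedback u'(t) = -u(t - tau); for tau < pi/2 the
# solution decays (with damped oscillation) to zero
tau, dt, steps = 0.5, 0.01, 1000
hist = np.ones(int(round(tau / dt)) + 1)   # constant history u = 1 on [-tau, 0]
u = integrate_delayed(lambda x, xd: -xd, hist, tau, dt, steps)
```

The same buffer structure, with interpolation for off-grid delays and a spatial discretization of the integral term, underlies practical solvers for delayed neural field equations.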
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1993-01-01
A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
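A modern equivalent of such a comparison uses SciPy's BDF method (a Gear-type stiff integrator in the spirit of LSODE) on the classic Robertson kinetics problem. The tolerances below are arbitrary choices, and the species-sum check plays the role of the algebraic conservation equation discussed in the abstract:

```python
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    """Robertson chemical kinetics: a classic stiff test problem with a
    reactant (y1), a fast intermediate (y2), and a product (y3)."""
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
            0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2**2,
            3e7 * y2**2]

sol = solve_ivp(robertson, (0.0, 1e4), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)

# Mass conservation (y1 + y2 + y3 = 1) provides a built-in accuracy check,
# analogous to checking temperature via the algebraic enthalpy equation
mass_err = np.max(np.abs(sol.y.sum(axis=0) - 1.0))
```

An explicit non-stiff method applied to this problem would need step sizes set by the fastest reaction throughout the integration; the stiff solver takes large steps once the fast transients decay, which is the efficiency difference the abstract quantifies across codes.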
Parallel technology for numerical modeling of fluid dynamics problems by high-accuracy algorithms
NASA Astrophysics Data System (ADS)
Gorobets, A. V.
2015-04-01
A parallel computation technology for modeling fluid dynamics problems by finite-volume and finite-difference methods of high accuracy is presented. The development of an algorithm, the design of a software implementation, and the creation of parallel programs for computations on large-scale computing systems are considered. The presented parallel technology is based on a multilevel parallel model combining various types of parallelism: shared and distributed memory, and multiple or single instruction streams operating on multiple data streams (MIMD and SIMD).
Integrating Numerical Groundwater Modeling Results With Geographic Information Systems
NASA Astrophysics Data System (ADS)
Witkowski, M. S.; Robinson, B. A.; Linger, S. P.
2001-12-01
Many different types of data are used to create numerical models of flow and transport of groundwater in the vadose zone. Results from water balance studies, infiltration models, hydrologic properties, and digital elevation models (DEMs) are examples of such data. Because input data come in a variety of formats, for consistency the data need to be assembled in a coherent fashion on a single platform. Through the use of a geographic information system (GIS), all data sources can effectively be integrated on one platform to store, retrieve, query, and display data. In our vadose zone modeling studies in support of Los Alamos National Laboratory's Environmental Restoration Project, we employ a GIS composed of a RAID storage device, an Oracle database, ESRI's spatial database engine (SDE), ArcView GIS, and custom GIS tools for three-dimensional (3D) analysis. We store traditional GIS data, such as contours, historical building footprints, and study area locations, as points, lines, and polygons with attributes. Numerical flow and transport model results from the Finite Element Heat and Mass Transfer code (FEHM) are stored as points with attributes, such as fluid saturation, pressure, or contaminant concentration at a given location. We overlay traditional types of GIS data with numerical model results, thereby allowing us to better build conceptual models and perform spatial analyses. We have also developed specialized analysis tools to assist in the data and model analysis process. This approach provides an integrated framework for performing tasks such as comparing the model to data and understanding the relationship of model predictions to existing contaminant source locations and water supply wells. Our process of integrating GIS and numerical modeling results allows us to answer a wide variety of questions about our conceptual model design: - Which set of locations should be identified as contaminant sources based on known historical building operations
Path Integrals and Exotic Options:. Methods and Numerical Results
NASA Astrophysics Data System (ADS)
Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.
2005-09-01
In the framework of the Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path-dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at-the-money (ATM) and out-of-the-money (OTM) options, the path integral approach exhibits competitive performance.
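The Asian-call benchmark lends itself to a plain Monte Carlo cross-check. The sketch below prices an arithmetic-average Asian call under Black-Scholes dynamics with assumed parameters; it is a generic reference implementation, not the paper's path-integral algorithm:

```python
import math, random

def asian_call_mc(s0, k, r, sigma, t, n_steps, n_paths, seed=42):
    # Plain Monte Carlo for an arithmetic-average Asian call under
    # geometric Brownian motion (risk-neutral measure).
    random.seed(seed)
    dt = t / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, running = s0, 0.0
        for _ in range(n_steps):
            s *= math.exp(drift + vol * random.gauss(0.0, 1.0))
            running += s
        payoff_sum += max(running / n_steps - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

# Assumed ATM parameters, for illustration only:
price = asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                      n_steps=50, n_paths=10000)
```

Because the average dampens the volatility of the payoff, the Asian call is worth noticeably less than the corresponding European call; methods such as the path integral aim to reach a given statistical accuracy at lower cost than this brute-force sampling.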
On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology
NASA Astrophysics Data System (ADS)
Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-08-01
We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample both black-hole spins comprehensively up to spin magnitudes of 0.9, and cover mass ratios 1–3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10⁻⁴. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with the finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ∼3 × 10⁻⁴. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
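The mismatch diagnostic behind such error analyses can be sketched for two discretized signals, here assuming a flat (white) noise weighting instead of a detector power spectral density, and omitting the maximization over time and phase shifts that a full match calculation would include:

```python
import math

def overlap(h1, h2):
    # Normalized inner product <h1,h2> / sqrt(<h1,h1><h2,h2>);
    # with flat-noise weighting this is a plain normalized dot product.
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2)

ts = [0.01 * i for i in range(1000)]
h_a = [math.sin(2 * math.pi * 3.0 * t) for t in ts]
h_b = [math.sin(2 * math.pi * 3.0 * t + 0.2) for t in ts]  # small phase error
mismatch = 1.0 - overlap(h_a, h_b)   # ~ 1 - cos(phase error) for small errors
```

A mismatch of a few times 10⁻⁴, as quoted above, corresponds to waveforms that agree to a small fraction of a radian of phase over the whole signal.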
Evaluating the velocity accuracy of an integrated GPS/INS system: Flight test results
Owen, T.E.; Wardlaw, R.
1991-12-31
Verifying the velocity accuracy of a GPS receiver or an integrated GPS/INS system in a dynamic environment is a difficult proposition when many of the commonly used reference systems have velocity uncertainties of the same order of magnitude as, or greater than, the GPS system. The results of flight tests aboard an aircraft, in which multiple reference systems simultaneously collected data to evaluate the accuracy of an integrated GPS/INS system, are reported. Emphasis is placed on obtaining high-accuracy estimates of the velocity error of the integrated system in order to verify that velocity accuracy is maintained during both linear and circular trajectories. Three different reference systems operating in parallel during the flight tests were used to independently determine the position and velocity of the aircraft in flight: a transponder/interrogator ranging system, a laser tracker, and GPS carrier phase processing. Results obtained from these reference systems are compared against each other and against an integrated real-time differential GPS/INS system to arrive at a set of conclusions about the accuracy of the integrated system.
Slump Flows inside Pipes: Numerical Results and Comparison with Experiments
NASA Astrophysics Data System (ADS)
Malekmohammadi, S.; Naccache, M. F.; Frigaard, I. A.; Martinez, D. M.
2008-07-01
In this work an analysis of the buoyancy-driven slumping flow inside a pipe is presented. This flow usually occurs when an oil well is sealed by a plug cementing process, in which a cement plug is placed inside a pipe filled with a lower-density fluid, displacing it towards the upper cylinder wall. Both the cement and the surrounding fluid have non-Newtonian behavior: the cement is viscoplastic and the surrounding fluid is shear-thinning. A numerical analysis was performed to evaluate the effects of some governing parameters on the development of the slump length. The conservation equations of mass and momentum were solved via a finite volume technique, using Fluent software (Ansys Inc.). The Volume of Fluid surface-tracking method was used to obtain the interface between the fluids and the slump length as a function of time. Results were obtained for different values of the density difference, fluid rheology, and pipe inclination. The effects of these parameters on the interface shape and on the slump-length-versus-time curve were analyzed. Moreover, the numerical results were compared to experimental ones; some differences are observed, possibly due to chemical effects at the interface.
Numerical accuracy of linear triangular finite elements in modeling multi-holed structures
Sullivan, R.M.; Griffen, J.E.
1980-06-01
A study has been performed to quantify the accuracy of linear triangular finite elements for modeling temperature and stress fields in structures with multiple holes. The purpose of the study was to evaluate the use of these elements for the analysis of HTGR fuel blocks, which may contain up to 325 holes. Since an accurate full scale analysis was not feasible with existing methods, a representative small scale benchmark problem containing only seven holes was selected. The finite element codes used in this study were TEPC-2D for thermal analysis and SAFIRE for stress analysis. It was concluded that linear triangular finite elements are too inefficient for this application. An accurate analysis of stresses in HTGR fuel blocks will require the use of higher order elements, such as the 8-node quadrilaterals in the new TWOD code.
Synthetic jet parameter identification and numerical results validation
NASA Astrophysics Data System (ADS)
Sabbatini, Danilo; Rimasauskiene, Ruta; Matejka, Milan; Kurowski, Marcin; Wandowski, Tomasz; Malinowski, Paweł; Doerffer, Piotr
2012-06-01
The design of a synthetic jet requires careful identification of the components' parameters in order to perform accurate numerical simulations. This identification must be done by means of a series of measurements that, owing to the small dimensions of the components, must be non-contact techniques. The activities described in this paper were performed in the frame of the STA-DY-WI-CO project, whose purpose is to design a synthetic jet and demonstrate its effectiveness and efficiency in a real application. To measure the energy saving due to the effect of the synthetic jet on flow separation, the increased performance of the profile must be compared with the energy absorbed by the actuator and the weight of the system. In the design phase a series of actuators was considered, as well as a series of cavity layouts, in order to obtain the most effective, efficient and durable package. The modal characteristics of the piezoelectric component were assessed by means of tests performed with a 3D scanning laser vibrometer, measuring the frequency response to voltage excitation. Once the effects of the parameters were analyzed and the components and layout chosen, the system was dimensioned by means of numerical simulations. The outcome of the simulation is the effect of the synthetic jet, in an assumed flow, on the selected profile. The numerical results for the separated flow field with a recirculating area were validated by means of tests performed in an Eiffel-type wind tunnel. The last test performed on the synthetic jet aimed to assess its acoustic impact: noise measurements were performed to complete the analysis.
Goldberg, K.A. |; Tejnil, E.; Bokor, J. |
1995-12-01
A 3-D electromagnetic field simulation is used to model the propagation of extreme ultraviolet (EUV), 13-nm, light through sub-1500 Å diameter pinholes in a highly absorptive medium. Deviations of the diffracted wavefront phase from an ideal sphere are studied within 0.1 numerical aperture, to predict the accuracy of EUV point diffraction interferometers used in at-wavelength testing of nearly diffraction-limited EUV optical systems. Aberration magnitudes are studied for various 3-D pinhole models, including cylindrical and conical pinhole bores.
Accuracy and stability of positioning in radiosurgery: long-term results of the Gamma Knife system.
Heck, Bernhard; Jess-Hempen, Anja; Kreiner, Hans Jürg; Schöpgens, Hans; Mack, Andreas
2007-04-01
The primary aim of this investigation was to determine the long-term overall accuracy of the irradiation position of Gamma Knife systems. The mechanical accuracy of the system as well as the overall accuracy of an irradiation position was examined by irradiating radiosensitive films. To measure the mechanical accuracy, a GafChromic film was fixed by a special tool at the unit center point (UCP). For the overall accuracy, the film was mounted inside a phantom at a target position given by a two-dimensional cross. Its position was determined by CT or MRI scans, a treatment was planned to hit this target using the standard planning software, and the radiation was finally delivered. This procedure is named a "system test" according to DIN 6875-1 and is equivalent to a treatment simulation. The exposed GafChromic films were evaluated by high-resolution densitometric measurements. The Munich Gamma Knife UCP coincided with the center of the dose distribution to within x: -0.014 +/- 0.09 mm; y: 0.013 +/- 0.09 mm; z: -0.002 +/- 0.06 mm (mean +/- SD). No trend in the measured data was observed over more than ten years. All measured data were within a sphere of 0.2 mm radius. When basing the target definition in the system test on MRI scans, we obtained an overall accuracy of the irradiation position of 0.21 +/- 0.32 mm in the x direction and 0.15 +/- 0.26 mm in the y direction (mean +/- SD). When a CT-based target definition was used, we measured distances of 0.06 +/- 0.09 mm in the x direction and 0.04 +/- 0.09 mm in the y direction (mean +/- SD), respectively. These results were compared with those obtained with a Gamma Knife equipped with an automatic positioning system (APS) by use of a different phantom. This phantom was found to be slightly less accurate due to its mechanical construction and its soft fixation in the frame. The phantom-related position deviation was found to be about +/- 0.2 mm, and therefore the measured accuracy of the APS Gamma Knife was evidently less precise by
Cullum, J.
1994-12-31
Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.
The effect of accuracy, conservation and filtering on numerical weather forecasting
NASA Technical Reports Server (NTRS)
Kalnay-Rivas, E.; Hoitsma, D.
1979-01-01
Considerations leading to the numerical design of the GLAS fourth-order global atmospheric model are discussed, including changes recently introduced into the model. The computation time and memory requirements for the fourth-order model are similar to those of the present second-order GLAS model with the same 4 deg latitude, 5 deg longitude, and 9 vertical-level resolution. However, the fourth-order model's forecast skill is significantly better than that of the current GLAS model; after three days it is comparable to the 2.5 by 3 deg version of the GLAS model in the sea level pressure maps, and has fewer phase errors in the 500 mb maps.
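The gain from raising the order of accuracy can be illustrated with the standard second- and fourth-order centered difference stencils; this is a generic one-dimensional sketch, not the GLAS model's actual discretization:

```python
import math

def d1_second(f, x, h):
    # Second-order centered difference, truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_fourth(f, x, h):
    # Fourth-order centered difference, truncation error O(h^4)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

x, h = 1.0, 0.1
exact = math.cos(x)                  # d/dx sin(x) = cos(x)
err2 = abs(d1_second(math.sin, x, h) - exact)
err4 = abs(d1_fourth(math.sin, x, h) - exact)
```

At a fixed grid spacing the wider stencil roughly doubles the arithmetic per point but reduces the error by orders of magnitude, which is why a fourth-order model can match a second-order model's cost while delivering better forecast skill.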
NASA Astrophysics Data System (ADS)
Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.
2016-08-01
This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation. The models therefore only differ in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousands of vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows more monotonic convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, which is due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and the better-defined boundary condition.
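The transformation behind this comparison can be sketched as follows (a schematic only; the wall scalings are quoted from standard log-law theory, not from the paper):

```latex
With $\tau \equiv k/\varepsilon$, the chain rule gives
\[
  \frac{D\tau}{Dt}
  = \frac{1}{\varepsilon}\frac{Dk}{Dt}
  - \frac{k}{\varepsilon^{2}}\frac{D\varepsilon}{Dt}
  = \frac{\tau}{k}\frac{Dk}{Dt}
  - \frac{\tau}{\varepsilon}\frac{D\varepsilon}{Dt},
\]
so each production and dissipation term of the $k$--$\varepsilon$ pair maps
onto a corresponding $\tau$-equation term and the two models share a single
physical parametrisation. In a log-law wall layer, $k \sim u_{*}^{2}$ and
$\varepsilon \sim u_{*}^{3}/(\kappa z)$, hence
$\tau = k/\varepsilon \sim \kappa z/u_{*}$.
```

Because τ varies linearly with distance from the wall while ε behaves like 1/z, a coarse vertical grid represents τ far more faithfully, which is consistent with the accuracy advantage reported at 10-100 layers.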
Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments
NASA Astrophysics Data System (ADS)
Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang
2016-06-01
Accelerator grid structural and electron backstreaming failures are the most important factors affecting an ion thruster's lifetime. During the thruster's operation, Charge Exchange Xenon (CEX) ions are generated from collisions between plasma and neutral atoms. These CEX ions frequently strike the accelerator grid's barrel and wall, causing the failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies the application requirement of China's communication satellite platform for North-South Station Keeping (NSSK), this study analyzed the measured depth of the pit/groove on the accelerator grid's wall and the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Differing from previous methods, in this paper the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster are presented first. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200. The results obtained allow a more accurate calculation of the reliability and analysis of the failure modes of the ion thruster. The results indicated that the predicted lifetime of LIPS-200 was about 13218.1 h, which satisfies the required lifetime of 11000 h very well.
Cleveland, Mathew A. Brunner, Thomas A.; Gentile, Nicholas A.; Keasler, Jeffrey A.
2013-10-15
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double-precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy, by rounding double-precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double-precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step.
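The underlying non-associativity, and the order independence bought by an integer tally, can be demonstrated in a few lines (a minimal sketch; the scale factor here is an arbitrary choice for illustration, not the rounding used in [1]):

```python
# Double-precision addition is not associative: the same three values
# summed in two orders give different answers.
vals = [1e16, 1.0, -1e16]
left_to_right = (vals[0] + vals[1]) + vals[2]   # the 1.0 is absorbed by 1e16
cancel_first  = (vals[0] + vals[2]) + vals[1]   # cancellation happens first

# A fixed-point integer tally makes the sum order-independent, at the
# cost of rounding every term to the chosen number of fractional bits.
SCALE = 2 ** 20   # assumed fixed-point scale, for illustration only

def int_tally(xs):
    # Python integers are exact, so integer addition is associative:
    # any summation order yields the same tally.
    return sum(round(x * SCALE) for x in xs)
```

This is exactly the trade-off described above: the integer tally guarantees bit-for-bit reproducibility across processor counts and decompositions, while the rounding step discards low-order digits of each contribution.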
NASA Astrophysics Data System (ADS)
Guerra, J. E.; Ullrich, P. A.
2014-12-01
Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements made in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.
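The IMEX idea can be reduced to a first-order toy: a stiff linear term (standing in for fast vertical sound-wave propagation) is treated implicitly while the slow forcing stays explicit. The coefficients below are assumed for illustration and have nothing to do with Tempest's actual high-order scheme:

```python
import math

def imex_euler(lam, forcing, y0, h, steps):
    # First-order IMEX split for y' = lam*y + forcing(t):
    # the stiff term lam*y is implicit, the forcing explicit, giving
    # y_{n+1} = (y_n + h*forcing(t_n)) / (1 - h*lam).
    y, t = y0, 0.0
    out = [y0]
    for _ in range(steps):
        y = (y + h * forcing(t)) / (1.0 - h * lam)
        t += h
        out.append(y)
    return out

# Fast relaxation (lam = -1000) under a slow forcing: a fully explicit
# scheme would need h < 2/1000 for stability, but IMEX runs at h = 0.01.
sol = imex_euler(lam=-1000.0, forcing=lambda t: 1000.0 * math.sin(t),
                 y0=1.0, h=0.01, steps=500)
```

The payoff is the same as in the full model: the time step is limited by the slow dynamics of interest rather than by the fastest (acoustic) waves.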
Interaction between subducting plates: results from numerical and analogue modeling
NASA Astrophysics Data System (ADS)
Kiraly, Agnes; Capitanio, Fabio A.; Funiciello, Francesca; Faccenna, Claudio
2016-04-01
The tectonic setting of the Alpine-Mediterranean area was achieved during the late Cenozoic subduction, collision and suturing of several oceanic fragments and continental blocks. In this stage, processes such as interactions among subducting slabs, slab migrations and the related mantle flow played a relevant role in the resulting tectonics. Here, we use numerical models to first address the mantle flow characteristics in 3D. During the subduction of a single plate the strength of the return flow strongly depends on the slab pull force, that is, on the plate's buoyancy; however, the physical properties of the slab, such as density, viscosity or width, do not greatly affect the morphology of the toroidal cell. Instead, dramatic effects on the geometry and the dynamics of the toroidal cell result in models where the thickness of the mantle is varied. The vertical component of the vorticity vector is used to define the characteristic size of the toroidal cell, which is ~1.2-1.3 times the mantle depth. The latter defines the range of viscous stress propagation through the mantle and of consequent interactions with other slabs. We thus further investigate a setup in which two separate lithospheric plates subduct in opposite senses, developing opposite polarities and convergent slab retreat, and model different initial sideways distances between the plates. The stress profiles in time illustrate that the plates interact when the slabs are at the characteristic distance and the two slabs' toroidal cells merge. Increased stress and delayed slab migration are the results. Analogue models of double-sided subduction show a similar maximum distance and allow testing the additional role of stress propagated through the plates. We use a silicone plate subducting at its two opposite margins, which is either homogeneous or comprises oceanic and continental lithospheres differing in buoyancy. The modeling results show that the double-sided subduction is strongly affected by changes in plate
Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories
NASA Technical Reports Server (NTRS)
Green, S.; Grace, M.; Williams, D.
1999-01-01
The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). The major
Numerical Results of 3-D Modeling of Moon Accumulation
NASA Astrophysics Data System (ADS)
Khachay, Yurie; Anfilogov, Vsevolod; Antipin, Alexandr
2014-05-01
For a long time the preferred model of Moon formation has been the mega-impact model, in which the formation of the Earth and its satellite was the consequence of the Earth's collision with a body of the mass of Mercury. But all dynamical models of the Earth's accumulation, and the estimates from the Pb-Pb system, lead to the conclusion that the duration of planetary accumulation was about 1 billion years, whereas isotopic results from the W-Hf system testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,2] it is shown that the energy released by the decay of short-lived radioactive elements, above all 26Al, is sufficient to heat even small bodies with dimensions of (50-100) km up to the iron melting temperature, so that a principally new differentiation mechanism can be realized. The inner parts of the melted preplanetary bodies can join, and they are mainly of iron content, while the cold silicate fragments return to the supply zone and additionally shift the composition of the Moon-forming material toward silicates. Only after the increase of the Earth's gravitational radius can the growing region of the future Earth's core also retain the silicate envelope fragments [3]. For understanding the further evolution of the Earth-Moon system it is important to trace the origin and evolution of the heterogeneities that arise during the accumulation stage. In this paper we model the changes of temperature, pressure, and matter flow velocity in a block of a 3D spherical body with growing radius. The boundary problem is solved by the finite-difference method for a system of equations that includes the accumulation (Safronov) equation, the impulse-balance (Navier-Stokes) equation, the equation for the above-lithostatic pressure, and the heat conduction equation, in velocity-pressure variables using the Boussinesq approximation. The numerical algorithm of the problem solution in velocity
Improving the trust in results of numerical simulations and scientific data analytics
Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan
2015-04-30
This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results’ integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general
Busted Butte: Achieving the Objectives and Numerical Modeling Results
W.E. Soll; M. Kearney; P. Stauffer; P. Tseng; H.J. Turin; Z. Lu
2002-10-07
The Unsaturated Zone Transport Test (UZTT) at Busted Butte is a mesoscale field/laboratory/modeling investigation designed to address uncertainties associated with flow and transport in the UZ site-process models for Yucca Mountain. The UZTT test facility is located approximately 8 km southeast of the potential Yucca Mountain repository area. The UZTT was designed in two phases, to address five specific objectives in the UZ: the effect of heterogeneities, flow and transport (F&T) behavior at permeability contrast boundaries, migration of colloids, transport models of sorbing tracers, and scaling issues in moving from laboratory scale to field scale. Phase 1A was designed to assess the influence of permeability contrast boundaries in the hydrologic Calico Hills. Visualization of fluorescein movement, mineback rock analyses, and comparison with numerical models demonstrated that F&T are capillary dominated with permeability contrast boundaries distorting the capillary flow. Phase 1B was designed to assess the influence of fractures on F&T and colloid movement. The injector in Phase 1B was located at a fracture, while the collector, 30 cm below, was placed at what was assumed to be the same fracture. Numerical simulations of nonreactive (Br) and reactive (Li) tracers show the experimental data are best explained by a combination of molecular diffusion and advective flux. For Phase 2, a numerical model with homogeneous unit descriptions was able to qualitatively capture the general characteristics of the system. Numerical simulations and field observations revealed a capillary dominated flow field. Although the tracers showed heterogeneity in the test block, simulation using heterogeneous fields did not significantly improve the data fit over homogeneous field simulations. In terms of scaling, simulations of field tracer data indicate a hydraulic conductivity two orders of magnitude higher than measured in the laboratory. Simulations of Li, a weakly sorbing tracer
NASA Astrophysics Data System (ADS)
Wang, Shi-tai; Peng, Jun-huan
2015-12-01
The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are large, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias is canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, where the positioning accuracy can reach the cm level.
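The distinction between RMS and STD in the presence of a constant hardware bias, and its cancellation by between-satellite differencing, can be illustrated with synthetic numbers (the bias and noise levels below are assumed for illustration, not taken from the paper):

```python
import math, random

def rms(xs):
    # Root mean square: absorbs any constant bias (rms^2 = std^2 + mean^2)
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def std(xs):
    # Standard deviation about the mean: blind to a constant bias
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

random.seed(1)
bias = 0.5   # assumed constant receiver hardware delay bias (metres)
err_sat1 = [bias + random.gauss(0.0, 0.05) for _ in range(1000)]
err_sat2 = [bias + random.gauss(0.0, 0.05) for _ in range(1000)]
# Between-satellite differencing cancels the common receiver bias:
diff = [a - b for a, b in zip(err_sat1, err_sat2)]
```

With a common bias present the RMS is dominated by the bias while the STD stays small, mirroring the behaviour reported above; after differencing, the RMS of the residual reflects only the noise.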
Numerical Results of Earth's Core Accumulation 3-D Modelling
NASA Astrophysics Data System (ADS)
Khachay, Yurie; Anfilogov, Vsevolod
2013-04-01
For a long time the most widely accepted model was the mega-impact model, in which the early formation of the Earth's core and mantle was the consequence of a collision between the forming protoplanet and a body of Mercury's mass. But all dynamical models of the Earth's accumulation, and estimates from the Pb-Pb system, lead to the conclusion that the accumulation of the planet lasted about 1 billion years, whereas isotopic results from the W-Hf system testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,3] it is shown that the energy dissipated by the decay of short-lived radioactive elements, above all 26Al, is sufficient to heat even small bodies with dimensions of about (50-100) km up to the iron melting temperature, so that a principally new differentiation mechanism can be realized. The melted inner parts of the preplanetary bodies, which are mainly of iron composition, can merge, while the cold silicate fragments return to the supply zone. Only after the gravitational radius has increased can the growing region of the future core also retain the silicate envelope fragments. All existing dynamical accumulation models are constructed using a spherically symmetric model. Hence, for understanding the further evolution of the planet, it is important to trace the origin and evolution of the heterogeneities that arise during the accumulation stage. In this paper we model the distributions of temperature, pressure and matter flow velocity in a block of a 3D spherical body with a growing radius. The boundary problem is solved by the finite-difference method for the system of equations describing the accumulation process: the Safronov equation, the momentum balance (Navier-Stokes) equations, the equation for the above-lithostatic pressure, and the heat conduction equation, in velocity-pressure variables using the Boussinesq approximation. The numerical algorithm of the problem solution in
Numerical calculations of high-altitude differential charging: Preliminary results
NASA Technical Reports Server (NTRS)
Laframboise, J. G.; Godard, R.; Prokopenko, S. M. L.
1979-01-01
A two-dimensional simulation program was constructed in order to obtain theoretical predictions of floating potential distributions on geostationary spacecraft. The geometry was infinite-cylindrical with angle dependence. Effects of finite spacecraft length on sheath potential profiles can be included in an approximate way. The program can treat either steady-state conditions or slowly time-varying situations involving external time scales much larger than particle transit times. Approximate, locally dependent expressions were used to provide space-charge density profiles, but numerical orbit-following was used to calculate surface currents. Ambient velocity distributions were assumed to be isotropic, beam-like, or some superposition of these.
Numerical computation of the effective-one-body potential q using self-force results
NASA Astrophysics Data System (ADS)
Akcay, Sarp; van de Meent, Maarten
2016-03-01
The effective-one-body (EOB) theory describes the conservative dynamics of compact binary systems in terms of an effective Hamiltonian approach. The Hamiltonian for moderately eccentric motion of two nonspinning compact objects in the extreme mass-ratio limit is given in terms of three potentials: a(v), d̄(v), q(v). By generalizing the first law of mechanics for (nonspinning) black hole binaries to eccentric orbits, [A. Le Tiec, Phys. Rev. D 92, 084021 (2015)] recently obtained new expressions for d̄(v) and q(v) in terms of quantities that can be readily computed using the gravitational self-force approach. Using these expressions we present a new computation of the EOB potential q(v) by combining results from two independent numerical self-force codes. We determine q(v) for inverse binary separations in the range 1/1200 ≤ v ≲ 1/6. Our computation thus provides the first-ever strong-field results for q(v). We also obtain d̄(v) in our entire domain to a fractional accuracy of about 10⁻⁸. We find that our results are compatible with the known post-Newtonian expansions for d̄(v) and q(v) in the weak field, and agree with previous (less accurate) numerical results for d̄(v) in the strong field.
NASA Astrophysics Data System (ADS)
Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian
2016-06-01
Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modeling 410/660 km discontinuity and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of water column approximation on amplitude and phase shift of the PP waves. We also study the effects of water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error of PP amplitude and phase shift is less than 5% and 9 ° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate up to 10% in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1% at periods greater than 30 s in most oceanic regions, but the error is up to 2% for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and it needs to be improved at shorter periods.
Spurious frequencies as a result of numerical boundary treatments
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Gottlieb, David
1990-01-01
The stability theory for finite difference initial boundary-value approximations to systems of hyperbolic partial differential equations states that the exclusion of eigenvalues and generalized eigenvalues is a sufficient condition for stability. The theory, however, does not discuss the nature of numerical approximations in the presence of such eigenvalues. In fact, as was shown previously, for the problem of vortex shedding by a 2-D cylinder in subsonic flow, stating boundary conditions in terms of the primitive (non-characteristic) variables may lead to such eigenvalues, causing perturbations that decay slowly in space and remain periodic in time. Characteristic formulation of the boundary conditions avoided this problem. A more systematic study of the behavior of the (linearized) one-dimensional gas dynamics equations under various sets of admissible but oscillation-inducing boundary conditions is reported.
Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J
2015-06-15
Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. PMID:25800943
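The tailoring of post-test probabilities to a new population's prevalence can be sketched with Bayes' theorem. The sensitivity, specificity and prevalence values below are hypothetical, chosen only to illustrate the calculation, not taken from the clinical examples in the abstract:

```python
# Minimal sketch: deriving post-test probabilities (PPV, NPV) from a test's
# sensitivity and specificity, tailored to the prevalence of a new
# population via Bayes' theorem. All input numbers are hypothetical.

def post_test_probabilities(sens, spec, prev):
    """Return (PPV, NPV) for given sensitivity, specificity, prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

ppv, npv = post_test_probabilities(sens=0.90, spec=0.80, prev=0.10)
# With these inputs the PPV is modest (~0.33) while the NPV is high (~0.99),
# illustrating how strongly prevalence drives the post-test probabilities.
```

The same function evaluated at a different prevalence shows why a summary PPV or NPV from a meta-analysis may not transport to a new population.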
Equations of state of freely jointed hard-sphere chain fluids: Numerical results
Stell, G.; Lin, C.; Kalyuzhnyi, Y.V.
1999-03-01
We continue our series of studies in which the equations of state (EOS) are derived based on the product-reactant Ornstein–Zernike approach (PROZA) and first-order thermodynamic perturbation theory (TPT1). These include two compressibility EOS, two virial EOS, and one TPT1 EOS (TPT1-D) that uses the structural information of the dimer fluid as input. In this study, we carry out the numerical implementation of these five EOS and compare their numerical results, as well as those obtained from Attard's EOS and the GF-D (generalized Flory-dimer) EOS, with computer simulation results for the corresponding chain models over a wide range of densities and chain lengths. The comparison shows that our compressibility EOS, GF-D, and TPT1-D are in quantitative agreement with simulation results, and TPT1-D is the best among the various EOS according to its average absolute deviation (AAD). On the basis of a comparison of limited data, our virial EOS appears to be superior to the predictions of Attard's approximate virial EOS and the approximate virial EOS derived by Schweizer and Curro in the context of the PRISM approach; all of them are only qualitatively accurate. The degree of accuracy predicted by our compressibility EOS is comparable to that of the GF-D EOS, and both of them overestimate the compressibility factor at low densities and underestimate it at high densities. The compressibility factor of a polydisperse homonuclear chain system is also investigated in this work via our compressibility EOS; the numerical results are identical to those of a monodisperse system with the same chain length. © 1999 American Institute of Physics.
Accuracy of relative positioning by interferometry with GPS Double-blind test results
NASA Technical Reports Server (NTRS)
Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.
1983-01-01
MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Counselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.
Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results
Kujawska, Tamara; Wojcik, Janusz; Nowicki, Andrzej
2010-03-09
The agreement between the theoretical and measured results for all cases considered verified the validity and accuracy of our numerical model. Quantitative analysis of the obtained results showed how the ultrasound-induced temperature rise in the rat liver can be controlled by adjusting the source parameters and exposure time.
Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.
Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa
2015-09-01
Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purpose of this study was to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized, in which A represented baseline, B and C consisted of either function-based or non-function-based interventions counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children. PMID:26069219
Oussalah, Abderrahim; Ferrand, Janina; Filhine-Tresarrieu, Pierre; Aissa, Nejla; Aimone-Gastin, Isabelle; Namour, Fares; Garcia, Matthieu; Lozniewski, Alain; Guéant, Jean-Louis
2015-01-01
Abstract Previous studies have suggested that procalcitonin is a reliable marker for predicting bacteremia. However, these studies have had relatively small sample sizes or focused on a single clinical entity. The primary endpoint of this study was to investigate the diagnostic accuracy of procalcitonin for predicting or excluding clinically relevant pathogen categories in patients with suspected bloodstream infections. The secondary endpoint was to look for organisms significantly associated with internationally validated procalcitonin intervals. We performed a cross-sectional study that included 35,343 consecutive patients who underwent concomitant procalcitonin assays and blood cultures for suspected bloodstream infections. Biochemical and microbiological data were systematically collected in an electronic database and extracted for purposes of this study. Depending on blood culture results, patients were classified into 1 of the 5 following groups: negative blood culture, Gram-positive bacteremia, Gram-negative bacteremia, fungi, and potential contaminants found in blood cultures (PCBCs). The highest procalcitonin concentration was observed in patients with blood cultures growing Gram-negative bacteria (median 2.2 ng/mL [IQR 0.6–12.2]), and the lowest procalcitonin concentration was observed in patients with negative blood cultures (median 0.3 ng/mL [IQR 0.1–1.1]). With optimal thresholds ranging from ≤0.4 to ≤0.75 ng/mL, procalcitonin had a high diagnostic accuracy for excluding all pathogen categories with the following negative predictive values: Gram-negative bacteria (98.9%) (including enterobacteria [99.2%], nonfermenting Gram-negative bacilli [99.7%], and anaerobic bacteria [99.9%]), Gram-positive bacteria (98.4%), and fungi (99.6%). A procalcitonin concentration ≥10 ng/mL was associated with a high risk of Gram-negative (odds ratio 5.98; 95% CI, 5.20–6.88) or Gram-positive (odds ratio 3.64; 95% CI, 3.11–4.26) bacteremia but
Mapping soil texture classes and optimization of the result by accuracy assessment
NASA Astrophysics Data System (ADS)
Laborczi, Annamária; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Pásztor, László
2014-05-01
There are increasing demands nowadays for spatial soil information in order to support environment-related and land use management decisions. The GlobalSoilMap.net (GSM) project aims to make a new digital soil map of the world using state-of-the-art and emerging technologies for soil mapping and predicting soil properties at fine resolution. Sand, silt and clay are among the mandatory GSM soil properties. Furthermore, soil texture class information is input data for significant agro-meteorological and hydrological models. Our present work aims to compare and evaluate different digital soil mapping methods and variables for producing the most accurate spatial prediction of texture classes in Hungary. In addition to the Hungarian Soil Information and Monitoring System as our basic data, a digital elevation model and its derived components, a geological database, and physical property maps of the Digital Kreybig Soil Information System have been applied as auxiliary elements. Two approaches have been applied for the mapping process. First, the sand, silt and clay rasters were computed independently using regression kriging (RK). From these rasters, according to the USDA categories, we compiled the texture class map. Different combinations of reference and training soil data and auxiliary covariables resulted in several different maps. However, these results necessarily include the uncertainty of the three kriged rasters. Therefore we applied data mining methods as the other approach to digital soil mapping. By building classification trees and random forests we obtained the texture class maps directly. In this way the various results can be compared to the RK maps. The performance of the different methods and data has been examined by testing the accuracy of the geostatistically computed and the directly classified results. We have used the GSM methodology to assess the most predictive and accurate way for getting the best among the
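The step of deriving a texture class from kriged sand/silt/clay rasters can be sketched as a per-pixel lookup in the USDA texture triangle. The sketch below encodes only three of the twelve USDA classes, with their standard boundary conditions; a production classifier would encode the full triangle:

```python
# Simplified sketch: assign a USDA soil texture class from sand/silt/clay
# percentages (which must sum to 100). Only the "sand", "silt" and "clay"
# corner classes are encoded here; everything else falls through to a
# placeholder. Boundaries follow the standard USDA definitions for these
# three classes.

def texture_class(sand, silt, clay):
    assert abs(sand + silt + clay - 100.0) < 1e-6, "fractions must sum to 100"
    if clay >= 40 and sand <= 45 and silt < 40:
        return "clay"
    if silt >= 80 and clay < 12:
        return "silt"
    if sand >= 85 and silt + 1.5 * clay < 15:
        return "sand"
    return "other (remaining USDA classes not encoded in this sketch)"

# Applied pixel-by-pixel to the three kriged rasters, this yields the
# texture class map described in the abstract.
```

In the study's first approach this classification inherits the uncertainty of the three kriged inputs, which is why the direct classification-tree approach was tried as an alternative.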
Selle, L.; Ferret, B.; Poinsot, T.
2011-01-15
Measuring the velocities of premixed laminar flames with precision remains a controversial issue in the combustion community. This paper studies the accuracy of such measurements in two-dimensional slot burners and shows that while methane/air flame speeds can be measured with reasonable accuracy, the method may lack precision for other mixtures such as hydrogen/air. Curvature at the flame tip, strain on the flame sides and local quenching at the flame base can modify local flame speeds and require corrections which are studied using two-dimensional DNS. Numerical simulations also provide stretch, displacement and consumption flame speeds along the flame front. For methane/air flames, DNS show that the local stretch remains small so that the local consumption speed is very close to the unstretched premixed flame speed. The only correction needed to correctly predict flame speeds in this case is due to the finite aspect ratio of the slot used to inject the premixed gases which induces a flow acceleration in the measurement region (this correction can be evaluated from velocity measurement in the slot section or from an analytical solution). The method is applied to methane/air flames with and without water addition and results are compared to experimental data found in the literature. The paper then discusses the limitations of the slot-burner method to measure flame speeds for other mixtures and shows that it is not well adapted to mixtures with a Lewis number far from unity, such as hydrogen/air flames.
Castro, A. P. G.; Paul, C. P. L.; Detiger, S. E. L.; Smit, T. H.; van Royen, B. J.; Pimenta Claro, J. C.; Mullender, M. G.; Alves, J. L.
2014-01-01
The loaded disk culture system is an intervertebral disk (IVD)-oriented bioreactor developed by the VU Medical Center (VUmc, Amsterdam, The Netherlands), which has the capacity of maintaining up to 12 IVDs in culture for approximately 3 weeks after extraction. Using this system, eight goat IVDs were provided with the essential nutrients and submitted to compression tests without losing their biomechanical and physiological properties, for 22 days. Based on previous reports (Paul et al., 2012, 2013; Detiger et al., 2013), four of these IVDs were kept in physiological condition (control) and the other four were previously injected with chondroitinase ABC (CABC), in order to promote degenerative disk disease (DDD). The loading profile intercalated 16 h of activity loading with 8 h of loading recovery to express the standard circadian variations. The displacement behavior of these eight IVDs along the first 2 days of the experiment was numerically reproduced, using an IVD osmo-poro-hyper-viscoelastic and fiber-reinforced finite element (FE) model. The simulations were run on a custom FE solver (Castro et al., 2014). The analysis of the experimental results led to the conclusion that the effect of the CABC injection was significant in only two of the four IVDs. The four control IVDs showed no signs of degeneration, as expected. As for the numerical simulations, the IVD FE model was able to reproduce the generic behavior of the two groups of goat IVDs (control and injected). However, some discrepancies were still noticed in the comparison between the injected IVDs and the numerical simulations, namely in the recovery periods. This may be explained by the complexity of the pathways for DDD, associated with the multiplicity of physiological responses to each direct or indirect stimulus. Nevertheless, one can conclude that ligaments, muscles, and IVD covering membranes could be added to the FE model, in order to improve its accuracy and properly
NASA Astrophysics Data System (ADS)
Ingalls, James G.; Krick, Jessica E.; Carey, Sean J.; Stauffer, John R.; Grillmair, Carl J.; Lowrance, Patrick
2016-06-01
We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. At infrared wavelengths, secondary eclipses and phase curves are powerful tools for studying a planet's atmosphere. Extracting information about atmospheres, however, is extremely challenging due to the small differential signals, which are often at the level of 100 parts per million (ppm) or smaller, and require the removal of significant instrumental systematics. For the IRAC 3.6 and 4.5 μm InSb detectors that remain active on post-cryogenic Spitzer, the interplay of residual telescope pointing fluctuations with intrapixel gain variations in the moderately undersampled camera is the largest source of time-correlated noise. Over the past decade, a suite of techniques for removing this noise from IRAC data has been developed independently by various investigators. In summer 2015, the Spitzer Science Center hosted a Data Challenge in which seven exoplanet expert teams, each using a different noise-removal method, were invited to analyze 10 eclipse measurements of the hot Jupiter XO-3 b, as well as a complementary set of 10 simulated measurements. In this contribution we review the results of the Challenge. We describe statistical tools to assess the repeatability, reliability, and validity of data reduction techniques, and to compare and (perhaps) choose between techniques.
Sediment Pathways Across Trench Slopes: Results From Numerical Modeling
NASA Astrophysics Data System (ADS)
Cormier, M. H.; Seeber, L.; McHugh, C. M.; Fujiwara, T.; Kanamatsu, T.; King, J. W.
2015-12-01
Until the 2011 Mw9.0 Tohoku earthquake, the role of earthquakes as agents of sediment dispersal and deposition at erosional trenches was largely under-appreciated. A series of cruises carried out after the 2011 event has revealed a variety of unsuspected sediment transport mechanisms, such as tsunami-triggered sheet turbidites, suggesting that great earthquakes may in fact be important agents for dispersing sediments across trench slopes. To complement these observational data, we have modeled the pathways of sediments across the trench slope based on bathymetric grids. Our approach assumes that transport direction is controlled by slope azimuth only, and ignores obstacles smaller than 0.6-1 km; these constraints are meant to approximate the behavior of turbidites. Results indicate that (1) most pathways issued from the upper slope terminate near the top of the small frontal wedge, and thus do not reach the trench axis; (2) in turn, sediments transported to the trench axis are likely derived from the small frontal wedge or from the subducting Pacific plate. These results are consistent with the stratigraphy imaged in seismic profiles, which reveals that the slope apron does not extend as far as the frontal wedge, and that the thickness of sediments at the trench axis is similar to that of the incoming Pacific plate. We further applied this modeling technique to the Cascadia, Nankai, Middle-America, and Sumatra trenches. Where well-defined canyons carve the trench slopes, sediments from the upper slope may routinely reach the trench axis (e.g., off Costa Rica and Cascadia). In turn, slope basins that are isolated from the canyons drainage systems must mainly accumulate locally-derived sediments. Therefore, their turbiditic infill may be diagnostic of seismic activity only - and not from storm or flood activity. If correct, this would make isolated slope basins ideal targets for paleoseismological investigation.
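The pathway modeling described above, where transport direction is controlled by slope azimuth alone, amounts to a steepest-descent walk over a bathymetric grid. The following sketch shows the core idea on a tiny synthetic grid (the actual study also smooths over sub-kilometer obstacles, which is omitted here):

```python
# Sketch of steepest-descent sediment pathway tracing: from a starting
# cell, repeatedly step to the lowest of the 8 neighbors until no neighbor
# is lower (a local minimum, e.g. an isolated slope basin or the trench
# axis). The grid values below are a synthetic elevation field, not real
# bathymetry.

def steepest_descent_path(elev, start):
    rows, cols = len(elev), len(elev[0])
    path = [start]
    r, c = start
    while True:
        neighbors = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < rows and 0 <= c + dc < cols]
        nr, nc = min(neighbors, key=lambda p: elev[p[0]][p[1]])
        if elev[nr][nc] >= elev[r][c]:
            return path  # local minimum reached: pathway terminates here
        r, c = nr, nc
        path.append((r, c))

grid = [
    [9, 8, 7, 6],
    [8, 5, 4, 5],
    [7, 4, 1, 3],
    [6, 5, 2, 2],
]
path = steepest_descent_path(grid, (0, 0))  # descends toward the pit at (2, 2)
```

Running many such paths from cells on the upper slope and recording where they terminate is what distinguishes pathways that stall at the frontal wedge from those that reach the trench axis.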
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2016-09-01
A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. PMID:27352918
Speed and Accuracy of Absolute Pitch Judgments: Some Latter-Day Results.
ERIC Educational Resources Information Center
Carroll, John B.
Nine subjects, 5 of whom claimed absolute pitch (AP) ability were instructed to rapidly strike notes on the piano to match randomized tape-recorded piano notes. Stimulus set sizes were 64, 16, or 4 consecutive semitones, or 7 diatonic notes of a designated octave. A control task involved motor movements to notes announced in advance. Accuracy,…
Stacey, Peter; Revell, Graham; Tylee, Barry
2002-11-01
Gravimetric analysis is a fundamental technique frequently used in occupational hygiene assessments, but few studies have investigated its repeatability and reproducibility. Four inter-laboratory comparisons are discussed in this paper. The first involved 32 laboratories weighing 25 mm diameter glassfibre filters, the second involved 11 laboratories weighing 25 mm diameter PVC filters and the third involved eight laboratories weighing plastic IOM heads with 25 mm diameter glassfibre filters. Data from the third study found that measurements using this type of IOM head were unreliable. A fourth study, to ascertain whether laboratories could improve their performance, involved a selected sub-group of 10 laboratories from the first exercise that analysed the 25 mm diameter glassfibre filters. The studies tested the analytical measurement process and not just the variation in weighings obtained on blank filters, as previous studies have done. Graphs of data from the first and second exercises suggest that a power-curve relationship exists between reproducibility and loading, and between repeatability and loading. The relationship for reproducibility in the first study followed the equation log s(R) = -0.62 log m + 0.86 and in the second study log s(R) = -0.64 log m + 0.57, where s(R) is the reproducibility in terms of per cent relative standard deviation (%RSD) and m is the weight of loading in milligrams. The equation for glassfibre filters from the first exercise suggests that at a loading of 0.4 mg (about a tenth of the United Kingdom legislative definition of a hazardous substance for a respirable dust for an 8 h sample), the measurement reproducibility is more than ±25% (2σ). The results from PVC filters had better repeatability estimates than the glassfibre filters, but overall they had similar estimates of reproducibility. An improvement in both the reproducibility and repeatability for glassfibre filters was observed in the fourth study. This improvement reduced
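The fitted power-curve relation from the first exercise can be evaluated directly; doing so at a loading of 0.4 mg reproduces the abstract's claim that the 2σ reproducibility exceeds ±25%:

```python
# Evaluating the fitted relation from the first inter-laboratory exercise,
# log10 s_R = -0.62 * log10(m) + 0.86, where s_R is the reproducibility in
# %RSD and m is the loading in milligrams.
import math

def reproducibility_rsd(m, slope=-0.62, intercept=0.86):
    """Reproducibility s_R (%RSD) for a loading m (mg), per the fitted curve."""
    return 10 ** (slope * math.log10(m) + intercept)

s_r = reproducibility_rsd(0.4)   # about 12.8 %RSD at m = 0.4 mg
two_sigma = 2 * s_r              # about 25.6%, i.e. more than +/-25% at 2-sigma
```

The same function with slope -0.64 and intercept 0.57 evaluates the second exercise's fit for PVC filters.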
NASA Astrophysics Data System (ADS)
Motheau, E.; Abraham, J.
2016-05-01
A novel and efficient algorithm is presented in this paper to deal with DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine-precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth order to sixth order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
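The constant-coefficient spectral Poisson solve at the heart of such algorithms reduces, in its simplest form, to a division by -k^2 in Fourier space. A hedged 1-D sketch (not the HOLOMAC implementation, which is 3-D, pencil-decomposed, and wrapped inside a pressure-correction iteration):

```python
import numpy as np

def poisson_fft_periodic(f, length=2 * np.pi):
    """Solve u'' = f with periodic boundary conditions spectrally:
    -k^2 u_hat = f_hat, hence u_hat = -f_hat / k^2 (zero-mean solution)."""
    n = f.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # wavenumbers
    f_hat = np.fft.fft(f)
    u_hat = np.zeros_like(f_hat)
    nonzero = k != 0
    u_hat[nonzero] = -f_hat[nonzero] / k[nonzero] ** 2
    return np.fft.ifft(u_hat).real

# manufactured solution: u = sin(x) satisfies u'' = -sin(x)
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = poisson_fft_periodic(-np.sin(x))
```

For the variable-coefficient Poisson equation of the low-Mach formulation, such a constant-coefficient solver is typically applied repeatedly inside an outer iteration, which is what the paper's enhanced pressure-correction method accelerates.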
Numerical prediction of freezing fronts in cryosurgery: comparison with experimental results.
Fortin, André; Belhamadia, Youssef
2005-08-01
Recent developments in scientific computing now make it possible to consider realistic applications of numerical modelling in medicine. In this work, a numerical method is presented for the simulation of the phase change occurring in cryosurgery applications. The ultimate goal of these simulations is to accurately predict the freezing front position and the thermal history inside the ice ball, which is essential to determine whether cancerous cells have been completely destroyed. A semi-phase-field formulation including blood flow considerations is employed for the simulations. Numerical results are enhanced by the introduction of an anisotropic remeshing strategy. The numerical procedure is validated by comparing the predictions of the model with experimental results. PMID:16298846
NASA Astrophysics Data System (ADS)
Aleksandrova, A. G.; Galushina, T. Yu.
2015-12-01
The paper describes a software package developed for the numerical simulation of the breakup of natural and artificial objects, and the algorithms on which it is based. The new software, "Numerical model of breakups", includes models of spacecraft (SC) breakup resulting from explosion and from collision, as well as two models of the explosion of an asteroid.
Scholl, M.A.
2000-01-01
Numerical simulations were used to examine the effects of heterogeneity in hydraulic conductivity (K) and intrinsic biodegradation rate on the accuracy of contaminant plume-scale biodegradation rates obtained from field data. The simulations were based on steady-state plume-scale biodegradation of a BTEX contaminant under sulfate-reducing conditions, with the electron acceptor in excess. Biomass was either uniform or correlated with K to model spatially variable intrinsic biodegradation rates. A hydraulic conductivity data set from an alluvial aquifer was used to generate three sets of 10 realizations with different degrees of heterogeneity, and contaminant transport with biodegradation was simulated with BIOMOC. Biodegradation rates were calculated from the steady-state contaminant plumes using decreases in concentration with distance downgradient and a single flow velocity estimate, as is commonly done in site characterization to support the interpretation of natural attenuation. The observed rates were found to underestimate the actual rate specified in the heterogeneous model in all cases. The discrepancy between the observed rate and the 'true' rate depended on the ground-water flow velocity estimate, and increased with increasing heterogeneity in the aquifer. For a lognormal K distribution with a variance of 0.46, the estimate was no more than a factor of 1.4 slower than the true rate. For an aquifer with 20% silt/clay lenses, the rate estimate was as much as nine times slower than the true rate. Homogeneous-permeability, uniform-degradation-rate simulations were used to generate predictions of remediation time with the rates estimated from the heterogeneous models. The homogeneous models generally overestimated the extent of remediation or underestimated remediation time, due to delayed degradation of contaminants in the low-K areas. Results suggest that aquifer characterization for natural attenuation at contaminated sites should include assessment of the presence
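The plume-scale rate estimate described here is the usual first-order fit of concentration decline along the flow path; a schematic version (function name and the sample numbers are hypothetical, not from the study):

```python
import math

def first_order_rate(c_up, c_down, distance_m, velocity_m_per_d):
    """First-order biodegradation rate (1/day) from a downgradient
    concentration decrease and a single flow-velocity estimate,
    assuming C(x) = C0 * exp(-lam * x / v)."""
    return velocity_m_per_d * math.log(c_up / c_down) / distance_m

# hypothetical: 10 mg/L declining to 2 mg/L over 100 m at 0.1 m/day
lam = first_order_rate(10.0, 2.0, 100.0, 0.1)
```

As the abstract notes, any error in the single velocity estimate propagates directly into lam, which is one reason heterogeneous aquifers yield biased rate estimates.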
NASA Technical Reports Server (NTRS)
Smutek, C.; Bontoux, P.; Roux, B.; Schiroky, G. H.; Hurford, A. C.
1985-01-01
The results of a three-dimensional numerical simulation of Boussinesq free convection in a horizontal differentially heated cylinder are presented. The computation was based on a Samarskii-Andreyev scheme (described by Leong, 1981) and a false-transient advancement in time, with vorticity, velocity, and temperature as dependent variables. Solutions for velocity and temperature distributions were obtained for Rayleigh numbers (based on the radius) Ra = 74-18,700, thus covering the core- and boundary-layer-driven regimes. Numerical solutions are compared with asymptotic analytical solutions and experimental data. The numerical results represent well the complex three-dimensional flows found experimentally.
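The governing parameter swept in the study is the radius-based Rayleigh number; a small sketch (the symbol choices and the sample property values are assumptions, picked only to land inside the abstract's range):

```python
def rayleigh_radius(g, beta, dT, R, nu, kappa):
    """Rayleigh number based on the cylinder radius:
    Ra = g * beta * dT * R^3 / (nu * kappa), with thermal expansion beta,
    kinematic viscosity nu, and thermal diffusivity kappa."""
    return g * beta * dT * R ** 3 / (nu * kappa)

# hypothetical fluid properties and a 1 K temperature difference
Ra = rayleigh_radius(g=9.81, beta=1.0e-3, dT=1.0, R=0.01, nu=1.0e-5, kappa=1.0e-7)
```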
Manzini, Gianmarco; Cangiani, Andrea; Sutton, Oliver
2014-10-02
This document presents the results of a set of preliminary numerical experiments using several possible conforming virtual element approximations of the convection-reaction-diffusion equation with variable coefficients.
Sprenger, Lisa; Lange, Adrian; Odenbach, Stefan
2014-02-15
Ferrofluids consist of magnetic nanoparticles dispersed in a carrier liquid. Their strong thermodiffusive behaviour, characterised by the Soret coefficient, coupled with the dependency of the fluid's parameters on magnetic fields is dealt with in this work. It is known from former experimental investigations on the one hand that the Soret coefficient itself is magnetic field dependent and on the other hand that the accuracy of the coefficient's experimental determination highly depends on the volume concentration of the fluid. The thermally driven separation of particles and carrier liquid is carried out with a concentrated ferrofluid (φ = 0.087) in a horizontal thermodiffusion cell and is compared to equally detected former measurement data. The temperature gradient (1 K/mm) is applied perpendicular to the separation layer. The magnetic field is either applied parallel or perpendicular to the temperature difference. For three different magnetic field strengths (40 kA/m, 100 kA/m, 320 kA/m) the diffusive separation is detected. It reveals a sign change of the Soret coefficient with rising field strength for both field directions which stands for a change in the direction of motion of the particles. This behaviour contradicts former experimental results with a dilute magnetic fluid, in which a change in the coefficient's sign could only be detected for the parallel setup. An anisotropic behaviour in the current data is measured referring to the intensity of the separation being more intense in the perpendicular position of the magnetic field: S_T‖ = −0.152 K⁻¹ and S_T⊥ = −0.257 K⁻¹ at H = 320 kA/m. The ferrofluiddynamics-theory (FFD-theory) describes the thermodiffusive processes thermodynamically and a numerical simulation of the fluid's separation depending on the two transport parameters ξ‖ and ξ⊥ used within the FFD-theory can be implemented. In the case of a parallel aligned magnetic field, the parameter can
A numerically efficient finite element hydroelastic analysis. Volume 1: Theory and results
NASA Technical Reports Server (NTRS)
Coppolino, R. N.
1976-01-01
Symmetric finite element matrix formulations for compressible and incompressible hydroelasticity are developed on the basis of Toupin's complementary formulation of classical mechanics. Results of implementation of the new technique in the NASTRAN structural analysis program are presented which demonstrate accuracy and efficiency.
NASA Astrophysics Data System (ADS)
Jayaprakash, Arvind; Mahalatkar, Karthikeya
2006-11-01
Standard two-equation turbulence models have been found to be incapable of predicting cavitating flow due to high compressibility in the vapor region. In order to predict the dynamics of vapor cloud shedding, Coutier-Delgosha (J. of Fluid Eng., 125, 2003) suggested a modification of the eddy viscosity for the k-epsilon turbulence model. Though the modification works in capturing the dynamic behavior of the cavitation sheet, accurate cavity lengths and shedding frequencies are not achieved over a wide range of cavitation numbers. This is due to the complex flow features present in cavitating flow and the inability of Coutier-Delgosha's turbulence modification to account for these factors. A tuning factor is introduced into the turbulence modification of Coutier-Delgosha, which can be adjusted for different types of geometries. This modified form is then tuned and tested on prediction of cavitating flow over several geometries, including a NACA 0015 hydrofoil, a convergent-divergent nozzle, and a wedge. Good comparisons for both cavity length and frequency of vapor cloud shedding were obtained over a wide range of cavitation numbers in all the geometries. The commercial CFD software Fluent has been used for this analysis. Comparisons of cavity length and vapor cloud shedding frequency as predicted by the present turbulence modification and those observed in experimental studies will be presented.
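The cavitation number swept in such studies is the standard nondimensional margin to vapor pressure; a minimal sketch (the flow values below are hypothetical, not from the paper):

```python
def cavitation_number(p_inf, p_vapor, rho, u_ref):
    """sigma = (p_inf - p_v) / (0.5 * rho * U^2): the lower sigma is, the more
    easily the local pressure drops below vapor pressure and cavities form."""
    return (p_inf - p_vapor) / (0.5 * rho * u_ref ** 2)

# hypothetical: water at ~20 C (p_v about 2.34 kPa), 1 atm ambient, 10 m/s
sigma = cavitation_number(101325.0, 2340.0, 998.0, 10.0)
```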
Comparison of results of experimental research with numerical calculations of a model one-sided seal
NASA Astrophysics Data System (ADS)
Joachimiak, Damian; Krzyślak, Piotr
2015-06-01
This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different wear levels. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculations, the size of the mesh, defined by the parameter y+, has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained during the measurements and calculated numerically for a model segment of the seal at different levels of wear.
Analysis of Factors Influencing Measurement Accuracy of Al Alloy Tensile Test Results
NASA Astrophysics Data System (ADS)
Podgornik, Bojan; Žužek, Borut; Sedlaček, Marko; Kevorkijan, Varužan; Hostej, Boris
2016-02-01
In order to properly use materials in design, a complete understanding of and information on their mechanical properties, such as yield and ultimate tensile strength, must be obtained. Furthermore, as the design of automotive parts is constantly pushed toward higher limits, excessive measurement uncertainty can lead to unexpected premature failure of a component, thus requiring reliable determination of material properties with low uncertainty. The aim of the present work was to evaluate the effect of different metrology factors, including the number of tested samples, specimen machining and surface quality, specimen input diameter, type of testing, and human error, on the tensile test results and measurement uncertainty when performed on a 2xxx series Al alloy. Results show that the most significant contribution to measurement uncertainty comes from the number of samples tested, which can even exceed 1%. Furthermore, moving from experimental laboratory conditions to a very intense industrial environment further amplifies measurement uncertainty, where human error cannot be neglected even if automated systems are used.
Height of burst explosions: a comparative study of numerical and experimental results
NASA Astrophysics Data System (ADS)
Omang, M.; Christensen, S. O.; Børve, S.; Trulsen, J.
2009-06-01
In the current work, we use the Constant Volume model and the numerical method, Regularized Smoothed Particle Hydrodynamics (RSPH) to study propagation and reflection of blast waves from detonations of the high explosives C-4 and TNT. The results from simulations of free-field TNT explosions are compared to previously published data, and good agreement is found. Measurements from height of burst tests performed by the Norwegian Defence Estates Agency are used to compare against numerical simulations. The results for shock time of arrival and the pressure levels are well represented by the numerical results. The results are also found to be in good agreement with results from a commercially available code. The effect of allowing different ratios of specific heat capacities in the explosive products are studied. We also evaluate the effect of changing the charge shape and height of burst on the triple point trajectory.
Gravity Probe B Data Analysis. Status and Potential for Improved Accuracy of Scientific Results
NASA Astrophysics Data System (ADS)
Everitt, C. W. F.; Adams, M.; Bencze, W.; Buchman, S.; Clarke, B.; Conklin, J. W.; Debra, D. B.; Dolphin, M.; Heifetz, M.; Hipkins, D.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lipa, J.; Lockhart, J. M.; Mester, J. C.; Muhlfelder, B.; Ohshima, Y.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Taber, M.; Turneaure, J. P.; Wang, S.; Worden, P. W.
2009-12-01
This is the first of five connected papers detailing progress on the Gravity Probe B (GP-B) Relativity Mission. GP-B, launched 20 April 2004, is a landmark physics experiment in space to test two fundamental predictions of Einstein’s general relativity theory, the geodetic and frame-dragging effects, by means of cryogenic gyroscopes in Earth orbit. Data collection began 28 August 2004 and science operations were completed 29 September 2005. The data analysis has proven deeper than expected as a result of two mutually reinforcing complications in gyroscope performance: (1) a changing polhode path affecting the calibration of the gyroscope scale factor C_g against the aberration of starlight and (2) two larger than expected manifestations of a Newtonian gyro torque due to patch potentials on the rotor and housing. In earlier papers, we reported two methods, ‘geometric’ and ‘algebraic’, for identifying and removing the first Newtonian effect (‘misalignment torque’), and also a preliminary method of treating the second (‘roll-polhode resonance torque’). Central to the progress in both torque modeling and C_g determination has been an extended effort on “Trapped Flux Mapping” commenced in November 2006. A turning point came in August 2008 when it became possible to include a detailed history of the resonance torques into the computation. The East-West (frame-dragging) effect is now plainly visible in the processed data. The current statistical uncertainty from an analysis of 155 days of data is 5.4 marc-s/yr (~14% of the predicted effect), though it must be emphasized that this is a preliminary result requiring rigorous investigation of systematics by methods discussed in the accompanying paper by Muhlfelder et al. A covariance analysis incorporating models of the patch effect torques indicates that a 3-5% determination of frame-dragging is possible with more complete, computationally intensive data analysis.
NASA Astrophysics Data System (ADS)
Decaulne, Armelle
2014-05-01
Lichenometry studies have been carried out in Iceland since 1970 all over the country, using various techniques to address a range of geomorphological issues, from moraine dating and glacial advances, outwash timing, proglacial river incision, soil erosion, rock-glacier development and climate variations, to debris-flow occurrence and extreme snow-avalanche frequency. Most users have sought to date proglacial landforms in two main areas: around the southern ice-caps of Vatnajökull and Myrdalsjökull, and in Tröllaskagi in northern Iceland. Based on the results of over thirty-five published studies, lichenometry is deemed a successful dating tool in Iceland, and seems to approach an absolute dating technique, at least over the last hundred years, under well-constrained environmental conditions at the local scale. With an increasing awareness of the methodological limitations of the technique, together with more sophisticated data treatments, predicted lichenometric 'ages' are supposedly gaining in robustness and precision. However, comparisons between regions, and even between studies in the same area, are hindered by the use of different measurement techniques and data processing. These issues are exacerbated in Iceland by rapid environmental changes across short distances and, more generally, by the common problems surrounding lichen species mis-identification in the field, not to mention the age discrepancies with other dating tools, such as tephrochronology. Some authors claim lichenometry can provide a precise reconstruction of landforms and geomorphic processes in Iceland, proposing yearly dating; others include error margins in their reconstructions; while some limit its use to identifying generations of landforms, declining to push the gathered data beyond their nature into further interpretation. Finally, can lichenometry be a relatively accurate dating technique, or rather an accurate relative dating tool, in Iceland?
NASA Astrophysics Data System (ADS)
Wojcik, J.; Powalowski, T.; Trawinski, Z.
2008-02-01
The aim of this paper is to compare the results of mathematical modeling with experimental results of ultrasonic wave scattering in an inhomogeneous dissipative medium. The research was carried out on an artery model (a pipe made of latex) with an internal diameter of 5 mm and a wall thickness of 1.25 mm. A numerical solver was created for calculating the fields of ultrasonic beams and scattered fields under different boundary conditions, different angles, and transversal displacements of the ultrasonic beams with respect to the position of the arterial wall. The investigations employed the VED ultrasonic apparatus. Good agreement between the numerical calculations and experimental results was obtained.
Numerical modeling of on-orbit propellant motion resulting from an impulsive acceleration
NASA Technical Reports Server (NTRS)
Aydelott, John C.; Mjolsness, Raymond C.; Torrey, Martin D.; Hochstein, John I.
1987-01-01
In-space docking and separation maneuvers of spacecraft that have large fluid mass fractions may cause undesirable spacecraft motion in response to the impulsive-acceleration-induced fluid motion. An example of this potential low gravity fluid management problem arose during the development of the shuttle/Centaur vehicle. Experimentally verified numerical modeling techniques were developed to establish the propellant dynamics, and subsequent vehicle motion, associated with the separation of the Centaur vehicle from the shuttle orbiter cargo bay. Although the shuttle/Centaur development activity was suspended, the numerical modeling techniques are available to predict on-orbit liquid motion resulting from impulsive accelerations for other missions and spacecraft.
Numerical Studies of Magnetohydrodynamic Activity Resulting from Inductive Transients Final Report
Sovinec, Carl R.
2005-08-29
This report describes results from numerical studies of transients in magnetically confined plasmas. The work has been performed by University of Wisconsin graduate students James Reynolds and Giovanni Cone and by the Principal Investigator through support from contract DE-FG02-02ER54687, a Junior Faculty in Plasma Science award from the DOE Office of Science. Results from the computations have added significantly to our knowledge of magnetized plasma relaxation in the reversed-field pinch (RFP) and spheromak. In particular, they have distinguished relaxation activity expected in sustained configurations from transient effects that can persist over a significant fraction of the plasma discharge. We have also developed the numerical capability for studying electrostatic current injection in the spherical torus (ST). These configurations are being investigated as plasma confinement schemes in the international effort to achieve controlled thermonuclear fusion for environmentally benign energy production. Our numerical computations have been performed with the NIMROD code (http://nimrodteam.org) using local computing resources and massively parallel computing hardware at the National Energy Research Scientific Computing Center. Direct comparisons of simulation results for the spheromak with laboratory measurements verify the effectiveness of our numerical approach. The comparisons have been published in refereed journal articles by this group and by collaborators at Lawrence Livermore National Laboratory (see Section 4). In addition to the technical products, this grant has supported the graduate education of the two participating students for three years.
Trescott, Peter C.; Pinder, George Francis; Larson, S.P.
1976-01-01
The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, a program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition, the model includes options for reading input data from a disk and writing intermediate results on a disk.
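The iterative solvers compared in the documentation all attack the same finite-difference system; a deliberately simplified point-SOR sketch for the homogeneous steady-state case (a schematic stand-in, not the model's code, which supports heterogeneity, storage, and the SIP/ADI/LSOR solvers):

```python
import numpy as np

def solve_heads_sor(h, fixed, omega=1.7, tol=1e-8, max_iter=20000):
    """Point successive overrelaxation for the 2-D steady-state Laplace
    equation on heads. h: array with boundary heads preset (boundary cells
    are never updated); fixed: mask of interior prescribed-head cells."""
    for _ in range(max_iter):
        max_dh = 0.0
        for i in range(1, h.shape[0] - 1):
            for j in range(1, h.shape[1] - 1):
                if fixed[i, j]:
                    continue
                residual = (h[i-1, j] + h[i+1, j] + h[i, j-1] + h[i, j+1]) / 4.0 - h[i, j]
                h[i, j] += omega * residual
                max_dh = max(max_dh, abs(omega * residual))
        if max_dh < tol:
            break
    return h

# a head field linear in x between two constant-head boundaries is recovered
n = 11
h_true = np.tile(np.linspace(0.0, 10.0, n), (n, 1))
h = np.zeros((n, n))
h[0, :], h[-1, :] = h_true[0, :], h_true[-1, :]
h[:, 0], h[:, -1] = h_true[:, 0], h_true[:, -1]
h = solve_heads_sor(h, np.zeros((n, n), bool))
```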
Kang, In-Woong; Beom, In-Gyu; Cho, Ji-Yeon
2016-01-01
Background The Korean Mini-Mental Status Examination (K-MMSE) is a dementia-screening test that can be easily applied in both community and clinical settings. However, in 20% to 30% of cases, the K-MMSE produces a false-negative response. This suggests that it is necessary to evaluate the accuracy of the K-MMSE as a screening test for dementia, which can be achieved through comparison of K-MMSE and Seoul Neuropsychological Screening Battery (SNSB)-II results. Methods The study included 713 subjects (534 male, 179 female; mean age, 69.3±6.9 years). All subjects were assessed using the K-MMSE and SNSB-II tests, the results of which were classified as normal or abnormal using the 15th-percentile standard. Results The sensitivity of the K-MMSE was 48.7%, with a specificity of 89.9%. The incidence of false-positive and false-negative results totaled 10.1% and 51.2%, respectively. In addition, the positive predictive value of the K-MMSE was 87.1%, while the negative predictive value was 55.6%. The false-negative group showed cognitive impairments in the domains of memory and executive function. In the false-positive group, subjects demonstrated reduced performance in memory recall, time orientation, attention, and calculation on K-MMSE items. Conclusion The results obtained in the study suggest that cognitive function might still be impaired even if an individual obtains a normal score on the K-MMSE. If the K-MMSE is combined with tests of memory or executive function, the accuracy of dementia diagnosis could be greatly improved. PMID:27274389
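The reported accuracy figures are the standard confusion-matrix quantities; a small sketch using a hypothetical 2x2 table chosen only to roughly reproduce the reported rates (the counts are not the study's data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values of a screening test
    from true/false positive/negative counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# hypothetical: 100 impaired and 100 cognitively normal subjects
m = screening_metrics(tp=49, fp=10, fn=51, tn=90)
# sensitivity 0.49 and specificity 0.90, close to the reported 48.7% and 89.9%
```

Note that the study's predictive values (87.1% and 55.6%) also depend on the prevalence of impairment in its sample, which the hypothetical counts above do not reproduce.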
Ambrus, Árpád; Buczkó, Judit; Hamow, Kamirán Á; Juhász, Viktor; Solymosné Majzik, Etelka; Szemánné Dobrik, Henriett; Szitás, Róbert
2016-08-10
Significant reductions in the concentrations of some pesticide residues, and substantial increases in the uncertainty of the results, arising from the homogenization of sample materials were reported in the scientific literature long ago. Nevertheless, the performance of methods is frequently evaluated on the basis of recovery tests alone, which exclude sample processing. We studied the effect of sample processing on the accuracy and uncertainty of the measured residue values with lettuce, tomato, and maize grain samples, applying mixtures of selected pesticides. The results indicate that the method is simple, robust, and applicable in any pesticide residue laboratory. The analytes remaining in the final extract are influenced by their physical-chemical properties, the nature of the sample material, the temperature of comminution of the sample, and the mass of the test portion extracted. Consequently, validation protocols should include testing the effect of sample processing, and the performance of the complete method should be regularly checked within internal quality control. PMID:26755282
Dragna, Didier; Blanc-Benon, Philippe; Poisson, Franck
2014-03-01
Results from outdoor acoustic measurements performed at a railway site near Reims, France, in May 2010 are compared to those obtained from a finite-difference time-domain solver of the linearized Euler equations. During the experiments, the ground profile and the different ground surface impedances were determined. Meteorological measurements were also performed to deduce mean vertical profiles of wind and temperature. An alarm pistol was used as a source of impulse signals and three microphones were located along a propagation path. The various measured parameters are introduced as input data into the numerical solver. In the frequency domain, the numerical results are in good agreement with the measurements up to a frequency of 2 kHz. In the time domain, except for a time shift, the predicted waveforms match the measured waveforms closely. PMID:24606253
NASA Astrophysics Data System (ADS)
Kitaygorsky, J.; Amburgey, C.; Elliott, J. R.; Fisher, R.; Perala, R. A.
A broadband (100 MHz-1.2 GHz) plane wave electric field source was used to evaluate electric field penetration inside a simplified Boeing 707 aircraft model with a finite-difference time-domain (FDTD) method using EMA3D. The role of absorption losses inside the simplified aircraft was investigated. It was found that, in this frequency range, none of the cavities inside the Boeing 707 model are truly reverberant when frequency stirring is applied, and a purely statistical electromagnetics approach cannot be used to predict or analyze the field penetration or shielding effectiveness (SE). Thus it was our goal to attempt to understand the nature of losses in such a quasi-statistical environment by adding various numbers of absorbing objects inside the simplified aircraft and evaluating the SE, decay-time constant τ, and quality factor Q. We then compare our numerical results with experimental results obtained by D. Mark Johnson et al. on a decommissioned Boeing 707 aircraft.
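The decay-time constant and quality factor evaluated for each cavity are linked by the usual reverberation relation Q = ωτ; a one-line sketch (this is the common energy-decay convention, which may differ in detail from the paper's definition, and the sample values are hypothetical):

```python
import math

def quality_factor(freq_hz, tau_s):
    """Cavity quality factor from the energy decay-time constant: Q = omega * tau."""
    return 2 * math.pi * freq_hz * tau_s

q = quality_factor(1.0e9, 50e-9)   # tau = 50 ns at 1 GHz
```

Adding absorbing objects shortens τ and therefore lowers Q, which is how the loading experiments in the abstract probe the nature of the losses.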
Some numerical simulation results of swirling flow in d.c. plasma torch
NASA Astrophysics Data System (ADS)
Felipini, C. L.; Pimenta, M. M.
2015-03-01
We present and discuss some results of numerical simulation of swirling flow in a d.c. plasma torch, obtained with a two-dimensional mathematical model (MHD model) developed to simulate the phenomena related to the interaction between the swirling flow and the electric arc in a non-transferred arc plasma torch. The model was implemented in a computer code based on the Finite Volume Method (FVM) to enable the numerical solution of the governing equations. For the study, cases were simulated with different operating conditions (gas flow rate; swirl number). The results obtained were compared to the literature and are in good agreement in most regions of the computational domain. The numerical simulations performed with the computer code enabled the study of the behaviour of the flow in the plasma torch, and also of the effects of different swirl numbers on the temperature and axial velocity of the plasma flow. The results demonstrate that the developed model is suitable for obtaining a better understanding of the phenomena involved, and also for the development and optimization of plasma torches.
A method for data handling numerical results in parallel OpenFOAM simulations
NASA Astrophysics Data System (ADS)
Anton, Alin; Muntean, Sebastian
2015-12-01
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM toolkit®[1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating-point data. Our method is most efficient on large simulation meshes and is much better suited to compressing large-scale simulation results than the regular algorithms.
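The region-of-interest idea can be illustrated with a simple box filter over field samples (a schematic sketch, not the paper's replay-based implementation, which reconstructs the data from interprocessor traffic):

```python
import numpy as np

def extract_roi(points, values, roi_min, roi_max):
    """Keep only the samples whose coordinates fall inside a user-configured
    axis-aligned box, discarding everything outside the region of interest."""
    points = np.asarray(points)
    mask = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    return points[mask], np.asarray(values)[mask]

pts = np.array([[0.1, 0.1], [0.5, 0.5], [2.0, 2.0]])
vals = np.array([1.0, 2.0, 3.0])
roi_pts, roi_vals = extract_roi(pts, vals, roi_min=0.0, roi_max=1.0)
```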
NASA Technical Reports Server (NTRS)
Pline, Alexander D.; Wernet, Mark P.; Hsieh, Kwang-Chung
1991-01-01
The Surface Tension Driven Convection Experiment (STDCE) is a Space Transportation System flight experiment to study both transient and steady thermocapillary fluid flows aboard the United States Microgravity Laboratory-1 (USML-1) Spacelab mission planned for June, 1992. One of the components of data collected during the experiment is a video record of the flow field. This qualitative data is then quantified using an all electric, two dimensional Particle Image Velocimetry (PIV) technique called Particle Displacement Tracking (PDT), which uses a simple space domain particle tracking algorithm. Results using the ground based STDCE hardware, with a radiant flux heating mode, and the PDT system are compared to numerical solutions obtained by solving the axisymmetric Navier Stokes equations with a deformable free surface. The PDT technique is successful in producing a velocity vector field and corresponding stream function from the raw video data which satisfactorily represents the physical flow. A numerical program is used to compute the velocity field and corresponding stream function under identical conditions. Both the PDT system and numerical results were compared to a streak photograph, used as a benchmark, with good correlation.
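The space-domain tracking step of a PDT-style analysis can be sketched as a nearest-neighbour pairing of particle centroids between two video frames (an illustrative sketch, not the STDCE code; the distance threshold is an assumed parameter):

```python
import numpy as np

def track_displacements(frame_a, frame_b, max_disp):
    """Pair each centroid in frame A with its nearest neighbour in frame B and
    return (position, displacement) pairs; matches farther than max_disp are
    rejected as spurious. frame_a, frame_b: (N, 2) centroid coordinates."""
    pairs = []
    for p in frame_a:
        d = np.linalg.norm(frame_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            pairs.append((p, frame_b[j] - p))
    return pairs

# two particles advected by (+1, 0) between frames
a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = a + np.array([1.0, 0.0])
v = track_displacements(a, b, max_disp=2.0)
```

Dividing the displacements by the interframe time then gives the velocity vector field from which a stream function can be integrated.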
Wave interpretation of numerical results for the vibration in thin conical shells
NASA Astrophysics Data System (ADS)
Ni, Guangjian; Elliott, Stephen J.
2014-05-01
The dynamic behaviour of thin conical shells can be analysed using a number of numerical methods. Although the overall vibration response of shells has been thoroughly studied using such methods, the physical insight they provide is limited. The purpose of this paper is to interpret some of these numerical results in terms of waves, using the wave finite element (WFE) method. The forced response of a thin conical shell at different frequencies is first calculated using the dynamic stiffness matrix method. Then, a wave finite element analysis is used to calculate the wave properties of the shell, in terms of wave type and wavenumber, as a function of position along it. By decomposing the overall results from the dynamic stiffness matrix analysis, the responses of the shell can then be interpreted in terms of wave propagation. A simplified theoretical analysis of the waves in the thin conical shell is also presented in terms of the spatially-varying ring frequency, which provides a straightforward interpretation of the wave approach. The WFE method provides a way to study the types of wave that travel in thin conical shell structures and to decompose the response of the numerical models into the components due to each of these waves. In this way the insight provided by the wave approach allows us to analyse the significance of different waves in the overall response and study how they interact, in particular illustrating the conversion of one wave type into another along the length of the conical shell.
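The spatially-varying ring frequency invoked above admits a one-line estimate. As a hedged sketch using the standard thin-shell formula f_r = c_l/(2πr), where c_l is the longitudinal wave speed in the shell material (material values in the usage below are illustrative, not from the paper):

```python
import math

def ring_frequency(radius, E, rho, nu):
    """Local ring frequency (Hz) of a thin shell of radius `radius` (m):
    f_r = c_l / (2 pi r), with longitudinal wave speed
    c_l = sqrt(E / (rho * (1 - nu^2))).  For a cone the radius varies
    with axial position, so f_r varies along the shell."""
    c_l = math.sqrt(E / (rho * (1.0 - nu**2)))
    return c_l / (2.0 * math.pi * radius)
```

Because f_r scales as 1/r, the narrow end of a cone has a higher ring frequency than the wide end, which is what makes position-dependent wave conversion possible.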
Recent Analytical and Numerical Results for The Navier-Stokes-Voigt Model and Related Models
NASA Astrophysics Data System (ADS)
Larios, Adam; Titi, Edriss; Petersen, Mark; Wingate, Beth
2010-11-01
The equations which govern the motions of fluids are notoriously difficult to handle both mathematically and computationally. Recently, a new approach to these equations, known as the Voigt-regularization, has been investigated as both a numerical and analytical regularization for the 3D Navier-Stokes equations, the Euler equations, and related fluid models. This inviscid regularization is related to the alpha-models of turbulent flow; however, it overcomes many of the problems present in those models. I will discuss recent work on the Voigt-regularization, as well as a new criterion for the finite-time blow-up of the Euler equations based on their Voigt-regularization. Time permitting, I will discuss some numerical results, as well as applications of this technique to the Magnetohydrodynamic (MHD) equations and various equations of ocean dynamics.
NASA Astrophysics Data System (ADS)
Zueco, Joaquín; López-González, Luis María
2016-04-01
We have studied decompression processes, in which pressure changes take place in blood and tissues, using a numerical technique based on an electrical analogy of the parameters involved in the problem. The particular problem analyzed is the dynamic behavior of the extravascular bubbles formed in the intercellular cavities of a hypothetical tissue undergoing decompression. Numerical solutions are given for a system of equations that simulates the gas exchange of bubbles after decompression, with particular attention paid to the effect of bubble size, nitrogen tension, nitrogen diffusivity in the intercellular fluid and in the tissue cell layer in the radial direction, nitrogen solubility, ambient pressure, and specific blood flow through the tissue on the different molar diffusion fluxes of nitrogen per unit time (through the bubble surface, between the intercellular fluid layer and blood, and between the intercellular fluid layer and the tissue cell layer). The system of nonlinear equations is solved using the Network Simulation Method, in which the electrical analogy is applied to convert these equations into a network (electrical) model that is run in a circuit simulator (PSpice). New numerical results, together with a network model improved with interdisciplinary electrical analogies, are provided.
Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei
2015-05-01
Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and its subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms such as multiplication or immigration. Identification of the true factor causing an increase in trap counts is important as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is actually available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem, where different types of boundary forcing describe different scenarios of pest insect immigration, and reveal the corresponding patterns in the trap count growth. PMID:25744607
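A minimal 1D illustration of the finite-difference approach may help fix ideas, with the trap represented as an absorbing boundary whose accumulated flux is the trap count. The grid spacing, time step, and unit initial density in the usage are my own illustrative choices (the paper's realistic case is 2D):

```python
def trap_count(u0, D, dx, dt, steps):
    """Explicit finite-difference solution of the diffusion equation
    u_t = D u_xx, with an absorbing boundary at x = 0 (the trap) and a
    zero-flux boundary at the far end.  Returns the cumulative flux into
    the trap (the trap count for a continuum population density) and the
    final density profile."""
    u = list(u0)
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for r > 1/2"
    caught = 0.0
    for _ in range(steps):
        u[0] = 0.0                              # trap pins the density to zero
        caught += D * (u[1] - u[0]) / dx * dt   # flux entering the trap
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        new[-1] = new[-2]                       # zero-flux (reflecting) far wall
        u = new
    return caught, u
```

Replacing the reflecting far wall with a prescribed influx would model the immigration scenarios discussed in the abstract; the accuracy of the flux approximation at the trap boundary is exactly the delicate point the paper raises.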
O'Brien, James Edward; Sohal, Manohar Singh; Huff, George Albert
2002-08-01
A combined experimental and numerical investigation is under way to study heat transfer enhancement techniques that may be applicable to large-scale air-cooled condensers such as those used in geothermal power applications. The research is focused on whether air-side heat transfer can be improved through the use of fin-surface vortex generators (winglets) while maintaining low heat exchanger pressure drop. A transient heat transfer visualization and measurement technique has been employed in order to obtain detailed distributions of local heat transfer coefficients on model fin surfaces. Pressure drop measurements have also been acquired in a separate multiple-tube-row apparatus. In addition, numerical modeling techniques have been developed to allow prediction of local and average heat transfer for these low-Reynolds-number flows with and without winglets. Representative experimental and numerical results presented in this paper reveal quantitative details of local fin-surface heat transfer in the vicinity of a circular tube with a single delta winglet pair downstream of the cylinder. The winglets were triangular (delta) with a 1:2 height/length aspect ratio and a height equal to 90% of the channel height. Overall mean fin-surface Nusselt-number results indicate a significant level of heat transfer enhancement (average enhancement ratio 35%) associated with the deployment of the winglets with oval tubes. Pressure drop measurements have also been obtained for a variety of tube and winglet configurations using a single-channel flow apparatus that includes four tube rows in a staggered array. Comparisons of heat transfer and pressure drop results for the elliptical tube versus a circular tube with and without winglets are provided. Heat transfer and pressure-drop results have been obtained for flow Reynolds numbers, based on channel height and mean flow velocity, ranging from 700 to 6500.
2015-01-01
Background Due to the limited number of experimental studies that mechanically characterise human atherosclerotic plaque tissue from the femoral arteries, a recent trend has emerged in current literature whereby one set of material data based on aortic plaque tissue is employed to numerically represent diseased femoral artery tissue. This study aims to generate novel vessel-appropriate material models for femoral plaque tissue and assess the influence of using material models based on experimental data generated from aortic plaque testing to represent diseased femoral arterial tissue. Methods Novel material models based on experimental data generated from testing of atherosclerotic femoral artery tissue are developed, and a computational analysis of the revascularisation of a quarter-model idealised diseased femoral artery from a 90% diameter stenosis to a 10% diameter stenosis is performed using these novel material models. The simulation is also performed using material models based on experimental data obtained from aortic plaque testing in order to examine the effect of employing vessel-appropriate material models versus those currently employed in literature to represent femoral plaque tissue. Results Simulations that employ material models based on atherosclerotic aortic tissue exhibit much higher maximum principal stresses within the plaque than simulations that employ material models based on atherosclerotic femoral tissue. Specifically, employing a material model based on calcified aortic tissue, instead of one based on heavily calcified femoral tissue, to represent diseased femoral arterial vessels results in a 487-fold increase in maximum principal stress within the plaque at a depth of 0.8 mm from the lumen. Conclusions Large differences are induced on numerical results as a consequence of employing material models based on aortic plaque, in place of material models based on femoral plaque, to represent a diseased femoral vessel. Due to these large
NASA Technical Reports Server (NTRS)
Lyons, Walter A.; Pielke, Roger A.; Cotton, William R.; Keen, Cecil S.; Moon, Dennis A.
1992-01-01
Sea breeze thunderstorms during quiescent synoptic conditions account for 40 percent of Florida rainfall, and are the dominant feature of April-October weather at the Kennedy Space Center (KSC). An effort is presently made to assess the feasibility of a mesoscale numerical model in improving the point-specific thunderstorm forecasting accuracy at the KSC, in the 2-12 hour time frame. Attention is given to the Applied Regional Atmospheric Modeling System.
Fluid Instabilities in the Crab Nebula Jet: Results from Numerical Simulations
NASA Astrophysics Data System (ADS)
Mignone, A.; Striani, E.; Bodo, G.; Anjiri, M.
2014-09-01
We present an overview of high-resolution relativistic MHD numerical simulations of the Crab Nebula South-East jet. The models are based on hot and relativistic hollow outflows initially carrying a purely toroidal magnetic field. Our results indicate that weakly relativistic (γ ≈ 2) and strongly magnetized jets are prone to kink instabilities, leading to a noticeable deflection of the jet. These conclusions are in good agreement with recent X-ray (Chandra) data of the Crab Nebula South-East jet indicating a change in the direction of propagation on a time scale of the order of a few years.
NASA Technical Reports Server (NTRS)
Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.
2004-01-01
Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at http://croc.gsfc.nasa.gov/shadoz. In an analysis of ozonesonde imprecision within the SHADOZ dataset [Thompson et al., JGR, 108, 8238, 2003], we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing, and instrument type (manufacturer).
Noninvasive assessment of mitral inertness: clinical results with numerical model validation
NASA Technical Reports Server (NTRS)
Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.
2001-01-01
Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
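The decomposition of the transmitral gradient into convective and inertial parts can be sketched from the unsteady Bernoulli equation. The helper below is illustrative only: the inertance value M, the sampling scheme, and the simplified convective term (ρ/2)v² are my assumptions, not the study's protocol (pressures in Pa; 1 mmHg ≈ 133.32 Pa):

```python
RHO = 1060.0  # blood density, kg/m^3

def transmitral_gradient(v, dt, M):
    """Split the total pressure drop (Pa) into convective and inertial
    parts for a transmitral velocity trace v (m/s) sampled at interval
    dt (s).  Convective term: (rho/2) v^2 (simplified Bernoulli);
    inertial term: M dv/dt with an assumed inertance M (kg/m^2).
    Returns a list of (convective, inertial, total) tuples."""
    grads = []
    for i in range(1, len(v)):
        convective = 0.5 * RHO * v[i]**2
        inertial = M * (v[i] - v[i - 1]) / dt
        grads.append((convective, inertial, convective + inertial))
    return grads
```

For steady flow the inertial term vanishes, which is why purely convective Doppler estimates miss the large inertial share (74% on average) reported above.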
NASA Astrophysics Data System (ADS)
Lahaye, Noé; Paci, Alexandre; Smith, Stefan Llewellyn
2016-04-01
We examine the instability of lenticular vortices -- or lenses -- in a stratified rotating fluid. The simplest configuration is one in which the lenses overlay a deep layer and have a free surface, and this can be studied using a two-layer rotating shallow water model. We report results from laboratory experiments and high-resolution direct numerical simulations of the destabilization of vortices with constant potential vorticity, and compare these to a linear stability analysis. The stability properties of the system are governed by two parameters: the typical upper-layer potential vorticity and the size (depth) of the vortex. Good agreement is found between analytical, numerical and experimental results for the growth rate and wavenumber of the instability. The nonlinear saturation of the instability is associated with conversion from potential to kinetic energy and weak emission of gravity waves, giving rise to the formation of coherent vortex multipoles with trapped waves. The impact of flow in the lower layer is examined. In particular, it is shown that the growth rate can be strongly affected and the instability can be suppressed for certain types of weak co-rotating flow.
Re-Computation of Numerical Results Contained in NACA Report No. 496
NASA Technical Reports Server (NTRS)
Perry, Boyd, III
2015-01-01
An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a MATLAB® program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.
Interpretation of high-dimensional numerical results for the Anderson transition
Suslov, I. M.
2014-12-15
The existence of the upper critical dimension d_c2 = 4 for the Anderson transition is a rigorous consequence of the Bogoliubov theorem on renormalizability of the φ⁴ theory. For d ≥ 4 dimensions, one-parameter scaling does not hold and all existing numerical data should be reinterpreted. These data are exhausted by the results for d = 4, 5 from scaling in quasi-one-dimensional systems and the results for d = 4, 5, 6 from level statistics. All these data are compatible with the theoretical scaling dependences obtained from Vollhardt and Wölfle's self-consistent theory of localization. The widespread viewpoint that d_c2 = ∞ is critically discussed.
Asymptotic expansion for stellarator equilibria with a non-planar magnetic axis: Numerical results
NASA Astrophysics Data System (ADS)
Freidberg, Jeffrey; Cerfon, Antoine; Parra, Felix
2012-10-01
We have recently presented a new asymptotic expansion for stellarator equilibria that generalizes the classic Greene-Johnson expansion [1] to allow for 3D equilibria with a non-planar magnetic axis [2]. Our expansion achieves the two goals of reducing the complexity of the three-dimensional MHD equilibrium equations and of describing equilibria in modern stellarator experiments. The end result of our analysis is a set of two coupled partial differential equations for the plasma pressure and the toroidal vector potential which fully determine the stellarator equilibrium. Both equations are advection equations in which the toroidal angle plays the role of time. We show that the method of characteristics, following magnetic field lines, is a convenient way of solving these equations, avoiding the difficulties associated with the periodicity of the solution in the toroidal angle. By combining the method of characteristics with Green's function integrals for the evaluation of the magnetic field due to the plasma current, we obtain an efficient numerical solver for our expansion. Numerical equilibria thus calculated will be given. [1] J.M. Greene and J.L. Johnson, Phys. Fluids 4, 875 (1961). [2] A.J. Cerfon, J.P. Freidberg, and F.I. Parra, Bull. Am. Phys. Soc. 56, 16 GP9.00081 (2011).
Verification of Numerical Weather Prediction Model Results for Energy Applications in Latvia
NASA Astrophysics Data System (ADS)
Sīle, Tija; Cepite-Frisfelde, Daiga; Sennikovs, Juris; Bethers, Uldis
2014-05-01
A resolution to increase the production and consumption of renewable energy has been made by EU governments. Most of the renewable energy in Latvia is produced by hydroelectric power plants (HPP), followed by biogas, wind power, and biomass energy production. Wind and HPP power production is sensitive to meteorological conditions. Currently the basis of weather forecasting is Numerical Weather Prediction (NWP) models. There are numerous methodologies for evaluating the quality of NWP results (Wilks 2011), and their application can be conditional on the forecast end user. The goal of this study is to evaluate the performance of a Weather Research and Forecasting model (Skamarock 2008) implementation over the territory of Latvia, focusing on forecasting of wind speed and quantitative precipitation. The target spatial resolution is 3 km. Observational data from the Latvian Environment, Geology and Meteorology Centre are used. A number of standard verification metrics are calculated. The sensitivity to the model output interpretation (output spatial interpolation versus nearest gridpoint) is investigated. For the precipitation verification the dichotomous verification metrics are used. Sensitivity to different precipitation accumulation intervals is examined. Skamarock, William C. and Klemp, Joseph B. A time-split nonhydrostatic atmospheric model for weather research and forecasting applications. Journal of Computational Physics, 227, 2008, pp. 3465-3485. Wilks, Daniel S. Statistical Methods in the Atmospheric Sciences. Third Edition. Academic Press, 2011.
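The dichotomous precipitation verification mentioned above reduces to a 2x2 contingency table of forecast versus observed yes/no events. A minimal sketch of the standard scores (the function name and dictionary layout are my own; the score definitions are the conventional ones):

```python
def dichotomous_scores(hits, misses, false_alarms, correct_negatives):
    """Standard contingency-table scores for yes/no event forecasts
    (e.g. 'precipitation exceeds a threshold in this interval')."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias}
```

The sensitivity to accumulation interval arises because lengthening the interval merges events, changing all four cell counts at once.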
NASA Astrophysics Data System (ADS)
Carrano, Charles S.; Rino, Charles L.
2016-06-01
We extend the power law phase screen theory for ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component inverse power law spectrum. The two-component model includes, as special cases, an unmodified power law and a modified power law with spectral break that may assume the role of an outer scale, intermediate break scale, or inner scale. As such, it provides a framework for investigating the effects of a spectral break on the scintillation statistics. Using this spectral model, we solve the fourth moment equation governing intensity variations following propagation through two-dimensional field-aligned irregularities in the ionosphere. A specific normalization is invoked that exploits self-similar properties of the structure to achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. The numerical algorithm is validated using new theoretical predictions for the behavior of the scintillation index and intensity correlation length under strong scatter conditions. A series of numerical experiments are conducted to investigate the morphologies of the intensity spectrum, scintillation index, and intensity correlation length as functions of the spectral indices and strength of scatter; retrieve phase screen parameters from intensity scintillation observations; explore the relative contributions to the scintillation due to large- and small-scale ionospheric structures; and quantify the conditions under which a general spectral break will influence the scintillation statistics.
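The two-component inverse power law can be sketched as a piecewise spectrum whose second component's amplitude is fixed by continuity at the break wavenumber; the function and parameter names below are my assumptions, not the paper's notation:

```python
def two_component_spectrum(q, qb, p1, p2, Cs=1.0):
    """Piecewise power-law irregularity spectrum with a break at qb.
    Index p1 applies at large scales (q < qb), p2 at small scales;
    continuity at q = qb fixes the second component's amplitude.
    Depending on p1 and p2, the break plays the role of an outer scale,
    intermediate break scale, or inner scale."""
    if q < qb:
        return Cs * q**(-p1)
    return Cs * qb**(p2 - p1) * q**(-p2)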
Chaoticity threshold in magnetized plasmas: Numerical results in the weak coupling regime
Carati, A.; Benfenati, F.; Maiocchi, A.; Galgani, L.; Zuin, M.
2014-03-15
The present paper is a numerical counterpart to the theoretical work [Carati et al., Chaos 22, 033124 (2012)]. We are concerned with the transition from order to chaos in a one-component plasma (a system of point electrons with mutual Coulomb interactions, in a uniform neutralizing background), the plasma being immersed in a uniform stationary magnetic field. In the paper [Carati et al., Chaos 22, 033124 (2012)], it was predicted that a transition should take place when the electron density is increased or the field decreased in such a way that the ratio ω_p/ω_c between plasma and cyclotron frequencies becomes of order 1, irrespective of the value of the so-called Coulomb coupling parameter Γ. Here, we perform numerical computations for a first principles model of N point electrons in a periodic box, with mutual Coulomb interactions, using as a probe for chaoticity the time-autocorrelation function of magnetization. We consider two values of Γ (0.04 and 0.016) in the weak coupling regime Γ ≪ 1, with N up to 512. A transition is found to occur for ω_p/ω_c in the range between 0.25 and 2, in fairly good agreement with the theoretical prediction. These results might be of interest for the problem of the breakdown of plasma confinement in fusion machines.
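The transition criterion ω_p/ω_c ~ 1 depends only on the electron density and the magnetic field strength, so it is straightforward to evaluate. A minimal sketch using CODATA constants (the sample density and field in the usage are illustrative, not the paper's simulation parameters):

```python
import math

E = 1.602176634e-19      # elementary charge, C
ME = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def omega_ratio(n_e, B):
    """Ratio omega_p / omega_c for electron density n_e (m^-3) and
    magnetic field B (T): omega_p = sqrt(n_e e^2 / (eps0 m_e)),
    omega_c = e B / m_e."""
    omega_p = math.sqrt(n_e * E**2 / (EPS0 * ME))
    omega_c = E * B / ME
    return omega_p / omega_c
```

Since the ratio scales as sqrt(n_e)/B, either raising the density or lowering the field pushes the plasma across the predicted order-to-chaos threshold.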
NASA Astrophysics Data System (ADS)
Soares, Edson J.; Thompson, Roney L.; Niero, Debora C.
2015-08-01
The immiscible displacement of one viscous liquid by another in a capillary tube is experimentally and numerically analyzed in the low-inertia regime with negligible buoyancy effects. The dimensionless numbers that govern the problem are the capillary number Ca and the viscosity ratio of the displaced to the displacing fluids Nμ. In general, there are two output quantities of interest. One is associated with the relation between the front velocity, Ub, and the mean velocity of the displaced fluid, Ū2. The other is the layer thickness of the displaced fluid that remains attached to the wall. We compute these quantities as mass fractions in order to make them comparable. In this connection, the efficiency mass fraction, me, is defined as the complement of the mass fraction of the displaced fluid that leaves the tube while the displacing fluid crosses its length. The geometric mass fraction, mg, is defined as the fraction of the volume of the layer that remains attached to the wall. Because in gas-liquid displacement these two quantities coincide, it is not uncommon in the literature to use mg as a measure of the displacement efficiency for liquid-liquid displacements. However, as is shown in the present paper, these two quantities have opposite tendencies when we increase the viscosity of the displacing fluid, making this distinction a crucial aspect of the problem. Results from a Galerkin finite element approach are also presented in order to make a comparison. Experimental and numerical results show that while the displacement efficiency decreases, the geometrical fraction increases when the viscosity ratio decreases. This fact leads to different decisions depending on the quantity to be optimized. The quantitative agreement between the numerical and experimental results was not completely achieved, especially for intermediate values of Ca. The reasons for that are still under investigation. The experiments conducted were able to achieve a wide range
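The two mass fractions defined above can be written down explicitly under a uniform-wall-layer reading of the abstract. These closed forms are my own interpretation (Taylor-type definitions for a tube of radius R with a residual layer of thickness h), not formulas taken from the paper:

```python
def geometric_fraction(h_over_R):
    """m_g: fraction of the tube volume occupied by a uniform wall layer
    of displaced fluid of thickness h in a tube of radius R:
    m_g = 1 - (1 - h/R)^2."""
    return 1.0 - (1.0 - h_over_R)**2

def efficiency_complement(Ub, U2bar):
    """Taylor-type mass fraction (Ub - U2bar)/Ub built from the front
    velocity Ub and the mean velocity of the displaced fluid U2bar.
    For gas-liquid displacement this coincides with m_g; for
    liquid-liquid displacement the two quantities differ."""
    return (Ub - U2bar) / Ub
```

Under these definitions the coincidence in the gas-liquid limit is a kinematic identity, which is why mg has (incorrectly, per the abstract) been reused as an efficiency measure for liquid-liquid cases.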
NASA Astrophysics Data System (ADS)
Chiu, Ming-Hung; Lai, Chin-Fa; Tan, Chen-Tai; Lin, Yi-Zhi
2011-03-01
This paper presents a study of the lateral and axial resolutions of a transmission laser-scanning angle-deviation microscope (TADM) with different numerical aperture (NA) values. The TADM is based on geometric optics and surface plasmon resonance principles. The surface height is proportional to the phase difference between two marginal rays of the test beam, which is passed through the test medium. We used common-path heterodyne interferometry to measure the phase difference in real time, and used a personal computer to calculate and plot the surface profile. The experimental results showed that the best lateral and axial resolutions for NA = 0.41 were 0.5 μm and 3 nm, respectively, and the lateral resolution breaks the diffraction limit.
NASA Astrophysics Data System (ADS)
Milošević, M.; Dimitrijević, D. D.; Djordjević, G. S.; Stojanović, M. D.
2016-06-01
The role tachyon fields may play in the evolution of the early universe is discussed in this paper. We consider the evolution of a flat and homogeneous universe governed by a tachyon scalar field with the DBI-type action and calculate the slow-roll parameters of inflation, the scalar spectral index (n), and the tensor-to-scalar ratio (r) for the given potentials. We pay special attention to the inverse power potential, first of all to V(x) ~ x^{-4}, and compare the available results obtained by analytical and numerical methods with those obtained by observation. It is shown that the computed values of the observational parameters and the observed ones are in good agreement for high values of the constant X_0. The possibility that the influence of the radion field can extend the range of acceptable values of the constant X_0 to the string-theory-motivated sector of its values is briefly considered.
Solar flare model: Comparison of the results of numerical simulations and observations
NASA Astrophysics Data System (ADS)
Podgorny, I. M.; Vashenyuk, E. V.; Podgorny, A. I.
2009-12-01
The electrodynamic flare model is based on numerical 3D simulations with the real magnetic field of an active region. An energy of ~10^32 erg, necessary for a solar flare, is shown to accumulate in the magnetic field of a coronal current sheet. The thermal X-ray source in the corona results from plasma heating in the current sheet upon reconnection. The hard X-ray sources are located on the solar surface at the loop footpoints. They are produced by the precipitation of electron beams accelerated in field-aligned currents. Solar cosmic rays appear upon acceleration in the electric field along a singular magnetic X-type line. The generation mechanism of the delayed cosmic-ray component is also discussed.
NASA Astrophysics Data System (ADS)
Xu, Hengyi; Heinzel, T.; Zozoulenko, I. V.
2011-09-01
We derive analytical expressions for the conductivity of bilayer graphene (BLG) using the Boltzmann approach within the Born approximation for a model of Gaussian disorder describing both short- and long-range impurity scattering. The range of validity of the Born approximation is established by comparing the analytical results to exact tight-binding numerical calculations. A comparison of the obtained density dependencies of the conductivity with experimental data shows that the BLG samples investigated experimentally so far are in the quantum scattering regime, where the Fermi wavelength exceeds the effective impurity range. In this regime both short- and long-range scattering lead to the same linear density dependence of the conductivity. Our calculations imply that bilayer and single-layer graphene have the same scattering mechanisms. We also provide an upper limit for the effective, density-dependent spatial extension of the scatterers present in the experiments.
Marom, Gil; Bluestein, Danny
2016-02-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed. PMID:26679833
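A single-passage stress accumulation along one pathline, of the kind whose implementation choices the paper tests, can be sketched with a linearized power-law damage model. The coefficients below are illustrative Giersiepen-type values, and the linearized accumulation form is one common choice, not the study's specific model:

```python
def damage_index(stress_history, dt, a=2.416, b=0.785, c=3.62e-7):
    """Blood damage index accumulated along one Lagrangian pathline,
    using the linearized power-law form D = c * (sum tau^(a/b) * dt)^b,
    so that constant stress tau over time T recovers D = c tau^a T^b.
    stress_history: scalar stress samples (Pa) at interval dt (s).
    a, b, c are illustrative hemolysis coefficients, not the paper's."""
    acc = sum(tau**(a / b) * dt for tau in stress_history)
    return c * acc**b
```

Seeding pattern and trajectory simplifications enter precisely through `stress_history`: two integration choices that sample different stress values along nominally the same path yield different damage predictions, which is the sensitivity the paper quantifies.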
NASA Astrophysics Data System (ADS)
Cotel, Aline; Junghans, Lars; Wang, Xiaoxiang
2014-11-01
In recent years, recognition of the scope of the negative environmental impact of existing buildings has spurred academic and industrial interest in transforming existing building design practices and disciplinary knowledge. For example, buildings alone consume 72% of the electricity produced annually in the United States; this share is expected to rise to 75% by 2025 (EPA, 2009). Significant reductions in overall building energy consumption can be achieved using green building methods such as natural ventilation. An office on campus was instrumented to acquire CO2 concentrations and temperature profiles at multiple locations while a single occupant was present. Using OpenFOAM, numerical calculations were performed to allow comparison of the CO2 concentration and temperature profiles for different ventilation strategies. Ultimately, these results will be the inputs to a real-time feedback control system that can adjust actuators for indoor ventilation and utilize green design strategies. Funded by UM Office of Vice President for Research.
NASA Technical Reports Server (NTRS)
Holman, Gordon
2010-01-01
Accelerated electrons play an important role in the energetics of solar flares. Understanding the process or processes that accelerate these electrons to high, nonthermal energies also depends on understanding the evolution of these electrons between the acceleration region and the region where they are observed through their hard X-ray or radio emission. Energy losses in the co-spatial electric field that drives the current-neutralizing return current can flatten the electron distribution toward low energies. This in turn flattens the corresponding bremsstrahlung hard X-ray spectrum toward low energies. The lost electron beam energy also enhances heating in the coronal part of the flare loop. Extending earlier work by Knight & Sturrock (1977), Emslie (1980), Diakonov & Somov (1988), and Litvinenko & Somov (1991), I have derived analytical and semi-analytical results for the nonthermal electron distribution function and the self-consistent electric field strength in the presence of a steady-state return current. I review these results, presented previously at the 2009 SPD Meeting in Boulder, CO, and compare them and computed X-ray spectra with numerical results obtained by Zharkova & Gordovskii (2005, 2006). The physical significance of similarities and differences in the results will be emphasized. This work is supported by NASA's Heliophysics Guest Investigator Program and the RHESSI Project.
Lima da Silva, M.; Sauvage, E.; Brun, P.; Gagnoud, A.; Fautrelle, Y.; Riva, R.
2013-07-01
The process of vitrification in a cold crucible heated by direct induction is used in the fusion of oxides. Its distinctive feature is the production of high-purity materials: the high level of purity of the melt is achieved because this melting technique excludes contamination of the charge by the crucible. The aim of the present paper is to analyze the hydrodynamics of the vitrification process by direct induction, with a focus on the effects associated with the interaction between the mechanical stirrer and bubbling. Considering the complexity of the analyzed system and the goal of the present work, we simplified the system by not taking into account the thermal and electromagnetic phenomena. Based on the concept of hydraulic similitude, we performed an experimental study and a numerical modeling of the simplified model. The results of these two studies were compared and showed good agreement. The results presented in this paper, in conjunction with the previous work, contribute to a better understanding of the hydrodynamic effects resulting from the interaction between the mechanical stirrer and air bubbling in the cold crucible heated by direct induction. Further work will take into account thermal and electromagnetic phenomena in the presence of the mechanical stirrer and air bubbling. (authors)
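The hydraulic-similitude reasoning described above can be sketched numerically. A minimal example, assuming Froude similitude governs the stirred, bubbled melt; the scale factor, prototype velocity, and flow rate below are illustrative placeholders, not values from the paper:

```python
import math

def froude_scale(scale, U_proto, Q_proto):
    """Scale a characteristic velocity and a gas flow rate from prototype
    to model while keeping the Froude number Fr = U / sqrt(g * L) constant.
    scale = L_model / L_proto."""
    U_model = U_proto * math.sqrt(scale)  # velocities scale as L^(1/2)
    Q_model = Q_proto * scale ** 2.5      # flow rates Q ~ U * L^2 scale as L^(5/2)
    return U_model, Q_model

# Illustrative half-scale water model of the crucible flow
U_m, Q_m = froude_scale(0.5, U_proto=1.0, Q_proto=8.0)
```

Matching the Froude number (rather than, say, the Reynolds number) is a design choice appropriate when gravity-driven effects such as bubbling dominate viscous ones.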
NASA Astrophysics Data System (ADS)
Peukert, P.; Hrubý, J.
2013-04-01
The paper describes new results for an experimental heat exchanger equipped with a single corrugated capillary tube, basic information about the measurements and the experimental setup. Some of the results were compared with numerical simulations.
Pathmanathan, P; Bernabeu, M O; Niederer, S A; Gavaghan, D J; Kay, D
2012-08-01
A recent verification study compared 11 large-scale cardiac electrophysiology solvers on an unambiguously defined common problem. An unexpected amount of variation was observed between the codes, including significant error in conduction velocity in the majority of the codes at certain spatial resolutions. In particular, the results of the six finite element codes varied considerably despite each using the same order of interpolation. In the present study, we compare various algorithms for cardiac electrophysiological simulation, which allows us to fully explain the differences between the solvers. We identify the use of mass lumping as the fundamental cause of the largest variations: specifically, the combination of the commonly used techniques of mass lumping and operator splitting results in a slightly different form of mass lumping to that supported by theory and leads to increased numerical error. Other variations are explained through the manner in which the ionic current is interpolated. We also investigate the effect of different forms of mass lumping in various types of simulation. PMID:25099569
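The mass lumping at issue above is easy to see on a toy problem. A minimal sketch (not any of the compared solvers) that assembles the consistent mass matrix for 1D linear finite elements on a uniform mesh and lumps it by row summation:

```python
import numpy as np

def consistent_mass_matrix(n_elems, h):
    """Assemble the global mass matrix for 1D linear elements on a
    uniform mesh; the element matrix is (h/6) * [[2, 1], [1, 2]]."""
    n = n_elems + 1
    M = np.zeros((n, n))
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    for e in range(n_elems):
        M[e:e + 2, e:e + 2] += Me
    return M

def lump(M):
    """Row-sum lumping: replace M by diag(row sums). This preserves the
    total mass but perturbs the dispersion properties of the
    semi-discrete system, which is one source of conduction-velocity
    error in coarse-mesh simulations."""
    return np.diag(M.sum(axis=1))

M = consistent_mass_matrix(4, h=0.25)   # domain of length 1.0
ML = lump(M)
```

An interior node of the lumped matrix receives mass h and a boundary node h/2, so the total mass (the matrix sum) is unchanged by lumping.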
NASA Astrophysics Data System (ADS)
Beniaiche, Ahmed; Ghenaiet, Adel; Carcasci, Carlo; Facchini, Bruno
2016-05-01
This paper presents a numerical validation of the aero-thermal study of a 30:1 scaled model reproducing an innovative trailing edge with one row of enlarged pedestals under stationary and rotating conditions. A CFD analysis was performed by means of commercial ANSYS-Fluent modeling the isothermal air flow and using k-ω SST turbulence model and an isothermal air flow for both static and rotating conditions (Ro up to 0.23). The used numerical model is validated first by comparing the numerical velocity profiles distribution results to those obtained experimentally by means of PIV technique for Re = 20,000 and Ro = 0-0.23. The second validation is based on the comparison of the numerical results of the 2D HTC maps over the heated plate to those of TLC experimental data, for a smooth surface for a Reynolds number = 20,000 and 40,000 and Ro = 0-0.23. Two-tip conditions were considered: open tip and closed tip conditions. Results of the average Nusselt number inside the pedestal ducts region are presented too. The obtained results help to predict the flow field visualization and the evaluation of the aero-thermal performance of the studied blade cooling system during the design step.
Liberatore, S.; Jaouen, S.; Tabakhoff, E.; Canaud, B.
2009-04-15
Magnetic Rayleigh-Taylor instability is addressed in compressible hydrostatic media. A full model is presented and compared to numerical results from a linear perturbation code. A perfect agreement between both approaches is obtained in a wide range of parameters. Compressibility effects are examined and substantial deviations from classical Chandrasekhar growth rates are obtained and confirmed by the model and the numerical calculations.
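For context, the classical incompressible growth rate the abstract compares against can be evaluated directly. A sketch of Chandrasekhar's sharp-interface result for a uniform field B aligned with the wavevector; the densities and field strength below are illustrative, and the compressibility effects studied in the paper are not included:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def mrt_growth_rate(k, rho_heavy, rho_light, B, g=9.81):
    """Classical magnetic Rayleigh-Taylor growth rate (incompressible,
    field parallel to k): gamma^2 = A*g*k - 2*(B*k)^2 / (mu0*(rho1+rho2)).
    Returns gamma if unstable, 0.0 if the mode is magnetically stabilized."""
    A = (rho_heavy - rho_light) / (rho_heavy + rho_light)  # Atwood number
    gamma2 = A * g * k - 2.0 * (B * k) ** 2 / (MU0 * (rho_heavy + rho_light))
    return math.sqrt(gamma2) if gamma2 > 0.0 else 0.0

def cutoff_wavenumber(rho_heavy, rho_light, B, g=9.81):
    """Wavenumber above which magnetic tension suppresses the instability."""
    A = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    return A * g * MU0 * (rho_heavy + rho_light) / (2.0 * B ** 2)
```

Setting B = 0 recovers the hydrodynamic rate sqrt(A g k), which grows without bound in k; the field introduces the short-wavelength cutoff.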
Numerical modeling of protocore destabilization during planetary accretion: Methodology and results
NASA Astrophysics Data System (ADS)
Lin, Ja-Ren; Gerya, Taras V.; Tackley, Paul J.; Yuen, David A.; Golabek, Gregor J.
2009-12-01
We developed and tested an efficient 2D numerical methodology for modeling gravitational redistribution processes in a quasi-spherical planetary body based on a simple Cartesian grid. This methodology allows one to implement large viscosity contrasts and to properly handle a free surface and self-gravitation. With this novel method we investigated in a simplified way the evolution of gravitationally unstable global three-layer structures in the interiors of large metal-silicate planetary bodies like those suggested by previous models of cold accretion [Sasaki, S., Nakazawa, K., 1986. J. Geophys. Res. 91, 9231-9238; Karato, S., Murthy, V.R., 1997. Phys. Earth Planet. Interiors 100, 61-79; Senshu, H., Kuramoto, K., Matsui, T., 2002. J. Geophys. Res. 107 (E12), 5118. 10.1029/2001JE001819]: an innermost solid protocore (either undifferentiated or partly differentiated), an intermediate metal-rich layer (either continuous or disrupted), and an outermost silicate-rich layer. Long-wavelength (degree-one) instability of this three-layer structure may strongly contribute to core formation dynamics by triggering planetary-scale gravitational redistribution processes. We studied possible geometrical modes of the resulting planetary reshaping using scaled 2D numerical experiments for self-gravitating planetary bodies of Mercury, Mars and Earth size. In our simplified model the viscosity of each material remains constant during the experiment and rheological effects of gravitational energy dissipation are not taken into account. However, in contrast to a previously conducted numerical study [Honda, R., Mizutani, H., Yamamoto, T., 1993. J. Geophys. Res. 98, 2075-2089] we explored a freely deformable planetary surface and a broad range of viscosity ratios between the metallic layer and the protocore (0.001-1000) as well as between the silicate layer and the protocore (0.001-1000). An important new prediction from our study is that realistic modes of planetary reshaping
NASA Astrophysics Data System (ADS)
Magaraggia, Jessica; Kleinszig, Gerhard; Wei, Wei; Weiten, Markus; Graumann, Rainer; Angelopoulou, Elli; Hornegger, Joachim
2014-03-01
Over the last years, several methods have been proposed to guide the physician during reduction and fixation of bone fractures. Available solutions often use bulky instrumentation inside the operating room (OR). These usually consist of a stereo camera, placed outside the operative field, and optical markers directly attached to both the patient and the surgical instrumentation held by the surgeon. Recently proposed techniques try to reduce the required additional instrumentation as well as the radiation exposure to both patient and physician. In this paper, we present the adaptation and the first implementation of our recently proposed video camera-based solution for screw fixation guidance. Based on the simulations conducted in our previous work, we mounted a small camera on a drill in order to recover its tip position and axis orientation w.r.t. our custom-made drill sleeve with attached markers. Since drill-position accuracy is critical, we thoroughly evaluated the accuracy of our implementation. We used an optical tracking system for ground truth data collection. For this purpose, we built a custom plate reference system and attached reflective markers to both the instrument and the plate. Free drilling was then performed 19 times. The position of the drill axis was continuously recovered using both our video camera solution and the tracking system for comparison. The recorded data covered targeting, perforation of the surface bone by the drill bit and bone drilling. The orientation of the instrument axis and the position of the instrument tip were recovered with an accuracy of 1.60 ± 1.22° and 2.03 ± 1.36 mm, respectively.
Kurihara, M.; Sato, A.; Funatsu, K.; Ouchi, H.; Masuda, Y.; Narita, H.; Collett, T.S.
2011-01-01
Targeting the methane hydrate (MH) bearing units C and D at the Mount Elbert prospect on the Alaska North Slope, four MDT (Modular Dynamic Formation Tester) tests were conducted in February 2007. The C2 MDT test was selected for history matching simulation in the MH Simulator Code Comparison Study. Through history matching simulation, the physical and chemical properties of unit C were adjusted, suggesting the most likely reservoir properties of this unit. Based on these tuned properties, numerical models replicating a "Mount Elbert C2 zone like reservoir", a "PBU L-Pad like reservoir" and a "PBU L-Pad down dip like reservoir" were constructed. The long-term production performances of wells in these reservoirs were then forecasted assuming MH dissociation and production by the methods of depressurization, combined depressurization and wellbore heating, and hot water huff and puff. The predicted cumulative gas production ranges from 2.16×10⁶ m³/well to 8.22×10⁸ m³/well, depending mainly on the initial temperature of the reservoir and on the production method. This paper describes the details of the modeling and history matching simulation. This paper also presents the results of examinations of the effects of reservoir properties on MH dissociation and production performances under the application of the depressurization and thermal methods. © 2010 Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Xing, H. L.; Ding, R. W.; Yuen, D. A.
2015-08-01
Australia is surrounded by the Pacific and Indian Oceans and thus may suffer from tsunamis generated by subduction earthquakes around the boundary of the Australian Plate. Potential tsunami risks along the eastern coast, where more and more people currently live, are numerically investigated through a scenario-based method to provide an estimation of the tsunami hazard in this region. We calculated the tsunami waves generated at the New Hebrides Trench and the Puysegur Trench, and further investigated the relevant tsunami hazards along the eastern coast and their sensitivities to various sea-floor frictions and earthquake parameters (i.e. the strike, dip and slip angles and the earthquake magnitude/rupture length). The results indicate that the Puysegur Trench poses a seismic threat capable of causing wave amplitudes over 1.5 m along the coasts of Tasmania, Victoria, and New South Wales, even reaching over 2.6 m in the regions close to Sydney, Maria Island, and Gabo Island in a certain worst case, while the cities along the coast of Queensland are potentially less vulnerable than those on the southeastern Australian coast.
NASA Astrophysics Data System (ADS)
Chan, P. W.
2009-03-01
The Hong Kong International Airport (HKIA) is situated in an area of complex terrain. Turbulent flow due to terrain disruption could occur in the vicinity of HKIA when winds from east to southwest climb over Lantau Island, a mountainous island to the south of the airport. Low-level turbulence is an aviation hazard to the aircraft flying into and out of HKIA. It is closely monitored using remote-sensing instruments including Doppler LIght Detection And Ranging (LIDAR) systems and wind profilers in the airport area. Forecasting of low-level turbulence by numerical weather prediction models would be useful in the provision of timely turbulence warnings to the pilots. The feasibility of forecasting eddy dissipation rate (EDR), a measure of turbulence intensity adopted in the international civil aviation community, is studied in this paper using the Regional Atmospheric Modelling System (RAMS). Super-high resolution simulation (within the regime of large eddy simulation) is performed with a horizontal grid size down to 50 m for some typical cases of turbulent airflow at HKIA, such as spring-time easterly winds in a stable boundary layer and gale-force southeasterly winds associated with a typhoon. Sensitivity of the simulation results with respect to the choice of turbulent kinetic energy (TKE) parameterization scheme in RAMS is also examined. RAMS simulation with Deardorff (1980) TKE scheme is found to give the best result in comparison with actual EDR observations. It has the potential for real-time forecasting of low-level turbulence in short-term aviation applications (viz. for the next several hours).
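The EDR metric discussed above is defined as the cube root of the TKE dissipation rate ε. A minimal sketch of how ε, and hence EDR, can be diagnosed from a modelled TKE field, assuming a generic TKE-based closure of the form ε = C e^{3/2}/ℓ; the constant, mixing length, and TKE value below are illustrative placeholders, not the RAMS/Deardorff values:

```python
def edr_from_tke(e, length_scale, c_eps=0.7):
    """Diagnose the eddy dissipation rate EDR = eps^(1/3) [m^(2/3) s^-1]
    from turbulent kinetic energy e [m^2 s^-2], using the generic
    closure eps = c_eps * e^(3/2) / l common to TKE-based schemes."""
    eps = c_eps * e ** 1.5 / length_scale
    return eps ** (1.0 / 3.0)

# Illustrative values: moderate TKE in a stable boundary layer
edr = edr_from_tke(e=0.5, length_scale=50.0)
```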
A Hydrodynamic Theory for Spatially Inhomogeneous Semiconductor Lasers. 2; Numerical Results
NASA Technical Reports Server (NTRS)
Li, Jianzhong; Ning, C. Z.; Biegel, Bryan A. (Technical Monitor)
2001-01-01
We present numerical results of the diffusion coefficients (DCs) in the coupled diffusion model derived in the preceding paper for a semiconductor quantum well. These include self and mutual DCs in the general two-component case, as well as density- and temperature-related DCs under the single-component approximation. The results are analyzed from the viewpoint of free Fermi gas theory with many-body effects incorporated. We discuss in detail the dependence of these DCs on densities and temperatures in order to identify different roles played by the free carrier contributions including carrier statistics and carrier-LO phonon scattering, and many-body corrections including bandgap renormalization and electron-hole (e-h) scattering. In the general two-component case, it is found that the self- and mutual- diffusion coefficients are determined mainly by the free carrier contributions, but with significant many-body corrections near the critical density. Carrier-LO phonon scattering is dominant at low density, but e-h scattering becomes important in determining their density dependence above the critical electron density. In the single-component case, it is found that many-body effects suppress the density coefficients but enhance the temperature coefficients. The modification is of the order of 10% and reaches a maximum of over 20% for the density coefficients. Overall, temperature elevation enhances the diffusive capability or DCs of carriers linearly, and such an enhancement grows with density. Finally, the complete dataset of various DCs as functions of carrier densities and temperatures provides necessary ingredients for future applications of the model to various spatially inhomogeneous optoelectronic devices.
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla
2015-11-01
Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, which typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, −0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ −0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit
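The mismatches quoted above are one minus the normalized overlap between two waveforms. A toy white-noise version of that computation, with no detector noise curve, no maximization over time and phase shifts, and synthetic sinusoids standing in for model waveforms:

```python
import numpy as np

def overlap(h1, h2):
    """Normalized inner product <h1|h2> / sqrt(<h1|h1><h2|h2>) with a
    flat (white) noise weighting; real waveforms on a common time grid."""
    return np.dot(h1, h2) / np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))

def mismatch(h1, h2):
    """Mismatch = 1 - overlap; zero for identical waveforms."""
    return 1.0 - overlap(h1, h2)

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
h_a = np.sin(2.0 * np.pi * 30.0 * t)
h_b = np.sin(2.0 * np.pi * 30.0 * t + 0.1)  # small constant phasing error
```

For two equal-amplitude sinusoids differing by a constant phase φ, the overlap over whole periods is cos(φ), so even a 0.1 rad phasing error already produces a mismatch of about half a percent, the order of the SEOBNRv1/v2 disagreements quoted above.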
Owen, T.E.; Wardlaw, R.
1991-01-01
Verifying the velocity accuracy of a GPS receiver or an integrated GPS/INS system in a dynamic environment is a difficult proposition when many of the commonly used reference systems have velocity uncertainties of the same order of magnitude as, or greater than, the GPS system. The results of flight tests aboard an aircraft, in which multiple reference systems simultaneously collected data to evaluate the accuracy of an integrated GPS/INS system, are reported. Emphasis is placed on obtaining high-accuracy estimates of the velocity error of the integrated system in order to verify that velocity accuracy is maintained during both linear and circular trajectories. Three different reference systems operating in parallel during the flight tests are used to independently determine the position and velocity of the aircraft: a transponder/interrogator ranging system, a laser tracker, and GPS carrier phase processing. Results obtained from these reference systems are compared against each other and against an integrated real-time differential GPS/INS system to arrive at a set of conclusions about the accuracy of the integrated system.
NASA Astrophysics Data System (ADS)
Barnes, T.
In this article we review numerical studies of the quantum Heisenberg antiferromagnet on a square lattice, which is a model of the magnetic properties of the undoped “precursor insulators” of the high temperature superconductors. We begin with a brief pedagogical introduction and then discuss zero and nonzero temperature properties and compare the numerical results to analytical calculations and to experiment where appropriate. We also review the various algorithms used to obtain these results, and discuss algorithm developments and improvements in computer technology which would be most useful for future numerical work in this area. Finally we list several outstanding problems which may merit further investigation.
Preliminary results of numerical investigations at SECARB Cranfield, MS field test site
NASA Astrophysics Data System (ADS)
Choi, J.; Nicot, J.; Meckel, T. A.; Chang, K.; Hovorka, S. D.
2008-12-01
The Southeast Regional Carbon Sequestration Partnership sponsored by DOE has chosen the Cranfield, MS field as a test site for its Phase II experiment. It will provide information on CO2 storage in oil and gas fields, in particular on storage permanence, storage capacity, and pressure buildup, as well as on sweep efficiency. The 10,300-ft-deep reservoir produced 38 MMbbl of oil and 677 MMSCF of gas from the 1940's to the 1960's and is being retrofitted by Denbury Resources for tertiary recovery. CO2 injection started in July 2008 with a scheduled ramp-up during the next few months. The Cranfield modeling team selected the northern section of the field for development of a numerical model using the multiphase-flow, compositional CMG-GEM software. Model structure was determined through interpretation of logs from old and recently-drilled wells and geophysical data. PETREL was used to upscale and export permeability and porosity data to the GEM model. Preliminary sensitivity analyses determined that relative permeability parameters and oil composition had the largest impact on CO2 behavior. The first modeling step consisted of history-matching the total oil, gas, and water production out of the reservoir starting from its natural state to determine the approximate current conditions of the reservoir. The fact that pressure recovered in the 40-year interval since the end of initial production helps constrain boundary conditions. In a second step, the modeling focused on understanding pressure evolution and CO2 transport in the reservoir. The presentation will introduce preliminary results of the simulations and confirm/explain discrepancies with field measurements.
NASA Astrophysics Data System (ADS)
Gliko, A. O.; Molodenskii, S. M.
2015-01-01
) are not only capable of significantly changing the magnitude of the radial displacements of the geoid but also altering their sign. Moreover, even in the uniform Earth's model, the effects of sphericity of its external surface and self-gravitation can also provide a noticeable contribution, which determines the signs of the coefficients in the expansion of the geoid's shape in the lower-order spherical functions. In order to separate these effects, below we present the results of the numerical calculations of the total effects of thermoelastic deformations for the two simplest models of spherical Earth without and with self-gravitation with constant density and complex-valued shear moduli and for the real Earth PREM model (which describes the depth distributions of density and elastic moduli for the high-frequency oscillations disregarding the rheology of the medium) and the modern models of the mantle rheology. Based on the calculations, we suggest the simplest interpretation of the present-day data on the relationship between the coefficients of spherical expansion of temperature, velocities of seismic body waves, the topography of the Earth's surface and geoid, and the data on the correlation between the lower-order coefficients in the expansions of the geoid and the corresponding terms of the expansions of horizontal inhomogeneities in seismic velocities. The suggested interpretation includes the estimates of the sign and magnitude for the ratios between the first coefficients of spherical expansions of seismic velocities, topography, and geoid. The presence of this correlation and the relationship between the signs and absolute values of these coefficients suggests that both the long-period oscillations of the geoid and the long-period variations in the velocities of seismic body waves are largely caused by thermoelastic deformations.
NASA Astrophysics Data System (ADS)
Heinze, Thomas; Galvan, Boris; Miller, Stephen
2013-04-01
Fluid-rock interactions are mechanically fundamental to many earth processes, including fault zones and hydrothermal/volcanic systems, and to future green energy solutions such as enhanced geothermal systems and carbon capture and storage (CCS). Modeling these processes is challenging because of the strong coupling between rock fracture evolution and the consequent large changes in the hydraulic properties of the system. In this talk, we present results of a numerical model that includes poro-elastic plastic rheology (with hardening, softening, and damage) coupled to a non-linear diffusion model for fluid pressure propagation and two-phase fluid flow. Our plane strain model is based on the poro-elastic plastic behavior of porous rock and is advanced with hardening, softening and damage using the Mohr-Coulomb failure criteria. The effective stress model of Biot (1944) is used for coupling the pore pressure and the rock behavior. Frictional hardening and cohesion softening are introduced following Vermeer and de Borst (1984), with the angle of internal friction and the cohesion as functions of the principal strain rates. The scalar damage coefficient is assumed to be a linear function of the hardening parameter. Fluid injection is modeled as a two-phase mixture of water and air using the Richards equation. The theoretical model is solved using finite differences on a staggered grid. The model is benchmarked with laboratory-scale experiments in which fluid is injected from below into a critically stressed, dry sandstone (Stanchits et al. 2011). We simulate three experiments: a) the failure of a dry specimen due to biaxial compressive loading, b) the propagation of a low-pressure fluid front induced from the bottom in a critically stressed specimen, and c) the failure of a critically stressed specimen due to a high-pressure fluid intrusion. Comparison of model results with the fluid injection experiments shows that the model captures most of the experimental
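The failure logic in such a model can be illustrated with the two standard ingredients named above: Biot effective stress and a Mohr-Coulomb yield function. A minimal sketch of injection-induced failure; the material parameters below are illustrative, not the sandstone values of Stanchits et al.:

```python
import math

def mohr_coulomb_yield(sigma1, sigma3, p, cohesion, phi_deg, biot=1.0):
    """Mohr-Coulomb yield function in terms of effective principal
    stresses sigma' = sigma - biot*p (compression positive):
      f = (s1' - s3')/2 - (s1' + s3')/2 * sin(phi) - c * cos(phi)
    f >= 0 means failure."""
    phi = math.radians(phi_deg)
    s1, s3 = sigma1 - biot * p, sigma3 - biot * p
    return 0.5 * (s1 - s3) - 0.5 * (s1 + s3) * math.sin(phi) - cohesion * math.cos(phi)

def failure_pressure(sigma1, sigma3, cohesion, phi_deg, biot=1.0):
    """Pore pressure that brings the element to failure: since f depends
    linearly on p (f = f(0) + biot*p*sin(phi)), solve f(p) = 0 directly."""
    phi = math.radians(phi_deg)
    f0 = mohr_coulomb_yield(sigma1, sigma3, 0.0, cohesion, phi_deg, biot)
    return -f0 / (biot * math.sin(phi))
```

Raising the pore pressure shifts the Mohr circle toward the failure envelope without changing its radius, which is why fluid injection can fail a specimen that is stable when dry.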
Chaotic scattering in an open vase-shaped cavity: Topological, numerical, and experimental results
NASA Astrophysics Data System (ADS)
Novick, Jaison Allen
We present a study of trajectories in a two-dimensional, open, vase-shaped cavity in the absence of forces. The classical trajectories freely propagate between elastic collisions. Bound trajectories, regular scattering trajectories, and chaotic scattering trajectories are present in the vase. Most importantly, we find that classical trajectories passing through the vase's mouth escape without return. In our simulations, we propagate bursts of trajectories from point sources located along the vase walls. We record the time for escaping trajectories to pass through the vase's neck. Constructing a plot of escape time versus the initial launch angle for the chaotic trajectories reveals a vastly complicated recursive structure, or fractal. This fractal structure can be understood by a suitable coordinate transform. Reducing the dynamics to two dimensions reveals that the chaotic dynamics are organized by a homoclinic tangle, which is formed by the union of infinitely long, intersecting stable and unstable manifolds. This study is broken down into three major components. We first present a topological theory that extracts the essential topological information from a finite subset of the tangle and encodes this information in a set of symbolic dynamical equations. These equations can be used to predict a topologically forced minimal subset of the recursive structure seen in numerically computed escape time plots. We present three applications of the theory and compare these predictions to our simulations. The second component is a presentation of an experiment in which the vase was constructed from Teflon walls using an ultrasound transducer as a point source. We compare the escaping signal to a classical simulation and find agreement between the two. Finally, we present an approximate solution to the time-independent Schrödinger equation for escaping waves. We choose a set of points at which to evaluate the wave function and interpolate trajectories connecting the source
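The escape-time construction described above can be reproduced in miniature with a much simpler open billiard. A sketch using a unit square with a hole in the top wall in place of the vase; the geometry, hole position, and source location are illustrative only:

```python
import math

def escape_time(x, y, angle, hole=(0.4, 0.6), t_max=200.0):
    """Propagate a unit-speed trajectory with specular wall reflections
    inside the unit square, which is open along `hole` on the top wall.
    Returns the escape time, or None if still inside at t_max."""
    vx, vy = math.cos(angle), math.sin(angle)
    t = 0.0
    while t < t_max:
        hits = []                                   # time to each reachable wall
        if vx > 0.0:
            hits.append(((1.0 - x) / vx, "side"))
        elif vx < 0.0:
            hits.append((-x / vx, "side"))
        if vy > 0.0:
            hits.append(((1.0 - y) / vy, "top"))
        elif vy < 0.0:
            hits.append((-y / vy, "bottom"))
        dt, wall = min(hits)
        x, y, t = x + vx * dt, y + vy * dt, t + dt
        if wall == "top" and hole[0] < x < hole[1]:
            return t                                # passed through the opening
        if wall == "side":
            vx = -vx                                # specular reflection
        else:
            vy = -vy
    return None

# Escape time as a function of launch angle from a point source: the
# analogue of the escape-time-versus-angle plot described above
times = [escape_time(0.5, 0.2, a * math.pi / 1024) for a in range(1, 1024)]
```

Even in this crude geometry the escape time is a wildly discontinuous function of launch angle; in the vase, the homoclinic tangle organizes those discontinuities into the fractal structure the study analyzes.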
Nonlinearities of waves propagating over a mild-slope beach: laboratory and numerical results
NASA Astrophysics Data System (ADS)
Rocha, Mariana V. L.; Michallet, Hervé; Silva, Paulo A.; Cienfuegos, Rodrigo
2014-05-01
As surface gravity waves propagate from deeper waters to the shore, their shape changes, primarily due to nonlinear wave interactions and further on due to breaking. The nonlinear effects amplify the higher harmonics and cause the oscillatory flow to transform from nearly sinusoidal in deep water, through velocity-skewed in the shoaling zone, to velocity-asymmetric in the inner-surf and swash zones. In addition to short-wave nonlinearities, the presence of long waves and wave groups also results in a supplementary wave-induced velocity and influences the short waves. Further, long waves can themselves contribute to velocity skewness and asymmetry at low frequencies, particularly for very dissipative mild-slope beach profiles, where long-wave shoaling and breaking can also occur. The Hydralab-IV GLOBEX experiments were performed in a 110-m-long flume with a 1/80 rigid-bottom slope and allowed the acquisition of high-resolution free-surface elevation and velocity data, obtained during 90-min-long simulations of random and bichromatic wave conditions, and also of a monochromatic long wave (Ruessink et al., Proc. Coastal Dynamics, 2013). The measurements are compared to numerical results obtained with the SERR-1D Boussinesq-type model, which is designed to reproduce the complex dynamics of high-frequency wave propagation, including the energy transfer mechanisms that enhance infragravity-wave generation. The evolution of skewness and asymmetry along the beach profile up to the swash zone is analyzed relative to that of the wave groupiness and long-wave propagation. Some particularities of bichromatic wave groups are further investigated, such as partially-standing long-wave patterns and short-wave reformation after the first breakpoint, which is seen to influence particularly the skewness trends. Decreased spectral width (for random waves) and increased modulation (for bichromatic wave groups) are shown to enhance energy transfers between super- and sub
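The skewness and asymmetry statistics analyzed above have standard third-moment definitions. A sketch of how they can be computed from a velocity (or surface-elevation) time series, using the Hilbert transform for asymmetry; the synthetic signals below are illustrative, and sign conventions for asymmetry vary in the literature:

```python
import numpy as np
from scipy.signal import hilbert

def skewness_asymmetry(u):
    """Wave skewness Sk = <u^3> / <u^2>^(3/2) and asymmetry
    As = <H(u)^3> / <u^2>^(3/2), with H the Hilbert transform of the
    demeaned signal; an integer number of wave periods is assumed."""
    u = u - u.mean()
    norm = np.mean(u ** 2) ** 1.5
    sk = np.mean(u ** 3) / norm
    asym = np.mean(np.imag(hilbert(u)) ** 3) / norm
    return sk, asym

t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
u_skewed = np.cos(t) + 0.5 * np.cos(2.0 * t)            # peaked, front-back symmetric
u_asym = np.cos(t) + 0.5 * np.cos(2.0 * t + np.pi / 2)  # sawtooth-like, pitched forward
```

The two synthetic signals isolate the two effects: a bound second harmonic in phase with the primary produces pure skewness (peaked crests), while a 90°-shifted harmonic produces pure asymmetry (a forward-pitched, sawtooth-like shape).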
Dameron, O; Gibaud, B; Morandi, X
2004-06-01
The anatomy of the human cerebral cortex describes brain organization at the scale of gyri and sulci. It provides landmarks for neurosurgery as well as localization support for functional data analysis and inter-subject data comparison. Existing models of cortical anatomy either rely on image labeling but fail to represent variability and structural properties, or rely on a conceptual model but miss the inherently 3D nature and relations of anatomical structures. This study was therefore conducted to propose a model of sulco-gyral anatomy for the healthy human brain. We hypothesized that both numeric knowledge (i.e., image-based) and symbolic knowledge (i.e., concept-based) have to be represented and coordinated. In addition, the representation of this knowledge should be application-independent in order to be usable in various contexts. We therefore devised a symbolic model describing the specialization, composition and spatial organization of cortical anatomical structures. We also collected numeric knowledge, such as 3D models of shape and shape variation, about cortical anatomical structures. For each numeric piece of knowledge, a companion file describes the concept it refers to and the nature of the relationship. Demonstration software performs a mapping between the numeric and the symbolic aspects for browsing the knowledge base. PMID:15118839
Ohno, Munekazu; Takaki, Tomohiro; Shibuta, Yasushi
2016-01-01
We present the variational formulation of a quantitative phase-field model for isothermal low-speed solidification in a binary dilute alloy with diffusion in the solid. In the present formulation, cross-coupling terms between the phase field and composition field, including the so-called antitrapping current, naturally arise in the time evolution equations. One of the essential ingredients in the present formulation is the utilization of tensor diffusivity instead of scalar diffusivity. In an asymptotic analysis, it is shown that the correct mapping between the present variational model and a free-boundary problem for alloy solidification with an arbitrary value of solid diffusivity is successfully achieved in the thin-interface limit due to the cross-coupling terms and tensor diffusivity. Furthermore, we investigate the numerical performance of the variational model and also its nonvariational versions by carrying out two-dimensional simulations of free dendritic growth. The nonvariational model with tensor diffusivity shows excellent convergence of results with respect to the interface thickness. PMID:26871136
NASA Astrophysics Data System (ADS)
Suzuki, Naoya; Donelan, Mark A.; Plant, William J.
2007-04-01
Observed probability distributions of QuikSCAT scatterometer cross sections are matched to expected distributions calculated using a Geophysical Model Function (GMF) with a wind speed threshold and inherent wind variability on the subfootprint scale and also on the grid scales of numerical weather prediction (NWP) models. Two independent approaches are taken: in one, the 3-D sample size is 2° × 2° and 1 day, and the wind speed is assumed to be Rayleigh distributed while directions relative to the QuikSCAT antenna directions are assumed to be uniform; in the other, the data are binned by NWP-analyzed wind speeds into 1 m/s bins and sample sizes of the grid area of the NWP models. Using the results, the variability on these scales is mapped as a function of wind speed, latitude, and season in an effort to establish a global climatology of wind-speed variability. On the basis of the stable calibration of QuikSCAT, the bias of surface winds produced by the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF) is shown to be substantial and strongly dependent on wind speed, latitude, and season. Changes in wind-speed variability with changes in averaging scale are further explored, and estimates of the kinetic energy spectra of the mesoscale to basin-scale winds are determined.
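As a toy illustration of the distributional assumptions in the first approach (Rayleigh-distributed speeds, uniform relative directions), with an invented mean wind speed rather than actual QuikSCAT values:

```python
import numpy as np

rng = np.random.default_rng(0)
U_mean = 8.0                                # assumed mean subfootprint speed, m/s
# Rayleigh scale parameter from the mean: E[U] = sigma * sqrt(pi/2)
sigma = U_mean / np.sqrt(np.pi / 2.0)
speeds = rng.rayleigh(sigma, size=200_000)
directions = rng.uniform(0.0, 2*np.pi, size=200_000)  # uniform relative azimuth
```

A large sample of such draws reproduces the assumed mean speed, which is the kind of consistency the matching of observed and expected cross-section distributions exploits.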
Numerical Analysis of Large Telescopes in Terms of Induced Loads and Resulting Geometrical Stability
NASA Astrophysics Data System (ADS)
Upnere, S.; Jekabsons, N.; Joffe, R.
2013-03-01
Comprehensive numerical studies, involving structural and Computational Fluid Dynamics (CFD) analysis, have been carried out at the Engineering Research Institute "Ventspils International Radio Astronomy Center" (VIRAC) of the Ventspils University College to investigate the effects of gravitational and wind loads on the performance of the large ground-based radio telescope RT-32. Gravitational distortions appear to be the main limiting factor for the reflector performance in everyday operation. Random loads caused by wind gusts (unavoidable at zenith) contribute to fatigue accumulation.
Chaotic structures of nonlinear magnetic fields. I - Theory. II - Numerical results
NASA Technical Reports Server (NTRS)
Lee, Nam C.; Parks, George K.
1992-01-01
A study of the evolutionary properties of nonlinear magnetic fields in flowing MHD plasmas is presented to illustrate that nonlinear magnetic fields may involve chaotic dynamics. It is shown how a suitable transformation of the coupled equations leads to Duffing's form, suggesting that the behavior of the general solution can also be chaotic. Numerical solutions of the nonlinear magnetic field equations that have been cast in the form of Duffing's equation are presented.
Coupled transport processes in semipermeable media. Part 2: Numerical method and results
NASA Astrophysics Data System (ADS)
Jacobsen, Janet S.; Carnahan, Chalon L.
1990-04-01
A numerical simulator has been developed to investigate the effects of coupled processes on heat and mass transport in semipermeable media. The governing equations on which the simulator is based were derived using the thermodynamics of irreversible processes. The equations are nonlinear and have been solved numerically using the n-dimensional Newton's method. As an example of an application, the numerical simulator has been used to investigate heat and solute transport in the vicinity of a heat source buried in a saturated clay-like medium, in part to study solute transport in bentonite packing material surrounding a nuclear waste canister. The coupled processes considered were thermal filtration, thermal osmosis, chemical osmosis and ultrafiltration. In the simulations, heat transport by coupled processes was negligible compared to heat conduction, but pressure and solute migration were affected. Solute migration was retarded relative to the uncoupled case when only chemical osmosis was considered. When both chemical osmosis and thermal osmosis were included, solute migration was enhanced.
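The n-dimensional Newton iteration on which the simulator relies can be sketched generically; the two-equation system below is a toy stand-in, not the coupled transport equations of the paper:

```python
import numpy as np

def newton_nd(f, jac, x0, tol=1e-12, maxit=50):
    """n-dimensional Newton's method: iterate x <- x - J(x)^-1 f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # Solve the linearized system J(x) dx = f(x) rather than inverting J
        x = x - np.linalg.solve(jac(x), fx)
    return x

# Toy nonlinear system:  x^2 + y^2 = 4,  x*y = 1
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
jac = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
root = newton_nd(f, jac, [2.0, 0.3])
```

The quadratic convergence of Newton's method makes it a natural choice for the nonlinear coupled-flux equations, provided the Jacobian can be assembled at each step.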
On the Standardization of Vertical Accuracy Figures in Dems
NASA Astrophysics Data System (ADS)
Casella, V.; Padova, B.
2013-01-01
Digital Elevation Models (DEMs) play a key role in hydrological risk prevention and mitigation: hydraulic numerical simulations and slope and aspect maps all rely heavily on DEMs. Hydraulic numerical simulations require the DEM used to have a defined accuracy in order to give reliable results. But are DEM accuracy figures clearly and uniquely defined? The paper focuses on some issues concerning DEM accuracy definition and assessment. Two DEM accuracy definitions can be found in the literature: accuracy at the interpolated point and accuracy at the nodes. The former can be estimated by means of randomly distributed check points, while the latter by means of check points coincident with the nodes. The two accuracy figures are often treated as equivalent, but they are not: given the same DEM, assessing it with one approach or the other gives different results. The paper performs an in-depth characterization of the two figures and proposes standardization coefficients.
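The difference between the two figures can be made concrete with a toy experiment (the synthetic surface and the 10 cm node error are assumptions): white noise at the nodes is partially averaged out by bilinear interpolation, so checking the same DEM at the nodes and at interpolated points yields different RMSE values:

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Hypothetical 1-m-grid DEM of a smooth "true" surface, with noisy nodes
rng = np.random.default_rng(1)
x = y = np.arange(0, 50, 1.0)
X, Y = np.meshgrid(x, y)
true = lambda px, py: 0.01 * px**2 + 0.5 * py
dem = true(X, Y) + rng.normal(0, 0.10, X.shape)     # 10 cm node error

# Accuracy at the nodes: check points coincide with the grid nodes
rmse_nodes = rmse(dem, true(X, Y))

# Accuracy at interpolated points: random check points, bilinear interpolation
px = rng.uniform(0, 48.9, 2000)
py = rng.uniform(0, 48.9, 2000)
i, j = np.floor(py).astype(int), np.floor(px).astype(int)
fy, fx = py - i, px - j
interp = (dem[i, j]*(1-fx)*(1-fy) + dem[i, j+1]*fx*(1-fy)
          + dem[i+1, j]*(1-fx)*fy + dem[i+1, j+1]*fx*fy)
rmse_interp = rmse(interp, true(px, py))
```

In this synthetic case the interpolated-point figure comes out smaller than the node figure, illustrating why the two definitions cannot be used interchangeably.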
NASA Astrophysics Data System (ADS)
Morvan, D.
2010-12-01
This paper presents a model of the behaviour of forest fires based on a multiphase formulation. The approach consists in solving the balance equations (mass, momentum, energy, chemical species, radiation intensity, ...) governing the coupled system formed by the vegetation and the surrounding atmosphere. The vegetation is represented as a collection of solid fuel particles, grouped in families, each characterized by its own set of physical variables (mass fractions of water, dry matter and char, temperature, volume fraction, density, surface-area-to-volume ratio, ...) necessary to describe the evolution of its state during fire propagation. Numerical results are then presented and compared with available experimental data. Particular attention was paid to simulating surface fires propagating through grassland and Mediterranean shrubland, for which a large experimental database exists. We conclude by presenting some recent results obtained in a more operational context, simulating the interaction between two fire fronts (head fire and backfire) under conditions similar to those encountered during a suppression-fire operation.
Bauman, R A; Widholm, J J; Petras, J M; McBride, K; Long, J B
2000-08-01
The purpose of this study was to determine the impact of secondary hypoxemia on visual discrimination accuracy after parasagittal fluid percussion injury (FPI). Rats lived singly in test cages, where they were trained to repeatedly execute a flicker-frequency visual discrimination for food. After learning was complete, all rats were surgically prepared and then retested over the following 4-5 days to ensure recovery to presurgery levels of performance. Rats were then assigned to one of three groups [FPI + Hypoxia (IH), FPI + Normoxia (IN), or Sham Injury + Hypoxia (SH)] and were anesthetized with halothane delivered by compressed air. Immediately after injury or sham injury, rats in groups IH and SH were switched to a 13% O2 source to continue halothane anesthesia for 30 min before being returned to their test cages. Anesthesia for rats in group IN was maintained using compressed air for 30 min after injury. FPI significantly reduced visual discrimination accuracy and food intake, and increased incorrect choices. Thirty minutes of immediate posttraumatic hypoxemia significantly (1) exacerbated the FPI-induced reductions of visual discrimination accuracy and food intake, (2) further increased numbers of incorrect choices, and (3) delayed the progressive recovery of visual discrimination accuracy. Thionine stains of midbrain coronal sections revealed that, in addition to the loss of neurons seen in several thalamic nuclei following FPI, cell loss in the ipsilateral dorsal lateral geniculate nucleus (dLG) was significantly greater after FPI and hypoxemia than after FPI alone. In contrast, neuropathological changes were not evident following hypoxemia alone. These results show that, although hypoxemia alone was without effect, posttraumatic hypoxemia exacerbates FPI-induced reductions in visual discrimination accuracy and secondary hypoxemia interferes with control of the rat's choices by flicker frequency, perhaps in part as a result of neuronal loss and fiber
Numerical model of the lowermost Mississippi River as an alluvial-bedrock reach: preliminary results
NASA Astrophysics Data System (ADS)
Viparelli, E.; Nittrouer, J. A.; Mohrig, D. C.; Parker, G.
2012-12-01
Recent field studies reveal that the river bed of the Lower Mississippi River is characterized by a transition from alluvium (upstream) to bedrock (downstream). In particular, in the downstream 250 km of the river, fields of actively migrating bedforms alternate with deep zones where a consolidated substratum is exposed. Here we present a first version of a one-dimensional numerical model able to capture the alluvial-bedrock transition in the lowermost Mississippi River, defined herein as the 500-km reach between the Old River Control Structure and the Gulf of Mexico. The flow is assumed to be steady, and the cross-section is divided in two regions, the river channel and the floodplain. The streamwise variation of channel and floodplain geometry is described with synthetic relations derived from field observations. Flow resistance in the river channel is computed with the formulation for low-slope, large sand bed rivers due to Wright and Parker, while a Chezy-type formulation is implemented on the floodplain. Sediment is modeled in terms of bed material and wash load. Suspended load is computed with the Wright-Parker formulation. This treatment allows either uniform sediment or a mixture of different grain sizes, and accounts for stratification effects. Bedload transport rates are estimated with the relation for sediment mixtures of Ashida and Michiue. Previous work documents reasonable agreement between these load relations and field measurements. Washload is routed through the system solving the equation of mass conservation of sediment in suspension in the water column. The gradual transition from the alluvial reach to the bedrock reach is modeled in terms of a "mushy" layer of specified thickness overlying the non-erodible substrate. In the case of a fully alluvial reach, the channel bed elevation is above this mushy layer, while in the case of partial alluvial cover of the substratum, the channel bed elevation is within the mushy layer. Variations in base
Ponderomotive stabilization of flute modes in mirrors Feedback control and numerical results
NASA Technical Reports Server (NTRS)
Similon, P. L.
1987-01-01
Ponderomotive stabilization of rigid plasma flute modes is numerically investigated by use of a variational principle, for a simple geometry, without eikonal approximation. While the near field of the studied antenna can be stabilizing, the far field has a small contribution only, because of large cancellation by quasi mode-coupling terms. The field energy for stabilization is evaluated and is a nonnegligible fraction of the plasma thermal energy. A new antenna design is proposed, and feedback stabilization is investigated. Their use drastically reduces power requirements.
Fanselau, R.W.; Thakkar, J.G.; Hiestand, J.W.; Cassell, D.
1981-03-01
The Comparative Thermal-Hydraulic Evaluation of Steam Generators program represents an analytical investigation of the thermal-hydraulic characteristics of four PWR steam generators. The analytical tool utilized in this investigation is the CALIPSOS code, a three-dimensional flow distribution code. This report presents the steady state thermal-hydraulic characteristics on the secondary side of a Westinghouse Model 51 steam generator. Details of the CALIPSOS model with accompanying assumptions, operating parameters, and transport correlations are identified. Comprehensive graphical and numerical results are presented to facilitate the desired comparison with other steam generators analyzed by the same flow distribution code.
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor
2013-04-01
Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted based on the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is
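The residual-based testing described here can be sketched on synthetic data: a least-squares plane fit to a noisy planar point cloud, followed by a Kolmogorov-Smirnov test of residual normality (plane coefficients and noise level are invented):

```python
import numpy as np
from scipy import stats

# Synthetic planar point cloud with assumed normal errors (all values invented)
rng = np.random.default_rng(2)
n = 5000
x, y = rng.uniform(0, 100, (2, n))
z = 0.2*x - 0.1*y + 5.0 + rng.normal(0, 0.1, n)

# Least-squares plane fit  z = a*x + b*y + c
A = np.column_stack([x, y, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
res = z - A @ coef

# Kolmogorov-Smirnov test of residual normality, parameters taken from the fit
stat, pval = stats.kstest(res, 'norm', args=(res.mean(), res.std()))
```

For normally distributed synthetic errors the null hypothesis survives, mirroring the paper's simulated case; on real terrain data the residual distribution typically departs from normality.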
NASA Astrophysics Data System (ADS)
Conti, Livia; De Gregorio, Paolo; Bonaldi, Michele; Borrielli, Antonio; Crivellari, Michele; Karapetyan, Gagik; Poli, Charles; Serra, Enrico; Thakur, Ram-Krishna; Rondoni, Lamberto
2012-06-01
We study experimentally, numerically, and theoretically the elastic response of mechanical resonators along which the temperature is not uniform, as a consequence of the onset of steady-state thermal gradients. Two experimental setups and designs are employed, both using low-loss materials. In both cases, we monitor the resonance frequencies of specific modes of vibration, as they vary along with variations of temperatures and of temperature differences. In one case, we consider the first longitudinal mode of vibration of an aluminum alloy resonator; in the other case, we consider the antisymmetric torsion modes of a silicon resonator. By defining the average temperature as the volume-weighted mean of the temperatures of the respective elastic sections, we find out that the elastic response of an object depends solely on it, regardless of whether a thermal gradient exists and, up to 10% imbalance, regardless of its magnitude. The numerical model employs a chain of anharmonic oscillators, with first- and second-neighbor interactions and temperature profiles satisfying Fourier's Law to a good degree. Its analysis confirms, for the most part, the experimental findings and it is explained theoretically from a statistical mechanics perspective with a loose notion of local equilibrium.
Estimation of geopotential from satellite-to-satellite range rate data: Numerical results
NASA Technical Reports Server (NTRS)
Thobe, Glenn E.; Bose, Sam C.
1987-01-01
A technique for high-resolution geopotential field estimation by recovering the harmonic coefficients from satellite-to-satellite range rate data is presented and tested against both a controlled analytical simulation of a one-day satellite mission (maximum degree and order 8) and then against a Cowell method simulation of a 32-day mission (maximum degree and order 180). Innovations include: (1) a new frequency-domain observation equation based on kinetic energy perturbations which avoids much of the complication of the usual Keplerian element perturbation approaches; (2) a new method for computing the normalized inclination functions which unlike previous methods is both efficient and numerically stable even for large harmonic degrees and orders; (3) the application of a mass storage FFT to the entire mission range rate history; (4) the exploitation of newly discovered symmetries in the block diagonal observation matrix which reduce each block to the product of (a) a real diagonal matrix factor, (b) a real trapezoidal factor with half the number of rows as before, and (c) a complex diagonal factor; (5) a block-by-block least-squares solution of the observation equation by means of a custom-designed Givens orthogonal rotation method which is both numerically stable and tailored to the trapezoidal matrix structure for fast execution.
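A generic Givens-rotation least-squares solve, in the spirit of innovation (5) though without the trapezoidal-structure tailoring of the paper, can be sketched as:

```python
import numpy as np

def givens_lstsq(A, b):
    """Least squares via Givens rotations: rotate away subdiagonal entries
    column by column, then back-substitute on the triangular factor."""
    R = A.astype(float).copy()
    d = b.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):
            if R[i, j] == 0.0:
                continue
            r = np.hypot(R[j, j], R[i, j])
            c, s = R[j, j] / r, R[i, j] / r
            # Orthogonal rotation of rows j and i zeroes out R[i, j]
            R[[j, i], j:] = [[c, s], [-s, c]] @ R[[j, i], j:]
            d[[j, i]] = [[c, s], [-s, c]] @ d[[j, i]]
    return np.linalg.solve(np.triu(R[:n, :n]), d[:n])

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 5))
b = rng.normal(size=40)
x = givens_lstsq(A, b)
```

Because each step is an orthogonal rotation, the residual norm is preserved exactly, which is what gives the method its numerical stability; exploiting a trapezoidal block structure, as in the paper, additionally reduces the operation count.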
Interaction of a mantle plume and a segmented mid-ocean ridge: Results from numerical modeling
NASA Astrophysics Data System (ADS)
Georgen, Jennifer E.
2014-04-01
Previous investigations have proposed that changes in lithospheric thickness across a transform fault, due to the juxtaposition of seafloor of different ages, can impede lateral dispersion of an on-ridge mantle plume. The application of this “transform damming” mechanism has been considered for several plume-ridge systems, including the Reunion hotspot and the Central Indian Ridge, the Amsterdam-St. Paul hotspot and the Southeast Indian Ridge, the Cobb hotspot and the Juan de Fuca Ridge, the Iceland hotspot and the Kolbeinsey Ridge, the Afar plume and the ridges of the Gulf of Aden, and the Marion/Crozet hotspot and the Southwest Indian Ridge. This study explores the geodynamics of the transform damming mechanism using a three-dimensional finite element numerical model. The model solves the coupled steady-state equations for conservation of mass, momentum, and energy, including thermal buoyancy and viscosity that is dependent on pressure and temperature. The plume is introduced as a circular thermal anomaly on the bottom boundary of the numerical domain. The center of the plume conduit is located directly beneath a spreading segment, at a distance of 200 km (measured in the along-axis direction) from a transform offset with length 100 km. Half-spreading rate is 0.5 cm/yr. In a series of numerical experiments, the buoyancy flux of the modeled plume is progressively increased to investigate the effects on the temperature and velocity structure of the upper mantle in the vicinity of the transform. Unlike earlier studies, which suggest that a transform always acts to decrease the along-axis extent of plume signature, these models imply that the effect of a transform on plume dispersion may be complex. Under certain ranges of plume flux modeled in this study, the region of the upper mantle undergoing along-axis flow directed away from the plume could be enhanced by the three-dimensional velocity and temperature structure associated with ridge
NASA Astrophysics Data System (ADS)
Blecka, Maria I.
2010-05-01
Passive remote spectrometric methods are important for examining the atmospheres of planets. Radiance spectra inform us about the thermodynamic parameters and composition of the atmospheres and surfaces. Spectral technology can be useful for detecting trace aerosols, such as biological substances (if present), in planetary environments. We discuss here some aspects of the spectroscopic search for aerosols and dust in planetary atmospheres. The possibility of detecting and identifying biological aerosols with a passive infrared spectrometer in an open-air environment is discussed. We present numerically simulated spectroscopic observations of the Earth's atmosphere, based on radiative transfer theory. Laboratory measurements of the transmittance of various kinds of aerosols, pollens and bacteria were used in the modeling.
NASA Technical Reports Server (NTRS)
Aveiro, H. C.; Hysell, D. L.; Caton, R. G.; Groves, K. M.; Klenzing, J.; Pfaff, R. F.; Stoneback, R.; Heelis, R. A.
2012-01-01
A three-dimensional numerical simulation of plasma density irregularities in the postsunset equatorial F region ionosphere leading to equatorial spread F (ESF) is described. The simulation evolves under realistic background conditions including bottomside plasma shear flow and vertical current. It also incorporates C/NOFS satellite data which partially specify the forcing. A combination of generalized Rayleigh-Taylor instability (GRT) and collisional shear instability (CSI) produces growing waveforms with key features that agree with C/NOFS satellite and ALTAIR radar observations in the Pacific sector, including features such as gross morphology and rates of development. The transient response of CSI is consistent with the observation of bottomside waves with wavelengths close to 30 km, whereas the steady state behavior of the combined instability can account for the 100+ km wavelength waves that predominate in the F region.
Numerical results on the transcendence of constants involving pi, e, and Euler's constant
NASA Technical Reports Server (NTRS)
Bailey, David H.
1988-01-01
The existence of simple polynomial equations (integer relations) for the constants e/pi, e + pi, log pi, gamma (Euler's constant), e exp gamma, gamma/e, gamma/pi, and log gamma is investigated by means of numerical computations. The recursive form of the Ferguson-Forcade algorithm (Ferguson and Forcade, 1979; Ferguson, 1986 and 1987) is implemented on the Cray-2 supercomputer at NASA Ames, applying multiprecision techniques similar to those described by Bailey (1988), except that FFTs are used instead of dual-prime-modulus transforms for multiplication. It is shown that none of the constants has an integer relation of degree eight or less with coefficients of Euclidean norm 10 to the 9th or less.
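What an integer relation is can be illustrated with a brute-force search over small coefficients; this is purely didactic and bears no resemblance in efficiency to the Ferguson-Forcade recursion used in the paper:

```python
import itertools
import math

def small_integer_relation(xs, bound=3, tol=1e-9):
    """Exhaustively search for integers a (not all zero, |a_i| <= bound) with
    a[0]*xs[0] + ... + a[n-1]*xs[n-1] ~ 0.  Illustrative only: real integer
    relation detection uses Ferguson-Forcade/PSLQ-style algorithms."""
    for a in itertools.product(range(-bound, bound + 1), repeat=len(xs)):
        if any(a) and abs(sum(c * x for c, x in zip(a, xs))) < tol:
            return a
    return None

# The golden ratio satisfies phi^2 = phi + 1, an integer relation of degree 2
phi = (1 + math.sqrt(5)) / 2
rel = small_integer_relation([1.0, phi, phi**2])
```

The search recovers the degree-2 relation satisfied by the golden ratio and, as expected, finds none for (1, pi) within the same small bound; the paper's computation rules out such relations for its constants up to degree 8 and coefficient norm 10^9.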
NASA Astrophysics Data System (ADS)
Bhagwat, Swetha; Kumar, Prayush; Barkett, Kevin; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilagyi, Bela; LIGO Collaboration
2016-03-01
Detection of gravitational waves involves extracting extremely weak signals from noisy data, and detection depends crucially on the accuracy of the signal models. The most accurate models of compact binary coalescence come from solving Einstein's equations numerically, without approximations. However, this is computationally formidable. As a more practical alternative, several analytic or semi-analytic approximations have been developed to model these waveforms. The work of Nitz et al. (2013), however, demonstrated that there is disagreement between these models. We present a careful follow-up study of the accuracies of different waveform families for spinning black hole-neutron star binaries, in the context of both detection and parameter estimation, and find SEOBNRv2 to be the most faithful model. Post-Newtonian models can be used for detection, but we find that they could lead to large parameter bias. Supported by National Science Foundation (NSF) Awards No. PHY-1404395 and No. AST-1333142.
NASA Astrophysics Data System (ADS)
Li, Baishou; Gao, Yujiu
2015-12-01
Information extracted from high-spatial-resolution remote sensing images has become one of the important data sources for updating large-scale GIS spatial databases. Given the large volume of regional high-spatial-resolution satellite imagery, monitoring building information with high-resolution remote sensing, extracting small-scale building information and analyzing its quality are important preconditions for applying high-resolution satellite image information. In this paper, a clustering segmentation classification evaluation method for high-resolution satellite images of typical rural buildings is proposed, based on the traditional K-means clustering algorithm. Separability and building-density factors are used to describe the image classification characteristics of a clustering window. The sensitivity of the factors influencing the clustering result is studied from the perspective of the separability between target and background spectra in the image. The study shows that sample size is an important factor influencing clustering accuracy and performance, that the pixel ratio of the objects in the images and the separation factor can be used to determine the specific impact of clustering-window subsets on clustering accuracy, and that the count of window target pixels (Nw) does not by itself affect clustering accuracy. The results can provide an effective reference for quality assessment of the segmentation and classification of high-spatial-resolution remote sensing images.
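A plain K-means pass of the kind the method builds on can be sketched as follows; the two synthetic spectral classes stand in for bright building and dark background pixels (all values invented, and the farthest-point initialisation is a choice made here for determinism, not taken from the paper):

```python
import numpy as np

def kmeans(pixels, k=2, iters=100):
    """Plain K-means on pixel feature vectors with farthest-point initialisation."""
    centers = [pixels[0]]
    for _ in range(k - 1):
        # Next seed: the pixel farthest from all current centres
        d = np.min(np.linalg.norm(pixels[:, None, :] - np.array(centers)[None, :, :],
                                  axis=2), axis=1)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)               # nearest-centre assignment
        new = np.array([pixels[labels == i].mean(axis=0) for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two synthetic 3-band spectral classes: bright "building" and dark "background"
rng = np.random.default_rng(1)
pix = np.vstack([rng.normal(0.8, 0.05, (500, 3)),
                 rng.normal(0.2, 0.05, (500, 3))])
labels, centers = kmeans(pix, k=2)
```

With well-separated spectra the two classes are recovered cleanly; the paper's separability factor quantifies precisely how much such target/background spectral separation drives the attainable clustering accuracy.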
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations were evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6% respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound with maximum relative errors of 20 m/s and 11 m/s respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. PMID:26894840
NASA Technical Reports Server (NTRS)
Rigby, D. L.; Van Fossen, G. J.
1992-01-01
A study of the effect of spanwise variation on leading edge heat transfer is presented. Experimental and numerical results are given for a circular leading edge and for a 3:1 elliptical leading edge. It is demonstrated that increases in leading edge heat transfer due to spanwise variations in freestream momentum are comparable to those due to freestream turbulence.
Numerical study of the wind energy potential in Bulgaria - Some preliminary results
NASA Astrophysics Data System (ADS)
Jordanov, G.; Gadzhev, G.; Ganev, K.; Miloshev, N.; Syrakov, D.; Prodanova, M.
2012-10-01
The EU's new energy-efficiency policy requires that by 2020 16% of Bulgarian electricity be produced from renewable sources. Wind is one such renewable source. The ecological benefits of all kinds of "green" energy are obvious. It is desirable, however, that the exploitation of renewable energy sources be as economically effective as possible. This means that the installation of the respective devices (wind farms, solar farms, etc.) should be based on a detailed and reliable evaluation of the real potential of the country. A detailed study of the country's wind energy potential (spatial distribution, temporal variation, mean and extreme values, fluctuations and statistical characteristics, and an evaluation from the point of view of industrial applicability) cannot be made on the basis of the existing routine meteorological data alone: the measuring network is not dense enough to catch all the details of the local flow systems, and hence of the spatial distribution of the country's real wind energy potential. The measurement data therefore have to be supplemented by numerical modeling. The wind field simulations were performed by applying the 5th-generation PSU/NCAR Meso-Meteorological Model MM5 for the years 2000-2007, with a spatial resolution of 3 km over Bulgaria. Some preliminary evaluations of the country's wind energy potential, based on the simulation output, are demonstrated in the paper.
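Once simulated wind series are available, a first-cut potential estimate is the mean kinetic power density (1/2)*rho*mean(v^3); the series below is a synthetic stand-in for MM5 grid-point output, with assumed Weibull shape/scale and cut-in/cut-out thresholds:

```python
import numpy as np

rho = 1.225                                  # air density, kg/m^3
rng = np.random.default_rng(0)
# Stand-in for an MM5 grid-point series: one year of hourly Weibull-like winds
v = rng.weibull(2.0, size=8760) * 7.0        # shape 2, scale 7 m/s (assumed)

power_density = 0.5 * rho * np.mean(v**3)    # mean kinetic power flux, W/m^2
usable = (v >= 3.0) & (v <= 25.0)            # typical turbine cut-in/cut-out
capacity_hours = usable.mean() * 8760.0      # hours per year inside the window
```

Because the potential scales with the cube of the speed, rare strong-wind episodes dominate the estimate, which is why fluctuation statistics and extremes matter as much as the mean in such an assessment.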
Mazza, Fabio; Vulcano, Alfonso
2008-07-08
For a widespread application of dissipative braces to protect framed buildings against seismic loads, practical and reliable design procedures are needed. In this paper a design procedure based on the Direct Displacement-Based Design approach is adopted, assuming the elastic lateral storey-stiffness of the damped braces proportional to that of the unbraced frame. To check the effectiveness of the design procedure, presented in an associated paper, a six-storey reinforced concrete plane frame, representative of a medium-rise symmetric framed building, is considered as the primary test structure; this structure, designed in a medium-risk region, is supposed to be retrofitted as in a high-risk region, by insertion of diagonal braces equipped with hysteretic dampers. A numerical investigation is carried out to study the nonlinear static and dynamic responses of the primary and the damped braced test structures, using step-by-step procedures described in the associated paper mentioned above; the behaviour of frame members and hysteretic dampers is idealized by bilinear models. Real and artificial accelerograms, matching the EC8 response spectrum for a medium soil class, are considered for the dynamic analyses.
Accretion of rotating fluids by barytropes - Numerical results for white-dwarf models
NASA Technical Reports Server (NTRS)
Durisen, R. H.
1977-01-01
Numerical sequences of rotating axisymmetric nonmagnetic equilibrium models are constructed which represent the evolution of a barytropic star as it accretes material from a rotating medium. Two accretion geometries are considered - one approximating accretion from a rotating cloud and the other, accretion from a Keplerian disk. It is assumed that some process, such as Ekman spin-up or nonequilibrium oscillations, maintains nearly constant angular velocity along cylinders about the rotation axis. Transport of angular momentum in the cylindrically radial direction by viscosity is included. Fluid instabilities and other physical processes leading to enhancement of this transport are discussed. Particular application is made to zero-temperature white-dwarf models, using the degenerate electron equation of state. An initially nonrotating 0.566-solar-mass white dwarf is followed during the accretion of more than one solar mass of material. Applications to degenerate stellar cores, to mass-transfer binary systems containing white dwarfs, such as novae and dwarf novae, to Type I supernovae, and to galactic X-ray sources are considered.
Preliminary Results from Numerical Experiments on the Summer 1980 Heat Wave and Drought
NASA Technical Reports Server (NTRS)
Wolfson, N.; Atlas, R.; Sud, Y. C.
1985-01-01
During the summer of 1980, a prolonged heat wave and drought affected the United States. A preliminary set of experiments has been conducted to study the effect of varying boundary conditions on the GLA model simulation of the heat wave. Five 10-day numerical integrations with three different specifications of boundary conditions were carried out: a control experiment which utilized climatological boundary conditions, an SST experiment which utilized summer 1980 sea-surface temperatures in the North Pacific, but climatological values elsewhere, and a Soil Moisture experiment which utilized the values of Mintz-Serafini for the summer, 1980. The starting dates for the five forecasts were 11 June, 7 July, 21 July, 22 August, and 6 September of 1980. These dates were specifically chosen as days when a heat wave was already established in order to investigate the effect of soil moistures or North Pacific sea-surface temperatures on the model's ability to maintain the heat wave pattern. The experiments were evaluated in terms of the heat wave index for the South Plains, North Plains, Great Plains and the entire U.S. In addition a subjective comparison of map patterns has been performed.
NASA Astrophysics Data System (ADS)
Szeremley, Daniel; Mussenbrock, Thomas; Brinkmann, Ralf Peter; Zimmermanns, Marc; Rolfes, Ilona; Eremin, Denis; Ruhr-University Bochum, Theoretical Electrical Engineering Team; Ruhr-University Bochum, Institute of Microwave Systems Team
2015-09-01
In recent years the market has shown a growing demand for bottles made of polyethylene terephthalate (PET). Fast and efficient sterilization processes, as well as barrier coatings to decrease gas permeation, are therefore required. A specialized microwave plasma source - referred to as the plasmaline - has been developed to allow the deposition of thin films of, e.g., silicon oxide on the inner surface of such PET bottles. The plasmaline is a coaxial waveguide combined with a gas inlet, which is inserted into the empty bottle and initiates a reactive plasma. To optimize and control the different surface processes, it is essential to fully understand the microwave power coupling to the plasma, the related heating of electrons inside the bottle, and thus the electromagnetic wave propagation along the plasmaline. In this contribution, we present a detailed dispersion analysis based on a numerical approach. We study how, and under which conditions, guided wave modes propagate, if at all. The authors gratefully acknowledge the financial support of the German Research Foundation (DFG) within the framework of the collaborative research centre TRR87.
Recent results from numerical models of the Caribbean Sea and Gulf of Mexico: Do they all agree?
NASA Astrophysics Data System (ADS)
Sheinbaum, J.
2013-05-01
A great variety of numerical models of the Caribbean Sea and Gulf of Mexico have been developed over the years. They all reproduce the basic features of the circulation in the region but do not necessarily agree on the dynamics that explain them. We review recent results related to: 1) semiannual and interannual eddy variability in the Caribbean and its possible role in determining the extension of the western Atlantic warm pool; 2) the Loop Current and its eddy shedding dynamics; and 3) the deep circulation in the Gulf of Mexico. Recent observations of inertial wave trapping by eddies suggest new avenues for numerical research and model comparisons.
Lee, Chia-Ching; Lin, Shang-Chih; Wu, Shu-Wei; Li, Yu-Ching; Fu, Ping-Yuen
2012-10-01
The holding power of the bone-screw interface is one of the key factors in the clinical performance of a screw design. The value of the holding power can be measured experimentally by pullout tests. Historically, some researchers have used the finite-element method to simulate the holding power of different screws. Among these studies, however, the assumed displacement of the screw withdrawal is unreasonably small (about 0.005-1.0 mm). In addition, the chosen numerical indices are quite different, including maximum stress, strain energy, and reaction force. This study systematically uses dental, traumatic, and spinal screws to experimentally measure and numerically simulate their bone-purchasing ability within synthetic bone. The testing results (pullout displacement and holding power) and numerical indices (maximum stress, total strain energy, and reaction forces) are used to calculate their correlation coefficients. The pullout displacement is divided into five regions from initial to final withdrawal. The experimental results demonstrate that the pullout displacement consistently occurs in the final region (0.6-1.6 mm) and is significantly higher than the values assumed in the literature. For all screw groups, the holding power measured within the initial region is not highly, or is even negatively, correlated with the experimental and numerical results within the final region. The simulation results show that the maximum stress only reflects the loads concentrated at some local site(s) and is the index least correlated with the measured holding power. Comparatively, both energy and force are more global indices for correlating with the gross failure at the bone-screw interfaces. However, the energy index is not suitable for screw groups with rather tiny threads compared with the other specifications. In conclusion, the underestimated displacement leads to erroneous results in the screw-pullout simulation. Among three numerical indices the reaction
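The correlation analysis described above can be sketched as follows. The arrays are invented stand-ins for per-specimen results (they are not data from the study); the Pearson coefficient is the standard choice for this kind of index-versus-measurement comparison:

```python
import numpy as np

# Hypothetical per-screw values: measured holding power from pullout
# tests versus the reaction-force index from a finite-element model.
holding_power = np.array([820.0, 640.0, 910.0, 450.0, 700.0])   # N, measured
reaction_force = np.array([800.0, 615.0, 905.0, 470.0, 690.0])  # N, simulated

# Pearson correlation coefficient between the index and the measurement
r = np.corrcoef(holding_power, reaction_force)[0, 1]
print(round(r, 3))
```

A global index such as reaction force tracks the measurement closely in this toy case; repeating the same computation for maximum stress would typically yield a lower coefficient, which is the comparison the study makes.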
Hidden modes in open disordered media: analytical, numerical, and experimental results
NASA Astrophysics Data System (ADS)
Bliokh, Yury P.; Freilikher, Valentin; Shi, Z.; Genack, A. Z.; Nori, Franco
2015-11-01
We explore numerically, analytically, and experimentally the relationship between quasi-normal modes (QNMs) and transmission resonance (TR) peaks in the transmission spectrum of one-dimensional (1D) and quasi-1D open disordered systems. It is shown that for weak disorder there exist two types of eigenstates: ordinary QNMs, which are associated with a TR, and hidden QNMs, which do not exhibit peaks in transmission or within the sample. The distinctive feature of the hidden modes is that, unlike ordinary ones, their lifetimes remain constant over a wide range of the disorder strength. In this range, the averaged ratio of the number of transmission peaks N_res to the number of QNMs N_mod, N_res/N_mod, is insensitive to the type and degree of disorder and is close to the value √(2/5), which we derive analytically in the weak-scattering approximation. The physical nature of the hidden modes is illustrated in simple examples with a few scatterers. The analogy between ordinary and hidden QNMs and the segregation of superradiant states and trapped modes is discussed. When the coupling to the environment is tuned by external edge reflectors, the superradiance transition is reproduced. Hidden modes have also been found in microwave measurements in quasi-1D open disordered samples. The microwave measurements and a modal analysis of transmission in the crossover to localization in quasi-1D systems give a ratio N_res/N_mod close to √(2/5). In diffusive quasi-1D samples, however, N_res/N_mod falls as the effective number of transmission eigenchannels M increases. Once N_mod is divided by M, the ratio is again close to that found in 1D.
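A minimal numerical sketch of the kind of 1D transmission spectrum behind the transmission-peak count (this is not the authors' code; it is a generic random layered stack computed with standard 2x2 transfer matrices for the field amplitudes):

```python
import numpy as np

rng = np.random.default_rng(0)

def transmission(freqs, n_layers, d_layers):
    """Transmission spectrum of a 1D layered stack with vacuum on both sides."""
    T = np.empty_like(freqs)
    for i, k0 in enumerate(freqs):
        M = np.eye(2, dtype=complex)
        n_prev = 1.0
        for n, d in zip(n_layers, d_layers):
            # interface n_prev -> n, then propagation through the layer
            I = np.array([[n + n_prev, n - n_prev],
                          [n - n_prev, n + n_prev]], dtype=complex) / (2 * n)
            P = np.diag([np.exp(1j * n * k0 * d), np.exp(-1j * n * k0 * d)])
            M = P @ I @ M
            n_prev = n
        # final interface back to vacuum
        I = np.array([[1 + n_prev, 1 - n_prev],
                      [1 - n_prev, 1 + n_prev]], dtype=complex) / 2.0
        M = I @ M
        t = np.linalg.det(M) / M[1, 1]   # transmission amplitude
        T[i] = abs(t) ** 2
    return T

# weakly disordered stack: indices near 1, random thicknesses
n_layers = 1.0 + 0.3 * rng.random(30)
d_layers = 0.5 + rng.random(30)
freqs = np.linspace(1.0, 3.0, 2000)
T = transmission(freqs, n_layers, d_layers)
# count strict local maxima of T, i.e. transmission peaks in the band
peaks = int(np.sum((T[1:-1] > T[:-2]) & (T[1:-1] > T[2:])))
print(peaks)
```

Counting such peaks and comparing against an independent count of the modes is the kind of bookkeeping the N_res/N_mod statistic formalizes.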
Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong
2014-01-01
The quality of data plays an important role in business analysis and decision making, and accuracy is an important aspect of data quality. One necessary task for data quality management is therefore to evaluate the accuracy of the data. And because the accuracy of a data set as a whole may be low while that of a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. As far as we know, however, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the method to evaluate the precision and recall of basic queries, which reflect the relative accuracy of their results. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
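The precision/recall part of the abstract can be made concrete with a toy sketch. The tuples below are illustrative stand-ins, not the paper's framework: a query result is compared against a verified reference answer:

```python
# Precision: fraction of returned tuples that are correct.
# Recall: fraction of correct tuples that were returned.
def precision_recall(returned, correct):
    returned, correct = set(returned), set(correct)
    tp = len(returned & correct)  # true positives
    precision = tp / len(returned) if returned else 0.0
    recall = tp / len(correct) if correct else 0.0
    return precision, recall

returned = {("alice", 30), ("bob", 25), ("carol", 41)}   # query output
correct = {("alice", 30), ("carol", 40), ("dave", 33)}   # verified answer
p, r = precision_recall(returned, correct)
print(p, r)  # 1 of 3 returned tuples is correct; 1 of 3 true tuples found
```

The pair (precision, recall) is exactly the kind of per-query measurement that a relative-accuracy framework aggregates into a statistic over a query workload.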
Spiegal, R.J.
1984-08-01
For humans exposed to electromagnetic (EM) radiation, the resulting thermophysiologic response is not well understood. Because it is unlikely that this information will be determined from quantitative experimentation, it is necessary to develop theoretical models which predict the resultant thermal response after exposure to EM fields. These calculations are difficult and involved because the human thermoregulatory system is very complex. In this paper, the important numerical models are reviewed and possibilities for future development are discussed.
222Rn transport in a fractured crystalline rock aquifer: Results from numerical simulations
Folger, P.F.; Poeter, E.; Wanty, R.B.; Day, W.; Frishman, D.
1997-01-01
Dissolved 222Rn concentrations in ground water from a small wellfield underlain by fractured Middle Proterozoic Pikes Peak Granite southwest of Denver, Colorado, range from 124 to 840 kBq m-3 (3360-22700 pCi L-1). Numerical simulations of flow and transport between two wells show that differences in the equivalent hydraulic aperture of transmissive fractures, assuming a simplified two-fracture system and the parallel-plate model, can account for the different 222Rn concentrations in each well under steady-state conditions. Transient flow and transport simulations show that 222Rn concentrations along the fracture profile are influenced by 222Rn concentrations in the adjoining fracture and depend on boundary conditions, proximity of the pumping well to the fracture intersection, transmissivity of the conductive fractures, and pumping rate. Non-homogeneous distribution (point sources) of the 222Rn parent radionuclides, uranium and 226Ra, can strongly perturb the dissolved 222Rn concentrations in a fracture system. Without detailed information on the geometry and hydraulic properties of the connected fracture system, it may be impossible to distinguish the influence of factors controlling the 222Rn distribution or to determine the location of 222Rn point sources in the field in areas where ground water exhibits moderate 222Rn concentrations. Flow and transport simulations of a hypothetical multifracture system consisting of ten connected fractures, each 10 m in length with fracture apertures ranging from 0.1 to 1.0 mm, show that 222Rn concentrations at the pumping well can vary significantly over time. Assuming parallel-plate flow, transmissivities of the hypothetical system vary over four orders of magnitude because transmissivity varies with the cube of fracture aperture. The extreme hydraulic heterogeneity of the simple hypothetical system leads to widely ranging 222Rn values, even assuming homogeneous distribution of uranium and 226Ra along fracture walls. Consequently, it is
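The parallel-plate ("cubic law") scaling invoked above can be sketched directly. The water properties are standard assumptions (roughly 20 °C), not values from the paper:

```python
# Cubic law for a smooth parallel-plate fracture:
#   T = (rho * g / (12 * mu)) * b^3   (per unit fracture width, m^2/s)
RHO = 1000.0     # water density, kg/m^3 (assumed)
G = 9.81         # gravitational acceleration, m/s^2
MU = 1.0e-3      # dynamic viscosity of water, Pa*s (assumed, ~20 C)

def transmissivity(aperture_m):
    """Fracture transmissivity from the parallel-plate cubic law."""
    return RHO * G / (12 * MU) * aperture_m ** 3

T_small = transmissivity(0.1e-3)  # 0.1 mm aperture
T_large = transmissivity(1.0e-3)  # 1.0 mm aperture
print(round(T_large / T_small))   # -> 1000: cubic growth over a 10x aperture range
```

The cubic dependence is what turns a modest aperture range into the orders-of-magnitude transmissivity contrast described in the abstract.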
NASA Technical Reports Server (NTRS)
Jameson, Antony
1994-01-01
The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrary high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
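The LED/TVD idea, that a properly limited scheme creates no new maxima or minima, can be illustrated with a minimal minmod-limited advection scheme. This is a generic MUSCL-type sketch for linear advection, not Jameson's SLIP or CUSP schemes themselves:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, else zero."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u, c, steps):
    """Advect u with speed c in (0, 1] (CFL number c), periodic boundaries."""
    for _ in range(steps):
        du = np.roll(u, -1) - u             # forward differences u[i+1] - u[i]
        slope = minmod(du, np.roll(du, 1))  # limited slope in cell i
        # Lax-Wendroff-type interface flux with the limited correction
        f = u + 0.5 * (1.0 - c) * slope
        u = u - c * (f - np.roll(f, 1))
    return u

# advect a square wave: a TVD scheme must keep it within [0, 1]
u0 = np.where((np.arange(100) > 20) & (np.arange(100) < 40), 1.0, 0.0)
u = advect(u0.copy(), c=0.5, steps=80)
print(u.min() >= -1e-9, u.max() <= 1 + 1e-9)  # no new extrema created
```

Dropping the limiter (using `slope = du` unconditionally) recovers Lax-Wendroff, which produces the familiar over- and undershoots at the discontinuities; the limiter is what enforces the extremum-diminishing property.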
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method. PMID:18238262
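A toy 1D analogue of restoration by projection onto convex sets (POCS): alternately project onto the set of signals agreeing with the measured samples and the set of band-limited signals. The signal, band limit, and gap location are arbitrary choices for illustration, not the paper's test image:

```python
import numpy as np

N = 256
rng = np.random.default_rng(1)

def project_band(x, keep):
    """Projection onto the band-limit set: zero all but the lowest frequencies."""
    X = np.fft.fft(x)
    X[keep:-keep] = 0.0
    return np.fft.ifft(X).real

# band-limited ground truth, observed everywhere except a 16-sample gap
truth = project_band(rng.standard_normal(N), keep=8)
mask = np.ones(N, dtype=bool)
mask[120:136] = False

x = np.zeros(N)
for _ in range(500):
    x[mask] = truth[mask]        # projection 1: agree with the data
    x = project_band(x, keep=8)  # projection 2: enforce the band limit

err = np.linalg.norm(x - truth) / np.linalg.norm(truth)
print(err < 0.1)  # the gap is filled in by the alternating projections
```

Both constraint sets are convex, so the alternating projections converge; the practical appeal noted in the abstract is that any prior expressible as a convex set (support, positivity, band limit, amplitude bounds) can be added as one more projection in the loop.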
Multi-Country Experience in Delivering a Joint Course on Software Engineering--Numerical Results
ERIC Educational Resources Information Center
Budimac, Zoran; Putnik, Zoran; Ivanovic, Mirjana; Bothe, Klaus; Zdravkova, Katerina; Jakimovski, Boro
2014-01-01
A joint course, created as a result of a project under the auspices of the "Stability Pact of South-Eastern Europe" and DAAD, has been conducted in several Balkan countries: in Novi Sad, Serbia, for the last six years in several different forms, in Skopje, FYR of Macedonia, for two years, for several types of students, and in Tirana,…
NASA Astrophysics Data System (ADS)
Khokhlov, A.; Domínguez, I.; Bacon, C.; Clifford, B.; Baron, E.; Hoeflich, P.; Krisciunas, K.; Suntzeff, N.; Wang, L.
2012-07-01
We describe a new astrophysical version of a cell-based adaptive mesh refinement code ALLA for reactive flow fluid dynamic simulations, including a new implementation of α-network nuclear kinetics, and present preliminary results of first three-dimensional simulations of incomplete carbon-oxygen detonation in Type Ia Supernovae.
NASA Technical Reports Server (NTRS)
Rigby, D. L.; Vanfossen, G. J.
1992-01-01
A study of the effect of spanwise variation in momentum on leading edge heat transfer is discussed. Numerical and experimental results are presented for both a circular leading edge and a 3:1 elliptical leading edge. Reynolds numbers in the range of 10,000 to 240,000 based on leading edge diameter are investigated. The surface of the body is held at a constant uniform temperature. Numerical and experimental results with and without spanwise variations are presented. Direct comparison of the two-dimensional results, that is, with no spanwise variations, to the analytical results of Frossling is very good. The numerical calculation, which uses the PARC3D code, solves the three-dimensional Navier-Stokes equations, assuming steady laminar flow on the leading edge region. Experimentally, increases in the spanwise-averaged heat transfer coefficient as high as 50 percent above the two-dimensional value were observed. Numerically, the heat transfer coefficient was seen to increase by as much as 25 percent. In general, under the same flow conditions, the circular leading edge produced a higher heat transfer rate than the elliptical leading edge. As a percentage of the respective two-dimensional values, the circular and elliptical leading edges showed similar sensitivity to spanwise variations in momentum. By equating the root mean square of the amplitude of the spanwise variation in momentum to the turbulence intensity, a qualitative comparison between the present work and turbulent results was possible. It is shown that increases in leading edge heat transfer due to spanwise variations in freestream momentum are comparable to those due to freestream turbulence.
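The RMS-based equivalence in the last sentences can be sketched in a few lines. The velocity amplitude and mean are illustrative numbers, not values from the study:

```python
import numpy as np

# Equivalent "turbulence intensity" from a steady spanwise velocity
# variation: Tu_equiv = rms(u') / U_mean.
U_mean = 30.0                                  # assumed mean freestream velocity, m/s
z = np.linspace(0.0, 1.0, 400, endpoint=False) # spanwise coordinate (periodic)
u_span = 1.5 * np.sin(2 * np.pi * 4 * z)       # assumed spanwise variation, m/s

Tu_equiv = np.sqrt(np.mean(u_span ** 2)) / U_mean
print(round(100 * Tu_equiv, 2))  # -> 3.54 (percent)
```

Mapping the steady spanwise variation to an intensity in this way is what lets its heat-transfer augmentation be placed on the same axis as published freestream-turbulence results.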
Preliminary numerical modeling results - cone penetrometer (CPT) tip used as an electrode
Ramirez, A L
2006-12-19
Figure 1 shows the resistivity models considered in this study; log10 of the resistivity is shown. The graph on the upper left-hand side shows a hypothetical resistivity well log measured along a well in the upper layered model; 10% Gaussian noise has been added to the well log data. The lower model is identical to the upper one except for one square area located within the second deepest layer. Figure 2 shows the electrode configurations considered. The ''reference'' case (upper frame) considers point electrodes located along the surface and along a vertical borehole. The ''CPT electrode'' case (middle frame) assumes that the CPT tip serves as an electrode that is electrically connected to the push rod; the surface electrodes are used in conjunction with the moving CPT electrode. The ''isolated CPT electrode'' case assumes that the electrode at the CPT tip is electrically isolated from the push rod. Note that the separate CPT push rods in the middle and lower frames are shown separated to clarify the figure; in reality, there is only one push rod, which changes length as the probe advances. Figure 3 shows the three pole-pole measurement schemes considered; in all cases, the ''get lost'' electrodes were the leftmost and rightmost surface electrodes. The top frame shows the reference scheme, where all surface and borehole electrodes can be used. The middle frame shows two possible configurations available when a CPT-mounted electrode is used. Note that only one of the four poles can be located along the borehole at any given time; electrode combinations such as the one depicted in blue (upper frame) are not possible in this case. The bottom frame shows a sample configuration where only the surface electrodes are used. Figure 4 shows the results obtained for the various measurement schemes. The white lines show the outline of the true model (shown in Figure 1, upper frame). The initial model for these inversions is based on the electrical resistivity log
Spallative nucleosynthesis in supernova remnants. II. Time-dependent numerical results
NASA Astrophysics Data System (ADS)
Parizot, Etienne; Drury, Luke
1999-06-01
We calculate the spallative production of light elements associated with the explosion of an isolated supernova in the interstellar medium, using a time-dependent model taking into account the dilution of the ejected enriched material and the adiabatic energy losses. We first derive the injection function of energetic particles (EPs) accelerated at both the forward and the reverse shock, as a function of time. Then we calculate the Be yields obtained in both cases and compare them to the value implied by the observational data for metal-poor stars in the halo of our Galaxy, using both O and Fe data. We find that none of the processes investigated here can account for the amount of Be found in these stars, which confirms the analytical results of Parizot & Drury (1999). We finally analyze the consequences of these results for Galactic chemical evolution, and suggest that a model involving superbubbles might alleviate the energetics problem in a quite natural way.
Collisional evolution in the Eos and Koronis asteroid families - Observational and numerical results
NASA Technical Reports Server (NTRS)
Binzel, Richard P.
1988-01-01
The origin and evolution of the Eos and Koronis families are addressed by an analysis of Binzel's (1987) observational results. The Maxwellian distribution of the Eos family's rotation rates implies a collisionally-evolved population; these rates are also faster than those of the Koronis family and nonfamily asteroids. While the age of the Eos family may be comparable to the solar system's, that of the Koronis family could be considerably younger. Greater shape irregularity may account for the Koronis family's higher mean lightcurve amplitude.
Wang, Zhan-Shan; Pan, Li-Bo
2014-03-01
The emission inventory of air pollutants from thermal power plants in the year 2010 was set up. Based on the inventory, the air quality under prediction scenarios implementing both the 2003-version emission standard and the new emission standard was simulated using Models-3/CMAQ. The concentrations of NO2, SO2, and PM2.5, and the deposition of nitrogen and sulfur in the years 2015 and 2020, were predicted to investigate the regional air quality improvement brought by the new emission standard. The results showed that the new emission standard could effectively improve the air quality in China. Compared with the implementation results of the 2003-version emission standard, by 2015 and 2020, the area with NO2 concentration higher than the emission standard would be reduced by 53.9% and 55.2%, the area with SO2 concentration higher than the emission standard would be reduced by 40.0%, the area with nitrogen deposition higher than 1.0 t x km(-2) would be reduced by 75.4% and 77.9%, and the area with sulfur deposition higher than 1.6 t x km(-2) would be reduced by 37.1% and 34.3%, respectively. PMID:24881370
Analytical and Numerical Results for an Adhesively Bonded Joint Subjected to Pure Bending
NASA Technical Reports Server (NTRS)
Smeltzer, Stanley S., III; Lundgren, Eric
2006-01-01
A one-dimensional, semi-analytical methodology that was previously developed for evaluating adhesively bonded joints composed of anisotropic adherends and adhesives that exhibit inelastic material behavior is further verified in the present paper. A summary of the first-order differential equations and applied joint loading used to determine the adhesive response from the methodology is also presented. The method was previously verified against a variety of single-lap joint configurations from the literature that subjected the joints to cases of axial tension and pure bending. Using the same joint configuration and applied bending load presented in a study by Yang, the finite element analysis software ABAQUS was used to further verify the semi-analytical method. Linear static ABAQUS results are presented for two models, one with a coarse and one with a fine element mesh, that were used to verify convergence of the finite element analyses. Close agreement between the finite element results and the semi-analytical methodology was found for both the shear and normal stress responses of the adhesive bondline. Thus, the semi-analytical methodology was successfully verified using the ABAQUS finite element software and a single-lap joint configuration subjected to pure bending.
NASA Astrophysics Data System (ADS)
Helsdon, John H.; Farley, Richard D.
1987-05-01
A recently developed Storm Electrification Model (SEM) has been used to simulate the July 19, 1981, Cooperative Convective Precipitation Experiment (CCOPE) case study cloud. This part of the investigation examines the comparison between the model results and the observations of the actual cloud with respect to its nonelectrical aspects. A timing equivalence is established between the simulation and observations based on an explosive growth phase which was both observed and modeled. This timing equivalence is used as a basis upon which the comparisons are made. The model appears to do a good job of reproducing (in both space and time) many of the observed characteristics of the cloud. These include: (1) the general cloud appearance; (2) cloud size; (3) cloud top rise rate; (4) rapid growth phase; (5) updraft structure; (6) first graupel appearance; (7) first radar echo; (8) qualitative radar range-height indicator evolution; (9) cloud decay; and (10) the location of hydrometeors with respect to the updraft/downdraft structure. Some features that are not accurately modeled are the cloud base height, the maximum liquid water content, and the time from first formation of precipitation until it reaches the ground. While the simulation is not perfect, the faithfulness of the model results to the observations is sufficient to give us confidence that the microphysical processes active in this storm are adequately represented in the model physics. Areas where model improvement is indicated are also discussed.
Numerical predictions and experimental results of a dry bay fire environment.
Suo-Anttila, Jill Marie; Gill, Walter; Black, Amalia Rebecca
2003-11-01
The primary objective of the Safety and Survivability of Aircraft Initiative is to improve the safety and survivability of systems by using validated computational models to predict the hazard posed by a fire. To meet this need, computational model predictions and experimental data have been obtained to provide insight into the thermal environment inside an aircraft dry bay. The calculations were performed using the Vulcan fire code, and the experiments were completed using a specially designed full-scale fixture. The focus of this report is to present comparisons of the Vulcan results with experimental data for a selected test scenario and to assess the capability of the Vulcan fire field model to accurately predict dry bay fire scenarios. Also included is an assessment of the sensitivity of the fire model predictions to boundary condition distribution and grid resolution. To facilitate the comparison with experimental results, a brief description of the dry bay fire test fixture and a detailed specification of the geometry and boundary conditions are included. Overall, the Vulcan fire field model has shown the capability to predict the thermal hazard posed by a sustained pool fire within a dry bay compartment of an aircraft, although more extensive experimental data and more rigorous comparisons are required for model validation.
Urban Surface Network In Marseille: Network Optimization Using Numerical Simulations and Results
NASA Astrophysics Data System (ADS)
Pigeon, G.; Lemonsu, A.; Durand, P.; Masson, V.
During the ESCOMPTE program (field experiment to constrain models of atmospheric pollution and emissions transport) in Marseille between June and July 2001, an extensive instrumental setup was deployed to describe the urban boundary layer over the built-up area of Marseille, notably a network of 20 temperature and humidity sensors which measured the spatial and temporal variability of these parameters. Before the experiment, the arrangement of the network was optimized to extract the maximum information about these two variabilities. We worked on results of high-resolution simulations containing the TEB scheme, which represents the energy budgets associated with the overall street geometry of each mesh cell. First, a qualitative analysis enabled the identification of the characteristic phenomena over the town of Marseille: there are close links between urban effects and local effects, namely marine advection and orography. Then a quantitative analysis of the field was developed. EOFs (empirical orthogonal functions) were used to characterize the spatial and temporal structures of the field evolution. Instrumented axes were determined from all these results. Finally, we chose the locations of the instruments very carefully at the scale of the street, to prevent micro-climatic effects from interfering with the meso-scale effect of the town. The recording of the measurements, every 10 minutes, started on 12 June and finished on 16 July. We had no problems with the instruments, so the whole period was recorded every 10 minutes. The data will be analyzed in several ways. First, a temporal study will be carried out: we want to determine whether the times at which phenomena occur are linked to the location in the town, with particular attention to the warming during the morning and the cooling during the evening. Then we will look for correlations of the temperature and mixing ratio with the wind
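The EOF step mentioned above can be sketched as an SVD of the anomaly matrix of a station network. The field below is synthetic (a single coherent oscillation plus noise), not ESCOMPTE data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic (time x station) temperature record: 200 ten-minute samples
# at 20 sensor "locations", dominated by one coherent spatial pattern.
t = np.linspace(0, 4, 200)[:, None]
stations = rng.random((1, 20))
field = 10 * np.sin(2 * np.pi * t) * stations + 0.1 * rng.standard_normal((200, 20))

anom = field - field.mean(axis=0)                # remove each station's mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s ** 2 / np.sum(s ** 2)               # variance explained per EOF
# rows of Vt are the spatial EOFs; columns of U*s are their time series
print(round(float(var_frac[0]), 2))              # leading EOF dominates here
```

Ranking candidate sensor sites by how well they sample the leading EOFs is one standard way to optimize a network layout before deployment, which is the kind of pre-experiment optimization the abstract describes.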
Numerical results for near surface time domain electromagnetic exploration: a full waveform approach
NASA Astrophysics Data System (ADS)
Sun, H.; Li, K.; Li, X., Sr.; Liu, Y., Sr.; Wen, J., Sr.
2015-12-01
Time domain or transient electromagnetic (TEM) surveys, including airborne, semi-airborne and ground types, play important roles in applications such as geological surveys, ground water/aquifer assessment [Meju et al., 2000; Cox et al., 2010], metal ore exploration [Yang and Oldenburg, 2012], prediction of water-bearing structures in tunnels [Xue et al., 2007; Sun et al., 2012], UXO exploration [Pasion et al., 2007; Gasperikova et al., 2009], etc. The common practice is to introduce a current into a transmitting (Tx) loop and acquire the induced electromagnetic field after the current is cut off [Zhdanov and Keller, 1994]. The current waveforms differ between instruments. The rectangle is the most widely used excitation current source, especially in ground TEM. Triangle and half-sine waves are commonly used in airborne and semi-airborne TEM investigation. In most instruments, only the off-time responses are acquired and used in later analysis and data inversion. Very few airborne instruments acquire the on-time and off-time responses together, and even those that acquire on-time data usually do not use it in the interpretation. This abstract presents a novel full waveform time domain electromagnetic method and our recent modeling results. The benefit comes from our new algorithm for modeling full waveform time domain electromagnetic problems. We introduce the current density into Maxwell's equations as the transmitting source. This approach allows arbitrary waveforms, such as triangle, half-sine or trapezoidal waves, or waveforms recorded from equipment, to be used in modeling. Here, we simulate the establishment and induced diffusion of the electromagnetic field in the earth. The traditional time domain electromagnetic response with pure secondary fields can also be extracted from our modeling results. The real-time responses excited by a loop source can be calculated using the algorithm. We analyze the full time-gate responses of a homogeneous half space and two
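Because the modeling accepts arbitrary transmitter waveforms, the common excitation shapes named above are easy to state explicitly. The sketch below is not the authors' code; the function names and parameters are illustrative generators for triangle, half-sine and trapezoidal current pulses that could be sampled as a source term:

```python
import numpy as np

def triangle(t, period, amp=1.0):
    """Symmetric triangle pulse over 0 <= t <= period (zero outside)."""
    t = np.asarray(t, dtype=float)
    half = period / 2.0
    w = np.where(t <= half, t / half, (period - t) / half)
    return amp * np.clip(w, 0.0, None) * ((t >= 0) & (t <= period))

def half_sine(t, period, amp=1.0):
    """Half-sine pulse: amp * sin(pi * t / period) for 0 <= t <= period."""
    t = np.asarray(t, dtype=float)
    return amp * np.sin(np.pi * t / period) * ((t >= 0) & (t <= period))

def trapezoid(t, ramp, flat, amp=1.0):
    """Linear ramp up, flat top, linear ramp down (total length 2*ramp + flat)."""
    t = np.asarray(t, dtype=float)
    up = np.clip(t / ramp, 0.0, 1.0)
    down = np.clip((2.0 * ramp + flat - t) / ramp, 0.0, 1.0)
    return amp * np.minimum(up, down) * (t >= 0)
```

Sampling any of these on the simulation time grid gives the current-density source term that replaces the idealized step turn-off.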
NASA Astrophysics Data System (ADS)
Henne, Stephan; Kaufmann, Pirmin; Schraner, Martin; Brunner, Dominik
2013-04-01
allows particles to leave the limited COSMO domain. On the technical side, we added an OpenMP shared-memory parallelisation to the model, which also allows for asynchronous reading of input data. Here we present results from several model performance tests under different conditions and compare these with results from standard FLEXPART simulations using nested ECMWF input. This analysis will include an evaluation of deposition fields, a comparison of convection schemes, and a performance analysis of the parallel version. Furthermore, a series of forward-backward simulations were conducted in order to test the robustness of model results independent of the integration direction. Finally, selected examples from recent applications of the model to transport of radioactive and conservative tracers and for in-situ measurement characterisation will be presented.
Pham, VT.; Silva, L.; Digonnet, H.; Combeaud, C.; Billon, N.; Coupez, T.
2011-05-04
The objective of this work is to model the viscoelastic behaviour of a polymer from the solid state to the liquid state. With this objective, we perform experimental tensile tests and compare them with simulation results. The chosen polymer is a PMMA whose behaviour depends on its temperature. The computational simulation is based on the Navier-Stokes equations, for which we propose a mixed finite element method with a P1+/P1 interpolation using displacement (or velocity) and pressure as principal variables. The implemented technique uses a mesh composed of triangles (2D) or tetrahedra (3D). The goal of this approach is to model the viscoelastic behaviour of polymers through a fluid-structure coupling technique with a multiphase approach.
Active behavior of abdominal wall muscles: Experimental results and numerical model formulation.
Grasa, J; Sierra, M; Lauzeral, N; Muñoz, M J; Miana-Mena, F J; Calvo, B
2016-08-01
In the present study a computational finite element technique is proposed to simulate the mechanical response of muscles in the abdominal wall. This technique considers the active behavior of the tissue, taking into account both collagen and muscle fiber directions. In an attempt to obtain a computational response as close as possible to real muscles, the parameters needed to adjust the mathematical formulation were determined from in vitro experimental tests. Experiments were conducted on male New Zealand White rabbits (2047 ± 34 g) and the active properties of three different muscles were characterized: Rectus Abdominis, External Oblique, and multi-layered samples formed by three muscles (External Oblique, Internal Oblique, and Transversus Abdominis). The parameters obtained for each muscle were incorporated into a finite strain formulation to simulate the active behavior of muscles, incorporating the anisotropy of the tissue. The results show the potential of the model to predict the anisotropic behavior of the tissue associated with the fibers and how this influences the strain, stress and generated force during an isometric contraction. PMID:27111629
Zlochiver, Sharon; Radai, M Michal; Abboud, Shimon; Rosenfeld, Moshe; Dong, Xiu-Zhen; Liu, Rui-Gang; You, Fu-Sheng; Xiang, Hai-Yan; Shi, Xue-Tao
2004-02-01
In electrical impedance tomography (EIT), measurements of developed surface potentials due to applied currents are used for the reconstruction of the conductivity distribution. Practical implementation of EIT systems is known to be problematic due to the high sensitivity to noise of such systems, leading to a poor imaging quality. In the present study, the performance of an induced current EIT (ICEIT) system, where eddy current is applied using magnetic induction, was studied by comparing the voltage measurements to simulated data, and examining the imaging quality with respect to simulated reconstructions for several phantom configurations. A 3-coil, 32-electrode ICEIT system was built, and an iterative modified Newton-Raphson algorithm was developed for the solution of the inverse problem. The RMS norm between the simulated and the experimental voltages was found to be 0.08 +/- 0.05 mV (<3%). Two regularization methods were implemented and compared: the Marquardt regularization and the Laplacian regularization (a bounded second-derivative regularization). While the Laplacian regularization method was found to be preferred for simulated data, it resulted in distinctive spatial artifacts for measured data. The experimental reconstructed images were found to be indicative of the angular positioning of the conductivity perturbations, though the radial sensitivity was low, especially when using the Marquardt regularization method. PMID:15005319
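The iterative modified Newton-Raphson update with Marquardt (identity) versus Laplacian (bounded second-derivative) regularization can be sketched on a toy linear forward model. This is a generic regularized Gauss-Newton step, not the authors' implementation; the Jacobian below is a random stand-in for the true EIT sensitivity matrix:

```python
import numpy as np

def regularized_step(J, residual, lam, R):
    """One Gauss-Newton update: solve (J^T J + lam * R) delta = J^T residual.

    R = identity gives the Marquardt scheme; R = L^T L (second differences)
    gives the Laplacian (smoothness) scheme.
    """
    A = J.T @ J + lam * R
    return np.linalg.solve(A, J.T @ residual)

def laplacian_matrix(n):
    """L^T L for the 1D second-difference operator L (smoothness penalty)."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L.T @ L

rng = np.random.default_rng(0)
n = 16
J = rng.normal(size=(40, n))            # stand-in sensitivity (Jacobian) matrix
sigma_true = np.linspace(1.0, 2.0, n)   # smooth "conductivity" profile
v = J @ sigma_true                      # noiseless synthetic measurements

sigma = np.ones(n)                      # starting model
R = laplacian_matrix(n)
for _ in range(20):
    sigma += regularized_step(J, v - J @ sigma, 1e-3, R)
```

On this noiseless linear toy problem the iteration drives the data residual to zero; on real, noisy EIT data the regularization weight trades data fit against artifacts, which is the trade-off the abstract describes.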
Restricted diffusion in a model acinar labyrinth by NMR: Theoretical and numerical results
NASA Astrophysics Data System (ADS)
Grebenkov, D. S.; Guillot, G.; Sapoval, B.
2007-01-01
A branched geometrical structure of the mammal lungs is known to be crucial for rapid access of oxygen to blood. But an important pulmonary disease like emphysema results in partial destruction of the alveolar tissue and enlargement of the distal airspaces, which may reduce the total oxygen transfer. This effect has been intensively studied during the last decade by MRI of hyperpolarized gases like helium-3. The relation between geometry and signal attenuation remained obscure due to a lack of realistic geometrical model of the acinar morphology. In this paper, we use Monte Carlo simulations of restricted diffusion in a realistic model acinus to compute the signal attenuation in a diffusion-weighted NMR experiment. We demonstrate that this technique should be sensitive to destruction of the branched structure: partial removal of the interalveolar tissue creates loops in the tree-like acinar architecture that enhance diffusive motion and the consequent signal attenuation. The role of the local geometry and related practical applications are discussed.
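The core of such a Monte Carlo approach — random walkers reflected by pore walls, with the NMR signal computed as a phase average — can be illustrated in one dimension, where the narrow-pulse, long-time limit for a slab is known analytically. This is a minimal sketch, not the acinar-geometry model of the paper; all parameter values are illustrative:

```python
import numpy as np

# Narrow-pulse PGSE signal for diffusion restricted to a 1D slab [0, a]:
# E(q) = < cos(q * (x(T) - x(0))) >, estimated with reflected random walkers.
rng = np.random.default_rng(1)
a, D, T = 1.0, 1.0, 5.0                  # slab width, diffusivity, diffusion time
n_steps, n_part = 500, 20000
dt = T / n_steps

x0 = rng.uniform(0.0, a, n_part)         # walkers start uniformly in the pore
x = x0.copy()
for _ in range(n_steps):
    x += rng.normal(0.0, np.sqrt(2.0 * D * dt), n_part)
    x = np.abs(x)                        # reflect at the wall x = 0
    x = a - np.abs(a - x)                # reflect at the wall x = a

q = np.pi / a                            # diffusion-encoding wavenumber
E = np.cos(q * (x - x0)).mean()          # signal attenuation estimate

# Long-time analytic limit for a slab: E -> (sin(qa/2) / (qa/2))**2
E_theory = (np.sin(q * a / 2.0) / (q * a / 2.0)) ** 2
```

With T much larger than a²/D the start and end positions decorrelate, and the simulated attenuation approaches the analytic limit; in the paper the same walker machinery runs inside the branched acinar geometry, where no closed form exists.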
Buoyancy-driven melt segregation in the earth's moon. I - Numerical results
NASA Technical Reports Server (NTRS)
Delano, J. W.
1990-01-01
The densities of lunar mare magmas have been estimated at liquidus temperatures for pressures from 0 to 47 kbar (4.7 GPa; center of the moon) using a third-order Birch-Murnaghan equation and compositionally dependent parameters from Lange and Carmichael (1987). Results on primary magmatic compositions represented by pristine volcanic glasses suggest that the density contrast between very-high-Ti melts and their liquidus olivines may approach zero at pressures of about 25 kbar (2.5 GPa). Since this is the pressure regime of the mantle source regions for these magmas, a compositional limit of eruptability for mare liquids may exist that is similar to the highest Ti melt yet observed among the lunar samples. Although the moon may have generated magmas having greater than 16.4 wt pct TiO2, those melts would probably not have reached the lunar surface due to their high densities, and may have even sunk deeper into the moon's interior as negatively buoyant diapirs. This process may have been important for assimilative interactions in the lunar mantle. The phenomenon of melt/solid density crossover may therefore occur not only in large terrestrial-type objects but also in small objects where, despite low pressures, the range of melt compositions is extreme.
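For reference, the third-order Birch-Murnaghan equation of state used above has a closed form for P(V), and the melt density at depth follows as rho = rho0 * V0/V once the compressed volume is found numerically. A minimal sketch (the parameter values below are placeholders, not the lunar melt parameters of the paper):

```python
def birch_murnaghan_3(V, V0, K0, K0p):
    """Third-order Birch-Murnaghan pressure P(V), in the same units as K0."""
    f = (V0 / V) ** (2.0 / 3.0)
    return 1.5 * K0 * (f ** 3.5 - f ** 2.5) * (1.0 + 0.75 * (K0p - 4.0) * (f - 1.0))

def volume_at_pressure(P, V0, K0, K0p):
    """Invert P(V) by bisection; P(V) decreases monotonically with V."""
    lo, hi = 0.3 * V0, V0               # compressed volume lies below V0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if birch_murnaghan_3(mid, V0, K0, K0p) > P:
            lo = mid                    # pressure too high: true volume is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers only: V0 normalized, K0 = 200 kbar, K0' = 4
V = volume_at_pressure(50.0, 1.0, 200.0, 4.0)   # volume at 50 kbar
```

With a melt's reference density rho0, the density at pressure P is rho0 / V when V0 is normalized to 1, which is how the density-versus-pressure curves behind the olivine crossover argument are traced.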
NASA Astrophysics Data System (ADS)
Salcedo-Castro, Julio; Bourgault, Daniel; deYoung, Brad
2011-09-01
The flow caused by the discharge of freshwater underneath a glacier into an idealized fjord is simulated with a 2D non-hydrostatic model. As the freshwater leaves the subglacial opening horizontally into a fjord of uniformly denser water, it spreads along the bottom as a jet until buoyancy forces it to rise. During the initial rising phase, the plume meanders into complex flow patterns while mixing with the surrounding fluid until it reaches the surface and then spreads horizontally as a surface seaward flowing plume of brackish water. The process induces an estuarine-like circulation. Once steady state is reached, the flow consists of an almost undiluted buoyant plume rising straight along the face of the glacier that turns into a horizontal surface layer thickening as it flows seaward. Over the range of parameters examined, the estuarine circulation is dynamically unstable, with gradient Richardson numbers at the sheared interface below 1/4. The surface velocity and dilution factors are strongly and non-linearly related to the Froude number. It is the buoyancy flux that primarily controls the resulting circulation, with the momentum flux playing a secondary role.
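The stability criterion quoted above — a gradient Richardson number below 1/4 at the sheared interface — is straightforward to evaluate on a vertical profile: Ri = N² / (du/dz)², with N² = -(g/rho0) drho/dz. A sketch on an idealized linearly stratified, uniformly sheared profile (all values illustrative):

```python
import numpy as np

g, rho0 = 9.81, 1025.0                       # gravity (m/s^2), reference density (kg/m^3)

def richardson(z, rho, u):
    """Gradient Richardson number Ri = N^2 / (du/dz)^2 on a vertical profile."""
    drho_dz = np.gradient(rho, z)
    du_dz = np.gradient(u, z)
    N2 = -(g / rho0) * drho_dz               # buoyancy frequency squared
    return N2 / du_dz ** 2

# Linearly stratified, uniformly sheared example profile
z = np.linspace(0.0, -10.0, 51)              # depth coordinate, m (negative downward)
rho = 1020.0 - 0.5 * z                       # denser with depth (stable stratification)
u = 0.1 - 0.02 * z                           # uniformly sheared along-fjord velocity
Ri = richardson(z, rho, u)
```

This example profile is stable (Ri well above 1/4 everywhere); in the simulated plume, the shear at the brackish interface is strong enough to push Ri below the 1/4 threshold, which is what sustains the mixing.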
The Formation of Asteroid Satellites in Catastrophic Impacts: Results from Numerical Simulations
NASA Technical Reports Server (NTRS)
Durda, D. D.; Bottke, W. F., Jr.; Enke, B. L.; Asphaug, E.; Richardson, D. C.; Leinhardt, Z. M.
2003-01-01
We have performed new simulations of the formation of asteroid satellites by collisions, using a combination of hydrodynamical and gravitational dynamical codes. This initial work shows that both small satellites and ejected, co-orbiting pairs are produced most favorably by moderate-energy collisions at more direct, rather than oblique, impact angles. Simulations so far seem to be able to produce systems qualitatively similar to known binaries. Asteroid satellites provide vital clues that can help us understand the physics of hypervelocity impacts, the dominant geologic process affecting large main belt asteroids. Moreover, models of satellite formation may provide constraints on the internal structures of asteroids beyond those possible from observations of satellite orbital properties alone. It is probable that most observed main-belt asteroid satellites are by-products of cratering and/or catastrophic disruption events. Several possible formation mechanisms related to collisions have been identified: (i) mutual capture following catastrophic disruption, (ii) rotational fission due to glancing impact and spin-up, and (iii) re-accretion in orbit of ejecta from large, non-catastrophic impacts. Here we present results from a systematic investigation directed toward mapping out the parameter space of the first and third of these three collisional mechanisms.
Kam, Seung I.; Gauglitz, Phillip A. ); Rossen, William R.
2000-12-01
The goal of this study is to fit model parameters to changes in waste level in response to barometric pressure changes in underground storage tanks at the Hanford Site. This waste compressibility is a measure of the quantity of gas, typically hydrogen and other flammable gases that can pose a safety hazard, retained in the waste. A one-dimensional biconical-pore-network model for compressibility of a bubbly slurry is presented in a companion paper. Fitting these results to actual waste level changes in the tanks implies that bubbles are long in the slurry layer and the ratio of pore-body radius to pore-throat radius is close to one; unfortunately, capillary effects cannot be quantified unambiguously from the data without additional information on pore geometry. Therefore, determining the quantity of gas in the tanks requires more than just slurry volume data. Similar ambiguity also exists with two other simple models: a capillary-tube model with contact angle hysteresis and a spherical-pore model.
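In the simplest limit — an isothermal ideal gas with capillary effects neglected — the retained gas volume follows directly from the barometric response: PV = const gives V_gas = -P * dV/dP, so regressing waste-volume change against pressure change yields the gas volume. The sketch below uses synthetic data and represents only this ideal-gas limit, not the biconical-pore-network model of the companion paper:

```python
import numpy as np

# Isothermal ideal gas: P * V = const  =>  dV/dP = -V/P  =>  V_gas = -P * dV/dP.
# Estimate dV/dP by regressing waste volume change on barometric pressure change.
rng = np.random.default_rng(2)
P0 = 101.325e3                        # mean barometric pressure, Pa
V_gas_true = 3.0                      # hypothetical retained gas volume, m^3

dP = rng.normal(0.0, 500.0, 200)      # barometric fluctuations, Pa
dV = -V_gas_true * dP / P0 + rng.normal(0.0, 1e-4, 200)   # level response + noise

slope = np.polyfit(dP, dV, 1)[0]      # least-squares estimate of dV/dP
V_gas_est = -P0 * slope               # inferred gas volume, m^3
```

The ambiguity the abstract describes arises precisely because real waste departs from this limit: long bubbles and pore-throat capillarity change the effective compressibility, so the same dV/dP is consistent with different gas volumes.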
NASA Astrophysics Data System (ADS)
Pearson, A.; Pizzuto, J. E.
2015-12-01
Previous work at run-of-river (ROR) dams in northern Delaware has shown that bedload supplied to ROR impoundments can be transported over the dam when impoundments remain unfilled. Transport is facilitated by high levels of sand in the impoundment that lower the critical shear stresses for particle entrainment, and an inversely sloping sediment ramp connecting the impoundment bed (where the water depth is typically equal to the dam height) with the top of the dam (Pearson and Pizzuto, in press). We demonstrate with one-dimensional bed material transport modeling that bed material can move through impoundments and that equilibrium transport (i.e., a balance between supply to and export from the impoundment, with a constant bed elevation) is possible even when the bed elevation is below the top of the dam. Based on our field work and previous HEC-RAS modeling, we assess bed material transport capacity at the base of the sediment ramp (and ignore detailed processes carrying sediment up the ramp and over the dam). The hydraulics at the base of the ramp are computed using a weir equation, providing estimates of water depth, velocity, and friction, based on the discharge and sediment grain size distribution of the impoundment. Bedload transport rates are computed using the Wilcock-Crowe equation, and changes in the impoundment's bed elevation are determined by sediment continuity. Our results indicate that impoundments pass the gravel supplied from upstream with deep pools when gravel supply rate is low, gravel grain sizes are relatively small, sand supply is high, and discharge is high. Conversely, impoundments will tend to fill their pools when gravel supply rate is high, gravel grain sizes are relatively large, sand supply is low, and discharge is low. The rate of bedload supplied to an impoundment is the primary control on how fast equilibrium transport is reached, with discharge having almost no influence on the timing of equilibrium.
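The modeling chain described above — weir-equation hydraulics at the impoundment, a bedload transport law, and sediment continuity (Exner) for the bed elevation — can be caricatured in a few lines. Note that the transport law below is a generic excess-shear power law standing in for the Wilcock-Crowe equation, and every parameter value is illustrative:

```python
def weir_depth(Q, width, Cw=1.7):
    """Head over a broad-crested weir (SI units): Q = Cw * width * H**1.5."""
    return (Q / (Cw * width)) ** (2.0 / 3.0)

def run_impoundment(Q, width, qs_in, dam_height, dx=100.0, porosity=0.4,
                    Cf=0.005, k=1e-4, tau_c=0.5, years=50):
    """March sediment continuity (Exner) to equilibrium behind a ROR dam.

    qs_in is the upstream bedload supply per unit width (m^2/s). The bed
    rises until the export capacity matches the supply.
    """
    rho = 1000.0                     # water density, kg/m^3
    eta, qs_out = 0.0, 0.0           # bed elevation above impoundment floor, export
    dt = 86400.0                     # one-day time step, s
    for _ in range(int(years * 365)):
        H = weir_depth(Q, width) + (dam_height - eta)   # flow depth over the bed
        U = Q / (width * H)                             # mean velocity
        tau = rho * Cf * U ** 2                         # bed shear stress
        qs_out = k * max(tau - tau_c, 0.0) ** 1.5       # stand-in transport law
        eta += dt * (qs_in - qs_out) / (dx * (1.0 - porosity))
    return eta, qs_out

eta_eq, qs_eq = run_impoundment(Q=5.0, width=10.0, qs_in=1e-5, dam_height=2.0)
```

The feedback is the one the abstract exploits: as the bed aggrades, the flow shallows and accelerates, shear stress and export rise, and the bed settles at an equilibrium elevation below the dam crest where export balances supply.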
NASA Astrophysics Data System (ADS)
Radhakrishnan, Sreeram
Harbor observation and prediction system (NYHOPS) which provides 48-hour forecasts of salinity and temperature profiles. Initial results indicate that the NYHOPS forecast of sound speed profiles used in conjunction with the acoustic propagation model is able to make realistic forecasts of TL in the Hudson River Estuary.
NASA Technical Reports Server (NTRS)
Schonberg, William P.; Peck, Jeffrey A.
1992-01-01
Over the last three decades, multiwall structures have been analyzed extensively, primarily through experiment, as a means of increasing the protection afforded to spacecraft structure. However, as structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative, numerical modeling of high-speed impact phenomena is often used to predict the response of a variety of structural systems under impact loading conditions. This paper presents the results of a preliminary numerical/experimental investigation of the hypervelocity impact response of multiwall structures. The results of experimental high-speed impact tests are compared against the predictions of the HULL hydrodynamic computer code. It is shown that the hypervelocity impact response characteristics of a specific system cannot be accurately predicted from a limited number of HULL code impact simulations. However, if a wide range of impact loading conditions is considered, then the ballistic limit curve of the system based on the entire series of numerical simulations can be used as a relatively accurate indication of actual system response.
Numerical Modeling of Anti-icing Systems and Comparison to Test Results on a NACA 0012 Airfoil
NASA Technical Reports Server (NTRS)
Al-Khalil, Kamel M.; Potapczuk, Mark G.
1993-01-01
A series of experimental tests were conducted in the NASA Lewis IRT on an electro-thermally heated NACA 0012 airfoil. Quantitative comparisons between the experimental results and those predicted by a computer simulation code were made to assess the validity of a recently developed anti-icing model. An infrared camera was utilized to scan the instantaneous temperature contours of the skin surface. Despite some experimental difficulties, good agreement between the numerical predictions and the experimental results was generally obtained for the surface temperature and the possibility for runback water to freeze. Some recommendations were given for efficient operation of a thermal anti-icing system.
NASA Astrophysics Data System (ADS)
LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem
2015-10-01
Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
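The flavor of such benchmarking is easy to convey on the smallest nontrivial case: the half-filled two-site Hubbard model, where exact diagonalization can be checked against the closed-form ground-state energy E0 = (U - sqrt(U^2 + 16 t^2))/2. This toy check is ours, not part of the paper's benchmark suite; fermionic sign conventions are absorbed into the basis ordering:

```python
import numpy as np

def two_site_hubbard_ground_energy(t, U):
    """Exact ground energy of the half-filled two-site Hubbard model (Sz = 0 sector).

    Basis: |up-down, 0>, |0, up-down>, |up, down>, |down, up>.
    The two doublon states (energy U) couple to the two singly occupied
    states through the hopping -t.
    """
    H = np.array([[U,   0.0, -t,  -t],
                  [0.0, U,   -t,  -t],
                  [-t,  -t,  0.0, 0.0],
                  [-t,  -t,  0.0, 0.0]])
    return np.linalg.eigvalsh(H).min()

# Benchmark against the analytic result at t = 1, U = 4
t, U = 1.0, 4.0
E0 = two_site_hubbard_ground_energy(t, U)
```

The many-body methods surveyed in the paper face the same task — reproducing known or cross-validated energies — but on lattices large enough that the Hilbert space cannot be diagonalized directly, which is why extrapolation to the thermodynamic limit matters.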
Shazlee, Muhammad Kashif; Ali, Muhammad; SaadAhmed, Muhammad; Hussain, Ammad; Hameed, Kamran; Lutfi, Irfan Amjad; Khan, Muhammad Tahir
2016-01-01
Objective: To study the diagnostic accuracy of Ultrasound B scan using a 10 MHz linear probe in ocular trauma. Methods: A total of 61 patients with 63 ocular injuries were assessed during July 2013 to January 2014. All patients were referred to the department of Radiology from the Emergency Room, since adequate clinical assessment of the fundus was impossible because of the presence of opaque ocular media. Based on radiological diagnosis, the patients were provided treatment (surgical or medical). Clinical diagnosis was confirmed during surgical procedures or clinical follow-up. Results: A total of 63 ocular injuries were examined in 61 patients. The overall sensitivity was 91.5%, specificity was 98.87%, positive predictive value was 87.62%, and negative predictive value was 99%. Conclusion: Ultrasound B-scan is a sensitive, non-invasive and rapid way of assessing intraocular damage caused by blunt or penetrating eye injuries. PMID:27182245
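The four reported figures derive from a 2x2 confusion table in the standard way (sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), PPV = TP/(TP+FP), NPV = TN/(TN+FN)). A sketch with illustrative counts — the abstract reports only the rates, not the raw table:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among all diseased
        "specificity": tn / (tn + fp),   # true negatives among all healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration only (not reconstructed from the study)
m = diagnostic_metrics(tp=43, fp=1, tn=18, fn=1)
```

Sensitivity and specificity characterize the test itself, while PPV and NPV also depend on how common injury-positive cases are in the referred population, which is why all four are reported.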
ERIC Educational Resources Information Center
Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.
2009-01-01
The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…
NASA Astrophysics Data System (ADS)
Wu, Yang; Kelly, Damien P.
2014-12-01
The distribution of the complex field in the focal region of a lens is a classical optical diffraction problem. Today, it remains of significant theoretical importance for understanding the properties of imaging systems. In the paraxial regime, it is possible to find analytical solutions in the neighborhood of the focus, when a plane wave is incident on a focusing lens whose finite extent is limited by a circular aperture. For example, in Born and Wolf's treatment of this problem, two different, but mathematically equivalent analytical solutions, are presented that describe the 3D field distribution using infinite sums of Un and Vn type Lommel functions. An alternative solution expresses the distribution in terms of Zernike polynomials, and was presented by Nijboer in 1947. More recently, Cao derived an alternative analytical solution by expanding the Fresnel kernel using a Taylor series expansion. In practical calculations, however, only a finite number of terms from these infinite series expansions is actually used to calculate the distribution in the focal region. In this manuscript, we compare and contrast each of these different solutions to a numerically calculated result, paying particular attention to how quickly each solution converges for a range of different spatial locations behind the focusing lens. We also examine the time taken to calculate each of the analytical solutions. The numerical solution is calculated in a polar coordinate system and is semi-analytic. The integration over the angle is solved analytically, while the radial coordinate is sampled with a sampling interval of Δρ and then numerically integrated. This produces an infinite set of replicas in the diffraction plane, that are located in circular rings centered at the optical axis and each with radii given by 2πm/Δρ, where m is the replica order. These circular replicas are shown to be fundamentally different from the replicas that arise in a Cartesian coordinate system.
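The Lommel-function series mentioned above truncates rapidly in practice because its terms fall off factorially. A self-contained sketch (series definitions only; this is not the manuscript's benchmarking code) that builds J_m from its power series and sums U_n:

```python
import math

def bessel_j(m, v, terms=40):
    """Bessel function J_m(v) from its power series (accurate for moderate v)."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + m))
               * (v / 2.0) ** (2 * k + m) for k in range(terms))

def lommel_u(n, u, v, terms=20):
    """Lommel function U_n(u, v) = sum_s (-1)^s (u/v)^(n+2s) J_(n+2s)(v), v != 0."""
    return sum((-1) ** s * (u / v) ** (n + 2 * s) * bessel_j(n + 2 * s, v)
               for s in range(terms))

# Near-focus intensity in the Born-and-Wolf form: I(u, v) ~ (2/u)^2 (U1^2 + U2^2)
u, v = 3.0, 2.0
U1, U2 = lommel_u(1, u, v), lommel_u(2, u, v)
intensity = (2.0 / u) ** 2 * (U1 ** 2 + U2 ** 2)
```

Because the s-th term scales like (u/2)^(n+2s)/(n+2s)!, a handful of terms already saturates double precision at moderate (u, v), which is the convergence behavior the manuscript quantifies across the focal region.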
NASA Astrophysics Data System (ADS)
Baharun, A. Tarmizi; Maimun, Adi; Ahmed, Yasser M.; Mobassher, M.; Nakisa, M.
2015-05-01
In this paper, the three-dimensional data and behavior of incompressible, steady air flow around a small-scale Wing in Ground Effect craft (WIG) were investigated numerically, then compared to the experimental result and to published data. The computational simulation (CFD) adopted two turbulence models, k-ɛ and k-ω, in order to determine which model produces the minimum difference from the experimental result for the small-scale WIG tested in a wind tunnel. An unstructured mesh was used in the simulation, and data for the drag coefficient (Cd) and lift coefficient (Cl) were obtained with the angle of attack (AoA) of the WIG model as the parameter. Ansys ICEM was used for the meshing process, while Ansys Fluent was used as the solver. The aerodynamic forces, Cl, Cd and Cl/Cd, along with the fluid flow pattern of the small-scale WIG craft, are shown and discussed.
Meyer, H. O.
The PINTEX group studied proton-proton and proton-deuteron scattering and reactions between 100 and 500 MeV at the Indiana University Cyclotron Facility (IUCF). More than a dozen experiments made use of electron-cooled polarized proton or deuteron beams, orbiting in the 'Indiana Cooler' storage ring, and of a polarized atomic-beam target of hydrogen or deuterium in the path of the stored beam. The collaboration involved researchers from several midwestern universities, as well as a number of European institutions. The PINTEX program ended when the Indiana Cooler was shut down in August 2002. The website contains links to some of the numerical results, descriptions of experiments, and a complete list of publications resulting from PINTEX.
NASA Astrophysics Data System (ADS)
Fontana, A.; Marzari, F.
2016-05-01
Context. Planetesimals and planets embedded in a circumstellar disk are dynamically perturbed by the disk gravity. This causes an apsidal line precession at a rate that depends on the disk density profile and on the distance of the massive body from the star. Aims: Different analytical models are exploited to compute the precession rate of the perihelion ϖ˙. We compare them to verify their equivalence, in particular after analytical manipulations performed to derive handy formulas, and test their predictions against numerical models in some selected cases. Methods: The theoretical precession rates were computed with analytical algorithms found in the literature using the Mathematica symbolic code, while the numerical simulations were performed with the hydrodynamical code FARGO. Results: For low-mass bodies (planetesimals) the analytical approaches described in Binney & Tremaine (2008, Galactic Dynamics, p. 96), Ward (1981, Icarus, 47, 234), and Silsbee & Rafikov (2015a, ApJ, 798, 71) are equivalent under the same initial conditions for the disk in terms of mass, density profile, and inner and outer borders. They also match the numerical values computed with FARGO reasonably well away from the outer border of the disk. On the other hand, the predictions of the classical Mestel disk (Mestel 1963, MNRAS, 126, 553) for disks with p = 1 depart significantly from the numerical solution for radial distances beyond one-third of the disk extension, because the Mestel disk assumes that the outer disk border extends to infinity. For massive bodies such as terrestrial and giant planets, the agreement of the analytical approaches is progressively poorer because of the changes in the disk structure that are induced by the planet gravity. For giant planets the precession rate changes sign and is higher than the modulus of the theoretical value by a factor ranging from 1.5 to 1.8. In this case, the correction of the formula proposed by Ward (1981) to
Siddique, Waseem; El-Gabry, Lamyaa; Shevchuk, Igor V; Fransson, Torsten H
2013-01-01
High inlet temperatures in a gas turbine lead to an increase in the thermal efficiency of the gas turbine. This creates the requirement of cooling the gas turbine blades/vanes. Internal cooling of the gas turbine blades/vanes with the help of two-pass channels is one of the effective methods to reduce the metal temperatures. In particular, the trailing edge of a turbine vane is a critical area where effective cooling is required. The trailing edge can be modeled as a trapezoidal channel. This paper describes the numerical validation of the heat transfer and pressure drop in a trapezoidal channel with and without orthogonal ribs at the bottom surface. A new concept of a ribbed trailing edge is introduced in this paper, which presents a numerical study of several trailing edge cooling configurations based on the placement of ribs at different walls. The baseline geometries are two-pass trapezoidal channels with and without orthogonal ribs at the bottom surface of the channel. Ribs induce secondary flow, which results in enhancement of heat transfer; therefore, for enhancement of heat transfer at the trailing edge, ribs are placed at the trailing edge surface in three different configurations: first without ribs at the bottom surface, then ribs at the trailing edge surface in-line with the ribs at the bottom surface, and finally staggered ribs. Heat transfer and pressure drop are calculated at a Reynolds number equal to 9400 for all configurations. Different turbulence models are used for the validation of the numerical results. For the smooth channel the low-Re k-ɛ model, realizable k-ɛ model, the RNG k-ω model, low-Re k-ω model, and SST k-ω models are compared, whereas for the ribbed channel, the low-Re k-ɛ model and SST k-ω models are compared. The results show that the low-Re k-ɛ model, which predicts the heat transfer in the outlet pass of the smooth channels with a difference of +7%, underpredicts the heat transfer by -17% in the case of the ribbed channel compared to
NASA Astrophysics Data System (ADS)
Sanz-Enguita, G.; Ortega, J.; Folcia, C. L.; Aramburu, I.; Etxebarria, J.
2016-02-01
We have studied the performance characteristics of a dye-doped cholesteric liquid crystal (CLC) laser as a function of the sample thickness. The study has been carried out both from the experimental and theoretical points of view. The theoretical model is based on the kinetic equations for the population of the excited states of the dye and for the power of light generated within the laser cavity. From the equations, the threshold pump radiation energy Eth and the slope efficiency η are numerically calculated. Eth is rather insensitive to thickness changes, except for small thicknesses. In comparison, η shows a much more pronounced variation, exhibiting a maximum that determines the sample thickness for optimum laser performance. The predictions are in good accordance with the experimental results. Approximate analytical expressions for Eth and η as a function of the physical characteristics of the CLC laser are also proposed. These expressions present an excellent agreement with the numerical calculations. Finally, we comment on the general features of CLC layer and dye that lead to the best laser performance.
Monsanglant, C.; Audi, G.; Conreur, G.; Cousin, R.; Doubre, H.; Jacotin, M.; Henry, S.; Kepinski, J.-F.; Lunney, D.; Saint Simon, M. de; Thibault, C.; Toader, C.; Bollen, G.; Lebee, G.; Scheidenberger, C.; Borcea, C.; Duma, M.; Kluge, H.-J.; Le Scornet, G.
1999-11-16
MISTRAL is an experimental program to measure the masses of very short-lived nuclides (T1/2 down to a few ms) with very high accuracy (a few parts in 10^7). There have been three data-taking periods with radioactive beams, in which 22 masses of isotopes of Ne, Na, Mg, Al, K, Ca, and Ti were measured. The systematic errors are now under control at the level of 8×10^-7, allowing us to come close to the expected accuracy. Even for the very weakly produced ^30Na (1 ion at the detector per proton burst), the final accuracy is 7×10^-7.
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Todesco, M.; Neri, A.; Esposti Ongaro, T.; Tola, E.; Rocco, G.
2011-12-01
We present a new DVD of the INGV outreach series, aimed at illustrating our research work on pyroclastic flow modeling. Pyroclastic flows (or pyroclastic density currents) are hot, devastating clouds of gas and ashes, generated during explosive eruptions. Understanding their dynamics and impact is crucial for a proper hazard assessment. We employ a 3D numerical model which describes the main features of the multi-phase and multi-component process, from the generation of the flows to their propagation along complex terrains. Our numerical results can be translated into color animations, which describe the temporal evolution of flow variables such as temperature or ash concentration. The animations provide a detailed and effective description of the natural phenomenon which can be used to present this geological process to a general public and to improve the hazard perception in volcanic areas. In our DVD, the computer animations are introduced and commented by professionals and researchers who deals at various levels with the study of pyroclastic flows and their impact. Their comments are taken as short interviews, mounted in a short video (about 10 minutes), which describes the natural process, as well as the model and its applications to some explosive volcanoes like Vesuvio, Campi Flegrei, Mt. St. Helens and Soufriere Hills (Montserrat). The ensemble of different voices and faces provides a direct sense of the multi-disciplinary effort involved in the assessment of pyroclastic flow hazard. The video also introduces the people who address this complex problem, and the personal involvement beyond the scientific results. The full, uncommented animations of the pyroclastic flow propagation on the different volcanic settings are also provided in the DVD, that is meant to be a general, flexible outreach tool.
LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; et al
2015-12-14
Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
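For context, the smallest nontrivial instance of the model benchmarked above, the two-site Hubbard dimer at half filling, is exactly solvable by diagonalizing a 4×4 Hamiltonian. A sketch (with illustrative t and U values, unrelated to the paper's benchmark parameters):

```python
import numpy as np

# Exact diagonalization of the two-site Hubbard dimer at half filling
# (one up and one down electron). Basis: (up-site, down-site) in
# {(1,1), (1,2), (2,1), (2,2)}; the first and last states are doubly
# occupied and cost U; hopping with amplitude -t moves one electron.
def two_site_hubbard(t=1.0, U=4.0):
    H = np.array([[ U, -t, -t,  0],
                  [-t,  0,  0, -t],
                  [-t,  0,  0, -t],
                  [ 0, -t, -t,  U]], dtype=float)
    w, v = np.linalg.eigh(H)              # eigenvalues in ascending order
    gs = v[:, 0]                          # ground-state vector
    double_occ = gs[0]**2 + gs[3]**2      # weight on doubly occupied states
    return w[0], double_occ

e0, docc = two_site_hubbard()
# Analytic ground-state energy: (U - sqrt(U^2 + 16 t^2)) / 2
```

Energies and double occupancies of exactly this kind (on large lattices, where exact diagonalization is impossible) are the quantities cross-checked between the many-body methods listed in the abstract.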
G. L. Hawkes; J. E. O'Brien; B. A. Haberman; A. J. Marquis; C. M. Baca; D. Tripepi; P. Costamagna
2008-06-01
A numerical study of the thermal and electrochemical performance of a single-tube Integrated Planar Solid Oxide Fuel Cell (IP-SOFC) has been performed. Results obtained from two finite-volume computational fluid dynamics (CFD) codes, FLUENT and SOHAB, and from a two-dimensional in-house developed finite-volume GENOA model are presented and compared. Each tool uses physical and geometric models of differing complexity, and comparisons are made to assess their relative merits. Several single-tube simulations were run using each code over a range of operating conditions. The results include polarization curves and distributions of local current density, composition and temperature. Comparisons of these results are discussed, along with their relationship to the respective embedded phenomenological models for activation losses, fluid flow and mass transport in porous media. In general, agreement between the codes was within 15% for overall parameters such as operating voltage and maximum temperature. The CFD results clearly show the effects of internal structure on the distributions of gas flows and related quantities within the electrochemical cells.
NASA Astrophysics Data System (ADS)
Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.
2015-12-01
The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower- and aircraft-based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.
Sideri, Mario; Garutti, Paola; Costa, Silvano; Cristiani, Paolo; Schincaglia, Patrizia; Sassoli de Bianchi, Priscilla; Naldoni, Carlo; Bucchi, Lauro
2015-01-01
Purpose. To report the accuracy of colposcopically directed biopsy in an internet-based colposcopy quality assurance programme in northern Italy. Methods. A web application was made accessible on the website of the regional Administration. Fifty-nine of the 65 registered colposcopists logged in, viewed a posted set of 50 digital colpophotographs, classified them for colposcopic impression and need for biopsy, and indicated the most appropriate site for biopsy with a left-button mouse click on the image. Results. Total biopsy failure rate, comprising both nonbiopsy and incorrect selection of biopsy site, was 0.20 in CIN1, 0.11 in CIN2, 0.09 in CIN3, and 0.02 in carcinoma. Errors in the selection of biopsy site were stable between 0.08 and 0.09 in the three grades of CIN while decreasing to 0.01 in carcinoma. In multivariate analysis, the risk of incorrect selection of biopsy site was 1.97 for CIN2, 2.52 for CIN3, and 0.29 for carcinoma versus CIN1. Conclusions. Although total biopsy failure rate decreased regularly with increasing severity of histological diagnosis, the rate of incorrect selection of biopsy site was stable up to CIN3. In multivariate analysis, CIN2 and CIN3 had an independently increased risk of incorrect selection of biopsy site. PMID:26180805
NASA Technical Reports Server (NTRS)
Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.
2003-01-01
A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced by 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.
NASA Astrophysics Data System (ADS)
Hand, J. W.; Li, Y.; Hajnal, J. V.
2010-02-01
Numerical simulations of specific absorption rate (SAR) and temperature changes in a 26-week pregnant woman model within typical birdcage body coils as used in 1.5 T and 3 T MRI scanners are described. Spatial distributions of SAR and the resulting spatial and temporal changes in temperature are determined using a finite difference time domain method and a finite difference bio-heat transfer solver that accounts for discrete vessels. Heat transfer from foetus to placenta via the umbilical vein and arteries as well as that across the foetal skin/amniotic fluid/uterine wall boundaries is modelled. Results suggest that for procedures compliant with IEC normal mode conditions (maternal whole-body averaged SAR_MWB <= 2 W kg^-1 (continuous or time-averaged over 6 min)), whole foetal SAR, local foetal SAR_10g and average foetal temperature are within international safety limits. For continuous RF exposure at SAR_MWB = 2 W kg^-1 over periods of 7.5 min or longer, a maximum local foetal temperature >38 °C may occur. However, assessment of the risk posed by such maximum temperatures predicted in a static model is difficult because of frequent foetal movement. Results also confirm that when SAR_MWB = 2 W kg^-1, some local SAR_10g values in the mother's trunk and extremities exceed recommended limits.
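The bio-heat side of such a calculation can be sketched with a one-dimensional Pennes-type solver: conduction plus a perfusion sink and a uniform SAR source. The tissue properties and perfusion rate below are generic illustrative values, not the foetal model of the paper (which uses a 3D solver with discrete vessels).

```python
import numpy as np

# Hypothetical 1-D Pennes bioheat sketch: explicit finite differences with a
# perfusion sink and a uniform SAR deposition. Returns the peak temperature
# rise (°C) above arterial temperature after `hours` of exposure.
# Illustrative generic soft-tissue parameters, not the paper's foetal model.
def bioheat_rise(sar=2.0, hours=1.0, nx=51, L=0.1):
    k, rho, c = 0.5, 1000.0, 3600.0       # W/m/K, kg/m^3, J/kg/K
    w = 5e-4                              # blood perfusion rate, 1/s
    dx = L / (nx - 1)
    dt = 0.2 * rho * c * dx * dx / k      # stable explicit time step
    T = np.zeros(nx)                      # rise above arterial temperature
    steps = int(hours * 3600.0 / dt)
    for _ in range(steps):
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T = T + dt * (k * lap / (rho * c) - w * T + sar / c)
        T[0] = T[-1] = 0.0                # boundaries held at core temperature
    return float(T.max())
```

With these illustrative numbers a 2 W/kg deposition sustained for an hour produces a rise of order 1 °C, the same order as the foetal temperature excursions discussed in the abstract; perfusion is what caps the steady-state rise at roughly sar/(c·w).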
Ermolaev, B.S.; Novozhilov, B.V.; Posvyanskii, V.S.; Sulimov, A.A.
1986-03-01
The authors analyze the results of a numerical simulation of the convective burning of explosive powders in the presence of increasing pressure. The formulation of the problem reproduces a typical experimental technique: a strong closed vessel with a channel uniformly filled with the explosive investigated is fitted with devices for initiating and recording the process of explosion. It is shown that the relation between the propagation velocities of the flame and the compression waves in the powder and the rate of pressure increase in the combustion zone is such that a narrow compaction zone is formed ahead of the ignition front. Another important result is obtained by analyzing the difference between the flame velocity and the gas flow velocity in the ignition front. A model of the process is given. The results of the investigation throw light on such aspects of the convective combustion mechanism and the transition from combustion to detonation as the role of compaction of the explosive in the process of flame propagation and the role of the rate of pressure increase and dissipative heating of the gas phase in the pores ahead of the ignition front.
On the accuracy of ERS-1 orbit predictions
NASA Technical Reports Server (NTRS)
Koenig, Rolf; Li, H.; Massmann, Franz-Heinrich; Raimondo, J. C.; Rajasenan, C.; Reigber, C.
1993-01-01
Since the launch of ERS-1, the D-PAF (German Processing and Archiving Facility) has regularly provided orbit predictions for the worldwide SLR (Satellite Laser Ranging) tracking network. The weekly distributed orbital elements are so-called tuned IRVs and tuned SAO elements. The tuning procedure, designed to improve the accuracy of the recovery of the orbit at the stations, is discussed based on numerical results. This shows that tuning of elements is essential for ERS-1 with the currently applied tracking procedures. The orbital elements are updated by daily distributed time bias functions. The generation of the time bias function is explained. Problems and numerical results are presented. The time bias function increases the prediction accuracy considerably. Finally, the quality assessment of ERS-1 orbit predictions is described. The accuracy has been compiled for about 250 days since launch. The average accuracy lies in the range of 50-100 ms and has improved considerably.
Prexl, A.; Hoffmann, H.; Golle, M.; Kudrass, S.; Wahl, M.
2011-01-17
Springback prediction and compensation is nowadays a widely recommended discipline in finite element modeling. Many studies have shown an improvement in the accuracy of springback prediction using advanced modeling techniques, e.g. by including the Bauschinger effect. In this work, different models were investigated in the commercial simulation program AutoForm for a large-series production part manufactured from the dual-phase steel HC340XD. The work shows the differences between numerical drawbead models and geometrically modeled drawbeads. Furthermore, a sensitivity analysis was made for a reduced kinematic hardening model implemented in the finite element program AutoForm.
NASA Technical Reports Server (NTRS)
Peltier, L. J.; Biringen, S.
1993-01-01
The present numerical simulation explores a thermal-convective mechanism for oscillatory thermocapillary convection in a shallow Cartesian cavity for a Prandtl number 6.78 fluid. The computer program developed for this simulation integrates the two-dimensional, time-dependent Navier-Stokes equations and the energy equation by a time-accurate method on a stretched, staggered mesh. Flat free surfaces are assumed. The instability is shown to depend upon temporal coupling between large scale thermal structures within the flow field and the temperature sensitive free surface. A primary result of this study is the development of a stability diagram presenting the critical Marangoni number separating steady from the time-dependent flow states as a function of aspect ratio for the range of values between 2.3 and 3.8. Within this range, a minimum critical aspect ratio near 2.3 and a minimum critical Marangoni number near 20,000 are predicted below which steady convection is found.
Dvir, Hila; Zlochiver, Sharon
2015-01-01
A single isolated sinoatrial pacemaker cell presents intrinsic interbeat interval (IBI) variability that is believed to result from the stochastic characteristics of the opening and closing processes of membrane ion channels. In this work, a novel (to our knowledge) mathematical framework was developed to address the effect of current fluctuations on the IBIs of sinoatrial pacemaker cells. Using statistical modeling and employing the Fokker-Planck formalism, our mathematical analysis suggests that increased stochastic current fluctuation variance linearly increases the slope of phase-4 depolarization, and hence the rate of activations. Single-cell and two-dimensional computerized numerical modeling of the sinoatrial node was conducted to validate the theoretical predictions using established ionic kinetics of rabbit pacemaker and atrial cells. Our models also provide a novel complementary or alternative explanation to recent experimental observations showing a strong reduction in the mean IBI of Cx30-deficient mice in comparison to wild-types, not fully explicable by the effects of intercellular decoupling. PMID:25762340
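The basic stochastic ingredient, current noise acting during phase-4 depolarization toward a firing threshold, can be sketched with an Euler-Maruyama toy model. Note the hedge: in this linear toy, noise mainly broadens the IBI distribution; the mean-rate effect derived in the paper comes from the nonlinear ionic kinetics and Fokker-Planck analysis, which this sketch does not reproduce. All parameter values are illustrative.

```python
import random

# Toy Euler-Maruyama sketch of phase-4 depolarization: the "membrane
# potential" v drifts toward a firing threshold under additive current
# noise of strength sigma, and the first-passage time is one interbeat
# interval (IBI). Illustrative units, not rabbit ionic kinetics.
def ibi_stats(sigma, beats=400, drift=1.0, vth=1.0, dt=1e-3, seed=1):
    rng = random.Random(seed)
    ibis = []
    for _ in range(beats):
        v = t = 0.0
        while v < vth:
            v += drift * dt + sigma * dt**0.5 * rng.gauss(0.0, 1.0)
            t += dt
        ibis.append(t)
    mean = sum(ibis) / len(ibis)
    var = sum((x - mean) ** 2 for x in ibis) / len(ibis)
    return mean, var
```

Increasing `sigma` visibly widens the IBI distribution while the mean stays near vth/drift, which is exactly why explaining a shift in *mean* IBI (as observed in the Cx30-deficient mice) requires the nonlinear framework of the paper rather than this linear caricature.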
2010-01-01
Background. The mitosporic fungus Trichoderma harzianum (Hypocrea, Ascomycota, Hypocreales, Hypocreaceae) is a ubiquitous species in the environment, with some strains commercially exploited for the biological control of plant pathogenic fungi. Although T. harzianum is asexual (or anamorphic), its sexual stage (or teleomorph) has been described as Hypocrea lixii. Since recombination would be an important issue for the efficacy of a biological control agent in the field, we investigated the phylogenetic structure of the species. Results. Using DNA sequence data from three unlinked loci for each of 93 strains collected worldwide, we detected a complex speciation process revealing overlapping reproductively isolated biological species, recent agamospecies and numerous relict lineages with unresolved phylogenetic positions. Genealogical concordance and recombination analyses confirm the existence of two genetically isolated agamospecies including T. harzianum sensu stricto and two hypothetical holomorphic species related to but different from H. lixii. The exact phylogenetic position of the majority of strains was not resolved and was therefore attributed to a diverse network of recombining strains conventionally called the 'pseudoharzianum matrix'. Since H. lixii and T. harzianum are evidently genetically isolated, the anamorph - teleomorph combination comprising H. lixii/T. harzianum in one holomorph must be rejected in favor of two separate species. Conclusions. Our data illustrate a complex speciation within the H. lixii - T. harzianum species group, which is based on coexistence and interaction of organisms with different evolutionary histories and on the absence of strict genetic borders between them. PMID:20359347
NASA Astrophysics Data System (ADS)
Chirkov, V. A.; Komarov, D. K.; Stishkov, Y. K.; Vasilkov, S. A.
2015-10-01
The paper studies a particular electrode system, two flat parallel electrodes with a dielectric plate having a small circular hole between them. Its main feature is that the region of strong electric field is located far from the metal electrode surfaces, which makes it possible to preclude injection charge formation and to observe field-enhanced dissociation (the Wien effect) leading to the emergence of electrohydrodynamic (EHD) flow. The described electrode system was studied by way of both computer simulation and experiment. The latter was conducted with the help of the particle image velocimetry (PIV) technique. The numerical work used the commercial software package COMSOL Multiphysics, which allows the complete set of EHD equations to be solved and the EHD flow structure to be obtained. Based on the computer simulation and the comparison with the experimental results, it was concluded that the Wien effect is capable of causing intense (several centimeters per second) EHD flows in low-conducting liquids and has to be taken into account when dealing with EHD devices.
Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos
2011-07-15
For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa m^3/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the pumping speeds measured. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields which holds for higher throughputs is calculated using this generic approach. This means that the test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also for the supply of the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.
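The test-particle Monte Carlo idea can be illustrated on a much simpler geometry than the actual cryopump: a cylindrical duct whose wall captures ("cryosorbs") a molecule with some sticking probability and otherwise re-emits it diffusely with a cosine law. Geometry and sticking probability below are illustrative assumptions, not the ITER model-pump design or the ProVac3D implementation.

```python
import math, random

# Test-particle Monte Carlo sketch in the free molecular regime: particles
# enter a cylinder (radius, length) with a cosine-law angular distribution;
# each wall hit captures the particle with probability `stick`, otherwise
# it is re-emitted diffusely. Returns (capture, transmission) fractions.
def mc_capture(radius=1.0, length=5.0, stick=0.6, n=20000, seed=7):
    rng = random.Random(seed)

    def sphere():                          # uniform direction on unit sphere
        z = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(1.0 - z * z)
        return r * math.cos(phi), r * math.sin(phi), z

    def cosine_dir(nx, ny, nz):            # cosine-law emission about normal
        sx, sy, sz = sphere()
        dx, dy, dz = nx + sx, ny + sy, nz + sz
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        return dx / norm, dy / norm, dz / norm

    captured = transmitted = 0
    for _ in range(n):
        r = radius * math.sqrt(rng.random())       # uniform over inlet disk
        a = 2.0 * math.pi * rng.random()
        x, y, z = r * math.cos(a), r * math.sin(a), 0.0
        dx, dy, dz = cosine_dir(0.0, 0.0, 1.0)
        while True:
            A = dx * dx + dy * dy
            if A < 1e-12:                          # (near-)axial flight
                transmitted += dz > 0.0
                break
            B = 2.0 * (x * dx + y * dy)
            C = min(x * x + y * y - radius * radius, 0.0)  # clamp rounding
            t = (-B + math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)
            zh = z + t * dz
            if zh < 0.0:                           # back out through inlet
                break
            if zh > length:                        # through to the outlet
                transmitted += 1
                break
            x, y, z = x + t * dx, y + t * dy, zh
            if rng.random() < stick:               # cryosorbed on the wall
                captured += 1
                break
            dx, dy, dz = cosine_dir(-x / radius, -y / radius, 0.0)
    return captured / n, transmitted / n
```

The capture fraction estimated this way plays the role of the capture coefficient in the abstract: a single geometry-dependent number that can then feed a coarser model valid at higher throughputs.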
The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...
NASA Astrophysics Data System (ADS)
Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina
2012-03-01
Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (first-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.
NASA Astrophysics Data System (ADS)
Valla, Pierre G.; van der Beek, Peter A.; Lague, Dimitri; Carcaillet, Julien
2010-05-01
Bedrock gorges are frequent features in glacial or post-glacial landscapes and allow measurements of fluvial bedrock incision in mountainous relief. Using digital elevation models, aerial photographs, topographic maps and field reconnaissance in the Pelvoux-Ecrins Massif (French Western Alps), we have identified ~30 tributary hanging valleys incised by gorges toward their confluence with the trunk streams. Longitudinal profiles of these tributaries are all convex and have abrupt knickpoints at the upper limit of oversteepened gorge reaches. From morphometric analyses, we find that mean channel gradients and widths, as well as knickpoint retreat rates, display a drainage-area dependence modulated by bedrock lithology. However, there appears to be no relation between horizontal retreat and vertical downwearing of knickpoints. Numerical modeling has been performed to test the capacity of different fluvial incision models to predict the inferred evolution of the gorges. Results from simple end-member models suggest transport-limited behavior of the bedrock gorges. Using a more sophisticated model including dynamic width adjustment and sediment-dependent incision rates, we show that bedrock gorge evolution requires significant supply of sediment from the gorge sidewalls triggered by gorge deepening, combined with pronounced inhibition of bedrock incision by sediment transport and deposition. We then use in-situ produced 10Be cosmogenic nuclides to date and quantify bedrock gorge incision into a single glacial hanging valley (Gorge du Diable). We have sampled gorge sidewalls and the active channel bed to derive both long-term and present-day incision rates. 10Be ages of sidewall profiles reveal rapid incision through the late Holocene (ca 5 ka), implying either delayed initiation of gorge incision after final ice retreat from internal Alpine valleys at ca 12 ka, or post-glacial surface reburial of the gorge. Both modeling results and cosmogenic dating suggest that
Elcner, Jakub; Lizal, Frantisek; Jedelsky, Jan; Jicha, Miroslav; Chovancova, Michaela
2016-04-01
In this article, the results of numerical simulations using computational fluid dynamics (CFD) and a comparison with experiments performed with phase Doppler anemometry are presented. The simulations and experiments were conducted in a realistic model of the human airways, which comprised the throat, trachea and tracheobronchial tree up to the fourth generation. A full inspiration/expiration breathing cycle was used with tidal volumes of 0.5 and 1 L, which correspond to a sedentary regime and deep breath, respectively. The length of the entire breathing cycle was 4 s, with inspiration and expiration each lasting 2 s. As a boundary condition for the CFD simulations, experimentally obtained flow rate distribution in 10 terminal airways was used with zero pressure resistance at the throat inlet. The CCM+ CFD code (Adapco) was used with an SST k-ω low-Reynolds-number RANS model. The total number of polyhedral control volumes was 2.6 million with a time step of 0.001 s. Comparisons were made at several points in eight cross sections selected according to experiments in the trachea and the left and right bronchi. The results agree well with experiments involving the oscillation (temporal relocation) of flow structures in the majority of the cross sections and individual local positions. Velocity field simulation in several cross sections shows a very unstable flow field, which originates in the tracheal laryngeal jet and propagates far downstream with the formation of separation zones in both left and right airways. The RANS simulation agrees with the experiments in almost all the cross sections and shows unstable local flow structures and a quantitatively acceptable solution for the time-averaged flow field. PMID:26163996
NASA Astrophysics Data System (ADS)
Alhammoud, B.; Béranger, K.; Mortier, L.; Crépon, M.
The Eastern Mediterranean hydrology and circulation are studied by comparing the results of a high-resolution primitive equation model (described in a dedicated session: Béranger et al.) with observations. The model has a horizontal grid mesh of 1/16° and 43 z-levels in the vertical. The model was initialized with the MODB5 climatology and has been forced for 11 years by the daily sea surface fluxes provided by the European Centre for Medium-Range Weather Forecasts analysis in a perpetual-year mode corresponding to the year March 1998-February 1999. At the end of the run, the numerical model is able to accurately reproduce the major water masses of the Eastern Mediterranean Basin (Levantine Surface Water, modified Atlantic Water, Levantine Intermediate Water, and Eastern Mediterranean Deep Water). Comparisons with the POEM observations reveal good agreement. While the initial conditions of the model are somewhat different from the POEM observations, during the last year of the simulation we found that the water mass stratification matches that of the observations quite well in the seasonal mean. During the 11 years of simulation, the model drifts slightly in the layers below the thermocline. Nevertheless, many important physical processes were reproduced. One example is that the dispersal of Adriatic Deep Water into the Levantine Basin is represented. In addition, convective activity located in the northern part of the Levantine Basin occurs in spring, as expected. The surface circulation is in agreement with in-situ and satellite observations. Some well-known mesoscale features of the upper thermocline circulation are shown. Seasonal variability of transports through the Sicily, Otranto and Cretan straits is investigated as well. This work was supported by the French MERCATOR project and SHOM.
Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish
2007-01-01
We created a web-based laboratory information system, e-Chasqui, to connect public laboratories with health centers in order to improve communication and analysis. After one year, we performed a pre- and post-assessment of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui. PMID:18693974
NASA Astrophysics Data System (ADS)
Herring, Jeannette L.; Maurer, Calvin R., Jr.; Muratore, Diane M.; Galloway, Robert L., Jr.; Dawant, Benoit M.
1999-05-01
This paper presents a comparison of iso-intensity-based surface extraction algorithms applied to computed tomography (CT) images of the spine. The extracted vertebral surfaces are used in surface-based registration of CT images to physical space, where our ultimate goal is the development of a technique that can be used for image-guided spinal surgery. The surface extraction process has a direct effect on image-guided surgery in two ways: the extracted surface must provide an accurate representation of the actual surface so that a good registration can be achieved, and the number of polygons in the mesh representation of the extracted surface must be small enough to allow the registration to be performed quickly. To examine the effect of the surface extraction process on registration error and run time, we have performed a large number of experiments on two plastic spine phantoms. Using a marker-based system to assess accuracy, we have found that submillimetric registration accuracy can be achieved using a point-to-surface registration algorithm with simplified and unsimplified members of the general class of iso-intensity-based surface extraction algorithms. This research has practical implications, since it shows that several versions of the widely available class of intensity-based surface extraction algorithms can be used to provide sufficient accuracy for vertebral registration. Since intensity-based algorithms are completely deterministic and fully automatic, this finding simplifies the pre-processing required for image-guided back surgery.
Numerical Boundary Condition Procedures
NASA Technical Reports Server (NTRS)
1981-01-01
Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.
NASA Astrophysics Data System (ADS)
Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.
2015-06-01
Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year, because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using same-year ground truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat7 scan line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent year ground-truth data to classify prior year landuse, an
E. L. Tolman; S. N. Aksan
1981-10-01
Nine boil-off experiments were conducted in the Swiss NEPTUN Facility primarily to obtain experimental data for assessing the perturbation effects of LOFT thermocouples during simulated small-break core uncovery conditions. The data will also be useful in assessing computer model capability to predict thermal hydraulic response data for this type of experiment. System parameters that were varied for these experiments included heater rod power, system pressure, and initial coolant subcooling. The experiments showed that the LOFT thermocouples do not cause a significant cooling influence in the rods to which they are attached. Furthermore, the accuracy of the LOFT thermocouples is within 20 K at the peak cladding temperature zone.
NASA Astrophysics Data System (ADS)
Declair, Stefan; Stephan, Klaus; Potthast, Roland
2015-04-01
Determining the amount of weather-dependent renewable energy is a demanding task for transmission system operators (TSOs). In the project EWeLiNE, funded by the German government, the German Weather Service and the Fraunhofer Institute for Wind Energy and Energy System Technology strongly support the TSOs by developing innovative weather and power forecasting models and tools for grid integration of weather-dependent renewable energy. The key element in the energy prediction process chain is the numerical weather prediction (NWP) system. With a focus on wind energy, we address the model errors in the planetary boundary layer, which is characterized by strong spatial and temporal fluctuations in wind speed, in order to improve the basis of the weather-dependent renewable energy prediction. Model data can be corrected by postprocessing techniques such as model output statistics and calibration using historical observational data. On the other hand, the latest observations can be used in a preprocessing technique called data assimilation (DA). In DA, the model output from a previous time step is combined with observational data such that the new model state used to initialize the next integration (the analysis) best fits both the latest model data and the observations. Model errors can therefore already be reduced before the model integration. In this contribution, the results of an impact study are presented. A so-called OSSE (Observing System Simulation Experiment) is performed using the convection-resolving COSMO-DE model of the German Weather Service and a 4D-DA technique, a Newtonian relaxation method also called nudging. Starting from a nature run (treated as the truth), conventional observations and artificial wind observations at hub height are generated. In a control run, the basic model setup of the nature run is slightly perturbed to drag the model away from the previously generated truth, and a free forecast is computed based on the analysis using only conventional
Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis
NASA Technical Reports Server (NTRS)
Slojkowski, Steven E.
2014-01-01
Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit, where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.
NASA Astrophysics Data System (ADS)
van Aalsburg, Jordan; Rundle, John B.; Grant, Lisa B.; Rundle, Paul B.; Yakovlev, Gleb; Turcotte, Donald L.; Donnellan, Andrea; Tiampo, Kristy F.; Fernandez, Jose
2010-08-01
In weather forecasting, current and past observational data are routinely assimilated into numerical simulations to produce ensemble forecasts of future events in a process termed "model steering". Here we describe a similar approach that is motivated by analyses of previous forecasts of the Working Group on California Earthquake Probabilities (WGCEP). Our approach is adapted to the problem of earthquake forecasting using topologically realistic numerical simulations for the strike-slip fault system in California. By systematically comparing simulation data to observed paleoseismic data, a series of spatial probability density functions (PDFs) can be computed that describe the probable locations of future large earthquakes. We develop this approach and show examples of PDFs associated with magnitude M > 6.5 and M > 7.0 earthquakes in California.
NASA Astrophysics Data System (ADS)
Sprenger, Lisa; Lange, Adrian; Odenbach, Stefan
2013-12-01
Ferrofluids are colloidal suspensions consisting of magnetic nanoparticles dispersed in a carrier liquid. Their thermodiffusive behaviour is rather strong compared to that of molecular binary mixtures, leading to a Soret coefficient (ST) of 0.16 K⁻¹. Former experiments with dilute magnetic fluids have been performed with thermogravitational columns or horizontal thermodiffusion cells by different research groups. For the horizontal thermodiffusion cell, a former analytical approach has been used to solve the phenomenological diffusion equation in one dimension, assuming a constant concentration gradient over the cell's height. The current experimental work is based on the horizontal separation cell and emphasises the comparison of the concentration development in differently concentrated magnetic fluids and at different temperature gradients. The ferrofluid investigated is the kerosene-based EMG905 (Ferrotec), compared with the APG513A (Ferrotec), both containing magnetite nanoparticles. The experiments prove that the separation process depends linearly on the temperature gradient and that a constant concentration gradient develops in the setup due to the separation. Analytical one-dimensional and numerical three-dimensional approaches to solving the diffusion equation are derived and compared with the solution used so far for dilute fluids, to see whether the assumptions made formerly also hold for more highly concentrated fluids. Both the analytical and the numerical solutions, in either a phenomenological or a thermodynamic description, are able to reproduce the separation signal obtained from the experiments. The Soret coefficient is then determined to be 0.184 K⁻¹ in the analytical case and 0.29 K⁻¹ in the numerical case. Former theoretical approaches for dilute magnetic fluids underestimate the strength of the separation in the case of a concentrated ferrofluid.
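The steady separation state described in this abstract follows from a zero-flux balance between Fickian diffusion and thermodiffusion. As a rough illustration (a hypothetical nondimensional relaxation, not the authors' code), the sketch below integrates the one-dimensional equation to its steady state and recovers the expected concentration difference Δc ≈ ST·c0·(1−c0)·ΔT:

```python
def steady_separation(c0=0.05, S_T=0.16, dT=1.0, n=50, steps=20000):
    """Relax dc/dt = d/dx( dc/dx + S_T*c*(1-c)*dT/dx ) to steady state.

    Nondimensional 1D cell of unit height, diffusivity D = 1, constant
    temperature gradient dT across the cell, zero-flux walls.
    """
    h = 1.0 / n
    dt = 0.4 * h * h            # explicit diffusion stability limit
    c = [c0] * n                # cell-centred concentrations
    for _ in range(steps):
        J = [0.0] * (n + 1)     # fluxes at cell faces (wall fluxes stay 0)
        for i in range(1, n):
            cm = 0.5 * (c[i - 1] + c[i])
            J[i] = -((c[i] - c[i - 1]) / h + S_T * cm * (1.0 - cm) * dT)
        for i in range(n):
            c[i] -= dt * (J[i + 1] - J[i]) / h
    return c[0] - c[-1]         # enrichment of the cold wall over the hot wall

delta_c = steady_separation()
# zero-flux balance predicts delta_c close to S_T*c0*(1-c0)*dT = 0.0076
```

With an illustrative volume fraction c0 = 0.05 and the dilute-limit ST = 0.16 K⁻¹ quoted above, the relaxed profile reproduces the constant concentration gradient the experiments observe.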
NASA Astrophysics Data System (ADS)
Raghavan, V.; Whitney, Scott E.; Ebmeier, Ryan J.; Padhye, Nisha V.; Nelson, Michael; Viljoen, Hendrik J.; Gogos, George
2006-09-01
In this article, experimental and numerical analyses to investigate the thermal control of an innovative vortex tube based polymerase chain reaction (VT-PCR) thermocycler are described. VT-PCR is capable of rapid DNA amplification and real-time optical detection. The device rapidly cycles six 20 μl 96 bp λ-DNA samples between the PCR stages (denaturation, annealing, and elongation) for 30 cycles in approximately 6 min. Two-dimensional numerical simulations have been carried out using the computational fluid dynamics (CFD) software FLUENT v.6.2.16. Experiments and CFD simulations have been carried out to measure/predict the temperature variation between the samples and within each sample. The heat transfer rate (primarily dictated by the temperature differences between the samples and the external air heating or cooling them) governs the temperature distribution between and within the samples. Temperature variation between and within the samples during the denaturation stage has been quite uniform (maximum variation around ±0.5 and 1.6 °C, respectively). During cooling, the heat transfer rate has been controlled by adjusting the cold release valves in the VT-PCR. Improved thermal control, which increases the efficiency of the PCR process, has been obtained both experimentally and numerically by slightly decreasing the rate of cooling. Thus, an almost uniform temperature distribution between and within the samples (within 1 °C) has been attained for the annealing stage as well. It is shown that the VT-PCR is a fully functional PCR machine capable of amplifying specific DNA target sequences in less time than conventional PCR devices.
Accuracy considerations in the computational analysis of jet noise
NASA Technical Reports Server (NTRS)
Scott, James N.
1993-01-01
The application of computational fluid dynamics methods to the analysis of problems in aerodynamic noise has resulted in the extension and adaptation of conventional CFD to the discipline now referred to as computational aeroacoustics (CAA). In the analysis of jet noise accurate resolution of a wide range of spatial and temporal scales in the flow field is essential if the acoustic far field is to be predicted. The numerical simulation of unsteady jet flow has been successfully demonstrated and many flow features have been computed with reasonable accuracy. Grid refinement and increased solution time are discussed as means of improving accuracy of Navier-Stokes solutions of unsteady jet flow. In addition various properties of different numerical procedures which influence accuracy are examined with particular emphasis on dispersion and dissipation characteristics. These properties are investigated by using selected schemes to solve model problems for the propagation of a shock wave and a sinusoidal disturbance. The results are compared for the different schemes.
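The dissipation and dispersion properties discussed above can be made concrete with a toy problem (a hypothetical sketch, not the paper's solver): advecting a sinusoidal disturbance with a first-order upwind scheme versus the second-order Lax-Wendroff scheme shows the former damping the amplitude while the latter nearly preserves it — exactly the kind of scheme-dependent error that decides whether acoustic content survives to the far field.

```python
import math

def advect(scheme, n=100, steps=400, cfl=0.5):
    """Advect u0 = sin(2*pi*x) with u_t + u_x = 0 on a periodic [0,1) grid."""
    h = 1.0 / n
    u = [math.sin(2.0 * math.pi * i * h) for i in range(n)]
    for _ in range(steps):
        un = u[:]
        for i in range(n):
            um1, up1 = un[i - 1], un[(i + 1) % n]
            if scheme == "upwind":      # first order: strongly dissipative
                u[i] = un[i] - cfl * (un[i] - um1)
            else:                       # Lax-Wendroff: dispersive, low dissipation
                u[i] = un[i] - 0.5 * cfl * (up1 - um1) \
                       + 0.5 * cfl ** 2 * (up1 - 2.0 * un[i] + um1)
    return max(u)                       # surviving wave amplitude

amp_upwind = advect("upwind")           # amplitude visibly damped (~0.8)
amp_lw = advect("lax-wendroff")         # amplitude nearly preserved
```

After two periods of travel, the upwind result has lost a substantial fraction of the disturbance amplitude while Lax-Wendroff retains it, at the cost of phase (dispersion) error instead.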
NASA Astrophysics Data System (ADS)
Malamataris, Nikolaos; Liakos, Anastasios
2015-11-01
The exact value of the Reynolds number at which separation first appears in the flow around a circular cylinder is still a matter of research. This work connects the inception of separation with the calculation of a positive pressure gradient around the circumference of the cylinder. The hypothesis is that inception of separation occurs when the pressure gradient becomes positive around the circumference. Of the most-cited laboratory experiments on the inception of separation, only Thom measured the pressure gradient there at very low Reynolds numbers (up to Re=3.5). For this reason, the experimental conditions of his tunnel are simulated in a new numerical experiment. The full Navier-Stokes equations in both two and three dimensions are solved with a home-made code that utilizes Galerkin finite elements. In the two-dimensional numerical experiment, inception of separation is observed at Re=4.3, which is the lowest Reynolds number at which inception has been reported computationally. Currently, the three-dimensional experiment is under way, in order to determine whether effects of three-dimensional separation arise under the conditions of Thom's experiments.
Numerical simulation of small perturbation transonic flows
NASA Technical Reports Server (NTRS)
Seebass, A. R.; Yu, N. J.
1976-01-01
The results of a systematic study of small perturbation transonic flows are presented. Both the flow over thin airfoils and the flow over wedges were investigated. Various numerical schemes were employed in the study. The prime goal of the research was to determine the efficiency of various numerical procedures by accurately evaluating the wave drag, both by computing the pressure integral around the body and by integrating the momentum loss across the shock. Numerical errors involved in the computations that affect the accuracy of drag evaluations were analyzed. The factors that affect numerical stability and the rate of convergence of the iterative schemes were also systematically studied.
Developing a Weighted Measure of Speech Sound Accuracy
Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.
2010-01-01
Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers’ speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
Higher-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1976-01-01
A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
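As a rough illustration of spline-based derivative approximation (a generic natural-spline sketch, not the specific Rubin-Khosla formulation), the nodal second derivatives of a cubic spline solve a tridiagonal system and converge to the true second derivative as the mesh is refined:

```python
import math

def spline_second_derivs(x, y):
    """Natural cubic spline: solve the tridiagonal system
    (h_{i-1}/6) M_{i-1} + ((h_{i-1}+h_i)/3) M_i + (h_i/6) M_{i+1}
        = (y_{i+1}-y_i)/h_i - (y_i-y_{i-1})/h_{i-1}
    for the nodal second derivatives M_i (Thomas algorithm)."""
    n = len(x)
    a = [0.0] * n; b = [1.0] * n; c = [0.0] * n; d = [0.0] * n
    for i in range(1, n - 1):
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        a[i], b[i], c[i] = hl / 6.0, (hl + hr) / 3.0, hr / 6.0
        d[i] = (y[i + 1] - y[i]) / hr - (y[i] - y[i - 1]) / hl
    for i in range(1, n):               # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * n
    M[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]
    return M

n = 41
x = [i * math.pi / (n - 1) for i in range(n)]
M = spline_second_derivs(x, [math.sin(v) for v in x])
# away from the boundaries, M_i approximates f''(x_i) = -sin(x_i)
err = max(abs(M[i] + math.sin(x[i])) for i in range(5, n - 5))
```

The spline couples neighbouring nodes through the tridiagonal system, which is what lets collocation procedures of this family exceed the accuracy of a three-point finite difference of the same stencil width.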
Quarini, G L; Learmonth, I D; Gheduzzi, S
2006-07-01
Acrylic cements are commonly used to attach prosthetic components in joint replacement surgery. The cements set in short periods of time by a complex polymerization of initially liquid monomer compounds into solid structures with accompanying significant heat release. Two main problems arise from this form of fixation: the first is the potential damage caused by the temperature excursion, and the second is incomplete reaction leaving active monomer compounds, which can potentially be slowly released into the patient. This paper presents a numerical model predicting the temperature-time history in an idealized prosthetic-cement-bone system. Using polymerization kinetics equations from the literature, the degree of polymerization is predicted, which is found to be very dependent on the thermal history of the setting process. Using medical literature, predictions for the degree of thermal bone necrosis are also made. The model is used to identify the critical parameters controlling thermal and unreacted monomer distributions. PMID:16898219
NASA Technical Reports Server (NTRS)
Scalapino, D. J.; Sugar, R. L.; White, S. R.; Bickers, N. E.; Scalettar, R. T.
1989-01-01
Numerical simulations on the half-filled three-dimensional Hubbard model clearly show the onset of Neel order. Simulations of the two-dimensional electron-phonon Holstein model show the competition between the formation of a Peierls-CDW state and a superconducting state. However, the behavior of the partly filled two-dimensional Hubbard model is more difficult to determine. At half-filling, the antiferromagnetic correlations grow as T is reduced. Doping away from half-filling suppresses these correlations, and it is found that there is a weak attractive pairing interaction in the d-wave channel. However, the strength of the pair field susceptibility is weak at the temperatures and lattice sizes that have been simulated, and the nature of the low-temperature state of the nearly half-filled Hubbard model remains open.
Accuracy of deception judgments.
Bond, Charles F; DePaulo, Bella M
2006-01-01
We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438
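For intuition, a discrimination index of d ≈ .40 maps onto percentage correct under a standard equal-variance Gaussian signal detection model via p ≈ Φ(d/2). This back-of-envelope sketch (illustrative only, not part of the meta-analysis) lands in the same range as the observed 54%, with the remaining gap attributable to judges' bias toward truth judgments:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

d = 0.40                      # mean lie-truth discrimination reported above
p_correct = phi(d / 2.0)      # balanced lie/truth base rates, unbiased judge
# p_correct comes out near 0.58, close to the observed 54% correct
```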
On the Spatial and Temporal Accuracy of Overset Grid Methods for Moving Body Problems
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1996-01-01
A study of numerical attributes peculiar to an overset grid approach to unsteady aerodynamics prediction is presented. Attention is focused on the effect of spatial error associated with interpolation of intergrid boundary conditions, and of temporal error associated with explicit update of intergrid boundary points, on overall solution accuracy. A set of numerical experiments is used to verify whether or not the use of simple interpolation for intergrid boundary conditions degrades the formal accuracy of a conventional second-order flow solver, and to quantify the error associated with explicit updating of intergrid boundary points. Test conditions correspond to the transonic regime. The validity of the numerical results presented here is established by comparison with existing numerical results of documented accuracy, and by direct comparison with experimental results.
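The central question — whether simple interpolation at intergrid boundaries can sustain a second-order solver — rests on linear interpolation itself being second-order accurate. A small sketch (illustrative only, not the paper's overset machinery) confirms the O(h²) error decay:

```python
import math

def interp_error(n):
    """Max error of piecewise-linear interpolation of sin(2*pi*x) on n cells."""
    h = 1.0 / n
    ys = [math.sin(2.0 * math.pi * i * h) for i in range(n + 1)]
    err = 0.0
    for i in range(n):
        for t in (0.25, 0.5, 0.75):     # probe interior points of each cell
            x = (i + t) * h
            lin = (1.0 - t) * ys[i] + t * ys[i + 1]
            err = max(err, abs(lin - math.sin(2.0 * math.pi * x)))
    return err

ratio = interp_error(50) / interp_error(100)
# halving h cuts the error by ~4: second order, matching the flow solver
```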
NASA Astrophysics Data System (ADS)
Losiak, Anna; Czechowski, Leszek; Velbel, Michael A.
2015-12-01
Gypsum, a mineral that requires water to form, is common on the surface of Mars. Most of it originated more than 3.5 Gyr ago, when the Red Planet was more humid than now. However, occurrences of gypsum dune deposits around the North Polar Residual Cap (NPRC) seem to be surprisingly young: late Amazonian in age. This shows that liquid water was present on Mars even at times when surface conditions were as cold and dry as the present-day. A recently proposed mechanism for gypsum formation involves weathering of dust within ice (e.g., Niles, P.B., Michalski, J. [2009]. Nat. Geosci. 2, 215-220.). However, none of the previous studies have determined if this process is possible under current martian conditions. Here, we use numerical modelling of heat transfer to show that during the warmest days of the summer, solar irradiation may be sufficient to melt pure water ice located below a layer of dark dust particles (albedo ⩽ 0.13) lying on the steepest sections of the equator-facing slopes of the spiral troughs within the martian NPRC. During the times of high irradiance at the north pole (every 51 ka; caused by variation of orbital and rotational parameters of Mars e.g., Laskar, J. et al. [2002]. Nature 419, 375-377.) this process could have taken place over larger parts of the spiral troughs. The existence of small amounts of liquid water close to the surface, even under current martian conditions, fulfils one of the main requirements necessary to explain the formation of the extensive gypsum deposits around the NPRC. It also changes our understanding of the degree of current geological activity on Mars and has important implications for estimating the astrobiological potential of Mars.
ERIC Educational Resources Information Center
Goold, Vernell C.
1977-01-01
Numerical control (a technique involving coded, numerical instructions for the automatic control and performance of a machine tool) does not replace fundamental machine tool training. It should be added to the training program to give the student an additional tool to accomplish production rates and accuracy that were not possible before. (HD)
The construction of high-accuracy schemes for acoustic equations
NASA Technical Reports Server (NTRS)
Tang, Lei; Baeder, James D.
1995-01-01
An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
NASA Astrophysics Data System (ADS)
Wildman, R. D.; Jenkins, J. T.; Krouskop, P. E.; Talbot, J.
2006-07-01
A comparison of the predictions of a simple kinetic theory with experimental and numerical results for a vibrated granular bed consisting of nearly elastic particles of two sizes has been performed. The results show good agreement between the data sets for a range of numbers of each size of particle, and are particularly good for particle beds containing similar proportions of each species. The agreement suggests that such a model may be a good starting point for describing polydisperse systems of granular flows.
Schäfer, Dirk; Köber, Ralf; Dahmke, Andreas
2003-09-01
The successful dechlorination of mixtures of chlorinated hydrocarbons with zero-valent metals requires information concerning the kinetics of simultaneous degradation of different contaminants. This includes intraspecies competitive effects (loading of the reactive iron surface by a single contaminant) as well as interspecies competition of several contaminants for the reactive sites available. In columns packed with zero-valent iron, the degradation behaviour of trichloroethylene (TCE), cis-dichloroethylene (DCE) and mixtures of both was measured in order to investigate interspecies competition. Although a decreasing rate of dechlorination is to be expected, when several degradable substances compete for the reactive sites on the iron surface, TCE degradation is nearly unaffected by the presence of cis-DCE. In contrast, cis-DCE degradation rates decrease significantly when TCE is added. A new modelling approach is developed in order to identify and quantify the observed competitive effects. The numerical model TBC (Transport, Biochemistry and Chemistry, Schäfer et al., 1998a) is used to describe adsorption, desorption and dechlorination in a mechanistic way. Adsorption and degradation of a contaminant based on a limited number of reactive sites leads to a combined zero- and first-order degradation kinetics for high and low concentrations, respectively. The adsorption of several contaminants with different sorption parameters to a limited reactive surface causes interspecies competition. The reaction scheme and the parameters required are successfully transferred from Arnold and Roberts (2000b) to the model TBC. The degradation behaviour of the mixed contamination observed in the column experiments can be related to the adsorption properties of TCE and cis-DCE. By predicting the degradation of the single substances TCE and cis-DCE as well as mixtures of both, the calibrated model is used to investigate the effects of interspecies competition on the design of
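The combined zero- and first-order kinetics and the interspecies competition described above are characteristic of a Langmuir-Hinshelwood rate law. The sketch below (with hypothetical rate parameters, not the calibrated TBC values) reproduces the qualitative finding that strongly sorbing TCE suppresses cis-DCE degradation:

```python
def rates(C, k, K):
    """Langmuir-Hinshelwood rates with site competition:
    r_i = k_i*K_i*C_i / (1 + sum_j K_j*C_j)."""
    denom = 1.0 + sum(Kj * Cj for Kj, Cj in zip(K, C))
    return [ki * Ki * Ci / denom for ki, Ki, Ci in zip(k, K, C)]

def integrate(C0, k, K, dt=0.01, steps=200):
    """Explicit Euler integration of dC_i/dt = -r_i."""
    C = list(C0)
    for _ in range(steps):
        C = [max(Ci - ri * dt, 0.0) for Ci, ri in zip(C, rates(C, k, K))]
    return C

k = [1.0, 1.0]     # hypothetical maximum surface reaction rates (TCE, cis-DCE)
K = [50.0, 2.0]    # hypothetical sorption constants: TCE binds far more strongly

dce_alone = integrate([0.0, 1.0], k, K)[1]   # cis-DCE degrading by itself
dce_mixed = integrate([1.0, 1.0], k, K)[1]   # cis-DCE with TCE competing
# TCE occupies most reactive sites, so cis-DCE survives far longer in the mix
```

Because the shared denominator couples the species, high concentrations saturate the sites (zero-order regime) while low concentrations degrade first-order, matching the behaviour the column experiments exhibit.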
NASA Astrophysics Data System (ADS)
Schäfer, Dirk; Köber, Ralf; Dahmke, Andreas
2003-09-01
The successful dechlorination of mixtures of chlorinated hydrocarbons with zero-valent metals requires information concerning the kinetics of simultaneous degradation of different contaminants. This includes intraspecies competitive effects (loading of the reactive iron surface by a single contaminant) as well as interspecies competition of several contaminants for the reactive sites available. In columns packed with zero-valent iron, the degradation behaviour of trichloroethylene (TCE), cis-dichloroethylene (DCE) and mixtures of both was measured in order to investigate interspecies competition. Although a decreasing rate of dechlorination is to be expected when several degradable substances compete for the reactive sites on the iron surface, TCE degradation is nearly unaffected by the presence of cis-DCE. In contrast, cis-DCE degradation rates decrease significantly when TCE is added. A new modelling approach is developed in order to identify and quantify the observed competitive effects. The numerical model TBC (Transport, Biochemistry and Chemistry, Schäfer et al., 1998a) is used to describe adsorption, desorption and dechlorination in a mechanistic way. Adsorption and degradation of a contaminant based on a limited number of reactive sites leads to combined zero- and first-order degradation kinetics for high and low concentrations, respectively. The adsorption of several contaminants with different sorption parameters to a limited reactive surface causes interspecies competition. The reaction scheme and the parameters required are successfully transferred from Arnold and Roberts (2000b) to the model TBC. The degradation behaviour of the mixed contamination observed in the column experiments can be related to the adsorption properties of TCE and cis-DCE. By predicting the degradation of the single substances TCE and cis-DCE as well as mixtures of both, the calibrated model is used to investigate the effects of interspecies competition on the design of
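The site-limited kinetics described in this abstract can be illustrated with a minimal competitive Langmuir-Hinshelwood sketch. All rate and sorption constants below are hypothetical (they are not the Arnold and Roberts (2000b) parameters, and the TBC model itself resolves adsorption and desorption as explicit steps); the sketch only shows how a shared, limited reactive surface produces asymmetric competition:

```python
import numpy as np

def mixture_kinetics(c0, k, K, t_end=1.0, dt=1e-3):
    """Euler integration of competitive site-limited degradation:
    rate_i = k_i * K_i * C_i / (1 + sum_j K_j * C_j),
    which is zero-order when K_i*C_i >> 1 and first-order when K_i*C_i << 1."""
    c = np.array(c0, dtype=float)
    for _ in range(int(t_end / dt)):
        rates = k * K * c / (1.0 + np.sum(K * c))
        c = np.maximum(c - rates * dt, 0.0)
    return c

k = np.array([1.0, 0.5])    # maximum surface reaction rates (TCE, cis-DCE) -- hypothetical
K = np.array([100.0, 2.0])  # sorption affinities: TCE binds far more strongly -- hypothetical

tce_alone = mixture_kinetics([1.0, 0.0], k, K)[0]
dce_alone = mixture_kinetics([0.0, 1.0], k, K)[1]
tce_mix, dce_mix = mixture_kinetics([1.0, 1.0], k, K)
# TCE is nearly unaffected by cis-DCE, while cis-DCE degradation is strongly suppressed by TCE
```

With a strongly sorbing TCE, the shared-site denominator alone reproduces the qualitative asymmetry seen in the column experiments: TCE dominates the surface, so its rate barely changes in the mixture, while cis-DCE is displaced.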
NASA Astrophysics Data System (ADS)
Rawat, A.; Aucan, J.; Ardhuin, F.
2012-12-01
Sea level variations of the order of 1 cm at scales under 30 km are of great interest for the future Surface Water Ocean Topography (SWOT) satellite mission. That satellite should provide high-resolution maps of the sea surface height for analysis of meso- to sub-mesoscale currents, but that will require a filtering of all gravity wave motions in the data. Free infragravity waves (FIGWs) are generated and radiate offshore when swells and/or wind seas and their associated bound infragravity waves impact exposed coastlines. Free infragravity waves have dominant periods between 1 and 10 minutes and horizontal wavelengths of up to tens of kilometers. Given these wavelengths and amplitudes, the free infragravity wave field can constitute a significant fraction of the signal measured by the future SWOT mission. In this study, we analyze the data from recovered bottom pressure recorders of the Deep-ocean Assessment and Reporting of Tsunamis (DART) program. This analysis includes data spanning several years between 2006 and 2010, from stations at different latitudes in the North and South Pacific, the North Atlantic, the Gulf of Mexico and the Caribbean Sea. We present and discuss the following conclusions: (1) The amplitude of free infragravity waves can reach several centimeters, higher than the precision sought for the SWOT mission. (2) The free infragravity signal is higher in the Eastern North Pacific than in the Western North Pacific, possibly due to smaller incident swell and seas impacting the nearby coastlines. (3) Free infragravity waves are higher in the North Pacific than in the North Atlantic, possibly owing to different average continental shelf configurations in the two basins. (4) There is a clear seasonal cycle at the high-latitude North Atlantic and Pacific stations that is much less pronounced or absent at the tropical stations, consistent with the generation mechanism of free infragravity waves. Our numerical model
Lane, J.W., Jr.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.
2000-01-01
The suitability of common-offset ground-penetrating radar (GPR) to detect free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and the polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in phase with the transmitted pulse. Water-filled fractures create reflections with greater amplitude and opposite polarity compared with those created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate that it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections; nevertheless, subsurface heterogeneity, antenna coupling changes, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties demonstrated by the numerical and experimental results suggests the potential of using GPR methods as a monitoring tool. GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
Benedetti, Andrea; Platt, Robert; Atherton, Juli
2014-01-01
Background Over time, adaptive Gaussian Hermite quadrature (QUAD) has become the preferred method for estimating generalized linear mixed models with binary outcomes. However, penalized quasi-likelihood (PQL) is still used frequently. In this work, we systematically evaluated via simulation whether matching results from PQL and QUAD indicate less bias in estimated regression coefficients and variance parameters. Methods We performed a simulation study in which we varied the size of the data set, the probability of the outcome, the variance of the random effect, the number of clusters, and the number of subjects per cluster, etc. We estimated bias in the regression coefficients, odds ratios and variance parameters as estimated via PQL and QUAD. We ascertained whether similarity of the estimated regression coefficients, odds ratios and variance parameters predicted less bias. Results Overall, we found that the absolute percent bias of the odds ratio estimated via PQL or QUAD increased as the PQL- and QUAD-estimated odds ratios became more discrepant, though results varied markedly depending on the characteristics of the data set. Conclusions Given how markedly results varied depending on data set characteristics, specifying a rule above which results would be flagged as biased proved impossible. This work suggests that comparing results from generalized linear mixed models estimated via PQL and QUAD is a worthwhile exercise for regression coefficients and variance components obtained via QUAD, in situations where PQL is known to give reasonable results. PMID:24416249
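The data-generating process in simulation studies of this kind can be sketched as a random-intercept logistic GLMM. The parameter values below are illustrative only (they are not the design points varied in the paper), and the PQL/QUAD fitting step itself would be done in a mixed-model package such as R's glmmPQL or glmer:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_glmm(n_clusters=200, n_per=20, beta0=-1.0, beta1=0.5, sigma_u=1.0):
    """Simulate clustered binary outcomes from the model
    logit P(y_ij = 1) = beta0 + beta1 * x_ij + u_i,  u_i ~ N(0, sigma_u^2)."""
    u = rng.normal(0.0, sigma_u, n_clusters)      # cluster-level random intercepts
    x = rng.normal(size=(n_clusters, n_per))      # subject-level covariate
    p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x + u[:, None])))
    y = (rng.uniform(size=p.shape) < p).astype(int)
    return x, y

x, y = simulate_glmm()
```

Varying n_clusters, n_per, sigma_u, and the outcome probability (through beta0) reproduces the design axes described in the Methods; bias is then measured by comparing each method's estimates against the known true parameters.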
NASA Astrophysics Data System (ADS)
van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis
2014-11-01
In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al. (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second-order accurate, un-split, conservative, three-dimensional VOF scheme providing second-order density fluxes and capable of robust and accurate high-density-ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.
Sexton, A; Rawlings, L; Jenkins, M; Winship, I
2014-02-01
We present a case where an apparently straightforward Lynch syndrome predictive genetic test of DNA from a blood sample from a woman yielded an unexpected result of X/Y chromosome imbalance. Furthermore, it demonstrates the complexities of genetic testing in people who have had bone marrow transplants. This highlights the potential for multiple ethical and counselling challenges, including the inadvertent testing of the donor. Good communication between clinics and laboratories is essential to overcome such challenges and to minimise the provision of false results. PMID:23990319
NASA Astrophysics Data System (ADS)
Shmelkov, Yuriy; Samujlov, Eugueny
2012-04-01
Calculated transport properties of solid-fuel combustion products were compared with known experimental data. The calculations were made with the modified program TETRAN, developed at the G.M. Krzhizhanovsky Power Engineering Institute, and accounted for the chemical reactions and phase transformations occurring during combustion. Ionization of the combustion products at high temperatures was also taken into account. Various Russian coals and some other solid fuels were considered as fuels. Densities, viscosities and thermal conductivities of the gas phase of the combustion products were obtained over the temperature range 500-20000 K. The comparison shows good agreement between the calculated results and experiment.
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
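The Lax-Friedrichs flux splitting underlying both schemes can be sketched in one dimension. The following first-order local Lax-Friedrichs (Rusanov) step for Burgers' equation is a minimal shock-capturing sketch (grid size, CFL number, and initial data are illustrative; it is not the paper's second-/fourth-order 2-D Euler discretization):

```python
import numpy as np

def lax_friedrichs_step(u, dt, dx, f=lambda u: 0.5 * u**2):
    """One conservative step of the local Lax-Friedrichs scheme for u_t + f(u)_x = 0
    on a periodic grid; alpha is the maximum wave speed (|f'(u)| = |u| for Burgers)."""
    alpha = np.max(np.abs(u))
    up, um = np.roll(u, -1), np.roll(u, 1)
    flux_r = 0.5 * (f(u) + f(up)) - 0.5 * alpha * (up - u)   # interface i+1/2
    flux_l = 0.5 * (f(um) + f(u)) - 0.5 * alpha * (u - um)   # interface i-1/2
    return u - dt / dx * (flux_r - flux_l)

# shock formation from a smooth periodic initial condition
N = 400
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N
u = np.sin(2 * np.pi * x) + 0.5
dt = 0.4 * dx                 # CFL number 0.6 for max|u| = 1.5
for _ in range(500):          # integrate to t = 0.5, past shock formation
    u = lax_friedrichs_step(u, dt, dx)
```

The scheme is conservative (the cell average is preserved exactly on a periodic grid) and monotone at this CFL number, so the captured shock appears without spurious oscillations; higher-order variants of this splitting recover design accuracy only in smooth regions, which is the error behaviour the paper analyzes.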
Grabarek, Dawid; Walczak, Elżbieta; Andruniów, Tadeusz
2016-05-10
The effect of the quality of the ground-state geometry on excitation energies in the retinal chromophore minimal model (PSB3) was systematically investigated using various single- (within Møller-Plesset and coupled-cluster frameworks) and multiconfigurational [within complete active space self-consistent field (CASSCF) and CASSCF-based perturbative approaches: second-order CASPT2 and third-order CASPT3] methods. Among investigated methods, only CASPT3 provides geometry in nearly perfect agreement with the CCSD(T)-based equilibrium structure. The second goal of the present study was to assess the performance of the CASPT2 methodology, which is popular in computational spectroscopy of retinals, in describing the excitation energies of low-lying excited states of PSB3 relative to CASPT3 results. The resulting CASPT2 excitation energy error is up to 0.16 eV for the S0 → S1 transition but only up to 0.06 eV for the S0 → S2 transition. Furthermore, CASPT3 excitation energies practically do not depend on modification of the zeroth-order Hamiltonian (so-called IPEA shift parameter), which does dramatically and nonsystematically affect CASPT2 excitation energies. PMID:27049438
Lockwood, M.; Owens, M.
2009-08-20
We survey observations of the radial magnetic field in the heliosphere as a function of position, sunspot number, and sunspot cycle phase. We show that most of the differences between pairs of simultaneous observations, normalized using the square of the heliocentric distance and averaged over solar rotations, are consistent with the kinematic 'flux excess' effect whereby the radial component of the frozen-in heliospheric field is increased by longitudinal solar wind speed structure. In particular, the survey shows that, as expected, the flux excess effect at high latitudes is almost completely absent during sunspot minimum but is almost the same as within the streamer belt at sunspot maximum. We study the uncertainty inherent in the use of the Ulysses result that the radial field is independent of heliographic latitude in the computation of the total open solar flux: we show that after the kinematic correction for the excess flux effect has been made it causes errors that are smaller than 4.5%, with a most likely value of 2.5%. The importance of this result for understanding temporal evolution of the open solar flux is reviewed.
NASA Astrophysics Data System (ADS)
Perez-Poch, Antoni
Computer simulations are becoming a promising line of research as physiological models become more and more sophisticated and reliable. Technological advances in state-of-the-art hardware and software nowadays allow better and more accurate simulations of complex phenomena, such as the response of the human cardiovascular system to long-term exposure to microgravity. Experimental data for long-term missions are difficult to obtain and reproduce, therefore the predictions of computer simulations are of major importance in this field. Our approach is based on a model previously developed and implemented in our laboratory (NELME: Numerical Evaluation of Long-term Microgravity Effects). The software simulates the behaviour of the cardiovascular system and different human organs, has a modular architecture, and allows perturbations such as physical exercise or countermeasures to be introduced. The implementation is based on a complex electrical-like model of this control system, built with inexpensive development frameworks, and has been tested and validated against the available experimental data. The objective of this work is to analyse and simulate long-term effects and gender differences when individuals are exposed to long-term microgravity. The risk of a health impairment that may put a long-term mission in jeopardy is also evaluated. Gender differences have been implemented for this specific work as an adjustment of a number of parameters included in the model. Physiological differences between women and men have therefore been taken into account, based upon estimates from the physiology literature. A number of simulations have been carried out for long-term exposure to microgravity. Gravity, varying continuously from Earth-based to zero, and exposure time are the two main variables involved in the construction of results, including responses to patterns of physical aerobic exercise and thermal stress simulating an extra
NASA Technical Reports Server (NTRS)
Durisen, R. H.
1975-01-01
Improved viscous evolutionary sequences of differentially rotating, axisymmetric, nonmagnetic, zero-temperature white-dwarf models are constructed using the relativistically corrected degenerate electron viscosity. The results support the earlier conclusion that angular momentum transport due to viscosity does not lead to overall uniform rotation in many interesting cases. Qualitatively different behaviors are obtained, depending on how the total mass M and angular momentum J compare with the M and J values for which uniformly rotating models exist. Evolutions roughly determine the region in M and J for which models with a particular initial angular momentum distribution can reach carbon-ignition densities in 10 b.y. Such models may represent Type I supernova precursors.
NASA Astrophysics Data System (ADS)
Wang, Ten-See; Dumas, Catherine
1993-07-01
A computational fluid dynamics (CFD) model has been applied to study the transient flow phenomena of the nozzle and exhaust plume of the Space Shuttle Main Engine (SSME), fired at sea level. The CFD model is a time accurate, pressure based, reactive flow solver. A six-species hydrogen/oxygen equilibrium chemistry is used to describe the chemical-thermodynamics. An adaptive upwinding scheme is employed for the spatial discretization, and a predictor, multiple corrector method is used for the temporal solution. Both engine start-up and shut-down processes were simulated. The elapse time is approximately five seconds for both cases. The computed results were animated and compared with the test. The images for the animation were created with PLOT3D and FAST and then animated with ABEKAS. The hysteresis effects, and the issues of free-shock separation, restricted-shock separation and the end-effects were addressed.
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed of magnetic field computations for an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. Also, the relative advantages of the various methods are described when the speed of computation is an important consideration.
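The second- versus fourth-order trade-off can be checked with a minimal grid-refinement experiment on a smooth test function (a generic sketch, unrelated to the electromagnet solver itself):

```python
import numpy as np

def d2_second(f, x, h):
    """Standard second-order central stencil for f''(x)."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

def d2_fourth(f, x, h):
    """Standard fourth-order central stencil for f''(x)."""
    return (-f(x - 2 * h) + 16 * f(x - h) - 30 * f(x)
            + 16 * f(x + h) - f(x + 2 * h)) / (12 * h**2)

f, x0 = np.sin, 0.7
exact = -np.sin(x0)
errs2 = [abs(d2_second(f, x0, h) - exact) for h in (0.1, 0.05)]
errs4 = [abs(d2_fourth(f, x0, h) - exact) for h in (0.1, 0.05)]
order2 = np.log2(errs2[0] / errs2[1])   # observed convergence order, ~2
order4 = np.log2(errs4[0] / errs4[1])   # observed convergence order, ~4
```

On smooth data the fourth-order stencil gains two extra orders per refinement, which is the advantage reported for the high-accuracy regime; on nonuniform grids or insufficiently smooth fields this formal order degrades, which is why the second-order scheme can win for complex configurations.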
NASA Technical Reports Server (NTRS)
Uslenghi, Piergiorgio L. E.; Laxpati, Sharad R.; Kawalko, Stephen F.
1993-01-01
The third phase of the development of computer codes for scattering by coated bodies, part of an ongoing effort in the Electromagnetics Laboratory of the Electrical Engineering and Computer Science Department at the University of Illinois at Chicago, is described. The work reported discusses the analytical and numerical results for the scattering of an obliquely incident plane wave by impedance bodies of revolution with phi variation of the surface impedance. Integral equation formulation of the problem is considered. All three types of integral equations, electric field, magnetic field, and combined field, are considered. These equations are solved numerically via the method of moments with parametric elements. Both TE and TM polarizations of the incident plane wave are considered. The surface impedance is allowed to vary both along the profile of the scatterer and in the phi direction. The computer code developed for this purpose determines the electric surface current as well as the bistatic radar cross section. The results obtained with this code were validated by comparison with available results for specific scatterers such as the perfectly conducting sphere. Results for the cone-sphere and cone-cylinder-sphere for the case of an axially incident plane wave were validated by comparison with those obtained in the first phase of this project. Results for body-of-revolution scatterers with an abrupt change in the surface impedance along both the profile of the scatterer and the phi direction are presented.
NASA Astrophysics Data System (ADS)
Humeau, Anne; Buard, Benjamin; Mahé, Guillaume; Chapeau-Blondeau, François; Rousseau, David; Abraham, Pierre
2010-10-01
To contribute to the understanding of the complex dynamics in the cardiovascular system (CVS), the central CVS has previously been analyzed through multifractal analyses of heart rate variability (HRV) signals that were shown to bring useful contributions. Similar approaches for the peripheral CVS through the analysis of laser Doppler flowmetry (LDF) signals are comparatively very recent. In this direction, we propose here a study of the peripheral CVS through a multifractal analysis of LDF fluctuations, together with a comparison of the results with those obtained on HRV fluctuations simultaneously recorded. To perform these investigations concerning the biophysics of the CVS, first we have to address the problem of selecting a suitable methodology for multifractal analysis, allowing us to extract meaningful interpretations on biophysical signals. For this purpose, we test four existing methodologies of multifractal analysis. We also present a comparison of their applicability and interpretability when implemented on both simulated multifractal signals of reference and on experimental signals from the CVS. One essential outcome of the study is that the multifractal properties observed from both the LDF fluctuations (peripheral CVS) and the HRV fluctuations (central CVS) appear very close and similar over the studied range of scales relevant to physiology.
NASA Astrophysics Data System (ADS)
Randol, Brent M.; Christian, Eric R.
2016-03-01
A parametric study is performed using the electrostatic simulations of Randol and Christian in which the number density, n, and initial thermal speed, θ, are varied. The range of parameters covers an extremely broad plasma regime, all the way from the very weak coupling of space plasmas to the very strong coupling of solid plasmas. The first result is that simulations at the same Γ_D, where Γ_D (∝ n^(1/3) θ^(-2)) is the plasma coupling parameter, but at different combinations of n and θ, behave exactly the same. As a function of Γ_D, the form of p(v), the distribution function of v = |v|, the magnitude of the velocity vector, is studied. For intermediate to high Γ_D, heating is observed in p(v) that obeys conservation of energy, and a suprathermal tail is formed, with a spectral index that depends on Γ_D. For strong coupling (Γ_D ≫ 1), the form of the tail is v^(-5), consistent with the findings of Randol and Christian. For weak coupling (Γ_D ≪ 1), no acceleration or heating occurs, as there is no free energy. The dependence on N, the number of particles in the simulation, is also explored. There is a subtle dependence in the index of the tail, such that v^(-5) appears to be the N → ∞ limit.
SLAC E155 and E155x Numeric Data Results and Data Plots: Nucleon Spin Structure Functions
The nucleon spin structure functions g1 and g2 are important tools for testing models of nucleon structure and QCD. Experiments at CERN, DESY, and SLAC have measured g1 and g2 using deep inelastic scattering of polarized leptons on polarized nucleon targets. The results of these experiments have established that the quark component of the nucleon helicity is much smaller than naive quark-parton model predictions. The Bjorken sum rule has been confirmed within the uncertainties of experiment and theory. The experiment E155 at SLAC collected data in March and April of 1997. Approximately 170 million scattered electron events were recorded to tape. (Along with several billion inclusive hadron events.) The data were collected using three independent fixed-angle magnetic spectrometers, at approximately 2.75, 5.5, and 10.5 degrees. The momentum acceptance of the 2.75 and 5.5 degree spectrometers ranged from 10 to 40 GeV, with momentum resolution of 2-4%. The 10.5 degree spectrometer, new for E155, accepted events of 7 GeV to 20 GeV. Each spectrometer used threshold gas Cerenkov counters (for particle ID), a segmented lead-glass calorimeter (for energy measurement and particle ID), and plastic scintillator hodoscopes (for tracking and momentum measurement). The polarized targets used for E155 were ¹⁵NH₃ and ⁶LiD, as targets for measuring the proton and deuteron spin structure functions respectively. Experiment E155x recently concluded a successful two-month run at SLAC. The experiment was designed to measure the transverse spin structure functions of the proton and deuteron. The E155 target was also recently in use at TJNAF's Hall C (E93-026) and was returned to SLAC for E155x. E155x hopes to reduce the world data set errors on g2 by a factor of three. [Copied from http://www.slac.stanford.edu/exp/e155/e155_nickeltour.html, an information summary linked off the E155 home page at http://www.slac.stanford.edu/exp/e155/e155_home.html. The extension run, E155x, also makes
On accuracy of holographic shape measurement method with spherical wave illumination
NASA Astrophysics Data System (ADS)
Mikuła, Marta; Kozacki, Tomasz; Kostencka, Julianna; LiŻewski, Kamil; Józwik, Michał
2014-11-01
This paper presents a study of the accuracy of topography measurement of high-numerical-aperture focusing microobjects in a digital holographic microscope setup. The system works in a reflective configuration with spherical wave illumination. For numerical reconstruction of the topography of high-NA focusing microobjects we use two algorithms: Thin Element Approximation (TEA) and Spherical Local Ray Approximation (SLRA). We compare the accuracy of topography reconstruction with these algorithms and show the superiority of the SLRA method. However, to obtain accurate results two experimental conditions have to be determined: the position of the point source (PS) and of the imaging reference plane (IRP). Therefore we simulate the effect of the PS and IRP positions on the accuracy of the shape calculation. Moreover, we evaluate the accuracy of locating the PS and IRP and finally present measurement results for a microlens object.
NASA Technical Reports Server (NTRS)
Westphalen, H.; Spjeldvik, W. N.
1982-01-01
A theoretical method by which the energy dependence of the radial diffusion coefficient may be deduced from spectral observations of the particle population at the inner edge of the earth's radiation belts is presented. This region has previously been analyzed with numerical techniques; in this report an analytical treatment is given that illustrates characteristic limiting cases in the L shell range where the time scale of Coulomb losses is substantially shorter than that of radial diffusion (L approximately 1-2). It is demonstrated both analytically and numerically that the particle spectra there are shaped by the energy dependence of the radial diffusion coefficient regardless of the spectral shapes of the particle populations diffusing inward from the outer radiation zone, so that the energy dependence of the diffusion coefficient can be determined from observed spectra. To ensure realistic simulations, inner zone data obtained from experiments on the DIAL, AZUR, and ESRO 2 spacecraft have been used as boundary conditions. Excellent agreement between analytic and numerical results is reported.
ERIC Educational Resources Information Center
Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.
2001-01-01
Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)
NASA Technical Reports Server (NTRS)
Baker, John G.
2009-01-01
Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.
A comparison of implicit numerical methods for solving the transient spherical diffusion equation
NASA Technical Reports Server (NTRS)
Curry, D. M.
1977-01-01
Comparative numerical temperature results obtained by using two implicit finite difference procedures for the solution of the transient diffusion equation in spherical coordinates are presented. The validity and accuracy of these solutions are demonstrated by comparison with exact analytical solutions.
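An implicit treatment of the spherical diffusion equation can be sketched and checked against an exact analytical solution, in the spirit of the comparison above. This is a generic backward-Euler sketch, not either of the report's two procedures; it uses a single-eigenmode initial condition and the substitution v = r·u, which reduces the spherical problem to a Cartesian tridiagonal system:

```python
import numpy as np

def spherical_diffusion_implicit(u0, R, alpha, dt, steps):
    """Backward-Euler solution of u_t = alpha*(1/r^2)(r^2 u_r)_r on 0 < r < R,
    u(R) = 0, via v = r*u, which satisfies the Cartesian equation v_t = alpha*v_rr."""
    N = len(u0)
    r = np.linspace(0.0, R, N)
    dr = r[1] - r[0]
    lam = alpha * dt / dr**2
    # (I - lam*D2) v_new = v_old, with v pinned to 0 at r = 0 and r = R
    A = np.eye(N)
    for i in range(1, N - 1):
        A[i, i - 1] = A[i, i + 1] = -lam
        A[i, i] = 1.0 + 2.0 * lam
    v = r * np.asarray(u0)
    for _ in range(steps):
        v = np.linalg.solve(A, v)
    u = np.empty_like(v)
    u[1:] = v[1:] / r[1:]
    u[0] = u[1]  # crude regularity condition at the origin
    return u

R, alpha, N = 1.0, 1.0, 101
r = np.linspace(0.0, R, N)
u0 = np.pi * np.sinc(r)   # sin(pi*r)/r, the slowest-decaying eigenmode
u = spherical_diffusion_implicit(u0, R, alpha, dt=1e-4, steps=1000)  # t = 0.1
decay = u[N // 2] / u0[N // 2]   # analytical value: exp(-pi^2 * alpha * t / R^2)
```

For this eigenmode the exact solution decays as exp(-π²αt/R²), so comparing the computed decay factor against that value mirrors the validation-by-analytical-solution strategy described in the abstract.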
Meditation Experience Predicts Introspective Accuracy
Fox, Kieran C. R.; Zakarauskas, Pierre; Dixon, Matt; Ellamil, Melissa; Thompson, Evan; Christoff, Kalina
2012-01-01
The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1–15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a ‘body-scanning’ meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices. PMID:23049790
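The comparison of subjective reports with objective sensitivity measures amounts to a rank-correlation score per participant. A minimal sketch (the ratings below are hypothetical, and this is not the study's actual scoring procedure):

```python
import numpy as np

def rankdata(a):
    """Simple ranking (no tie handling; sufficient for distinct scores)."""
    order = np.argsort(a)
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    return ranks

def introspective_accuracy(subjective, objective):
    """Spearman rank correlation between subjective sensitivity ratings and
    objective tactile-sensitivity measures across body regions."""
    rs = rankdata(np.asarray(subjective, dtype=float))
    ro = rankdata(np.asarray(objective, dtype=float))
    rs, ro = rs - rs.mean(), ro - ro.mean()
    return float(rs @ ro / np.sqrt((rs @ rs) * (ro @ ro)))

# hypothetical data for 5 body regions
subj = [7, 3, 9, 2, 5]            # participant's sensitivity ratings
obj = [6.5, 2.1, 8.8, 3.0, 4.9]   # objective sensitivity proxy per region
score = introspective_accuracy(subj, obj)  # 1.0 = perfect introspective ranking
```

A participant whose subjective ranking of body regions matches the objective ordering scores near 1; comparing such scores between expert and novice meditators is the kind of group contrast the study reports.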
NASA Technical Reports Server (NTRS)
Back, L. H.
1972-01-01
The laminar flow equations in differential form are solved numerically on a digital computer for flow of a very high temperature gas through the entrance region of an externally cooled tube. The solution method is described and calculations are carried out in conjunction with experimental measurements. The agreement with experiment is good, with the result indicating relatively large energy and momentum losses in the highly cooled flows considered where the pressure is nearly uniform along the flow and the core flow becomes non-adiabatic a few diameters downstream of the inlet. The effects of a large range of Reynolds number and Mach number (viscous dissipation) are also investigated.
Evaluating LANDSAT wildland classification accuracies
NASA Technical Reports Server (NTRS)
Toll, D. L.
1980-01-01
Procedures to evaluate the accuracy of LANDSAT derived wildland cover classifications are described. The evaluation procedures include: (1) implementing a stratified random sample for obtaining unbiased verification data; (2) performing area by area comparisons between verification and LANDSAT data for both heterogeneous and homogeneous fields; (3) providing overall and individual classification accuracies with confidence limits; (4) displaying results within contingency tables for analysis of confusion between classes; and (5) quantifying the amount of information (bits/square kilometer) conveyed in the LANDSAT classification.
MFIX documentation numerical technique
Syamlal, M.
1998-01-01
MFIX (Multiphase Flow with Interphase eXchanges) is a general-purpose hydrodynamic model for describing chemical reactions and heat transfer in dense or dilute fluid-solids flows, which typically occur in energy conversion and chemical processing reactors. The calculations give time-dependent information on pressure, temperature, composition, and velocity distributions in the reactors. The theoretical basis of the calculations is described in the MFIX Theory Guide. Installation of the code, setting up of a run, and post-processing of results are described in the MFIX User's Manual. Work was started in April 1996 to increase the execution speed and accuracy of the code, resulting in MFIX 2.0. To improve the speed of the code, the old algorithm was replaced by a more implicit algorithm. In the test cases conducted, the new version runs 3 to 30 times faster than the old version. To increase the accuracy of the computations, second-order accurate discretization schemes were included in MFIX 2.0. Bubbling fluidized bed simulations conducted with a second-order scheme show that the predicted bubble shape is rounded, unlike the (unphysical) pointed shape predicted by the first-order upwind scheme. This report describes the numerical technique used in MFIX 2.0.
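The difference between first-order upwind and second-order discretizations that MFIX 2.0 exploits can be seen on the simplest possible model problem. The sketch below (an illustrative Python comparison, not MFIX code) advects a smooth profile exactly one period with both schemes; the upwind result is smeared by numerical diffusion while the second-order Lax-Wendroff result is nearly undistorted:

```python
import numpy as np

def advect(u0, nu, nsteps, scheme):
    """Periodic linear advection u_t + c u_x = 0 at Courant number nu."""
    u = u0.copy()
    for _ in range(nsteps):
        um, up = np.roll(u, 1), np.roll(u, -1)   # u[i-1], u[i+1]
        if scheme == "upwind":                   # first order, diffusive
            u = u - nu * (u - um)
        else:                                    # Lax-Wendroff, second order
            u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)
    return u

nx, nu = 100, 0.5
x = np.arange(nx) / nx
u0 = np.exp(-((x - 0.5) / 0.1) ** 2)
# 200 steps at nu = 0.5 carries the profile exactly one period,
# so the exact solution is the initial profile itself
u_up = advect(u0, nu, 200, "upwind")
u_lw = advect(u0, nu, 200, "lax-wendroff")
```

The artificial smearing of the first-order result is the 1-D analogue of the unphysically pointed bubble shapes mentioned above.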
NASA Technical Reports Server (NTRS)
Cabra, R.; Chen, J. Y.; Dibble, R. W.; Myhrvold, T.; Karpetis, A. N.; Barlow, R. S.
2002-01-01
An experimental and numerical investigation is presented of a lifted turbulent H2/N2 jet flame in a coflow of hot, vitiated gases. The vitiated coflow burner emulates the coupling of turbulent mixing and chemical kinetics exemplary of the reacting flow in the recirculation region of advanced combustors. It also simplifies numerical investigation of this coupled problem by removing the complexity of recirculating flow. Scalar measurements are reported for a lifted turbulent jet flame of H2/N2 (Re = 23,600, H/d = 10) in a coflow of hot combustion products from a lean H2/Air flame (φ = 0.25, T = 1,045 K). The combination of Rayleigh scattering, Raman scattering, and laser-induced fluorescence is used to obtain simultaneous measurements of temperature and concentrations of the major species, OH, and NO. The data attest to the success of the experimental design in providing a uniform vitiated coflow throughout the entire test region. Two combustion models (PDF: joint scalar Probability Density Function and EDC: Eddy Dissipation Concept) are used in conjunction with various turbulence models to predict the lift-off height (H(sub PDF)/d = 7, H(sub EDC)/d = 8.5). Kalghatgi's classic phenomenological theory, which is based on scaling arguments, yields a reasonably accurate prediction (H(sub K)/d = 11.4) of the lift-off height for the present flame. The vitiated coflow admits the possibility of auto-ignition of mixed fluid, and the success of the present parabolic implementation of the PDF model in predicting a stable lifted flame is attributable to such ignition. The measurements indicate a thickened turbulent reaction zone at the flame base. Experimental results and numerical investigations support the plausibility of turbulent premixed flame propagation by small scale (on the order of the flame thickness) recirculation and mixing of hot products into reactants and subsequent rapid ignition of the mixture.
A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1976-01-01
The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
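The heart of the U-D approach is to carry the covariance as P = U D Uᵀ (U unit upper triangular, D diagonal) so that the filter can never lose symmetry or positive semi-definiteness to round-off. Below is a minimal Python sketch of the scalar-measurement update following the standard published form of Bierman's algorithm (not code from this report), checked against the conventional Kalman update:

```python
import numpy as np

def udu_factor(P):
    """Factor symmetric positive-definite P as P = U diag(d) U^T,
    with U unit upper triangular."""
    n = P.shape[0]
    U, d = np.eye(n), np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
        for i in range(j):
            U[i, j] = (P[i, j] - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
    return U, d

def bierman_update(U, d, h, r):
    """Bierman's U-D measurement update for a scalar observation
    z = h @ x + v with var(v) = r; returns updated (U, d) and the gain K."""
    n = d.size
    U, d = U.copy(), d.copy()
    f = U.T @ h                     # f = U^T h
    v = d * f                       # v_j = d_j f_j
    b = np.zeros(n)
    alpha = r                       # innovation variance, built up term by term
    for j in range(n):
        alpha_old = alpha
        alpha += f[j] * v[j]
        d[j] *= alpha_old / alpha
        b[j] = v[j]
        p = -f[j] / alpha_old
        for i in range(j):
            Uij = U[i, j]
            U[i, j] = Uij + b[i] * p
            b[i] += Uij * v[j]
    return U, d, b / alpha          # b/alpha is the Kalman gain
```

Because only the factors are updated, the reconstructed covariance is symmetric by construction, which is the property that lets the U-D filter match double-precision references while running in single precision.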
High-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1975-01-01
The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to Burgers' equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.
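The fourth-order behavior on a uniform mesh can be illustrated with the closely related compact (Numerov-type) relation, in which the second difference of the unknown is balanced against a weighted average of the right-hand side. The sketch below (illustrative Python, not the authors' spline formulation) solves u'' = f and recovers second-order accuracy with the usual pointwise right-hand side but fourth-order accuracy with the compact 1/12-10/12-1/12 weights:

```python
import numpy as np

def solve_bvp(n, weights):
    """Solve u'' = f on (0,1) with u(0)=u(1)=0 and f = -pi^2 sin(pi x),
    using the standard second-difference operator and the given
    (left, mid, right) quadrature weights applied to f.
    Returns the max-norm error against the exact solution sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = -np.pi**2 * np.sin(np.pi * x)
    m = n - 1
    A = (np.diag(np.full(m, -2.0)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    wl, wm, wr = weights
    b = wl * f[:-2] + wm * f[1:-1] + wr * f[2:]
    u = np.linalg.solve(A, b)
    return np.abs(u - np.sin(np.pi * x[1:-1])).max()

def observed_order(weights):
    """Convergence order estimated from grids with 16 and 32 cells."""
    e1, e2 = solve_bvp(16, weights), solve_bvp(32, weights)
    return np.log2(e1 / e2)
```

Halving the mesh cuts the pointwise-RHS error by about 4 and the compact-weighted error by about 16, i.e., second- versus fourth-order convergence on the uniform mesh.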
NASA Astrophysics Data System (ADS)
Agus, M.; Mascia, M. L.; Fastame, M. C.; Melis, V.; Pilloni, M. C.; Penna, M. P.
2015-02-01
A body of literature shows the significant role played by visual-spatial skills in the improvement of mathematical skills in primary school. The main goal of the current study was to investigate the impact of combined visuo-spatial and mathematical training on the improvement of the mathematical skills of 146 second graders in several schools located in Italy. Participants were presented with single pencil-and-paper visuo-spatial or mathematical trainings, computerised versions of the above-mentioned treatments, as well as a combined version of the computer-assisted and pencil-and-paper visuo-spatial and mathematical trainings. Experimental groups received training for 3 months, once a week. All children were treated collectively, in either the computer-assisted or the pencil-and-paper modality. At pre- and post-test, all participants were presented with a battery of objective tests assessing numerical and visuo-spatial abilities. Our results suggest a positive effect of the different types of training on the empowerment of visuo-spatial and numerical abilities. Specifically, the combination of the computerised and pencil-and-paper versions of the visuo-spatial and mathematical trainings is more effective than the single execution of the software or of the pencil-and-paper treatment.
Data accuracy assessment using enterprise architecture
NASA Astrophysics Data System (ADS)
Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias
2011-02-01
Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
NASA Astrophysics Data System (ADS)
Macario Galang, Jan Albert; Narod Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo
2015-04-01
The M 7.2 October 15, 2013 Bohol earthquake was the most destructive earthquake to hit the Philippines since 2012. The epicenter was located in Sagbayan municipality, central Bohol, and the shock was generated by a previously unmapped reverse fault called the "Inabanga Fault", named after the barangay (village) where the fault is best exposed and was first observed. The earthquake resulted in 209 fatalities and over 57 billion USD worth of damage. The earthquake generated co-seismic landslides, most of which were related to fault structures. Unlike rainfall-induced landslides, co-seismic landslides are triggered without warning. Preparedness against this type of landslide therefore relies heavily on the identification of fracture-related unstable slopes. To mitigate the impacts of co-seismic landslide hazards, morpho-structural orientations or discontinuity sets were mapped in the field with the aid of a 2012 IFSAR Digital Terrain Model (DTM) with 5-meter pixel resolution and < 0.5 meter vertical accuracy. Coltop 3D software was then used to identify similar structures, including measurement of their dip and dip directions. The chosen discontinuity sets were then keyed into Matterocking software to identify potential rock slide zones due to planar or wedge discontinuities. After identifying the structurally-controlled unstable slopes, the rock mass propagation extent of the possible rock slides was simulated using Conefall. The results were compared to a post-earthquake landslide inventory of 456 landslides. Of the total number of landslides identified from post-earthquake high-resolution imagery, 366 or 80% intersect the structurally-controlled hazard areas of Bohol. The results show the potential of this method to identify co-seismic landslide hazard areas for disaster mitigation. Along with computer methods to simulate shallow landslides and debris flow paths, the located structurally-controlled unstable zones can be used to mark unsafe areas for settlement. The
A hybrid numerical scheme for the numerical solution of the Burgers' equation
NASA Astrophysics Data System (ADS)
Jiwari, Ram
2015-03-01
In this article, a hybrid numerical scheme based on the Euler implicit method, quasilinearization and uniform Haar wavelets is developed for the numerical solution of Burgers' equation. Most of the numerical methods available in the literature fail to capture the physical behavior of the equation as the viscosity ν → 0. In Jiwari (2012), the author presented numerical results down to ν = 0.003, and the scheme failed for smaller values. The main aim of the present scheme is to overcome this drawback of Jiwari (2012). Lastly, three test problems are chosen to check the accuracy of the proposed scheme. The approximate results are compared with existing numerical and exact solutions found in the literature. The uniform Haar wavelet approach is found to be accurate, simple, fast, flexible and convenient, at small computational cost.
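The two main ingredients of the scheme, Euler-implicit time stepping and quasilinearization of the convective term, can be sketched independently of the wavelet basis. Below is an illustrative Python version on a uniform finite-difference grid (an assumption; the paper uses uniform Haar wavelets in space), where each time step performs a few Newton sweeps on the linearized equation:

```python
import numpy as np

def burgers_implicit(nu=0.01, nx=81, dt=0.01, nsteps=50):
    """Euler-implicit + quasilinearization solution of Burgers' equation
    u_t + u u_x = nu u_xx on (0,1), u(0,t)=u(1,t)=0, u(x,0)=sin(pi x)."""
    x = np.linspace(0.0, 1.0, nx)
    h = x[1] - x[0]
    u = np.sin(np.pi * x)
    m = nx - 2
    for _ in range(nsteps):
        un = u.copy()
        v = u.copy()                          # quasilinearization iterate
        for _ in range(3):                    # a few Newton sweeps per step
            A = np.zeros((m, m))
            b = np.zeros(m)
            for i in range(1, nx - 1):
                k = i - 1
                vx = (v[i + 1] - v[i - 1]) / (2.0 * h)
                # linearized about v:  u u_x ~ v u_x + v_x u - v v_x, so
                # w/dt + v w_x + v_x w - nu w_xx = u^n/dt + v v_x
                A[k, k] = 1.0 / dt + vx + 2.0 * nu / h**2
                if k > 0:
                    A[k, k - 1] = -v[i] / (2.0 * h) - nu / h**2
                if k < m - 1:
                    A[k, k + 1] = v[i] / (2.0 * h) - nu / h**2
                b[k] = un[i] / dt + v[i] * vx
            v = np.concatenate(([0.0], np.linalg.solve(A, b), [0.0]))
        u = v
    return x, u
```

By t = 0.5 the sine profile has steepened into the familiar front moving toward x = 1 while its amplitude decays, the qualitative behavior a small-ν Burgers solver must reproduce.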
NASA Astrophysics Data System (ADS)
Boerstoel, J. W.
1988-01-01
The current status of a computer program system for the numerical simulation of Euler flows is presented. Preliminary test calculation results are shown. They concern the three-dimensional flow around a wing-nacelle-propeller-outlet configuration. The system is constructed to execute four major tasks: block decomposition of the flow domain around given, possibly complex, three-dimensional aerodynamic surfaces; grid generation on the blocked flow domain; Euler-flow simulation on the blocked grid; and graphical visualization of the computed flow on the blocked grid, and postprocessing. The system consists of about 20 codes interfaced by files. Most of the required tasks can be executed. The geometry of complex aerodynamic surfaces in three-dimensional space can be handled. The validation test showed that the system must be improved to increase the speed of the grid generation process.
NASA Astrophysics Data System (ADS)
Wang, Y.; Qin, G.; Zhang, M.
2012-12-01
Solar energetic particle (SEP) flux data measured by multiple spacecraft can provide important information on the transport of SEPs accelerated by interplanetary coronal mass ejection (ICME) shocks. Depending on their locations, observers in interplanetary space may be connected to different parts of an ICME shock by the interplanetary magnetic field (IMF). Simultaneous observations by multiple spacecraft in the ecliptic, e.g., ACE and STEREO A and B, usually show large differences in SEP time profiles. In this work, based on a numerical solution of the Fokker-Planck transport equation for energetic particles, we will obtain the fluxes of SEPs accelerated by ICME shocks. In addition, we will compare SEP events measured by these spacecraft, located at different longitudes, with our simulation results. The comparison has enabled us to determine particle transport parameters such as the parallel and perpendicular diffusion coefficients and the efficiency of particle injection at the ICME shock.
Improving Speaking Accuracy through Awareness
ERIC Educational Resources Information Center
Dormer, Jan Edwards
2013-01-01
Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…
Accuracy of the domain method for the material derivative approach to shape design sensitivities
NASA Technical Reports Server (NTRS)
Yang, R. J.; Botkin, M. E.
1987-01-01
Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.
How a GNSS Receiver Is Held May Affect Static Horizontal Position Accuracy
Weaver, Steven A.; Ucar, Zennure; Bettinger, Pete; Merry, Krista
2015-01-01
The static horizontal position accuracy of a mapping-grade GNSS receiver was tested in two forest types over two seasons, and subsequently was tested in one forest type against open-sky conditions in the winter season. The main objective was to determine whether the holding position during data collection would result in significantly different static horizontal position accuracy. Additionally, we wanted to determine whether the time of year (season), forest type, or environmental variables had an influence on accuracy. In general, the F4Devices Flint GNSS receiver was found to have mean static horizontal position accuracy levels within the ranges typically expected for this general type of receiver (3 to 5 m) when differential correction was not employed. When used under forest cover, in some cases the GNSS receiver provided a higher level of static horizontal position accuracy when held vertically, as opposed to held at an angle or horizontally (the more natural positions), perhaps due to the orientation of the antenna within the receiver, or in part due to multipath or the inability to use certain satellite signals. Therefore, because numerous variables may affect static horizontal position accuracy, we conclude only that there is weak to moderate evidence that holding position has a significant effect. Statistical test results also suggest that the season of data collection had no significant effect on static horizontal position accuracy, and results suggest that atmospheric variables had weak correlation with horizontal position accuracy. Forest type was found to have a significant effect on static horizontal position accuracy in one aspect of one test, yet otherwise there was little evidence that forest type affected horizontal position accuracy. Since the holding position was found in some cases to be significant with regard to the static horizontal position accuracy of positions collected in forests, it may be beneficial to have an
Meteor orbit determination with improved accuracy
NASA Astrophysics Data System (ADS)
Dmitriev, Vasily; Lupovla, Valery; Gritsevich, Maria
2015-08-01
Modern observational techniques make it possible to retrieve a meteor's trajectory and velocity with high accuracy, and high-quality observational data are accumulating rapidly year by year. This creates new challenges for the problem of meteor orbit determination. Currently, the traditional technique, based on corrections to the zenith distance and apparent velocity using the well-known Schiaparelli formula, is widely used. An alternative approach relies on meteoroid trajectory correction using numerical integration of the equation of motion (Clark & Wiegert, 2011; Zuluaga et al., 2013). In our work we suggest a technique of meteor orbit determination based on strict coordinate transformation and integration of the differential equation of motion. We demonstrate the advantage of this method in comparison with the traditional technique. We provide results of calculations by the different methods for real, recently occurred fireballs, as well as for simulated cases with a priori known parameters. Simulated data were used to demonstrate the conditions under which application of the more complex technique is necessary. It was found that for several low-velocity meteoroids, application of the traditional technique may lead to a dramatic degradation of orbit precision (first of all, due to errors in Ω, because this parameter has the highest potential accuracy). Our results are complemented by an analysis of the sources of perturbations, allowing us to indicate quantitatively which factors have to be considered in orbit determination. In addition, the developed method includes analysis of observational error propagation based on strict covariance transition, which is also presented. Acknowledgements: This work was carried out at MIIGAiK and supported by the Russian Science Foundation, project No. 14-22-00197. References: Clark, D. L., & Wiegert, P. A. (2011). A numerical comparison with the Ceplecha analytical meteoroid orbit determination method. Meteoritics & Planetary Science, 46(8), pp. 1217
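The core of the trajectory-correction approach is numerical integration of the equation of motion rather than a closed-form zenith-attraction correction. A minimal sketch in Python (two-body Earth gravity only, with an invented entry state; a production code would add J2, atmospheric drag and lunisolar perturbations) integrates backward in time out of Earth's neighborhood and checks energy conservation along the way:

```python
import numpy as np

MU = 3.986004418e14                  # Earth's GM, m^3 s^-2
RE = 6371e3                          # mean Earth radius, m

def rk4(y, dt):
    """One RK4 step for the two-body equation of motion, y = (r, v)."""
    def f(y):
        r, v = y[:3], y[3:]
        a = -MU * r / np.linalg.norm(r) ** 3
        return np.concatenate([v, a])
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(y):
    return 0.5 * np.dot(y[3:], y[3:]) - MU / np.linalg.norm(y[:3])

# hypothetical end-of-luminous-path state (geocentric frame, values invented
# for illustration): 100 km altitude, ~21 km/s, inbound
y = np.array([RE + 100e3, 0.0, 0.0, -3e3, 20e3, 5e3])
E0 = energy(y)
dt = -1.0                            # negative step: integrate backward in time
for _ in range(200_000):
    if np.linalg.norm(y[:3]) >= 100 * RE:
        break
    y = rk4(y, dt)
E1 = energy(y)
```

The conserved two-body energy gives a built-in consistency check on the integrator; the state far from Earth is what one would then transform to heliocentric coordinates to derive the orbital elements.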
NASA Astrophysics Data System (ADS)
Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi
2016-03-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first of such instruments in space with a two-DM wavefront control system. Regardless of the control algorithm for these multiple DMs, they will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like. Our numerical simulation shows that we can obtain contrast better than 2×10^-9 in
NASA Technical Reports Server (NTRS)
Newman, P. A.; Allison, D. O.
1974-01-01
Numerical results obtained from two computer programs recently developed with NASA support and now available for use by others are compared with some sample experimental data taken on a rectangular-wing configuration in the AEDC 16-Foot Transonic Tunnel at transonic and subsonic flow conditions. This data was used in an AEDC investigation as reference data to deduce the tunnel-wall interference effects for corresponding data taken in a smaller tunnel. The comparisons were originally intended to see how well a current state-of-the-art transonic flow calculation for a simple 3-D wing agreed with data which was felt by experimentalists to be relatively interference-free. As a result of the discrepancies between the experimental data and computational results at the quoted angle of attack, it was then deduced from an approximate stress analysis that the sting had deflected appreciably. Thus, the comparisons themselves are not so meaningful, since the calculations must be repeated at the proper angle of attack. Of more importance, however, is a demonstration of the utility of currently available computational tools in the analysis and correlation of transonic experimental data.
NASA Astrophysics Data System (ADS)
Li, Xiaoping; Hunt, Katharine L. C.; Pipin, Janusz; Bishop, David M.
1996-12-01
For atoms or molecules of D∞h or higher symmetry, this work gives equations for the long-range, collision-induced changes in the first (Δβ) and second (Δγ) hyperpolarizabilities, complete to order R^-7 in the intermolecular separation R for Δβ, and order R^-6 for Δγ. The results include nonlinear dipole-induced-dipole (DID) interactions, higher multipole induction, induction due to the nonuniformity of the local fields, back induction, and dispersion. For pairs containing H or He, we have used ab initio values of the static (hyper)polarizabilities to obtain numerical results for the induction terms in Δβ and Δγ. For dispersion effects, we have derived analytic results in the form of integrals of the dynamic (hyper)polarizabilities over imaginary frequencies, and we have evaluated these numerically for the pairs H...H, H...He, and He...He using the values of the fourth dipole hyperpolarizability ɛ(-iω; iω, 0, 0, 0, 0) obtained in this work, along with other hyperpolarizabilities calculated previously by Bishop and Pipin. For later numerical applications to molecular pairs, we have developed constant ratio approximations (CRA1 and CRA2) to estimate the dispersion effects in terms of static (hyper)polarizabilities and van der Waals energy or polarizability coefficients. Tests of the approximations against accurate results for the pairs H...H, H...He, and He...He show that the root mean square (rms) error in CRA1 is ˜20%-25% for Δβ and Δγ; for CRA2 the error in Δβ is similar, but the rms error in Δγ is less than 4%. At separations ˜1.0 a.u. outside the van der Waals minima of the pair potentials for H...H, H...He, and He...He, the nonlinear DID interactions make the dominant contributions to Δγ_zzzz (where z is the interatomic axis) and to Δγ_xxxx, accounting for ˜80%-123% of the total value. Contributions due to higher-multipole induction and the nonuniformity of the local field (Qα terms) may exceed 15%, while dispersion effects
Gregoire, C.; Joesten, P.K.; Lane, J.W., Jr.
2006-01-01
Ground penetrating radar is an efficient geophysical method for the detection and location of fractures and fracture zones in electrically resistive rocks. In this study, the use of down-hole (borehole) radar reflection logs to monitor the injection of steam in fractured rocks was tested as part of a field-scale, steam-enhanced remediation pilot study conducted at a fractured limestone quarry contaminated with chlorinated hydrocarbons at the former Loring Air Force Base, Limestone, Maine, USA. In support of the pilot study, borehole radar reflection logs were collected three times (before, during, and near the end of steam injection) using broadband 100 MHz electric dipole antennas. Numerical modelling was performed to predict the effect of heating on radar-frequency electromagnetic (EM) wave velocity, attenuation, and fracture reflectivity. The modelling results indicate that EM wave velocity and attenuation change substantially if heating increases the electrical conductivity of the limestone matrix. Furthermore, the net effect of heat-induced variations in fracture-fluid dielectric properties on average medium velocity is insignificant because the expected total fracture porosity is low. In contrast, changes in fracture fluid electrical conductivity can have a significant effect on EM wave attenuation and fracture reflectivity. Total replacement of water by steam in a fracture decreases fracture reflectivity by a factor of 10 and induces a change in reflected-wave polarity. Based on the numerical modelling results, a reflection amplitude analysis method was developed to delineate fractures where steam has displaced water. Radar reflection logs collected during the three acquisition periods were analysed in the frequency domain to determine whether steam had replaced water in the fractures (after normalizing the logs to compensate for differences in antenna performance between logging runs). Analysis of the radar reflection logs from a borehole where the temperature
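The reported factor-of-ten drop in reflectivity and the polarity reversal follow from a thin-layer reflection model. The sketch below (illustrative Python with assumed dielectric constants and fracture aperture, not the study's modelling code) evaluates the normal-incidence reflection coefficient of a thin fluid-filled fracture for water and steam fillings:

```python
import numpy as np

C = 299_792_458.0    # speed of light in vacuum, m/s

def thin_fracture_reflection(eps_rock, eps_fill, thickness, freq):
    """Complex reflection coefficient of a thin fluid-filled fracture in
    rock: lossless thin-layer model at normal incidence, combining the
    two interface Fresnel coefficients with the round-trip layer phase."""
    n1, n2 = np.sqrt(eps_rock), np.sqrt(eps_fill)
    r12 = (n1 - n2) / (n1 + n2)                   # rock -> fill interface
    phi = 2.0 * np.pi * freq * n2 * thickness / C # one-way phase in the layer
    e = np.exp(2j * phi)
    return r12 * (1.0 - e) / (1.0 - r12**2 * e)

# illustrative values (assumed, not from the field site): limestone eps ~ 7,
# water eps ~ 81, steam eps ~ 1, 5 mm aperture, 100 MHz antenna frequency
R_water = thin_fracture_reflection(7.0, 81.0, 5e-3, 100e6)
R_steam = thin_fracture_reflection(7.0, 1.0, 5e-3, 100e6)
```

With these numbers the water-filled fracture reflects roughly an order of magnitude more strongly than the steam-filled one, and the two reflections are close to 180° out of phase, consistent with the amplitude drop and polarity change described in the abstract.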
NASA Astrophysics Data System (ADS)
Evtushenko, Yu. G.; Posypkin, M. A.
2013-02-01
The nonuniform covering method is applied to multicriteria optimization problems. The ɛ-Pareto set is defined, and its properties are examined. An algorithm for constructing an ɛ-Pareto set with guaranteed accuracy ɛ is described. The efficiency of implementing this approach is discussed, and numerical results are presented.
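The ε-Pareto idea can be illustrated with a brute-force version of the covering approach: evaluate the objectives on a uniform covering of the feasible set and keep every point that no other point improves by more than ε in all objectives. A sketch in Python (illustrative only; the authors' method uses adaptive nonuniform coverings with guaranteed accuracy):

```python
import numpy as np

def eps_pareto(F, eps):
    """Indices of an eps-Pareto subset of the evaluated points: keep x_i
    unless some x_j improves on it by more than eps in *every* objective."""
    keep = []
    for i, fi in enumerate(F):
        if not np.any(np.all(F <= fi - eps, axis=1)):
            keep.append(i)
    return np.array(keep)

# uniform covering of the decision interval; for these two objectives the
# exact Pareto set is [0, 1]
x = np.linspace(-0.5, 1.5, 401)
F = np.column_stack([x**2, (x - 1.0) ** 2])   # both objectives minimized
idx = eps_pareto(F, eps=1e-3)
```

The retained set covers the true Pareto interval [0, 1] plus a thin ε-dependent fringe around it, which is exactly the guaranteed-accuracy behavior the ε-Pareto construction is designed to deliver.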
GEOSPATIAL DATA ACCURACY ASSESSMENT
The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...
Accuracy in optical overlay metrology
NASA Astrophysics Data System (ADS)
Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark
2016-03-01
In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and showing that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well-defined spectral regimes in which the overlay accuracy and process robustness degrade (`resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these `flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the `landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.
Dust trajectory sensor: accuracy and data analysis.
Xie, J; Sternovsky, Z; Grün, E; Auer, S; Duncan, N; Drake, K; Le, H; Horanyi, M; Srama, R
2011-10-01
The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V. Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction. PMID:22047326
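Once the induced-charge waveform on each electrode plane has been reduced to a peak time, the speed estimate along the instrument axis is essentially a line fit of plane position against time. A minimal sketch in Python (invented plane geometry and timing jitter, for illustration only; the real analysis fits the full 3-D velocity vector from several wire planes):

```python
import numpy as np

rng = np.random.default_rng(7)
z_planes = np.array([0.00, 0.05, 0.10, 0.15])   # electrode plane positions, m
v_true = 4000.0                                 # dust speed, m/s (2-5 km/s range)
# peak times of the induced-charge signals, with an assumed ~2 ns jitter
t_meas = z_planes / v_true + rng.normal(0.0, 2e-9, size=4)

# least-squares fit of z = z0 + v*t to the measured peak times
A = np.column_stack([np.ones_like(t_meas), t_meas])
(z0_est, v_est), *_ = np.linalg.lstsq(A, z_planes, rcond=None)
```

Because the plane-crossing intervals are microseconds while the timing jitter is nanoseconds, the fitted speed lands well inside the ~1% accuracy quoted for the instrument.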
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2011-01-01
Flow matching has been successfully achieved for an MHD energy bypass system on a supersonic turbojet engine. The Numerical Propulsion System Simulation (NPSS) environment helped perform a thermodynamic cycle analysis to properly match the flows from an inlet employing a MHD energy bypass system (consisting of an MHD generator and MHD accelerator) on a supersonic turbojet engine. Working with various operating conditions (such as the applied magnetic field, MHD generator length and flow conductivity), interfacing studies were conducted between the MHD generator, the turbojet engine, and the MHD accelerator. This paper briefly describes the NPSS environment used in this analysis. This paper further describes the analysis of a supersonic turbojet engine with an MHD generator/accelerator energy bypass system. Results from this study have shown that using MHD energy bypass in the flow path of a supersonic turbojet engine increases the useful Mach number operating range from 0 to 3.0 Mach (not using MHD) to a range of 0 to 7.0 Mach with specific net thrust range of 740 N-s/kg (at ambient Mach = 3.25) to 70 N-s/kg (at ambient Mach = 7). These results were achieved with an applied magnetic field of 2.5 Tesla and conductivity levels in a range from 2 mhos/m (ambient Mach = 7) to 5.5 mhos/m (ambient Mach = 3.5) for an MHD generator length of 3 m.
Accuracy evaluation of 3D lidar data from small UAV
NASA Astrophysics Data System (ADS)
Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav
2015-10-01
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental prerequisite for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. The work presented here has two purposes: first, to relate the point cloud accuracy to data processing parameters, and second, to examine the influence of the UAV platform parameters on accuracy. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor are based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and lightweight MEMS-based (microelectromechanical systems) INS equipment with a dynamic calibration process can achieve significantly improved accuracy compared to processing based solely on INS data.
Reticence, Accuracy and Efficacy
NASA Astrophysics Data System (ADS)
Oreskes, N.; Lewandowsky, S.
2015-12-01
James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2010-01-01
Preliminary flow matching has been demonstrated for a MHD energy bypass system on a supersonic turbojet engine. The Numerical Propulsion System Simulation (NPSS) environment was used to perform a thermodynamic cycle analysis to properly match the flows from an inlet to a MHD generator and from the exit of a supersonic turbojet to a MHD accelerator. Working with various operating conditions such as the enthalpy extraction ratio and isentropic efficiency of the MHD generator and MHD accelerator, interfacing studies were conducted between the pre-ionizers, the MHD generator, the turbojet engine, and the MHD accelerator. This paper briefly describes the NPSS environment used in this analysis and describes the NPSS analysis of a supersonic turbojet engine with a MHD generator/accelerator energy bypass system. Results from this study have shown that using MHD energy bypass in the flow path of a supersonic turbojet engine increases the useful Mach number operating range from 0 to 3.0 Mach (not using MHD) to an explored and desired range of 0 to 7.0 Mach.
NASA Astrophysics Data System (ADS)
Vitiello, Antonio; Squillace, Antonino; Prisco, Umberto
2007-02-01
Shape memory alloys (SMA) are a particular family of materials, discovered during the 1930s and only now used in technological applications, with the property of returning to an imposed shape after a deformation and heating process. The study of the mechanical behaviour of SMA, through a proper constitutive model, and the possible ensuing applications form the core of an interesting research field, developed in the last few years and still the subject of studies aimed at understanding and characterizing the peculiar properties of these materials. The aim of this work is to study the behaviour of SMA under torsional loads. To predict the mechanical response of the SMA, we used a numerical algorithm based on the Boyd-Lagoudas model and then compared the results with those from experimental tests. The experiments were conducted by subjecting helicoidal springs with a constant cross section to a traction load. It is well known, in fact, that in such springs the main stress under traction loads is almost completely a pure torsional stress field. The interest in these studies is due to the absence of data on such tests in the literature for SMA, and because there are an increasing number of industrial applications where SMA are subjected to torsional loads, in particular in medicine, especially in orthodontic drills.
Spacecraft attitude determination accuracy from mission experience
NASA Technical Reports Server (NTRS)
Brasoveanu, D.; Hashmall, J.; Baker, D.
1994-01-01
This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.
Accuracy of non-Newtonian Lattice Boltzmann simulations
NASA Astrophysics Data System (ADS)
Conrad, Daniel; Schneider, Andreas; Böhle, Martin
2015-11-01
This work deals with the accuracy of non-Newtonian Lattice Boltzmann simulations. Previous work for Newtonian fluids indicates that, depending on the numerical value of the dimensionless collision frequency Ω, additional artificial viscosity is introduced, which negatively influences the accuracy. Since the non-Newtonian fluid behavior is incorporated through appropriate modeling of the dimensionless collision frequency, an Ω-dependent error EΩ is introduced and its influence on the overall error is investigated. Here, simulations with the SRT and the MRT model are carried out for power-law fluids in order to numerically investigate the accuracy of non-Newtonian Lattice Boltzmann simulations. A goal of this accuracy analysis is to derive a recommendation for an optimal choice of the time step size and the simulation Mach number, respectively. For the non-Newtonian case, an error estimate for EΩ in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. With the help of the error functional, the prediction of the global error minimum of the velocity field is excellent in regions where the EΩ error is the dominant source of error. With an optimal simulation Mach number, the simulation is about one order of magnitude more accurate. Additionally, for both collision models a detailed study of the convergence behavior of the method in the non-Newtonian case is conducted. The results show that the simulation Mach number has a major impact on the convergence rate and second order accuracy is not preserved for every choice of the simulation Mach number.
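The modeling step described, incorporating non-Newtonian behavior through the collision frequency, can be sketched for a power-law fluid: the apparent viscosity depends on the local shear rate, and the SRT/BGK relaxation time follows from it. The consistency index and power-law exponent below are arbitrary lattice-unit values for illustration, not parameters from the paper.

```python
def collision_frequency(shear_rate, m=0.01, n=0.7):
    """Local BGK collision frequency for a power-law fluid (lattice units).

    Apparent kinematic viscosity: nu = m * |shear_rate|**(n - 1).
    SRT/BGK relaxation time:      tau = 3*nu + 0.5, and Omega = 1/tau.
    m (consistency) and n (power-law index) are illustrative values.
    """
    nu = m * abs(shear_rate) ** (n - 1.0)
    tau = 3.0 * nu + 0.5
    return 1.0 / tau

# Shear-thinning (n < 1): higher shear rate -> lower viscosity -> larger Omega
omega_low = collision_frequency(0.01)
omega_high = collision_frequency(1.0)
```

Because Ω is recomputed per node from the local shear rate, any Ω-dependent error EΩ of the kind analyzed above varies across the flow field, which is what motivates tuning the time step and simulation Mach number.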
NASA Technical Reports Server (NTRS)
Gomberg, Joan; Ellis, Michael
1994-01-01
We present results of a series of numerical experiments designed to test hypothetical mechanisms that drive deformation in the New Madrid seismic zone. Experiments are constrained by subtle topography and the distribution of seismicity in the region. We use a new boundary element algorithm that permits calculation of the three-dimensional deformation field. Surface displacement fields are calculated for the New Madrid zone under both far-field (plate tectonics scale) and locally derived driving strains. Results demonstrate that surface displacement fields cannot distinguish between either a far-field simple or pure shear strain field or one that involves a deep shear zone beneath the upper crustal faults. Thus, neither geomorphic nor geodetic studies alone are expected to reveal the ultimate driving mechanism behind the present-day deformation. We have also tested hypotheses about strain accommodation within the New Madrid contractional step-over by including linking faults, two southwest dipping and one vertical, recently inferred from microearthquake data. Only those models with step-over faults are able to predict the observed topography. Surface displacement fields for long-term, relaxed deformation predict the distribution of uplift and subsidence in the contractional step-over remarkably well. Generation of these displacement fields appears to require slip on both the two northeast trending vertical faults and the two dipping faults in the step-over region, with very minor displacements occurring during the interseismic period when the northeast trending vertical faults are locked. These models suggest that the gently dipping central step-over fault is a reverse fault and that the steeper fault, extending to the southeast of the step-over, acts as a normal fault over the long term.
Collocation Method for Numerical Solution of Coupled Nonlinear Schroedinger Equation
Ismail, M. S.
2010-09-30
The coupled nonlinear Schroedinger equation models several interesting physical phenomena and serves as a model equation for optical fibers with linear birefringence. In this paper we use the collocation method to solve this equation and test the method for stability and accuracy. Numerical tests using a single soliton and the interaction of three solitons are used to assess the resulting scheme.
Interoceptive accuracy and panic.
Zoellner, L A; Craske, M G
1999-12-01
Psychophysiological models of panic hypothesize that panickers focus attention on and become anxious about the physical sensations associated with panic. Attention on internal somatic cues has been labeled interoception. The present study examined the role of physiological arousal and subjective anxiety on interoceptive accuracy. Infrequent panickers and nonanxious participants participated in an initial baseline to examine overall interoceptive accuracy. Next, participants ingested caffeine, about which they received either safety or no safety information. Using a mental heartbeat tracking paradigm, participants' counts of their heartbeats during specific time intervals were coded against polygraph measures. Infrequent panickers were more accurate in the perception of their heartbeats than nonanxious participants. Changes in physiological arousal were not associated with increased accuracy on the heartbeat perception task. However, higher levels of self-reported anxiety were associated with superior performance. PMID:10596462
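The mental heartbeat tracking paradigm is commonly scored by comparing counted against recorded heartbeats per interval. Below is a sketch of one standard (Schandry-style) scoring scheme, which may differ from the exact coding used in this study; the interval data are hypothetical.

```python
def heartbeat_perception_score(actual_counts, reported_counts):
    """Mean per-interval accuracy, Schandry-style:
    score = 1 - |actual - reported| / actual, averaged over intervals.
    1.0 means perfect interoceptive accuracy."""
    scores = [1.0 - abs(a - r) / a
              for a, r in zip(actual_counts, reported_counts)]
    return sum(scores) / len(scores)

# Hypothetical data for three counting intervals (polygraph vs. self-report)
score = heartbeat_perception_score([50, 60, 40], [45, 60, 36])
```

A group difference in this score is the kind of quantity behind the finding that infrequent panickers perceived their heartbeats more accurately than nonanxious participants.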
Higher-order numerical solutions using cubic splines. [for partial differential equations
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1975-01-01
A cubic spline collocation procedure has recently been developed for the numerical solution of partial differential equations. In the present paper, this spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a non-uniform mesh and overall fourth-order accuracy for a uniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, will be presented for several model problems.
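Stated convergence orders like these can be checked empirically: with errors measured on two meshes, the observed order is the log-ratio of the errors over the log-ratio of the mesh sizes. A small sketch with hypothetical error values:

```python
import math

def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    """Observed convergence order p, assuming error ~ C * h**p on both meshes."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# Hypothetical errors from halving a uniform mesh for a fourth-order scheme:
# halving h should cut the error by a factor of ~16.
p = observed_order(1.6e-4, 1.0e-5, 0.1, 0.05)
```

Verifying p ≈ 4 on a uniform mesh (and p ≈ 3 on a non-uniform one) is the standard way to confirm the orders claimed for the reformulated spline procedure.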
Tracking accuracy assessment for concentrator photovoltaic systems
NASA Astrophysics Data System (ADS)
Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.
2010-10-01
The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.
Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L
2016-04-14
Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short'--prior-24-hour recall obtained in the afternoon and 'long'--previous-day recall obtained in the morning) with four prompts ('forward'--distant to recent, 'meal name'--breakfast, etc., 'open'--no instructions, and 'reverse'--recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure--report rate--and reporting-error-sensitive measures--correspondence rate and inflation ratio--were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio--but not report rate--showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID
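The three energy-based measures can be illustrated with a toy calculation. The definitions below are paraphrased assumptions, not the study's exact coding rules: report rate compares total reported energy to observed energy, correspondence rate counts only reported energy that matches observation, and inflation ratio counts reported-but-unobserved energy.

```python
def accuracy_measures(observed_kcal, matched_kcal, intruded_kcal):
    """Energy-based accuracy measures per meal-reporting child
    (illustrative definitions, assumed for this sketch):
      report rate         = total reported energy / observed energy
      correspondence rate = reported energy matching observation / observed
      inflation ratio     = reported-but-not-observed energy / observed
    """
    reported = matched_kcal + intruded_kcal
    return {
        "report_rate": reported / observed_kcal,
        "correspondence_rate": matched_kcal / observed_kcal,
        "inflation_ratio": intruded_kcal / observed_kcal,
    }

m = accuracy_measures(observed_kcal=500.0, matched_kcal=400.0,
                      intruded_kcal=150.0)
```

Here the report rate (1.1) looks nearly perfect even though 20% of the observed energy was omitted and 30% was intruded; the reporting-error-sensitive measures (0.8 and 0.3) expose both errors, which is why the abstract warns that report rates distort validation-study accuracy conclusions.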
NASA Astrophysics Data System (ADS)
Dordevic, Mladen; Georgen, Jennifer
2016-03-01
Mantle plumes rising in the vicinity of mid-ocean ridges often generate anomalies in melt production and seafloor depth. This study investigates the dynamical interactions between a mantle plume and a ridge-ridge-ridge triple junction, using a parameter space approach and a suite of steady state, three-dimensional finite element numerical models. The top domain boundary is composed of three diverging plates, with each assigned half-spreading rates with respect to a fixed triple junction point. The bottom boundary is kept at a constant temperature of 1350°C except where a two-dimensional, Gaussian-shaped thermal anomaly simulating a plume is imposed. Models vary plume diameter, plume location, the viscosity contrast between plume and ambient mantle material, and the use of dehydration rheology in calculating viscosity. Importantly, the model results quantify how plume-related anomalies in mantle temperature pattern, seafloor depth, and crustal thickness depend on the specific set of parameters. To provide an example, one way of assessing the effect of conduit position is to calculate normalized area, defined to be the spatial dispersion of a given plume at specific depth (here selected to be 50 km) divided by the area occupied by the same plume when it is located under the triple junction. For one particular case modeled where the plume is centered in an intraplate position 100 km from the triple junction, normalized area is just 55%. Overall, these models provide a framework for better understanding plateau formation at triple junctions in the natural setting and a tool for constraining subsurface geodynamical processes and plume properties.
Performance and accuracy benchmarks for a next generation geodynamo simulation
NASA Astrophysics Data System (ADS)
Matsui, H.
2015-12-01
A number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field in the last twenty years. However, parameters in the current dynamo models are far from realistic for the Earth's core. To approach realistic parameters for the Earth's core in geodynamo simulations, extremely large spatial resolutions are required to resolve convective turbulence and small-scale magnetic fields. To assess the next generation of dynamo models on a massively parallel computer, we performed performance and accuracy benchmarks on 15 dynamo codes which employ a diverse range of discretization (spectral, finite difference, finite element, and hybrid methods) and parallelization methods. In the performance benchmark, we compare elapsed time and parallelization capability on the TACC Stampede platform, using up to 16384 processor cores. In the accuracy benchmark, we compare the resolutions required to obtain less than 1% error from the suggested solutions. The results of the performance benchmark show that codes using 2-D or 3-D parallelization models have the capability to run with 16384 processor cores. The elapsed time for Calypso and Rayleigh, two parallelized codes that use the spectral method, scales with a smaller exponent than the ideal scaling. The elapsed time of SFEMaNS, which uses finite elements and Fourier transforms, has the smallest growth of elapsed time with resolution and parallelization. However, the accuracy benchmark results show that SFEMaNS requires three times more degrees of freedom in each direction compared with a spherical harmonics expansion. Consequently, SFEMaNS needs more than 200 times the elapsed time of Calypso and Rayleigh with 10000 cores to obtain the same accuracy. These benchmark results indicate that the spectral method with 2-D or 3-D domain decomposition is the most promising methodology for advancing numerical dynamo simulations in the immediate future.
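The sub-ideal scaling noted above ("scales with a smaller exponent than the ideal scaling") can be quantified by fitting an effective scaling exponent to elapsed-time measurements. A sketch with hypothetical timings, not the benchmark's actual numbers:

```python
import math

def scaling_exponent(cores, times):
    """Fit t ~ C * p**(-s) by least squares in log-log space.

    s = 1 is ideal strong scaling (doubling cores halves the time);
    s < 1 indicates sub-ideal parallel scaling."""
    xs = [math.log(p) for p in cores]
    ys = [math.log(t) for t in times]
    xm = sum(xs) / len(xs)
    ym = sum(ys) / len(ys)
    num = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    den = sum((x - xm) ** 2 for x in xs)
    return -num / den

# Hypothetical timings: doubling cores cuts time by ~1.9x (sub-ideal)
cores = [1024, 2048, 4096, 8192]
times = [100.0, 52.6, 27.7, 14.6]
s = scaling_exponent(cores, times)
```

An exponent fitted this way gives a single comparable number per code, which is how statements like "scales with a smaller exponent than the ideal scaling" can be made precise across the 15 benchmarked codes.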
New analytical algorithm for overlay accuracy
NASA Astrophysics Data System (ADS)
Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo
2012-03-01
The extension of optical lithography to 2X nm and beyond is often challenged by overlay control. With a reduced overlay measurement error budget in the sub-nm range, conventional Total Measurement Uncertainty (TMU) data is no longer sufficient, and there is no sufficient criterion for overlay accuracy. In recent years, numerous authors have reported new methods for assessing the accuracy of overlay metrology: through focus and through color. Still, quantifying uncertainty in overlay measurement is the most difficult work in overlay metrology. According to the ITRS roadmap, the total overlay budget gets tighter with each device node as design rules shrink. Conventionally, the total overlay budget is defined as the square root of the sum of squares of the following contributions: the scanner overlay performance, the wafer process, metrology, and mask registration. All components have supplied sufficiently performing tools for each device node, delivering new scanners, new metrology tools, and new mask e-beam writers. In particular, the scanner overlay performance decreased drastically from 9 nm at the 8x node to 2.5 nm at the 3x node, and seems to have reached its limit after the 3x node. The wafer process overlay has therefore become a more important contribution to the total wafer overlay; in fact, it decreased by 3 nm between the DRAM 8x node and the DRAM 3x node. In this paper, we develop an analytical algorithm for overlay accuracy and propose a concept of a non-destructive method. For on-product layers, we discovered overlay inaccuracy and used the new technique to find the source of the overlay error.
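The conventional root-sum-of-squares budget described above is straightforward to compute. In the sketch below only the scanner term matches the 3x-node figure quoted in the abstract; the other contributions are illustrative placeholders.

```python
import math

def total_overlay_budget(scanner_nm, process_nm, metrology_nm, mask_nm):
    """Total overlay budget as the root sum of squares of its contributors."""
    return math.sqrt(scanner_nm**2 + process_nm**2
                     + metrology_nm**2 + mask_nm**2)

# Scanner term from the abstract (2.5 nm at the 3x node); other terms are
# illustrative placeholders, not measured values.
total = total_overlay_budget(scanner_nm=2.5, process_nm=3.0,
                             metrology_nm=1.0, mask_nm=1.0)
```

Because the terms add in quadrature, the largest contributor dominates: once the scanner term stops shrinking, further budget reduction must come from the wafer process and metrology terms, which is the shift the abstract describes.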
Numerical simulations in combustion
NASA Technical Reports Server (NTRS)
Chung, T. J.
1989-01-01
This paper reviews numerical simulations in reacting flows in general and combustion phenomena in particular. It is shown that use of implicit schemes and/or adaptive mesh strategies can improve convergence, stability, and accuracy of the solution. Difficulties increase as turbulence and multidimensions are considered, particularly when finite-rate chemistry governs the given combustion problem. Particular attention is given to the areas of solid-propellant combustion dynamics, turbulent diffusion flames, and spray droplet vaporization.
Large deflection of clamped circular plate and accuracy of its approximate analytical solutions
NASA Astrophysics Data System (ADS)
Zhang, Yin
2016-02-01
A different set of governing equations for the large deflection of plates is derived by the principle of virtual work (PVW), which also leads to a different set of boundary conditions. Boundary conditions play an important role in determining the computational accuracy of the large deflection of plates. Our boundary conditions are shown to be more appropriate by analyzing their differences from the previous ones. The accuracy of approximate analytical solutions is important to the bulge/blister tests and the application of various sensors with the plate structure. Different approximate analytical solutions are presented and their accuracies are evaluated by comparing them with the numerical results. The error sources are also analyzed. A new approximate analytical solution is proposed and shown to be a better approximation. The approximate analytical solution offers a much simpler and more direct framework to study the plate-membrane transition behavior of deflection as compared with the previous approaches of complex numerical integration.
CHARMS: The Cryogenic, High-Accuracy Refraction Measuring System
NASA Technical Reports Server (NTRS)
Frey, Bradley; Leviton, Douglas
2004-01-01
The success of numerous upcoming NASA infrared (IR) missions will rely critically on accurate knowledge of the IR refractive indices of their constituent optical components at design operating temperatures. To satisfy the demand for such data, we have built a Cryogenic, High-Accuracy Refraction Measuring System (CHARMS), which, for typical IR materials, can measure the index of refraction accurate to ±5 x 10^-3. This versatile, one-of-a-kind facility can also measure refractive index over a wide range of wavelengths, from 0.105 um in the far-ultraviolet to 6 um in the IR, and over a wide range of temperatures, from 10 K to 100 degrees C, all with comparable accuracies. We first summarize the technical challenges we faced and the engineering solutions we developed during the construction of CHARMS. Next we present our "first light" index of refraction data for fused silica and compare our data to previously published results.
Highly Parallel, High-Precision Numerical Integration
Bailey, David H.; Borwein, Jonathan M.
2005-04-22
This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
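A minimal sketch of the family of quadrature rules such schemes build on is the tanh-sinh (double-exponential) rule: a change of variables pushes the interval endpoints to infinity so that endpoint singularities and infinite derivatives are damped by double-exponentially decaying weights. This toy double-precision version only hints at the arbitrary-precision, parallel scheme described above.

```python
import math

def tanh_sinh(f, a, b, h=2.0**-5, max_k=200):
    """Tanh-sinh quadrature of f over [a, b] via x = tanh((pi/2) sinh t)."""
    c, d = (b - a) / 2.0, (b + a) / 2.0      # affine map [-1, 1] -> [a, b]
    total = 0.0
    for k in range(-max_k, max_k + 1):
        t = k * h
        u = (math.pi / 2.0) * math.sinh(t)
        if abs(u) > 20.0:                    # weight below ~1e-16; skip tail
            continue
        x = math.tanh(u)                     # abscissas cluster at endpoints
        w = (math.pi / 2.0) * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(c * x + d)
    return c * h * total

# Smooth test integrand; the same rule also tolerates endpoint singularities.
approx = tanh_sinh(math.exp, 0.0, 1.0)       # approximates e - 1
```

In a high-precision setting the same trapezoidal sum is evaluated with multiprecision arithmetic and a decreasing step h, and the independent function evaluations at each node are what make the scheme embarrassingly parallel.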
Jakusz, J.W.; Dieck, J.J.; Langrehr, H.A.; Ruhser, J.J.; Lubinski, S.J.
2015-01-01
Accuracy assessment is an extensive effort that requires seasonal field personnel and equipment, data entry, analyses, and post processing—tasks that are costly and time consuming. The geospatial team at the UMESC has suggested a validation process for understanding the accuracy of the spatial datasets, which will be tested on at least some areas of the UMRS. Validation is not a true verification of map-class type in the field; however, it can provide the user of the map with useful information that is similar to a field AA
NASA Astrophysics Data System (ADS)
Lee, C.; Lim, C.
2013-12-01
The geochemistry of the transient Miocene adakites (~16 Ma) in the Abukuma Mountains, Northeast Japan shows that the adakites were generated by the partial melting of the subducted oceanic crust. However, the very old age of the converging oceanic plate, which cannot yield slab temperatures high enough for partial melting, poses a problem for the genesis of the adakites. Other possible geneses, such as partial melting of the lower crust, flat subduction, and/or a transient cold plume, are not relevant to the genesis of the adakites. Instead, it is thought that the injection of upwelling hot asthenospheric mantle into the mantle wedge, caused by the East Sea (Japan Sea) opening, heats the cold subducting slab enough for the partial melting of the oceanic crust. Although the hypothesis is promising, a quantitative evaluation of the interaction between the cold Pacific slab and the hot asthenospheric mantle has not been carried out. Thus, we conducted a series of 2-dimensional kinematic-dynamic subduction model experiments to evaluate the thermal structure of the subducting slab, essential for the partial melting of the oceanic crust. Since time dependence is crucial for the transient adakites, the time-evolving convergence rate and slab age of the incoming Pacific plate for the last 65 Ma, constrained from a recent plate reconstruction model, are implemented in the numerical models with the transient hot asthenospheric mantle. The convergence rate and slab age are implemented along the oceanward wall boundary and updated each time step. A mantle potential temperature of 1350 °C and a mantle adiabat of 0.35 °C/km are used. The transient injection of the hot asthenospheric mantle into the mantle wedge is implemented as a function of depth- and time-dependent normal temperature distribution along the arcward wall boundary and updated each time step. The peak temperature of the hot asthenospheric mantle is assumed to be 1550 °C at 100 km depth and the standard
NASA Astrophysics Data System (ADS)
Sweeney, Matthew R.; Valentine, Greg A.
2015-09-01
Most volcanoes experience some degree of phreatomagmatism during their lifetime. However, the current understanding of such processes remains limited relative to their magmatic counterparts. Maar-diatremes are a common volcano type that form primarily from phreatomagmatic explosions and are an ideal candidate to further our knowledge of deposits and processes resulting from explosive magma-water interaction due to their abundance as well as their variable levels of field exposure, which allows for detailed mapping and componentry. Two conceptual models of maar-diatreme volcanoes explain the growth and evolution of the crater (maar) and subsurface vent (diatreme) through repeated explosions caused by the interaction of magma and groundwater. One model predicts progressively deepening explosions as water is used up by phreatomagmatic explosions while the other allows for explosions at any level in the diatreme, provided adequate hydrologic conditions are present. In the former, deep-seated lithics in the diatreme are directly ejected and their presence in tephra rings is often taken as a proxy for the depth at which that particular explosion occurred. In the latter, deep-seated lithics are incrementally transported toward the surface via upward directed debris jets. Here we present a novel application of multiphase numerical modeling to assess the controls on length scales of debris jets and their role in upward transport of intra-diatreme material to determine the validity of the two models. The volume of gas generated during a phreatomagmatic explosion is a first order control on the vertical distance a debris jet travels. Unless extremely large amounts of magma and water are involved, it is unlikely that most explosions deeper than ∼ 250 m breach the surface. Other factors such as pressure and temperature have lesser effects on the length scales assuming they are within realistic ranges. Redistribution of material within a diatreme is primarily driven by
A numerical method for interface problems in elastodynamics
NASA Technical Reports Server (NTRS)
Mcghee, D. S.
1984-01-01
The numerical implementation of a formulation for a class of interface problems in elastodynamics is discussed. This formulation combines the finite element and boundary integral methods to represent the interior and the exterior regions, respectively. In particular, the response of a semicylindrical alluvial valley in a homogeneous halfspace to incident antiplane SH waves is considered to determine the accuracy and convergence of the numerical procedure. Numerical results are obtained for several combinations of the incidence angle, frequency of excitation, and relative stiffness between the inclusion and the surrounding halfspace. The results tend to confirm the theoretical estimate that the convergence is of order H^2 for the piecewise linear elements used. It was also observed that the accuracy decreases as the frequency of excitation increases or as the relative stiffness of the inclusion decreases.
Accuracy assessment of GPS satellite orbits
NASA Technical Reports Server (NTRS)
Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.
1991-01-01
GPS orbit accuracy is examined using several evaluation procedures. The existence is shown of unmodeled effects which correlate with the eclipsing of the sun. The ability to obtain geodetic results that show an accuracy of 1-2 parts in 10^8 or better has not diminished.
Individual Differences in Eyewitness Recall Accuracy.
ERIC Educational Resources Information Center
Berger, James D.; Herringer, Lawrence G.
1991-01-01
Presents study results comparing college students' self-evaluation of recall accuracy to actual recall of detail after viewing a crime scenario. Reports that self-reported ability to remember detail correlates with accuracy in memory of specifics. Concludes that people may have a good indication early in the eyewitness situation of whether they…
Developing a Weighted Measure of Speech Sound Accuracy
ERIC Educational Resources Information Center
Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.
2011-01-01
Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…
NASA Astrophysics Data System (ADS)
Voronov, Nikolai; Dikinis, Alexandr
2015-04-01
Modern remote sensing (RS) technologies open wide opportunities for monitoring hazardous hydrometeorological phenomena and for increasing the accuracy and lead time of their forecasts. RS data do not supersede ground-based observations, but they make it possible to solve new problems in hydrological and meteorological monitoring and forecasting. In particular, satellite, airborne, or radar observations may be used to increase the spatial and temporal resolution of hydrometeorological observations. Combined use of remote sensing data, ground-based observations, and the output of hydrodynamic weather models also appears very promising, as it can significantly increase both the accuracy and the lead time of forecasts of hazardous hydrometeorological phenomena. Modern technologies for monitoring and forecasting hazardous hydrometeorological phenomena on the basis of the combined use of satellite, airborne, and ground-based observations, together with the output of hydrodynamic weather models, are considered. It is noted that an important and promising monitoring method is bioindication: observing the response of biota to external influences and the behavior of animals able to sense impending natural disasters. Implementation of the described approaches can significantly reduce both the damage caused by particular hazardous hydrological and meteorological phenomena and the overall hydrometeorological vulnerability of various facilities and of the economy of the Russian Federation as a whole.
Numerical experiments in homogeneous turbulence
NASA Technical Reports Server (NTRS)
Rogallo, R. S.
1981-01-01
The direct simulation methods developed by Orszag and Patterson (1972) for isotropic turbulence were extended to homogeneous turbulence in an incompressible fluid subjected to uniform deformation or rotation. The results of simulations for irrotational strain (plane and axisymmetric), shear, rotation, and relaxation toward isotropy following axisymmetric strain are compared with linear theory and experimental data. Emphasis is placed on the shear flow because of its importance and because of the availability of accurate and detailed experimental data. The computed results are used to assess the accuracy of two popular models used in the closure of the Reynolds-stress equations. Data from a variety of the computed fields and the details of the numerical methods used in the simulation are also presented.
NASA Astrophysics Data System (ADS)
Eason, R. P.; Sun, C.; Dick, A. J.; Nagarajaiah, S.
2015-05-01
Response attenuation of a linear primary structure (PS)-nonlinear tuned mass damper (NTMD) dynamic system with and without an adaptive-length pendulum tuned mass damper (ALPTMD) in a series configuration is studied by using numerical and experimental methods. In the PS-NTMD system, coexisting high and low amplitude solutions are observed in the experiment, validating previous numerical efforts. In order to eliminate the potentially dangerous high amplitude solutions, a series ALPTMD with a mass multiple orders of magnitude smaller than the PS is added to the NTMD. The ALPTMD is used in order to represent the steady-state behavior of a smart tuned mass damper (STMD). In the experiment, the length of the pendulum is adjusted such that its natural frequency matches the dominant frequency of the harmonic ground motions. In the present study, the proposed ALPTMD can be locked so that it is unable to oscillate and influence the dynamics of the system in order to obtain the benefits provided by the NTMD. The experimental data show good qualitative agreement with numerical predictions computed with parameter continuation and time integration methods. Activation of the ALPTMD can successfully prevent the transition of the response from the low amplitude solution to the high amplitude solution or return the response from the high amplitude solution to the low amplitude solution, thereby protecting the PS.
Spacecraft attitude determination accuracy from mission experience
NASA Technical Reports Server (NTRS)
Brasoveanu, D.; Hashmall, J.
1994-01-01
This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin-stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.
ERIC Educational Resources Information Center
Siegler, Robert S.; Braithwaite, David W.
2016-01-01
In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…
Numerical simulation of conservation laws
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; To, Wai-Ming
1992-01-01
A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.
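The stability claims above can be contrasted with a classical von Neumann analysis. The sketch below analyzes the standard FTCS (forward-time, centered-space) discretization of the 1-D constant-coefficient convection-diffusion equation, not the paper's space-time scheme; function names and parameter values are illustrative only.

```python
import math

def ftcs_amplification(c, d, theta):
    """Von Neumann amplification factor of the FTCS scheme for
    u_t + a u_x = nu u_xx, with Courant number c = a*dt/dx and
    diffusion number d = nu*dt/dx**2, at phase angle theta."""
    return 1.0 - 2.0 * d * (1.0 - math.cos(theta)) - 1j * c * math.sin(theta)

def is_stable(c, d, samples=721):
    """True if |g(theta)| <= 1 for all resolved wavenumbers in [0, pi]."""
    return all(
        abs(ftcs_amplification(c, d, math.pi * k / (samples - 1))) <= 1.0 + 1e-12
        for k in range(samples)
    )

print(is_stable(0.0, 0.49))  # pure diffusion below d = 1/2: stable
print(is_stable(0.0, 0.51))  # pure diffusion above d = 1/2: unstable
print(is_stable(0.5, 0.0))   # pure convection: FTCS is always unstable
```

The contrast makes the abstract's point concrete: FTCS is only conditionally stable for diffusion and unconditionally unstable for pure convection, whereas the proposed scheme is reported to be free of numerical diffusion in pure convection and unconditionally stable in pure diffusion.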
Accuracy analysis of distributed simulation systems
NASA Astrophysics Data System (ADS)
Lin, Qi; Guo, Jing
2010-08-01
Existing simulation studies tend to emphasize procedural verification, putting too much focus on the simulation models rather than on the simulation itself. As a result, research on improving simulation accuracy has been limited to individual aspects. Since accuracy is key to simulation credibility assessment and fidelity studies, it is important to give an all-round discussion of the accuracy of distributed simulation systems themselves. First, the major elements of distributed simulation systems are summarized, which serve as the basis for the definition, classification, and description of the accuracy of distributed simulation systems. In Part 2, a framework for the accuracy of distributed simulation systems is presented in a comprehensive way, making it more tractable to analyze and assess the uncertainty of distributed simulation systems. The concept of accuracy of distributed simulation systems is divided into four constituent factors, each analyzed in Part 3. In Part 4, based on the formalized description of the accuracy-analysis framework, a practical approach is put forward that can be applied to study unexpected or inaccurate simulation results. Following this, a real distributed simulation system based on HLA is taken as an example to verify the usefulness of the proposed approach. The results show that the method works well and is applicable to accuracy analysis of distributed simulation systems.
Classification Accuracy Increase Using Multisensor Data Fusion
NASA Astrophysics Data System (ADS)
Makarau, A.; Palubinskas, G.; Reinartz, P.
2011-09-01
The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion of materials such as different roofs, pavements, roads, etc., and therefore to wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. Further improvement can be achieved by fusing multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for fusing very high resolution SAR and multispectral data for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of combining multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity depends primarily on the dimensionality-reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to
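The Bayesian aggregation step named above can be caricatured with a product-rule (naive Bayes) fusion of per-sensor class posteriors, a common consensus-theoretic combination rule. This is a sketch under stated assumptions, not the actual INFOFUSE implementation; the class names and probabilities are invented.

```python
def bayes_fusion(posteriors, prior=None):
    """Product-rule (naive Bayes) fusion of per-sensor class posteriors,
    assuming conditionally independent sensors. `posteriors` is a list of
    dicts mapping class label -> P(class | sensor i)."""
    classes = posteriors[0].keys()
    fused = {}
    for c in classes:
        p = prior[c] if prior else 1.0  # uniform prior when none is given
        for post in posteriors:
            p *= post[c]
        fused[c] = p
    z = sum(fused.values())             # normalize to a proper distribution
    return {c: p / z for c, p in fused.items()}

# Two hypothetical sensors weakly agreeing on "roof" reinforce each other:
fused = bayes_fusion([{'roof': 0.6, 'road': 0.4}, {'roof': 0.7, 'road': 0.3}])
print(fused['roof'])  # ≈ 0.78, higher than either sensor alone
```

The design choice mirrors the fusion motivation in the abstract: agreement across independent sources sharpens the posterior beyond what any single limited-band sensor supports.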
Numerical Asymptotic Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1992-01-01
Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.
High accuracy OMEGA timekeeping
NASA Technical Reports Server (NTRS)
Imbier, E. A.
1982-01-01
The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.
Franke, O. Lehn; Reilly, Thomas E.
1987-01-01
The most critical and difficult aspect of defining a groundwater system or problem for conceptual analysis or numerical simulation is the selection of boundary conditions. This report demonstrates the effects of different boundary conditions on the steady-state response of otherwise similar groundwater systems to a pumping stress. Three series of numerical experiments illustrate the behavior of three hypothetical groundwater systems that are rectangular sand prisms with the same dimensions but with different combinations of constant-head, specified-head, no-flow, and constant-flux boundary conditions. In the first series of numerical experiments, the heads and flows in all three systems are identical, as are the hydraulic conductivity and system geometry. However, when the systems are subjected to an equal stress by a pumping well in the third series, each differs significantly in its response. The highest heads (smallest drawdowns) and flows occur in the systems most constrained by constant- or specified-head boundaries. These and other observations described herein are important in steady-state calibration, which is an integral part of simulating many groundwater systems. Because the effects of boundary conditions on model response often become evident only when the system is stressed, a close match between the potential distribution in the model and that in the unstressed natural system does not guarantee that the model boundary conditions correctly represent those in the natural system. In conclusion, the boundary conditions that are selected for simulation of a groundwater system are fundamentally important to groundwater systems analysis and warrant continual reevaluation and modification as investigation proceeds and new information and understanding are acquired.
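The boundary-condition effect described here can be reproduced with a minimal 1-D steady-state sketch, not the report's sand-prism models; all function names and parameter values are illustrative. The same pumping stress produces a much larger drawdown when one constant-head boundary is replaced by a no-flow boundary, because the aquifer can then be recharged from one side only.

```python
def solve_tridiag(a, b, c, d):
    """Thomas algorithm for a tridiagonal system
    (a: sub-, b: main, c: super-diagonal, d: right-hand side)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def steady_heads(n=101, dx=10.0, T=100.0, h0=50.0, qw=5.0, right="constant_head"):
    """Steady-state heads in a 1-D confined aquifer (transmissivity T) with a
    pumping well at the midpoint. The left boundary is constant head; the
    right boundary is either constant head or no-flow. Units are illustrative."""
    a, b, c, d = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        a[i], b[i], c[i] = T / dx**2, -2 * T / dx**2, T / dx**2
    d[n // 2] = qw / dx                  # well extraction enters as a sink term
    b[0], d[0] = 1.0, h0                 # Dirichlet (constant-head) boundary
    if right == "constant_head":
        b[-1], d[-1] = 1.0, h0
    else:                                # no-flow: enforce h[-1] == h[-2]
        a[-1], b[-1], d[-1] = -1.0, 1.0, 0.0
    return solve_tridiag(a, b, c, d)

h_ch = steady_heads(right="constant_head")
h_nf = steady_heads(right="no_flow")
# Identical stress, different boundaries: the no-flow system draws down more.
print(round(50.0 - h_ch[50], 2), round(50.0 - h_nf[50], 2))
```

With the well at the center of this particular geometry the no-flow configuration shows exactly twice the drawdown of the doubly constant-head one, echoing the report's observation that the smallest drawdowns occur in systems most constrained by constant-head boundaries.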
Municipal water consumption forecast accuracy
NASA Astrophysics Data System (ADS)
Fullerton, Thomas M.; Molina, Angel L.
2010-06-01
Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
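The two descriptive metrics named above can be computed directly; this is a generic sketch, not the study's code. The U2 form relates forecast errors to those of the naive random-walk benchmark mentioned in the abstract.

```python
import math

def rmse(forecast, actual):
    """Root-mean-square error of a forecast series against outcomes."""
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual))

def theil_u1(forecast, actual):
    """Theil inequality coefficient, bounded in [0, 1]; 0 = perfect forecast."""
    den = (math.sqrt(sum(f * f for f in forecast) / len(forecast))
           + math.sqrt(sum(a * a for a in actual) / len(actual)))
    return rmse(forecast, actual) / den

def theil_u2(forecast, actual, previous):
    """Theil U2: forecast RMSE relative to a naive random-walk forecast
    (each period predicted by the previous period's value)."""
    num = sum((f - a) ** 2 for f, a in zip(forecast, actual))
    den = sum((p - a) ** 2 for p, a in zip(previous, actual))
    return math.sqrt(num / den)
```

U2 < 1 means the econometric forecast beats the random walk; U2 ≥ 1 means the benchmark is at least as good, which is exactly the "competitiveness of the benchmarks" caveat raised above.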
Analysis of deformable image registration accuracy using computational modeling.
Zhong, Hualiang; Kim, Jinkoo; Chetty, Indrin J
2010-03-01
Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter
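The benchmark idea described above, scoring a DIR algorithm's recovered displacements against finite-element ground truth, reduces to a mean displacement error over corresponding voxels. A minimal sketch with hypothetical fields (not the study's data or phantom geometry):

```python
import math

def mean_registration_error(benchmark, recovered):
    """Mean Euclidean error (e.g. in mm) between a benchmark displacement
    field and the field recovered by a DIR algorithm. Each field is a list
    of (dx, dy, dz) displacement vectors at corresponding voxels."""
    errors = [math.dist(b, r) for b, r in zip(benchmark, recovered)]
    return sum(errors) / len(errors)

# Hypothetical two-voxel example: one voxel matched, one off by 1 mm in z.
benchmark = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
recovered = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]
print(mean_registration_error(benchmark, recovered))  # 0.5
```

Numbers such as the 1.4 mm Demons result above are means of exactly this kind, so their interpretation depends on where in the image (e.g. lung only vs whole volume) the average is taken.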
NASA Astrophysics Data System (ADS)
Vlasenko, Vasiliy; Stashchuk, Nataliya; Inall, Mark; Hopkins, Jo
2015-04-01
The three-dimensional dynamics of baroclinic tides in the shelf-slope area of the Celtic Sea were investigated numerically and with observational data collected on the 376th cruise of the R/V "Discovery" in June 2012. The time series recorded at a shelf-break mooring showed that semi-diurnal internal waves were accompanied by packets of internal solitary waves with maximum amplitudes up to 105 m, the largest internal waves ever recorded in the Celtic Sea. The observed baroclinic wave fields were replicated numerically using the Massachusetts Institute of Technology general circulation model. A fine-resolution grid with 115 m horizontal and 10 m vertical steps allowed the identification of two classes of short-scale internal waves. The first class was generated over headlands and resembles the spiral-type internal waves that are typical of isolated underwater banks. The second class, generated within an area of isolated canyons, revealed properties of quasi-plane internal wave packets. The in-situ intensification of tidal bottom currents observed at the shelf-break mooring is explained in terms of a tidal beam formed over supercritical bottom topography.
Numerical simulation for fan broadband noise prediction
NASA Astrophysics Data System (ADS)
Hase, Takaaki; Yamasaki, Nobuhiko; Ooishi, Tsutomu
2011-03-01
In order to elucidate the broadband noise of a fan, numerical simulation of a fan operating at two different rotational speeds is carried out using the three-dimensional unsteady Reynolds-averaged Navier-Stokes (URANS) equations. The computed results are compared with experiment to assess their accuracy and are found to show good agreement. A method is proposed to evaluate the turbulent kinetic energy in the framework of the Spalart-Allmaras one-equation turbulence model. From the calculation results, the turbulent kinetic energy is visualized as the turbulence of the flow that generates the broadband noise, and the noise sources are identified.
Yao, Yuan; Du, Fenglei; Wang, Chunjie; Liu, Yuqiu; Weng, Jian; Chen, Feiyan
2015-01-01
This study examined whether long-term abacus-based mental calculation (AMC) training improved numerical processing efficiency and at what stage of information processing the effect appeared. Thirty-three children participated in the study and were randomly assigned to two groups at primary school entry, matched for age, gender and IQ. All children went through the same curriculum except that the abacus group received 2 h of AMC training per week, while the control group did traditional numerical practice for a similar amount of time. After 2 years of training, they were tested with a numerical Stroop task. Electroencephalographic (EEG) and event-related potential (ERP) recording techniques were used to monitor the temporal dynamics during the task. Children were required to determine the numerical magnitude (NC task) or the physical size (PC task) of two numbers presented simultaneously. In the NC task, the AMC group showed faster response times but similar accuracy compared to the control group. In the PC task, the two groups exhibited the same speed and accuracy. The saliency of numerical information relative to physical information was greater in the AMC group. With regard to the ERP results, the AMC group displayed congruity effects in both the earlier (N1) and later (N2 and LPC, late positive component) time windows, while the control group only displayed congruity effects for the LPC. In the left parietal region, LPC amplitudes were larger for the AMC than the control group. Individual differences in LPC amplitudes over the left parietal area showed a positive correlation with RTs in the NC task in both congruent and neutral conditions. After controlling for the N2 amplitude, this correlation also became significant in the incongruent condition. Our results suggest that AMC training can strengthen the relationship between symbolic representation and numerical magnitude so that numerical information processing becomes quicker and more automatic in AMC children. PMID:26042012
Numerical discrimination is mediated by neural coding variation.
Prather, Richard W
2014-12-01
One foundation of numerical cognition is that discrimination accuracy depends on the proportional difference between compared values, closely following the Weber-Fechner discrimination law. Performance in non-symbolic numerical discrimination is used to calculate an individual's Weber fraction, a measure of the relative acuity of the approximate number system (ANS). The individual Weber fraction is linked to symbolic arithmetic skills and to long-term educational and economic outcomes. The present findings suggest that numerical discrimination performance depends on both the proportional difference and the absolute value, deviating from the Weber-Fechner law. The effect of absolute value is predicted by a computational model based on the neural correlates of numerical perception, specifically the observation that neural coding "noise" varies across corresponding numerosities. A computational model using firing-rate variation based on neural data demonstrates a significant interaction between ratio difference and absolute value in predicting numerical discriminability. We find that both behavioral and computational data show an interaction between ratio difference and absolute value on numerical discrimination accuracy. These results suggest a reexamination of the mechanisms involved in non-symbolic numerical discrimination, of how researchers may measure individual performance, and of what outcomes performance may predict. PMID:25238315
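The ratio-only baseline that these findings argue against can be written down directly. Below is the standard ANS discrimination model with scalar variability (Weber fraction w), a textbook formulation rather than the author's firing-rate model; under it, predicted accuracy depends only on the ratio of the two numerosities, which is exactly the property the reported data deviate from.

```python
import math

def p_correct(n1, n2, w):
    """Probability of correctly judging which numerosity is larger under the
    standard ANS model: each numerosity n is represented as a Gaussian with
    SD w*n (scalar variability), so discriminability depends only on the
    ratio n1:n2, not on the absolute values."""
    return 0.5 + 0.5 * math.erf(abs(n2 - n1) / (w * math.sqrt(2.0 * (n1**2 + n2**2))))

# Same ratio, different absolute values: identical predicted accuracy —
# the Weber-Fechner prediction that the behavioral data contradict.
print(p_correct(8, 16, 0.25), p_correct(16, 32, 0.25))
```

Any reliable accuracy difference between such ratio-matched pairs is evidence for the absolute-value effect reported above.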
Accuracy in Judgments of Aggressiveness
Kenny, David A.; West, Tessa V.; Cillessen, Antonius H. N.; Coie, John D.; Dodge, Kenneth A.; Hubbard, Julie A.; Schwartz, David
2009-01-01
Perceivers are both accurate and biased in their understanding of others. Past research has distinguished between three types of accuracy: generalized accuracy, a perceiver’s accuracy about how a target interacts with others in general; perceiver accuracy, a perceiver’s view of others corresponding with how the perceiver is treated by others in general; and dyadic accuracy, a perceiver’s accuracy about a target when interacting with that target. Researchers have proposed that there should be more dyadic than other forms of accuracy among well-acquainted individuals because of the pragmatic utility of forecasting the behavior of interaction partners. We examined behavioral aggression among well-acquainted peers. A total of 116 9-year-old boys rated how aggressive their classmates were toward other classmates. Subsequently, 11 groups of 6 boys each interacted in play groups, during which observations of aggression were made. Analyses indicated strong generalized accuracy yet little dyadic and perceiver accuracy. PMID:17575243
Geometric accuracy in airborne SAR images
NASA Technical Reports Server (NTRS)
Blacknell, D.; Quegan, S.; Ward, I. A.; Freeman, A.; Finley, I. P.
1989-01-01
Uncorrected across-track motions of a synthetic aperture radar (SAR) platform can cause both a severe loss of azimuthal positioning accuracy in, and defocusing of, the resultant SAR image. It is shown how the results of an autofocus procedure can be incorporated in the azimuth processing to produce a fully focused image that is geometrically accurate in azimuth. Range positioning accuracy is also discussed, leading to a comprehensive treatment of all aspects of geometric accuracy. The system considered is an X-band SAR.
McDevitt, J T; Gurst, A H; Chen, Y
1998-01-01
We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693
Parametric Characterization of SGP4 Theory and TLE Positional Accuracy
NASA Astrophysics Data System (ADS)
Oltrogge, D.; Ramrath, J.
2014-09-01
Two-Line Elements, or TLEs, contain mean element state vectors compatible with General Perturbations (GP) singly-averaged semi-analytic orbit theory. This theory, embodied in the SGP4 orbit propagator, provides sufficient accuracy for some (but perhaps not all) orbit operations and SSA tasks. For more demanding tasks, higher accuracy orbit and force model approaches (i.e. Special Perturbations numerical integration, or SP) may be required. In recent times, the suitability of TLEs or GP theory for any SSA analysis has been increasingly questioned. Meanwhile, SP is touted as being of high quality and well-suited for most, if not all, SSA applications. Yet the lack of truth or well-known reference orbits that haven't already been adopted for radar and optical sensor network calibration has typically prevented a truly unbiased assessment of such assertions. To gain better insight into the practical limits of applicability for TLEs, SGP4 and the underlying GP theory, the native SGP4 accuracy is parametrically examined for the statistically-significant range of RSO orbit inclinations experienced as a function of all orbit altitudes from LEO through GEO disposal altitude. For each orbit altitude, reference or truth orbits were generated using full force modeling, time-varying space weather, and AGI's HPOP numerical integration orbit propagator. Then, TLEs were optimally fit to these truth orbits. The resulting TLEs were then propagated and positionally differenced with the truth orbits to determine how well the GP theory was able to fit the truth orbits. Resultant statistics characterizing these empirically-derived accuracies are provided. This TLE fit process of truth orbits was intentionally designed to be similar to the JSpOC process operationally used to generate Enhanced GP TLEs for debris objects. This allows us to draw additional conclusions about the expected accuracies of EGP TLEs. In the real world, Orbit Determination (OD) programs aren't provided with dense optical
High accuracy broadband infrared spectropolarimetry
NASA Astrophysics Data System (ADS)
Krishnaswamy, Venkataramanan
Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies of the infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap through the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to previous-generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter, and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on penicillin and pine pollen are also presented.
NASA Technical Reports Server (NTRS)
Cabra, R.; Chen, J. Y.; Dibble, R. W.; Hamano, Y.; Karpetis, A. N.; Barlow, R. S.
2002-01-01
An experimental and numerical investigation is presented of a H2/N2 turbulent jet flame burner that has a novel vitiated coflow. The vitiated coflow emulates the recirculation region of most combustors, such as gas turbines or furnaces. Additionally, since the vitiated gases are coflowing, the burner allows for exploration of recirculation chemistry without the corresponding fluid mechanics of recirculation. Thus the vitiated coflow burner design facilitates the development of chemical kinetic combustion models without the added complexity of recirculation fluid mechanics. Scalar measurements are reported for a turbulent jet flame of H2/N2 in a coflow of combustion products from a lean (φ = 0.25) H2/air flame. The combination of laser-induced fluorescence, Rayleigh scattering, and Raman scattering is used to obtain simultaneous measurements of temperature and major species, as well as OH and NO. Laminar flame calculations with equal diffusivities agree with the measurements when the premixing and preheating that occur prior to flame stabilization are accounted for in the boundary conditions. Also presented is an exploratory pdf model that predicts the flame's axial profiles fairly well, but does not accurately predict the lift-off height.
Jacobsen, S.; Birkelund, Y.
2010-01-01
Microwave breast cancer detection is based on the dielectric contrast between healthy and malignant tissue. This radar-based imaging method involves illumination of the breast with an ultra-wideband pulse. Detection of tumors within the breast is achieved by some selected focusing technique. Image formation algorithms are tailored to enhance tumor responses and reduce early-time and late-time clutter associated with skin reflections and heterogeneity of breast tissue. In this contribution, we evaluate the performance of the so-called cross-correlated back projection imaging scheme by using a scanning system in phantom experiments. Supplementary numerical modeling based on commercial software is also presented. The phantom is synthetically scanned with a broadband elliptical antenna in a mono-static configuration. The respective signals are pre-processed by a data-adaptive RLS algorithm in order to remove artifacts caused by antenna reverberations and signal clutter. Successful detection of a 7 mm diameter cylindrical tumor immersed in a low permittivity medium was achieved in all cases. Selecting the widely used delay-and-sum (DAS) beamforming algorithm as a benchmark, we show that correlation-based imaging methods improve the signal-to-clutter ratio by at least 10 dB and improve spatial resolution, reducing the full-width at half-maximum (FWHM) of the imaged peak by about 40–50%. PMID:21331362
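The delay-and-sum benchmark mentioned above can be sketched in a minimal mono-static 2D form: shift each channel by its round-trip travel time to the image point and sum. The geometry, unit wave speed, and sampling rate below are toy assumptions, not the authors' processing chain:

```python
import math

def das_image_point(signals, sensor_pos, point, c, fs):
    """Delay-and-sum focusing at one image point: index each channel at
    its mono-static (round-trip) delay and sum the samples."""
    total = 0.0
    for sig, (sx, sy) in zip(signals, sensor_pos):
        d = math.hypot(point[0] - sx, point[1] - sy)
        delay = 2.0 * d / c                  # out and back
        idx = int(round(delay * fs))
        if 0 <= idx < len(sig):
            total += sig[idx]
    return total

# toy demo: two sensors, a point scatterer at (0, 2), unit c and fs
sensors = [(0.0, 0.0), (1.0, 0.0)]
echoes = []
for sx, sy in sensors:
    d = math.hypot(0.0 - sx, 2.0 - sy)
    sig = [0.0] * 12
    sig[int(round(2.0 * d))] = 1.0           # echo at the round-trip delay
    echoes.append(sig)
focused = das_image_point(echoes, sensors, (0.0, 2.0), 1.0, 1.0)
```

Focusing at the true scatterer location adds the echoes coherently; focusing elsewhere does not, which is the contrast the paper's correlation-based scheme improves on.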
Measuring the accuracy of agro-environmental indicators.
Makowski, David; Tichit, Muriel; Guichard, Laurence; Van Keulen, Herman; Beaudoin, Nicolas
2009-05-01
Numerous agro-environmental indicators have been developed by agronomists and ecologists during the last 20 years to assess the environmental impact of farmers' practices, and to monitor the effects of agro-environmental policies. The objectives of this paper were (i) to measure the accuracy of a wide range of agro-environmental indicators from experimental data and (ii) to discuss the value of different information typically used by these indicators, i.e. information on farmers' practices, and on plant and soil characteristics. Four series of indicators were considered in this paper: indicators of habitat quality for grassland bird species, indicators of risk of disease in oilseed rape crops, indicators of risk of pollution by nitrogen fertilizer, and indicators of weed infestation. Several datasets were used to measure their accuracy in cultivated plots and in grasslands. The sensitivity, specificity, and probability of correctly ranking plots were estimated for each indicator. Our results showed that the indicators had widely varying levels of accuracy. Some showed very poor performance and had no discriminatory ability. Other indicators were informative and performed better than random decisions. Among the tested indicators, the best ones were those using information on plant characteristics such as grass height, fraction of diseased flowers, or crop yield. The statistical method applied in this paper could support researchers, farm advisers, and decision makers in comparing various indicators. PMID:19128870
High-accuracy particle sizing by interferometric particle imaging
NASA Astrophysics Data System (ADS)
Qieni, Lü; Wenhua, Jin; Tong, Lü; Xiang, Wang; Yimo, Zhang
2014-02-01
A method for high-accuracy estimation of the fringe number/fringe frequency of an interferogram, based on erosion matching and the Fourier transform technique, is proposed. Edge images of the particle interference pattern and of the particle mask image are first obtained by an erosion operation and subtraction from the respective original images; the particle center coordinates are then extracted through 2D correlation of the two edge images. The interference pattern of each particle can then be isolated using the center coordinate and the shape and size of the particle image. The number of fringes/fringe spacing of the particle interferogram is extracted by the Fourier transform and a modified Rife algorithm, achieving sub-pixel accuracy in the extracted frequency. The performance is demonstrated by numerical simulation and experimental measurement. The measurement uncertainty is ±0.91 μm and the relative error 1.13% for a standard particle of diameter 45 μm. The results show that the presented algorithm achieves high accuracy for particle sizing as well as for location measurement.
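The Rife-style fine frequency estimation can be illustrated for a single complex tone: take the coarse DFT peak, then interpolate using the larger of the two neighboring bin magnitudes. This is a generic textbook version; the paper's modified Rife algorithm and 2D fringe processing are not reproduced:

```python
import cmath
import math

def rife_frequency(signal, fs):
    """Estimate a single complex tone's frequency: coarse DFT peak plus
    Rife fine interpolation delta = A_next / (A_peak + A_next)."""
    n = len(signal)
    spec = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]
    k = max(range(n), key=lambda i: spec[i])
    left, right = spec[(k - 1) % n], spec[(k + 1) % n]
    if right >= left:
        delta = right / (spec[k] + right)
    else:
        delta = -left / (spec[k] + left)
    return (k + delta) * fs / n

# toy tone at 5.3 "Hz", 64 samples at fs = 64 (bin spacing 1 Hz)
fs, n, f_true = 64.0, 64, 5.3
tone = [cmath.exp(2j * math.pi * f_true * t / fs) for t in range(n)]
```

The coarse peak alone would report 5 Hz; the interpolation recovers the fractional bin offset, which is the sub-pixel refinement the abstract refers to.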
Numerical solutions of telegraph equations with the Dirichlet boundary condition
NASA Astrophysics Data System (ADS)
Ashyralyev, Allaberen; Turkcan, Kadriye Tuba; Koksal, Mehmet Emir
2016-08-01
In this study, the Cauchy problem for telegraph equations in a Hilbert space is considered. Stability estimates for the solution of this problem are presented. The third order of accuracy difference scheme is constructed for approximate solutions of the problem. Stability estimates for the solution of this difference scheme are established. As a test problem to support theoretical results, one-dimensional telegraph equation with the Dirichlet boundary condition is considered. Numerical solutions of this equation are obtained by first, second and third order of accuracy difference schemes.
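The third-order schemes of the abstract are not reproduced here, but the flavor of a difference scheme for the telegraph equation can be shown with the classical second-order explicit discretization of the model form u_tt + u_t = u_xx with homogeneous Dirichlet boundaries (the equation's coefficients and grid values below are illustrative assumptions):

```python
def telegraph_step(u_prev, u_curr, dt, dx):
    """One explicit central-difference step for u_tt + u_t = u_xx with
    homogeneous Dirichlet boundaries. Central differences in both time
    derivatives give second-order accuracy (a simpler cousin of the
    higher-order schemes in the abstract). Stable for dt <= dx."""
    n = len(u_curr)
    u_next = [0.0] * n                      # boundaries stay 0 (Dirichlet)
    r = (dt / dx) ** 2
    a = 1.0 + dt / 2.0                      # from the centered u_t term
    for j in range(1, n - 1):
        lap = u_curr[j + 1] - 2.0 * u_curr[j] + u_curr[j - 1]
        u_next[j] = (2.0 * u_curr[j] - u_prev[j]
                     + r * lap + (dt / 2.0) * u_prev[j]) / a
    return u_next
```

Starting from a hat-shaped initial profile with zero initial velocity, repeated steps produce a damped wave that decays while respecting the boundary conditions.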
Knowledge discovery by accuracy maximization
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-01-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
Numerical simulations in the development of propellant management devices
NASA Astrophysics Data System (ADS)
Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael
Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles. Gas bubbles can inflict cavitation and may lead to system failures in the worst case. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process. Furthermore, in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In return, the simulations are validated regarding the test results. Furthermore, if the accuracy of the numerical prediction is verified, then numerical simulations can be used for validating the scaling of the experiments. This presentation demonstrates some selected numerical simulations for the development of PMDs at ZARM.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Rai, Man Mohan (Technical Monitor)
1994-01-01
This lecture attempts to illustrate the basic ideas of how the recent advances in nonlinear dynamical systems theory (dynamics) can provide new insights into the understanding of numerical algorithms used in solving nonlinear differential equations (DEs). Examples will be given of the use of dynamics to explain unusual phenomena that occur in numerics. The inadequacy of the use of linearized analysis for the understanding of long time behavior of nonlinear problems will be illustrated, and the role of dynamics in studying the nonlinear stability, accuracy, convergence property and efficiency of using time-dependent approaches to obtaining steady-state numerical solutions in computational fluid dynamics (CFD) will briefly be explained.
Systematic review of discharge coding accuracy
Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.
2012-01-01
Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302
NASA Technical Reports Server (NTRS)
Iguchi, Takamichi; Matsui, Toshihisa; Shi, Jainn J.; Tao, Wei-Kuo; Khain, Alexander P.; Hao, Arthur; Cifelli, Robert; Heymsfield, Andrew; Tokay, Ali
2012-01-01
Two distinct snowfall events are observed over the region near the Great Lakes during 19-23 January 2007 under the intensive measurement campaign of the Canadian CloudSat/CALIPSO validation project (C3VP). These events are numerically investigated using the Weather Research and Forecasting model coupled with a spectral bin microphysics (WRF-SBM) scheme that allows a smooth calculation of riming process by predicting the rimed mass fraction on snow aggregates. The fundamental structures of the observed two snowfall systems are distinctly characterized by a localized intense lake-effect snowstorm in one case and a widely distributed moderate snowfall by the synoptic-scale system in another case. Furthermore, the observed microphysical structures are distinguished by differences in bulk density of solid-phase particles, which are probably linked to the presence or absence of supercooled droplets. The WRF-SBM coupled with Goddard Satellite Data Simulator Unit (G-SDSU) has successfully simulated these distinctive structures in the three-dimensional weather prediction run with a horizontal resolution of 1 km. In particular, riming on snow aggregates by supercooled droplets is considered to be of importance in reproducing the specialized microphysical structures in the case studies. Additional sensitivity tests for the lake-effect snowstorm case are conducted utilizing different planetary boundary layer (PBL) models or the same SBM but without the riming process. The PBL process has a large impact on determining the cloud microphysical structure of the lake-effect snowstorm as well as the surface precipitation pattern, whereas the riming process has little influence on the surface precipitation because of the small height of the system.
ERIC Educational Resources Information Center
Sozio, Gerry
2009-01-01
Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
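The three rules named in the abstract can be written as standard composite quadratures; the sketch below is generic Python, not the article's derivation:

```python
def midpoint(f, a, b, n):
    """Composite midpoint rule: sample each subinterval at its center."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule: average the endpoint samples."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even): weights 1, 4, 2, ..., 4, 1.
    Exact for polynomials up to degree three."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0
```

For example, all three approximate the integral of x^2 on [0, 1] (exactly 1/3), and Simpson's rule reproduces the integral of x^3 exactly even on a coarse grid.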
NASA Astrophysics Data System (ADS)
Imada, Masatoshi; Kashima, Tsuyoshi
2000-09-01
A numerical algorithm for studying strongly correlated electron systems is proposed. The ground-state wavefunction is projected out after a numerical renormalization procedure in the path integral formalism. The wavefunction is expressed as an optimized linear combination of retained states in the truncated Hilbert space with a numerically chosen basis. This algorithm does not suffer from the negative sign problem and can be applied to any type of Hamiltonian in any dimension. The efficiency is tested on examples of the Hubbard model where the basis of Slater determinants is numerically optimized. We show results on the fast convergence and accuracy achieved with a small number of retained states.
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
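The assimilation step described above, a weighted average of the parameterized model and the numerical simulation that reduces prediction error variance, has a standard inverse-variance form. The sketch below assumes both predictions are unbiased with known, independent error variances, which is an idealization of the paper's procedure:

```python
def assimilate(pred_a, pred_b, var_a, var_b):
    """Variance-minimizing weighted average of two unbiased predictions.
    The combined error variance var_a*var_b/(var_a+var_b) is never larger
    than the smaller of the two input variances."""
    w = var_b / (var_a + var_b)              # weight on prediction A
    combined = w * pred_a + (1.0 - w) * pred_b
    combined_var = var_a * var_b / (var_a + var_b)
    return combined, combined_var
```

With equal variances the result is the plain average at half the variance; with unequal variances the more accurate predictor dominates, mirroring how the parameterized and numerical runup estimates are blended.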
High accuracy time transfer synchronization
NASA Technical Reports Server (NTRS)
Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.
1995-01-01
In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of the view factors. Basic definitions and the standard methods that form the basis for various digital computer and numerical methods are presented. The physical models and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. Situations where accuracy is important for energy calculations are identified and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for the efficient use of digital computers are included in the recommendations.
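As a concrete flavor of numerical view factor schemes, here is a Monte Carlo estimate for the simplest configuration with a closed-form answer: a differential element facing a coaxial disk, where F = R^2/(R^2 + h^2). This illustrates the generic ray-shooting approach, not any specific program from the report:

```python
import math
import random

def view_factor_element_to_disk(radius, height, samples=200_000, seed=1):
    """Monte Carlo view factor from a differential surface element to a
    coaxial disk at distance `height`: shoot cosine-weighted rays from
    the element and count the fraction that hit the disk."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        # cosine-weighted hemisphere sample: cos(theta) = sqrt(u), u in (0, 1]
        cos_t = math.sqrt(1.0 - rng.random())
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        rho = height * sin_t / cos_t     # radius where the ray meets z = height
        hits += rho <= radius
    return hits / samples
```

For R = h the analytic view factor is 0.5, which the estimator reproduces to the Monte Carlo noise level; the same machinery extends to the complex geometries the report discusses, at the cost of many more samples.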
Numerical valuation of discrete double barrier options
NASA Astrophysics Data System (ADS)
Milev, Mariyan; Tagliani, Aldo
2010-03-01
In the present paper we explore the problem of pricing discrete barrier options under the Black-Scholes model for the random movement of the asset price. We pose the problem as a path integral calculation, choosing an approach similar to the quadrature method. The problem is thus reduced to the estimation of a multi-dimensional integral whose dimension corresponds to the number of monitoring dates. We propose a fast and accurate numerical algorithm for its valuation. Our results for pricing discretely monitored single and double barrier options are in agreement with those obtained by other numerical and analytical methods in the finance literature. A desired level of accuracy is achieved very quickly for values of the underlying asset close to the strike price or the barriers. The method has a simple computer implementation and it permits observing the entire life of the option.
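A hedged sketch of the quadrature idea (the grid, parameters, and trapezoid rule are illustrative choices, not the authors' algorithm): value the option backward from maturity, at each monitoring date integrating the continuation value against the lognormal transition density; restricting the grid to the interval between the barriers enforces knock-out at every date.

```python
import math

def bs_call(s, k, r, sigma, t):
    """Closed-form Black-Scholes call, used only as a sanity check."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def double_barrier_call(s0, k, lo, hi, r, sigma, t, n_dates, n_grid=400):
    """Backward quadrature for a discretely monitored knock-out call on a
    log-price grid confined to [log lo, log hi] (values outside the
    barriers are implicitly zero, i.e. knocked out)."""
    dt = t / n_dates
    x_lo, x_hi = math.log(lo), math.log(hi)
    h = (x_hi - x_lo) / (n_grid - 1)
    xs = [x_lo + i * h for i in range(n_grid)]
    v = [max(math.exp(x) - k, 0.0) for x in xs]       # payoff at maturity
    mu = (r - 0.5 * sigma ** 2) * dt                  # log-price drift per step
    sd = sigma * math.sqrt(dt)
    disc = math.exp(-r * dt)
    norm = 1.0 / (sd * math.sqrt(2.0 * math.pi))
    for _ in range(n_dates):
        # trapezoid quadrature of the continuation value at each grid node
        v = [disc * h * sum(
                (0.5 if j in (0, n_grid - 1) else 1.0) * v[j]
                * norm * math.exp(-0.5 * ((y - x - mu) / sd) ** 2)
                for j, y in enumerate(xs))
             for x in xs]
    xq = math.log(s0)                                 # linear interpolation at s0
    i = min(int((xq - x_lo) / h), n_grid - 2)
    w = (xq - xs[i]) / h
    return (1.0 - w) * v[i] + w * v[i + 1]
```

With very wide barriers the knock-out is immaterial and the quadrature price should approach the plain Black-Scholes call; tightening the barriers lowers the price, as expected.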
Optimal design of robot accuracy compensators
Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)
1993-12-01
The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
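A schematic of the damped least-squares correction step: dq = J^T (J J^T + λ²I)^{-1} e, where the damping keeps the joint correction bounded near singular configurations. The scalar damping λ²I below stands in for the paper's general weight matrix, and the LQR variant is not reproduced:

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def dls_correction(J, err, damping):
    """Damped least-squares joint correction dq = J^T (J J^T + d^2 I)^-1 e
    for an m-by-n Jacobian J and m-vector task-space error e."""
    m, n = len(J), len(J[0])
    JJt = [[sum(J[i][k] * J[j][k] for k in range(n))
            + (damping ** 2 if i == j else 0.0)
            for j in range(m)] for i in range(m)]
    y = solve(JJt, err)
    return [sum(J[i][k] * y[i] for i in range(m)) for k in range(n)]
```

With zero damping and an identity Jacobian the correction equals the error; a damping of λ shrinks it by 1/(1 + λ²), which is the trade-off the algorithm exploits at singularities.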
Empathic Embarrassment Accuracy in Autism Spectrum Disorder.
Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G
2015-06-01
Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. PMID:25732043
Increasing Accuracy in Environmental Measurements
NASA Astrophysics Data System (ADS)
Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst
2016-04-01
Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements used to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the preparation process for natural air and isotopic mixtures, for both molecular and isotopic concentrations, across a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.
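The delta values referenced above follow the standard per-mil convention δ = (R_sample/R_reference − 1) × 1000. A minimal helper is sketched below; the reference ratio shown is the commonly cited VPDB ¹³C/¹²C value, used here purely as an example default:

```python
VPDB_13C_12C = 0.0112372   # commonly cited VPDB 13C/12C reference ratio (example)

def delta_permil(r_sample, r_reference=VPDB_13C_12C):
    """Isotope delta value in per mil (parts per thousand) relative to a
    reference isotope ratio."""
    return (r_sample / r_reference - 1.0) * 1000.0
```

A sample whose ratio matches the reference has δ = 0‰; a 1% enrichment corresponds to roughly +10‰, the scale on which calibration gases are specified.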
Landsat classification accuracy assessment procedures
Mead, R. R.; Szajgin, John
1982-01-01
A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.
Determining factors for the accuracy of DMRG in chemistry.
Keller, Sebastian F; Reiher, Markus
2014-01-01
The Density Matrix Renormalization Group (DMRG) algorithm has been a rising star for the accurate ab initio exploration of Born-Oppenheimer potential energy surfaces in theoretical chemistry. However, owing to its iterative numerical nature, pitfalls that can affect the accuracy of DMRG energies need to be circumvented. Here, after a brief introduction into this quantum chemical method, we discuss criteria that determine the accuracy of DMRG calculations. PMID:24983596
NASA Technical Reports Server (NTRS)
Li, Yi-Wei; Elishakoff, Isaac; Starnes, James H., Jr.; Bushnell, David
1998-01-01
This study is an extension of a previous investigation of the combined effect of axisymmetric thickness variation and axisymmetric initial geometric imperfection on buckling of isotropic shells under uniform axial compression. Here the anisotropic cylindrical shells are investigated by means of Koiter's energy criterion. An asymptotic formula is derived which can be used to determine the critical buckling load for composite shells with combined initial geometric imperfection and thickness variation. Results are compared with those obtained by the software packages BOSOR4 and PANDA2.
Entropy Splitting and Numerical Dissipation
NASA Technical Reports Server (NTRS)
Yee, H. C.; Vinokur, M.; Djomehri, M. J.
1999-01-01
A rigorous stability estimate for arbitrary order of accuracy of spatial central difference schemes for initial-boundary value problems of nonlinear symmetrizable systems of hyperbolic conservation laws was established recently by Olsson and Oliger (1994) and Olsson (1995) and was applied to the two-dimensional compressible Euler equations for a perfect gas by Gerritsen and Olsson (1996) and Gerritsen (1996). The basic building block in developing the stability estimate is a generalized energy approach based on a special splitting of the flux derivative via a convex entropy function and certain homogeneous properties. Due to some of the unique properties of the compressible Euler equations for a perfect gas, the splitting resulted in the sum of a conservative portion and a non-conservative portion of the flux derivative, hereafter referred to as the "Entropy Splitting." There are several potential desirable attributes and side benefits of the entropy splitting for the compressible Euler equations that were not fully explored in Gerritsen and Olsson. The paper has several objectives. The first is to investigate the choice of the arbitrary parameter that determines the amount of splitting and its dependence on the type of physics of current interest to computational fluid dynamics. The second is to investigate in what manner the splitting affects the nonlinear stability of the central schemes for long time integrations of unsteady flows such as in nonlinear aeroacoustics and turbulence dynamics. If numerical dissipation indeed is needed to stabilize the central scheme, can the splitting help minimize the numerical dissipation compared to its un-split cousin? Extensive numerical study on the vortex preservation capability of the splitting in conjunction with central schemes for long time integrations will be presented. The third is to study the effect of the non-conservative proportion of splitting in obtaining the correct shock location for high speed complex shock
NASA Astrophysics Data System (ADS)
Ito, Masakazu; Mito, Masaki; Deguchi, Hiroyuki; Takeda, Kazuyoshi
1994-03-01
The magnetic heat capacity and susceptibility of the one-dimensional S=1 antiferromagnet (CH3)4NNi(NO2)3 (TMNIN) have been measured in order to make a comparison with the theoretical results of a quantum Monte Carlo method for the Haldane system. The results for the heat capacity, which show a broad maximum around 10 K, are well reproduced by the theory with the interaction J/k_B = -12.0±1.0 K in the temperature range T > 0.2|J|S(S+1)/k_B. The low-temperature heat capacity exhibits an exponential decay with gap energy Δ/k_B = 5.3±0.2 K, which gives Δ = 0.44|J|, in contrast to the linear dependence on temperature found for half-integer spin. The residual magnetic entropy below 0.7 K is estimated to be 0.07% of Nk_B ln 3, which rules out the possibility of three-dimensional ordering of the spin system at lower temperatures. The observed susceptibility also agrees with the theory with J/k_B = -10.9 K and g = 2.02 over the whole temperature region, when the effect of the finite length of the chains is taken into consideration.
NASA Astrophysics Data System (ADS)
Beniaiche, Ahmed; Ghenaiet, Adel; Facchini, Bruno
2016-05-01
The aero-thermal behavior of the flow field inside a 30:1 scaled model reproducing an innovative smooth trailing edge with a shaped wedge discharge duct and one row of enlarged pedestals has been investigated in order to determine the effects of rotation, inlet velocity, and blowing conditions, for Re = 20,000 and 40,000 and Ro = 0-0.23. Two configurations are presented: closed tip and open tip. The thermo-chromic liquid crystal technique is used to provide local measurement of the heat transfer coefficient on the blade suction side under stationary and rotating conditions. Results are reported as detailed 2D HTC maps on the suction side surface as well as the averaged Nusselt number inside the pedestal ducts. Two correlations are proposed, for both the closed and open tip configurations, based on Re, Pr, Ro, and a new non-dimensional parameter based on the position along the radial distance, to provide a reliable estimate of the averaged Nusselt number in the inter-pedestal region. Good agreement is found between predictions and experimental data, with about ±10 to ±12 % uncertainty for the simple-form correlation and about ±16 % for the complex form. The obtained results help predict the flow field and evaluate the aero-thermal performance of the studied blade cooling system during the design step.
Design and analysis of a high-accuracy flexure hinge.
Liu, Min; Zhang, Xianmin; Fatikow, Sergej
2016-05-01
This paper designs and analyzes a new kind of flexure hinge obtained by using a topology optimization approach, namely, a quasi-V-shaped flexure hinge (QVFH). The hinge is formed by three segments: left and right segments with convex profiles and a middle segment that is a straight line. According to the results of topology optimization, the curve equations of the profiles of the flexure hinges are developed by numerical fitting. The in-plane dimensionless compliance equations of the flexure hinges are derived based on Castigliano's second theorem. The accuracy of rotation, characterized by the compliance of the center of rotation that deviates from the midpoint, is also derived. Equations for evaluating the maximum stresses are provided as well. These dimensionless equations are verified by finite element analysis and experiment. The analytical results are within 8% of the finite element analysis results and within 9% of the experimental measurement data. Compared with the filleted V-shaped flexure hinge, the QVFH has a higher accuracy of rotation and a better ability to preserve the position of the center of rotation, but smaller compliance. PMID:27250469
Sheets, Rodney A.; Dumouchelle, Denise H.; Feinstein, Daniel T.
2005-01-01
Agreements between United States governors and Canadian territorial premiers establish water-management principles and a framework for protecting Great Lakes waters, including ground water, from diversion and consumptive uses. The issue of ground-water diversions out of the Great Lakes Basin by large-scale pumping near the divides has been raised. Two scenario models, in which regional ground-water flow models represent major aquifers in the Great Lakes region, were used to assess the effect of pumping near ground-water divides. The regional carbonate aquifer model was a generalized model representing northwestern Ohio and northeastern Indiana; the regional sandstone aquifer model used an existing calibrated ground-water flow model for southeastern Wisconsin. Various well locations and pumping rates were examined. Although the two models have different frameworks and boundary conditions, their results were similar. There was significant diversion of ground water across ground-water divides due to pumping within 10 miles of the divides. In the regional carbonate aquifer model, the percentage of pumped water crossing the divide ranges from about 20 percent for a well 10 miles from the divide to about 50 percent for a well adjacent to the divide. In the regional sandstone aquifer model, the percentages range from about 30 percent for a well 10 miles from the divide to about 50 percent for a well adjacent to the divide; pumping on the west side of the divide, within 5 miles of the predevelopment divide, results in at least 10 percent of the water being diverted from the east side of the divide. Two additional scenario models were used to examine the effects of pumping near rivers. Transient models were used to simulate a rapid stage rise in a river during pumping at a well in carbonate and glacial aquifers near the river. Results of water-budget analyses indicate that induced infiltration, captured streamflow, and underflow were important for both glacial and
NASA Astrophysics Data System (ADS)
Rincón, Luis; Alvarellos, J. E.; Almeida, Rafael
2005-06-01
In this work we have analyzed the bond character of a series of representative diatomic molecules, using the valence bond and atoms-in-molecules points of view. This is done using generalized valence-bond calculations. We have also employed more demanding levels of theory, such as configuration interaction with single and double excitations and complete active space self-consistent field calculations, in order to validate the generalized valence-bond results. We have explored the possibility that the well-known delocalization index, and a parameter that measures the excess or defect of population within a given atomic basin, can be considered indicators of the character of the bond interaction. We conclude that for a proper description of the bond character, the global behavior of both the charge and two-electron densities should be considered.
Numerical Integration: One Step at a Time
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article looks at the effects that adding a single extra subdivision has on the level of accuracy of some common numerical integration routines. Instead of automatically doubling the number of subdivisions for a numerical integration rule, we investigate what happens with a systematic method of judiciously selecting one extra subdivision for…
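The question being studied can be illustrated with the composite trapezoidal rule (a generic sketch of the effect of a single extra subdivision; the authors' method for judiciously selecting where to refine is not reproduced here):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subdivisions of [a, b]."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Integrate sin(x) over [0, pi]; the exact value is 2.
exact = 2.0
err_8 = abs(trapezoid(np.sin, 0.0, np.pi, 8) - exact)
err_9 = abs(trapezoid(np.sin, 0.0, np.pi, 9) - exact)
# Adding just one subdivision already shrinks the O(h^2) error,
# roughly by the factor (8/9)^2, without doubling the work.
```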
NASA Astrophysics Data System (ADS)
Matang, Rex A. S.; Owens, Kay
2014-09-01
The Government of Papua New Guinea undertook a significant step in developing curriculum reform policy that promoted the use of Indigenous knowledge systems in teaching formal school subjects in any of the country's 800-plus Indigenous languages. The implementation of the Elementary Cultural Mathematics Syllabus is in line with this curriculum emphasis. Given the aims of the reform, the research reported here investigated the influence of children's own mother tongue (Tok Ples) and traditional counting systems on their development of the early number knowledge formally taught in schools. The study involved 272 school children from 22 elementary schools in four provinces. Each child participated in a task-based assessment interview focusing on eight task groups relating to early number knowledge. The results indicate that, on average, children learning their traditional counting systems in their own language spent less time and made fewer mistakes in solving each task compared with those taught without Tok Ples (using English and/or the lingua franca, Tok Pisin). Possible reasons accounting for these differences are also discussed.
Towards Experimental Accuracy from the First Principles
NASA Astrophysics Data System (ADS)
Polyansky, O. L.; Lodi, L.; Tennyson, J.; Zobov, N. F.
2013-06-01
Producing ab initio ro-vibrational energy levels of small, gas-phase molecules with an accuracy of 0.10 cm^{-1} would constitute a significant step forward in theoretical spectroscopy and would place calculated line positions considerably closer to typical experimental accuracy. Such an accuracy has recently been achieved for the H_3^+ molecular ion for line positions up to 17 000 cm^{-1}. However, since H_3^+ is a two-electron system, the electronic structure methods used in this study are not applicable to larger molecules. A major breakthrough was reported in ref., where an accuracy of 0.10 cm^{-1} was achieved ab initio for seven water isotopologues. Calculated vibrational and rotational energy levels up to 15 000 cm^{-1} and J=25 resulted in a standard deviation of 0.08 cm^{-1} with respect to accurate reference data. As far as line intensities are concerned, we have already achieved for water a typical accuracy of 1%, which supersedes average experimental accuracy. Our results are being actively extended along two major directions. First, there are clear indications that our results for water can be improved to an accuracy of the order of 0.01 cm^{-1} by further detailed ab initio studies. Such a level of accuracy would already be competitive with experimental results in some situations. A second major direction of study is the extension of such 0.1 cm^{-1} accuracy to molecules containing more electrons, or more than one non-hydrogen atom, or both. As examples of such developments we will present new results for CO, HCN and H_2S, as well as preliminary results for NH_3 and CH_4. O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky and A.G. Csaszar, Phil. Trans. Royal Soc. London A 370, 5014-5027 (2012). O.L. Polyansky, R.I. Ovsyannikov, A.A. Kyuberis, L. Lodi, J. Tennyson and N.F. Zobov, J. Phys. Chem. A (in press). L. Lodi, J. Tennyson and O.L. Polyansky, J. Chem. Phys. 135, 034113 (2011).
Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration
Masalma, Yahya; Jiao, Yu
2010-10-01
We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol quasi-random sequences to generate samples. The Sobol sequence was chosen to avoid clustering effects in the generated samples and to produce low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested, and the obtained results demonstrate the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to further improve performance. If the mixed model is used, attention should be paid to scalability and accuracy.
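As a single-process illustration of the quasi-Monte Carlo approach described above (a minimal sketch using SciPy's Sobol generator; the integrand and sample size are stand-ins, not the parallel MPI/OpenMP implementation):

```python
import numpy as np
from scipy.stats import qmc

def qmc_integrate(f, dim, m):
    """Estimate the integral of f over the unit hypercube [0,1]^dim
    by averaging f over 2**m low-discrepancy Sobol points."""
    sampler = qmc.Sobol(d=dim, scramble=False)
    pts = sampler.random_base2(m=m)  # 2**m points, balanced by construction
    return float(np.mean(f(pts)))

# Example: integrate x^2 + y^2 + z^2 over [0,1]^3 (exact value 1.0)
est = qmc_integrate(lambda x: np.sum(x**2, axis=1), dim=3, m=12)
```

In a parallel setting each rank would advance to a disjoint block of the same sequence (e.g. via the generator's `fast_forward` method), which preserves the low-discrepancy coverage across processes.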
Accuracy of magnetic energy computations
NASA Astrophysics Data System (ADS)
Valori, G.; Démoulin, P.; Pariat, E.; Masson, S.
2013-05-01
Context. For magnetically driven events, the magnetic energy of the system is the prime energy reservoir that fuels the dynamical evolution. In the solar context, the free energy (i.e., the energy in excess of the potential field energy) is one of the main indicators used in space weather forecasts to predict the eruptivity of active regions. A trustworthy estimation of the magnetic energy is therefore needed in three-dimensional (3D) models of the solar atmosphere, e.g., in coronal fields reconstructions or numerical simulations. Aims: The expression of the energy of a system as the sum of its potential energy and its free energy (Thomson's theorem) is strictly valid when the magnetic field is exactly solenoidal. For numerical realizations on a discrete grid, this property may be only approximately fulfilled. We show that the imperfect solenoidality induces terms in the energy that can lead to misinterpreting the amount of free energy present in a magnetic configuration. Methods: We consider a decomposition of the energy in solenoidal and nonsolenoidal parts which allows the unambiguous estimation of the nonsolenoidal contribution to the energy. We apply this decomposition to six typical cases broadly used in solar physics. We quantify to what extent the Thomson theorem is not satisfied when approximately solenoidal fields are used. Results: The quantified errors on energy vary from negligible to significant errors, depending on the extent of the nonsolenoidal component of the field. We identify the main source of errors and analyze the implications of adding a variable amount of divergence to various solenoidal fields. Finally, we present pathological unphysical situations where the estimated free energy would appear to be negative, as found in some previous works, and we identify the source of this error to be the presence of a finite divergence. Conclusions: We provide a method of quantifying the effect of a finite divergence in numerical fields, together with
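The effect of a finite divergence can be illustrated with a simple finite-difference diagnostic (a schematic sketch on a uniform grid with a made-up field; the solenoidal/nonsolenoidal energy decomposition used in the paper is more involved):

```python
import numpy as np

# Uniform grid over the unit cube
n = 64
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# A solenoidal field plus a tunable divergent perturbation:
# div(eps * X, 0, 0) = eps, so the field is non-solenoidal for eps != 0
eps = 0.1
Bx = -np.sin(np.pi * Y) + eps * X
By = np.sin(np.pi * X)
Bz = np.zeros_like(X)

# Central-difference divergence on the grid
div = (np.gradient(Bx, dx, axis=0)
       + np.gradient(By, dx, axis=1)
       + np.gradient(Bz, dx, axis=2))

# Dimensionless measure of non-solenoidality relative to the field strength
energy = np.mean(Bx**2 + By**2 + Bz**2)
rel_div = np.sqrt(np.mean(div**2)) * dx / np.sqrt(energy)
```

For this field the mean numerical divergence recovers eps, and a diagnostic like rel_div quantifies how far a discretized field departs from the solenoidal assumption underlying Thomson's theorem.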
Bayesian reclassification statistics for assessing improvements in diagnostic accuracy.
Huang, Zhipeng; Li, Jialiang; Cheng, Ching-Yu; Cheung, Carol; Wong, Tien-Yin
2016-07-10
We propose a Bayesian approach to the estimation of the net reclassification improvement (NRI) and three versions of the integrated discrimination improvement (IDI) under the logistic regression model. Both NRI and IDI were proposed as numerical characterizations of accuracy improvement for diagnostic tests and were shown to retain certain practical advantages over analysis based on ROC curves and to offer complementary information to changes in the area under the curve. Our development is a new contribution towards a Bayesian solution for the estimation of NRI and IDI, which eases the computational burden and increases flexibility. Our simulation results indicate that Bayesian estimation enjoys satisfactory performance comparable with frequentist estimation and achieves point estimation and credible interval construction simultaneously. We apply the methodology to real data from the Singapore Malay Eye Study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26875442
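For context, the category-free (continuous) NRI that the Bayesian machinery estimates can be computed directly from paired risk predictions; a frequentist sketch with illustrative data:

```python
import numpy as np

def continuous_nri(risk_old, risk_new, event):
    """Category-free net reclassification improvement:
    the net proportion of events whose predicted risk moves up,
    plus the net proportion of non-events whose risk moves down."""
    risk_old, risk_new = np.asarray(risk_old), np.asarray(risk_new)
    event = np.asarray(event, dtype=bool)
    up = risk_new > risk_old
    down = risk_new < risk_old
    nri_events = up[event].mean() - down[event].mean()
    nri_nonevents = down[~event].mean() - up[~event].mean()
    return nri_events + nri_nonevents

# Toy example: the new model raises risk for every event and lowers
# it for every non-event, giving the maximum possible NRI of 2.
nri = continuous_nri([0.2, 0.3, 0.6, 0.7],
                     [0.1, 0.2, 0.7, 0.8],
                     [0, 0, 1, 1])
# nri == 2.0
```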
Accuracy of an estuarine hydrodynamic model using smooth elements
Walters, Roy A.; Cheng, Ralph T.
1980-01-01
A finite element model which uses triangular, isoparametric elements with quadratic basis functions for the two velocity components and linear basis functions for water surface elevation is used in the computation of shallow water wave motions. Specifically addressed are two common uncertainties in this class of two-dimensional hydrodynamic models: the treatment of the boundary conditions at open boundaries and the treatment of lateral boundary conditions. The accuracy of the models is tested with a set of numerical experiments in rectangular and curvilinear channels with constant and variable depth. The results indicate that errors in velocity at the open boundary can be significant when boundary conditions for water surface elevation are specified. Methods are suggested for minimizing these errors. The results also show that continuity is better maintained within the spatial domain of interest when ‘smooth’ curve-sided elements are used at shoreline boundaries than when piecewise linear boundaries are used. Finally, a method for network development is described which is based upon a continuity criterion to gauge accuracy. A finite element network for San Francisco Bay, California, is used as an example.
Numerical calculation of the rock permittivity using micro computerized tomography image
NASA Astrophysics Data System (ADS)
Guo, Chen; Liu, Richard; Jin, Zhao; He, Zhili
2014-05-01
A numerical evaluation of the permittivity of sandstones from micro computerized tomography (micro-CT) images at 1.1 GHz is conducted using an image porosity extraction algorithm and an improved finite difference method (FDM). Using the physical properties acquired by 3D micro-CT scanning, the numerical method computes the permittivity of the rock samples. A resonant cavity is used for experimental measurement. The simulated results for two clastic sandstone samples, in dry and saturated states, are compared with experimental data to validate the accuracy of the proposed numerical method. The results show good agreement, and the error of the permittivity evaluation is less than 3%.
Conservative model and numerical simulations of compressible two-phase pipe flows
NASA Astrophysics Data System (ADS)
Belozerov, A.; Romenski, E.; Lebedeva, N.
2016-06-01
The two-phase, two-pressure model for transient one-dimensional compressible pipe flow is considered. The governing equations of the model form a hyperbolic system of conservation laws. A Runge-Kutta-WENO method providing 3rd-order accuracy in time and 5th-order accuracy in space is implemented. Numerical results for several test problems are presented.
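The time integrator in such Runge-Kutta-WENO schemes is typically the third-order strong-stability-preserving (SSP) Runge-Kutta method of Shu and Osher; a minimal sketch verifying its order on a scalar ODE (a stand-in for the WENO-discretized pipe-flow system):

```python
import numpy as np

def ssp_rk3_step(L, u, dt):
    """One step of the 3rd-order SSP Runge-Kutta scheme (Shu & Osher)
    for du/dt = L(u), written as convex combinations of Euler steps."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Convergence check on du/dt = -u, u(0) = 1, integrated to t = 1
def solve(nsteps):
    u, dt = 1.0, 1.0 / nsteps
    for _ in range(nsteps):
        u = ssp_rk3_step(lambda v: -v, u, dt)
    return u

err_coarse = abs(solve(50) - np.exp(-1.0))
err_fine = abs(solve(100) - np.exp(-1.0))
order = np.log2(err_coarse / err_fine)  # close to 3 for a 3rd-order scheme
```

The same stepper is applied component-wise to the semi-discrete conservation-law system once the WENO reconstruction supplies L(u).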
Decreased interoceptive accuracy following social exclusion.
Durlik, Caroline; Tsakiris, Manos
2015-04-01
The need for social affiliation is one of the most important and fundamental human needs. Unsurprisingly, humans display strong negative reactions to social exclusion. In the present study, we investigated the effect of social exclusion on interoceptive accuracy - accuracy in detecting signals arising inside the body - measured with a heartbeat perception task. We manipulated exclusion using Cyberball, a widely used paradigm of a virtual ball-tossing game, with half of the participants being included during the game and the other half of participants being ostracized during the game. Our results indicated that heartbeat perception accuracy decreased in the excluded, but not in the included, participants. We discuss these results in the context of social and physical pain overlap, as well as in relation to internally versus externally oriented attention. PMID:25701592
Training in timing improves accuracy in golf.
Libkuman, Terry M; Otani, Hajime; Steger, Neil
2002-01-01
In this experiment, the authors investigated the influence of training in timing on performance accuracy in golf. During pre- and posttesting, 40 participants hit golf balls with 4 different clubs in a golf course simulator. The dependent measure was the distance in feet that the ball ended from the target. Between the pre- and posttest, participants in the experimental condition received 10 hr of timing training with an instrument that was designed to train participants to tap their hands and feet in synchrony with target sounds. The participants in the control condition read literature about how to improve their golf swing. The results indicated that the participants in the experimental condition significantly improved their accuracy relative to the participants in the control condition, who did not show any improvement. We concluded that training in timing leads to improvement in accuracy, and that our results have implications for training in golf as well as other complex motor activities. PMID:12038497
Assessing the Accuracy of the Precise Point Positioning Technique
NASA Astrophysics Data System (ADS)
Bisnath, S. B.; Collins, P.; Seepersad, G.
2012-12-01
The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single-receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
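Step 2 of the error-budget mapping, converting ranging error to position error through DOP, can be sketched as follows (hypothetical satellite geometry; the standard GPS geometry-matrix convention is assumed):

```python
import numpy as np

def dops(los_unit_vectors):
    """Compute DOP values from receiver-to-satellite unit
    line-of-sight vectors (one row per satellite, local ENU frame)."""
    e = np.asarray(los_unit_vectors, dtype=float)
    # Geometry matrix: direction cosines plus a receiver-clock column
    G = np.hstack([e, np.ones((e.shape[0], 1))])
    Q = np.linalg.inv(G.T @ G)  # cofactor matrix
    return {
        "PDOP": np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2]),
        "HDOP": np.sqrt(Q[0, 0] + Q[1, 1]),
        "VDOP": np.sqrt(Q[2, 2]),
        "TDOP": np.sqrt(Q[3, 3]),
    }

# Five hypothetical satellites: four at mid-elevation, one at zenith
d = dops([[0.5, 0.5, 0.707], [-0.5, 0.5, 0.707],
          [0.5, -0.5, 0.707], [-0.5, -0.5, 0.707],
          [0.0, 0.0, 1.0]])

# 1-sigma position error = PDOP * 1-sigma ranging error, e.g. 0.5 m:
sigma_pos = d["PDOP"] * 0.5
```

The filtering step (3) then attenuates this instantaneous mapping over time, which is why a PPP budget cannot be read off epoch by epoch.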
2014-01-01
Background Numerous clinical tests are used in the diagnosis of anterior cruciate ligament (ACL) injury but their accuracy is unclear. The purpose of this study is to evaluate the diagnostic accuracy of clinical tests for the diagnosis of ACL injury. Methods Study Design: Systematic review. The review protocol was registered through PROSPERO (CRD42012002069). Electronic databases (PubMed, MEDLINE, EMBASE, CINAHL) were searched up to 19th of June 2013 to identify diagnostic studies comparing the accuracy of clinical tests for ACL injury to an acceptable reference standard (arthroscopy, arthrotomy, or MRI). Risk of bias was appraised using the QUADAS-2 checklist. Index test accuracy was evaluated using a descriptive analysis of paired likelihood ratios and displayed as forest plots. Results A total of 285 full-text articles were assessed for eligibility, from which 14 studies were included in this review. Included studies were deemed to be clinically and statistically heterogeneous, so a meta-analysis was not performed. Nine clinical tests from the history (popping sound at time of injury, giving way, effusion, pain, ability to continue activity) and four from physical examination (anterior draw test, Lachman’s test, prone Lachman’s test and pivot shift test) were investigated for diagnostic accuracy. Inspection of positive and negative likelihood ratios indicated that none of the individual tests provide useful diagnostic information in a clinical setting. Most studies were at risk of bias and reported imprecise estimates of diagnostic accuracy. Conclusion Despite being widely used and accepted in clinical practice, the results of individual history items or physical tests do not meaningfully change the probability of ACL injury. In contrast combinations of tests have higher diagnostic accuracy; however the most accurate combination of clinical tests remains an area for future research. Clinical relevance Clinicians should be aware of the limitations associated
NASA Technical Reports Server (NTRS)
Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.
1973-01-01
Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.
Asymptotic accuracy of two-class discrimination
Ho, T.K.; Baird, H.S.
1994-12-31
Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of the disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.
Accuracy assessment system and operation
NASA Technical Reports Server (NTRS)
Pitts, D. E.; Houston, A. G.; Badhwar, G.; Bender, M. J.; Rader, M. L.; Eppler, W. G.; Ahlers, C. W.; White, W. P.; Vela, R. R.; Hsu, E. M. (Principal Investigator)
1979-01-01
The accuracy and reliability of LACIE estimates of wheat production, area, and yield is determined at regular intervals throughout the year by the accuracy assessment subsystem, which also investigates the various LACIE error sources, quantifies the errors, and relates them to their causes. Timely feedback of these error evaluations to the LACIE project was the only mechanism by which improvements in the crop estimation system could be made during the short 3-year experiment.
The accuracy of automatic tracking
NASA Technical Reports Server (NTRS)
Kastrov, V. V.
1974-01-01
It has been generally assumed that tracking accuracy changes in proportion to the rate of change of the measurement conversion curve. The problem that internal noise increases along with the signals processed by the tracking device, so that tracking accuracy drops, is considered. The main prerequisite for a solution is consideration of the dependence of the output signal of the tracking device sensor not only on the measured parameter but also on the signal itself.
On the accuracy of close stellar approaches determination
NASA Astrophysics Data System (ADS)
Dybczyński, Piotr A.; Berski, Filip
2015-05-01
The aim of this paper is to demonstrate the accuracy of our knowledge of close stellar passage distances in the pre-Gaia era. We used the most precise astrometric and kinematic data available at the moment and prepared a list of 40 stars nominally passing (in the past or future) closer than 2 pc from the Sun. We used the full gravitational potential of the Galaxy to calculate the motion of the Sun and a star from their current positions to the proximity epoch. For these calculations, we used numerical integration in rectangular, Galactocentric coordinates. We showed that in many cases the numerical integration of the star's motion gives significantly different results than the popular rectilinear approximation. We found several new stellar candidates for close visitors in the past or future. We used the covariance matrices of the astrometric data for each star to estimate the accuracy of the obtained proximity distance and epoch. To this end, we used a Monte Carlo method, replacing each star with 10 000 clones and studying the distribution of their individual close passages near the Sun. We showed that for contemporary close neighbours the precision is quite good, but for more distant stars it depends strongly on the quality of the astrometric and kinematic data. Several examples are discussed in detail, among them the case of HIP 14473. For this star, we obtained a nominal proximity distance as small as 0.22 pc, 3.78 Myr ago. However, there is a strong need for more precise astrometry of this star, since the uncertainty of its proximity point is unacceptably large.
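The clone-based error propagation can be sketched as follows (illustrative state vector and covariance, and straight-line relative motion in place of the Galactic-potential integration used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative heliocentric state of a star: position (pc), velocity (pc/Myr)
mean = np.array([10.0, 5.0, -3.0, -2.0, -1.0, 0.5])
cov = np.diag([0.5, 0.5, 0.5, 0.1, 0.1, 0.1]) ** 2  # assumed uncorrelated

# Replace the star with 10 000 clones drawn from its covariance matrix
clones = rng.multivariate_normal(mean, cov, size=10_000)
r, v = clones[:, :3], clones[:, 3:]

# Closest approach for straight-line motion r(t) = r0 + v t:
# minimize |r(t)| over t, which gives t_min = -(r0.v)/(v.v)
t_min = -np.einsum("ij,ij->i", r, v) / np.einsum("ij,ij->i", v, v)
d_min = np.linalg.norm(r + t_min[:, None] * v, axis=1)

# The spread of the clone distribution is the proximity uncertainty
median_d = np.median(d_min)
spread = np.percentile(d_min, 95) - np.percentile(d_min, 5)
```

In the paper each clone is instead integrated through the Galactic potential, but the statistical treatment of the resulting passage distribution is the same.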
Accuracy of Information Processing under Focused Attention.
ERIC Educational Resources Information Center
Bastick, Tony
This paper reports the results of an experiment on the accuracy of information processing during attention focused arousal under two conditions: single estimation and double estimation. The attention of 187 college students was focused by a task requiring high level competition for a monetary prize ($10) under severely limited time conditions. The…
Accuracy of polyp localization at colonoscopy
O’Connor, Sam A.; Hewett, David G.; Watson, Marcus O.; Kendall, Bradley J.; Hourigan, Luke F.; Holtmann, Gerald
2016-01-01
Background and study aims: Accurate documentation of lesion localization at the time of colonoscopic polypectomy is important for future surveillance, management of complications such as delayed bleeding, and for guiding surgical resection. We aimed to assess the accuracy of endoscopic localization of polyps during colonoscopy and examine variables that may influence this accuracy. Patients and methods: We conducted a prospective observational study in consecutive patients presenting for elective, outpatient colonoscopy. All procedures were performed by Australian certified colonoscopists. The endoscopic location of each polyp was reported by the colonoscopist at the time of resection and prospectively recorded. Magnetic endoscope imaging was used to determine polyp location, and colonoscopists were blinded to this image. Three experienced colonoscopists, blinded to the endoscopist's assessment of polyp location, independently scored the magnetic endoscope images to obtain a reference standard for polyp location (Cronbach alpha 0.98). The accuracy of colonoscopist polyp localization using this reference standard was assessed, and colonoscopist, procedural, and patient variables affecting accuracy were evaluated. Results: A total of 155 patients were enrolled and 282 polyps were resected in 95 patients by 14 colonoscopists. The overall accuracy of polyp localization was 85 % (95 % confidence interval [CI] 60-96 %). Accuracy varied significantly (P < 0.001) by colonic segment: caecum 100 %, ascending 77 % (CI 65-90), transverse 84 % (CI 75-92), descending 56 % (CI 32-81), sigmoid 88 % (CI 79-97), rectum 96 % (CI 90-101). There were significant differences in accuracy between colonoscopists (P < 0.001), and colonoscopist experience was a significant independent predictor of accuracy (OR 3.5, P = 0.028) after adjustment for patient and procedural variables. Conclusions: Accuracy of
Experimental and Numerical Studies of Oceanic Overflow
NASA Astrophysics Data System (ADS)
Gibson, Thomas; Hohman, Fred; Morrison, Theresa; Reckinger, Shanon; Reckinger, Scott
2014-11-01
Oceanic overflows occur when dense water flows down a continental slope into less dense ambient water. The resulting density driven plumes occur naturally in various regions of the global ocean and affect the large-scale circulation. General circulation models currently rely on parameterizations for representing dense overflows due to resolution restrictions. The work presented here involves a direct qualitative and quantitative comparison between physical laboratory experiments and lab-scale numerical simulations. Laboratory experiments are conducted using a rotating square tank customized for idealized overflow and a high-resolution camera mounted on the table in the rotating reference frame for data collection. Corresponding numerical simulations are performed using the MIT general circulation model (MITgcm) run in the non-hydrostatic configuration. Resolution and numerical parameter studies are presented to ensure accuracy of the simulation. Laboratory and computational experiments are compared across a wide range of physical parameters, including Coriolis parameter, inflow density anomaly, and dense inflow volumetric flow rate. The results are analyzed using various calculated metrics, such as the plume velocity. Funding for this project is provided by the National Science Foundation.
Numerical Speed of Sound and its Application to Schemes for all Speeds
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Edwards, Jack R.
1999-01-01
The concept of "numerical speed of sound" is proposed in the construction of numerical flux. It is shown that this variable is responsible for the accurate resolution of discontinuities, such as contacts and shocks. Moreover, this concept can be readily extended to deal with low speed and multiphase flows. As a result, the numerical dissipation for low speed flows is scaled with the local fluid speed, rather than the sound speed. Hence, the accuracy is enhanced, the correct solution recovered, and the convergence rate improved. We also emphasize the role of mass flux and analyze the behavior of this flux. Study of the mass flux is important because the numerical diffusivity introduced in it can be identified; in addition, it is the term common to all conservation equations. We show calculated results for a wide variety of flows to validate the effectiveness of using the numerical speed of sound concept in constructing the numerical flux. We especially aim at achieving two goals: (1) improving accuracy and (2) gaining convergence rates for all speed ranges. We find that while the performance at the high speed range is maintained, the flux now has the capability of performing well even for low speed flows. Thanks to the new numerical speed of sound, convergence is enhanced even for flows outside of the low speed range. To demonstrate the usefulness of the proposed method in engineering problems, we have also performed calculations for complex 3D turbulent flows, and the results are in excellent agreement with data.
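The low-Mach dissipation scaling described in this abstract can be illustrated with a toy interface flux. This is a minimal sketch, not the authors' AUSM-type scheme: the Rusanov-style dissipation form and the 0.05 Mach floor are illustrative assumptions.

```python
def interface_flux(fL, fR, uL, uR, vel, c, scale_with_local_speed=False):
    """Central flux plus upwind dissipation.

    Standard schemes scale the dissipation with the acoustic speed c;
    the 'numerical speed of sound' idea replaces it with a speed that
    shrinks toward the local fluid speed at low Mach number.
    """
    if scale_with_local_speed:
        # illustrative scaling with a 5% Mach floor, not the paper's exact formula
        a = min(c, max(abs(vel), 0.05 * c))
    else:
        a = c
    return 0.5 * (fL + fR) - 0.5 * a * (uR - uL)
```

At a fluid speed of 1 m/s against c = 340 m/s, the scaled variant damps a state jump far less than the acoustic-speed variant, which is the mechanism claimed to restore low-speed accuracy.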
Comprehensive study of numerical anisotropy and dispersion in 3-D TLM meshes
NASA Astrophysics Data System (ADS)
Berini, Pierre; Wu, Ke
1995-05-01
This paper presents a comprehensive analysis of the numerical anisotropy and dispersion of 3-D TLM meshes constructed using several generalized symmetrical condensed TLM nodes. The dispersion analysis is performed in isotropic lossless, isotropic lossy and anisotropic lossless media and yields a comparison of the simulation accuracy for the different TLM nodes. The effect of mesh grading on the numerical dispersion is also determined. The results compare meshes constructed with Johns' symmetrical condensed node (SCN), two hybrid symmetrical condensed nodes (HSCN) and two frequency domain symmetrical condensed nodes (FDSCN). It has been found that under certain circumstances, the time domain nodes may introduce numerical anisotropy when modelling isotropic media.
Two Different Methods for Numerical Solution of the Modified Burgers' Equation
Karakoç, Seydi Battal Gazi; Başhan, Ali; Geyikli, Turabi
2014-01-01
A numerical solution of the modified Burgers' equation (MBE) is obtained by using a quartic B-spline subdomain finite element method (SFEM), over which the nonlinear term is locally linearized, and by using the quartic B-spline differential quadrature method (QBDQM). The accuracy and efficiency of the methods are discussed by computing L2 and L∞ error norms. Comparisons are made with those of some earlier papers. The obtained numerical results show that the methods are effective numerical schemes to solve the MBE. A linear stability analysis, based on the von Neumann scheme, shows the SFEM is unconditionally stable. A rate of convergence analysis is also given for the DQM. PMID:25162064
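The discrete L2 and L∞ error norms and the observed convergence rate used in studies like this one are straightforward to compute; a sketch on a uniform grid (array names are illustrative):

```python
import numpy as np

def error_norms(u_num, u_exact, dx):
    """Discrete L2 and L-infinity error norms on a uniform grid of spacing dx."""
    e = np.asarray(u_num) - np.asarray(u_exact)
    l2 = np.sqrt(dx * np.sum(e**2))
    linf = np.max(np.abs(e))
    return l2, linf

def convergence_rate(err_coarse, err_fine, h_coarse, h_fine):
    """Observed order of accuracy from errors on two grid spacings."""
    return np.log(err_coarse / err_fine) / np.log(h_coarse / h_fine)
```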
A numerical procedure for predicting creep and delayed failures in laminated composites
NASA Technical Reports Server (NTRS)
Dillard, D. A.; Brinson, H. F.
1983-01-01
A numerical procedure is described for predicting the viscoelastic response of general laminates. A nonlinear compliance model is used to predict the creep response of the individual laminae. A biaxial delayed failure model predicts ply failure. The numerical procedure, based on lamination theory, marches incrementally through time to predict creep compliance and delayed failures in laminates. Numerical stability problems and experimental verification are discussed. Although the program has been quite successful in predicting creep of general laminates, the assumptions associated with lamination theory have resulted in erroneous bounds on the predicted material response. Delayed failure predictions have been conservative. Several improvements are suggested to increase the accuracy of the procedure.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.
A method for generating numerical pilot opinion ratings using the optimal pilot model
NASA Technical Reports Server (NTRS)
Hess, R. A.
1976-01-01
A method for generating numerical pilot opinion ratings using the optimal pilot model is introduced. The method is contained in a rating hypothesis which states that the numerical rating which a human pilot assigns to a specific vehicle and task can be directly related to the numerical value of the index of performance resulting from the optimal pilot modeling procedure as applied to that vehicle and task. The hypothesis is tested using the data from four piloted simulations. The results indicate that the hypothesis is reasonable, but that the predictive capability of the method is a strong function of the accuracy of the pilot model itself. This accuracy is, in turn, dependent upon the parameters which define the optimal modeling problem. A procedure for specifying the parameters for the optimal pilot model in the absence of experimental data is suggested.
Simulation of a numerical filter for enhancing earth radiation budget measurements
NASA Technical Reports Server (NTRS)
Green, R. N.
1981-01-01
The Earth Radiation Budget Experiment has the objective of collecting the radiation budget data needed to determine the radiation budget at the top of the atmosphere (TOA) on a regional scale. A second objective is to determine the accuracy of the results. Three satellites will carry wide and medium field-of-view radiometers which measure the longwave and shortwave components of radiation. Scanning radiometers will be included to detect small spatial features. A proposal has been made to employ for the nonscanning radiometers a one-dimensional numerical filter which reduces satellite measurements to TOA radiant exitances. The numerical filter was initially formulated by House (1980). It enhances the resolution of the radiation budget along the satellite groundtrack. The accuracy of the numerical filter estimate is studied by simulating the data gathering and measurement inversion process. The results of the study are discussed, taking into account two error sources.
High-accuracy deterministic solution of the Boltzmann equation for the shock wave structure
NASA Astrophysics Data System (ADS)
Malkov, E. A.; Bondar, Ye. A.; Kokhanchik, A. A.; Poleshkin, S. O.; Ivanov, M. S.
2015-07-01
A new deterministic method of solving the Boltzmann equation has been proposed. The method has been employed in numerical studies of the plane shock wave structure in a hard sphere gas. Results for several Mach numbers have been compared with predictions of the direct simulation Monte Carlo (DSMC) method, which has been used to obtain the reference solution. Particular attention in estimating the solution accuracy has been paid to a fine structural effect: the presence of a total temperature peak exceeding the temperature value further downstream. The results of solving the Boltzmann equation for the shock wave structure are in excellent agreement with the DSMC predictions.
NLOS UV channel modeling using numerical integration and an approximate closed-form path loss model
NASA Astrophysics Data System (ADS)
Gupta, Ankit; Noshad, Mohammad; Brandt-Pearce, Maïté
2012-10-01
In this paper we propose a simulation method using numerical integration, and develop a closed-form link loss model for physical layer channel characterization for non-line of sight (NLOS) ultraviolet (UV) communication systems. The impulse response of the channel is calculated by assuming both uniform and Gaussian profiles for transmitted beams and different geometries. The results are compared with previously published results. The accuracy of the integration approach is compared to the Monte Carlo simulation. Then the path loss using the simulation method and the suggested closed-form expression are presented for different link geometries. The accuracies are evaluated and compared to the results obtained using other methods.
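The trade-off between deterministic numerical integration and Monte Carlo simulation that this abstract evaluates can be sketched on a one-dimensional integral. The integrand, sample counts, and seed below are stand-ins, not the paper's NLOS channel geometry:

```python
import numpy as np

rng = np.random.default_rng(0)

def trapezoid_integral(f, a, b, n):
    """Composite trapezoid rule with n panels; error shrinks as O(1/n^2)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def mc_integral(f, a, b, n):
    """Plain Monte Carlo estimate; error shrinks only as O(1/sqrt(n))."""
    x = rng.uniform(a, b, n)
    return (b - a) * f(x).mean()

f = lambda x: np.exp(-x)
exact = 1.0 - np.exp(-1.0)
```

For a smooth integrand and equal sample counts, the quadrature estimate is far closer to the exact value than the Monte Carlo one, which is why deterministic integration can be both faster and more accurate for channel characterization.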
Accuracy of Binary Black Hole Waveform Models for Advanced LIGO
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team
2016-03-01
Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on two questions: (i) how well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) how accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
Robustness versus accuracy in shock-wave computations
NASA Astrophysics Data System (ADS)
Gressier, Jérémie; Moschetta, Jean-Marc
2000-06-01
Despite constant progress in the development of upwind schemes, some failings still remain. Quirk recently reported (Quirk JJ. A contribution to the great Riemann solver debate. International Journal for Numerical Methods in Fluids 1994; 18: 555-574) that approximate Riemann solvers, which share the exact capture of contact discontinuities, generally suffer from such failings. One of these is the odd-even decoupling that occurs along planar shocks aligned with the mesh. First, a few results on some failings are given, namely the carbuncle phenomenon and the kinked Mach stem. Then, following Quirk's analysis of Roe's scheme, general criteria are derived to predict the odd-even decoupling. This analysis is applied to Roe's scheme (Roe PL, Approximate Riemann solvers, parameters vectors, and difference schemes, Journal of Computational Physics 1981; 43: 357-372), the Equilibrium Flux Method (Pullin DI, Direct simulation methods for compressible inviscid ideal gas flow, Journal of Computational Physics 1980; 34: 231-244), the Equilibrium Interface Method (Macrossan MN, Oliver. RI, A kinetic theory solution method for the Navier-Stokes equations, International Journal for Numerical Methods in Fluids 1993; 17: 177-193) and the AUSM scheme (Liou MS, Steffen CJ, A new flux splitting scheme, Journal of Computational Physics 1993; 107: 23-39). Strict stability is shown to be desirable to avoid most of these flaws. Finally, the link between marginal stability and accuracy on shear waves is established.
NASA Astrophysics Data System (ADS)
Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman
2016-05-01
Time-dependent Density Functional Theory (TDDFT) has become successful for its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains computationally prohibitively expensive. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods to solve the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lánczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales.
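The fourth-order Taylor benchmark mentioned in the abstract can be sketched for a small Hermitian Hamiltonian. This is a toy stand-in (units with ħ = 1, a 2×2 matrix instead of a Kohn-Sham Hamiltonian), not the SpEC/TDDFT implementation itself:

```python
import numpy as np

def taylor_propagate(H, psi, dt, order=4):
    """Approximate psi(t+dt) = exp(-i H dt) psi by a truncated Taylor series.

    Each term reuses the previous one, so only matrix-vector products
    are needed; order=4 matches the benchmark propagator in the abstract.
    """
    result = psi.astype(complex).copy()
    term = psi.astype(complex).copy()
    for n in range(1, order + 1):
        term = (-1j * dt / n) * (H @ term)
        result = result + term
    return result
```

Against the exact propagator built from an eigendecomposition, the truncation error per step scales as (‖H‖ dt)⁵/5!, which is why the subspace methods' tolerance of larger time steps translates into the reported 2-3× speedups.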
Dynamics of a compound vesicle: numerical simulations
NASA Astrophysics Data System (ADS)
Veerapaneni, Shravan; Young, Yuan-Nan; Vlahovska, Petia; Blawzdziewicz, Jerzy
2010-11-01
Vesicles (self-enclosing lipid membranes) in simple linear flows are known to exhibit rich dynamics such as tank-treading, tumbling, trembling (swinging), and vacillating breathing. Recently, vesicles have been used as a multi-functional platform for drug-delivery. In this work, the dynamics of simplified models for such compound vesicles is investigated numerically using a state-of-the-art boundary-integral code that has been validated with high accuracy and efficiency. Results show that for a vesicle enclosing a rigid particle in a simple shear flow, transition from tank-treading to tumbling is possible even in the absence of viscosity mismatch in the interior and exterior fluids. We will discuss the shape transformations, multiple particle interactions and the flow properties. Comparison with results from analytical modeling gives insights to the underlying physics for such novel dynamics.
Calculation Of The Nanbu-Trubnikov Kernel: Implications For Numerical Modeling Of Coulomb Collisions
Dimits, A; Cohen, B I; Wang, C; Caflisch, R; Huang, Y
2009-07-02
We investigate the accuracy of and assumptions underlying the numerical binary Monte-Carlo collision operator due to Nanbu [K. Nanbu, Phys. Rev. E 55 (1997)]. The numerical experiments that resulted in Nanbu's parameterized collision kernel are approximate realizations of the Coulomb-Lorentz pitch-angle scattering process, for which an analytical solution is available. It is demonstrated empirically that Nanbu's collision operator quite accurately recovers the effects of Coulomb-Lorentz pitch-angle collisions, or processes that approximate these even for very large values of the collisional time step. An investigation of the analytical solution shows that Nanbu's parameterized kernel is highly accurate for small values of the normalized collision time step, but loses some of its accuracy for larger values of the time step. Finally, a practical collision algorithm is proposed that for small-mass-ratio Coulomb collisions improves on the accuracy of Nanbu's algorithm.
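The Coulomb-Lorentz pitch-angle process being approximated can be sketched as a single small-angle scattering step. The Gaussian polar-angle sample below is an illustrative small-νΔt limit, not Nanbu's parameterized kernel:

```python
import numpy as np

rng = np.random.default_rng(1)

def pitch_angle_scatter(v, nu_dt):
    """One Lorentz pitch-angle scattering step.

    Rotates the velocity by a random polar angle with variance ~ 2*nu*dt
    and a uniform azimuth; the speed is preserved exactly, as pitch-angle
    scattering changes only the direction of v.
    """
    speed = np.linalg.norm(v)
    theta = np.sqrt(2.0 * nu_dt) * abs(rng.standard_normal())  # small-angle sample
    phi = rng.uniform(0.0, 2.0 * np.pi)
    # build an orthonormal frame (e1, e2, e3) with e1 along v
    e1 = v / speed
    tmp = np.array([1.0, 0.0, 0.0]) if abs(e1[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e2 = np.cross(e1, tmp)
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return speed * (np.cos(theta) * e1 + np.sin(theta) * (np.cos(phi) * e2 + np.sin(phi) * e3))
```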
Numerical solution of boundary-integral equations for molecular electrostatics.
Bardhan, Jaydeep P
2009-03-01
Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived. PMID:19275391
Bullet trajectory reconstruction - Methods, accuracy and precision.
Mattijssen, Erwin J A T; Kerkhoff, Wim
2016-05-01
Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as, the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence the performance was better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction and to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement. PMID:27044032
Frontiers in Numerical Relativity
NASA Astrophysics Data System (ADS)
Evans, Charles R.; Finn, Lee S.; Hobill, David W.
2011-06-01
Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics
Current Concept of Geometrical Accuracy
NASA Astrophysics Data System (ADS)
Görög, Augustín; Görögová, Ingrid
2014-06-01
Within the VEGA 1/0615/12 research project "Influence of 5-axis grinding parameters on the shank cutter's geometric accuracy", the research team will measure and evaluate the geometrical accuracy of the produced parts, using contemporary measurement technology (for example, optical 3D scanners). During the past few years, significant changes have occurred in the field of geometrical accuracy. The objective of this contribution is to analyse the current standards in the field of geometric tolerancing and to provide an overview of the basic concepts and definitions, preventing the use of outdated and invalidated terms. The knowledge presented in the contribution will provide a new perspective on measurement evaluated according to the current standards.
Dynamic stiffness removal for direct numerical simulations
Lu, Tianfeng; Law, Chung K.; Yoo, Chun Sang; Chen, Jacqueline H.
2009-08-15
A systematic approach was developed to derive non-stiff reduced mechanisms for direct numerical simulations (DNS) with explicit integration solvers. The stiffness reduction was achieved through on-the-fly elimination of short time-scales induced by two features of fast chemical reactivity, namely quasi-steady-state (QSS) species and partial-equilibrium (PE) reactions. The sparse algebraic equations resulting from QSS and PE approximations were utilized such that the efficiency of the dynamic stiffness reduction is high compared with general methods of time-scale reduction based on Jacobian decomposition. Using the dimension reduction strategies developed in our previous work, a reduced mechanism with 52 species was first derived from a detailed mechanism with 561 species. The reduced mechanism was validated for ignition and extinction applications over the parameter range of equivalence ratio between 0.5 and 1.5, pressure between 10 and 50 atm, and initial temperature between 700 and 1600 K for ignition, and worst-case errors of approximately 30% were observed. The reduced mechanism with dynamic stiffness removal was then applied in homogeneous and 1-D ignition applications, as well as a 2-D direct numerical simulation of ignition with temperature inhomogeneities at constant volume with integration time-steps of 5-10 ns. The integration was numerically stable and good accuracy was achieved.
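The quasi-steady-state elimination behind this stiffness removal can be sketched on the textbook chain A → B → C with a fast consumption rate k2 ≫ k1. The rate constants and step sizes below are illustrative, not values from the 561-species mechanism:

```python
import numpy as np

def rhs_full(y, k1, k2):
    """Stiff original system for reference: d[A]/dt, d[B]/dt for A -> B -> C."""
    A, B = y
    return np.array([-k1 * A, k1 * A - k2 * B])

def qss_B(A, k1, k2):
    """Quasi-steady-state concentration: production rate / destruction frequency.

    Setting d[B]/dt = 0 removes the fast 1/k2 timescale from the ODE system,
    so an explicit integrator can take steps sized by the slow chemistry only.
    """
    return k1 * A / k2

def integrate_reduced(A0, k1, k2, dt, steps):
    """Explicit Euler on the non-stiff reduced system: only A is integrated."""
    A = A0
    for _ in range(steps):
        A += dt * (-k1 * A)
    return A, qss_B(A, k1, k2)
```

With k2 = 1000 k1, explicit Euler on the full system would need dt ≲ 2/k2 for stability, while the reduced system is integrated accurately with steps a thousand times larger.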
Audiovisual biofeedback improves motion prediction accuracy
Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho
2013-01-01
Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
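The RMSE figure of merit and the relative-improvement statistic reported above are simple to reproduce; a sketch (the signal arrays and the 26% example figure are placeholders for real respiratory traces):

```python
import numpy as np

def rmse(predicted, actual):
    """Root mean square error between predicted and measured respiratory traces."""
    d = np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean(d**2)))

def relative_improvement(rmse_guided, rmse_free):
    """Fractional RMSE reduction of guided over unguided breathing."""
    return 1.0 - rmse_guided / rmse_free
```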
High accuracy fine-pointing system - Breadboard performances and results
NASA Astrophysics Data System (ADS)
Fazilleau, Y.; Moreau, B.; Betermier, J. M.; Boutemy, J. C.
A fine pointing system designed according to the requirements of the Semiconductor Laser Intersatellite Link Experiment 1989 (SILEX 1989) is described, with particular attention given to the synthesis of the final breadboarding. The study includes all the pointing functions where the pointing, acquisition, and tracking (PAT) functions are associated with different FOVs. The laboratory model consists of a complete pointing system with two CCD sensors for detection, two general-scanning single-axis actuators, and the overall control electronics. Each major PAT function of the laboratory model was separately tested, giving all the major impacts for the future PAT applications concerning mechanical margins, optical aberrations, sensor linearity, and servoloop communications.
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
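The classification error matrix described above maps directly onto overall accuracy and the commission/omission errors; a sketch following the text's convention (rows = interpretation, columns = verification, example counts illustrative):

```python
import numpy as np

def accuracy_measures(cm):
    """Overall accuracy and per-class error rates from a classification error matrix.

    cm[i, j]: row i = interpreted class, column j = verified (reference) class.
    Diagonal = correct; off-diagonal row entries = commission errors,
    off-diagonal column entries = omission errors.
    """
    cm = np.asarray(cm, dtype=float)
    overall = np.trace(cm) / cm.sum()
    commission = 1.0 - np.diag(cm) / cm.sum(axis=1)  # errors within each interpreted row
    omission = 1.0 - np.diag(cm) / cm.sum(axis=0)    # errors within each reference column
    return overall, commission, omission
```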
Guiding Center Equations of High Accuracy
R.B. White, G. Spizzo and M. Gobbin
2013-03-29
Guiding center simulations are an important means of predicting the effect of resistive and ideal magnetohydrodynamic instabilities on particle distributions in toroidal magnetically confined thermonuclear fusion research devices. Because saturated instabilities typically have amplitudes of δB/B of a few times 10^-4, numerical accuracy is of concern in discovering the effect of mode-particle resonances. We develop a means of following guiding center orbits which is greatly superior to the methods currently in use. In the presence of ripple or time-dependent magnetic perturbations, both energy and canonical momentum are conserved to better than one part in 10^14, and the relation between changes in canonical momentum and energy is also conserved to very high order.
Valverde-Albacete, Francisco J.; Peláez-Moreno, Carmen
2014-01-01
The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis in which every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to “cheat” using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282
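The accuracy paradox discussed above is easy to reproduce from a contingency matrix: a majority-class classifier can score high accuracy while transferring zero information. The sketch below computes only the plain Shannon quantities underlying the analysis; the EMA and NIT formulas themselves are the paper's, not reproduced here:

```python
import numpy as np

def entropies(cm):
    """Input entropy, output entropy, and mutual information (bits) of a contingency matrix.

    cm[i, j]: counts with row i = true class, column j = predicted class.
    """
    p = np.asarray(cm, dtype=float)
    p = p / p.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)

    def H(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))

    mi = H(px) + H(py) - H(p.ravel())
    return H(px), H(py), mi
```

A perfect balanced 2-class classifier transfers the full 1 bit, whereas a classifier that always predicts the majority class scores 90% accuracy on a 9:1 split yet transfers 0 bits, which is exactly the behavior the entropy triangle exposes.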
Seasonal Effects on GPS PPP Accuracy
NASA Astrophysics Data System (ADS)
Saracoglu, Aziz; Ugur Sanli, D.
2016-04-01
GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h of data are required for high-precision results; however, real-life situations do not always let us collect 24 h of data, so repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently, some research groups turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data; here, data from each month of the year, over two successive years, are used in the analysis. Our major conclusion is that a reformulation of GPS positioning accuracy is necessary when taking the seasonal effects into account, and the typical one-term accuracy formulation is expanded to a two-term one.
Navigation Accuracy Guidelines for Orbital Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Alfriend, Kyle T.
2004-01-01
Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they should be viewed as useful preliminary design tools rather than as the basis for mission navigation requirements, which should rest on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
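The drift-versus-navigation-accuracy trade can be sketched with a standard two-body rule of thumb (an assumption here, not the paper's exact guideline, and the function names are illustrative): a semi-major-axis difference da shifts the mean motion by dn = -(3/2)(n/a)da, which accumulates into an along-track drift of |dn|·T·a = 3π|da| per orbit, independent of eccentricity.

```python
import math

def alongtrack_drift_per_orbit(da_m):
    """Along-track drift (m per orbit) caused by an SMA difference da_m (m),
    from dn = -(3/2)*(n/a)*da and drift/orbit = |dn| * T * a = 3*pi*|da|."""
    return 3.0 * math.pi * abs(da_m)

def sma_accuracy_for_drift(max_drift_m_per_orbit):
    """Invert the guideline: allowable |da| for a given drift budget."""
    return max_drift_m_per_orbit / (3.0 * math.pi)

drift = alongtrack_drift_per_orbit(10.0)   # a 10 m SMA error -> ~94 m/orbit
da_max = sma_accuracy_for_drift(100.0)     # 100 m/orbit budget -> ~10.6 m
```

Either direction of the question in the abstract (drift from a given accuracy, or required accuracy for a given drift limit) is one evaluation of these inverse functions.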
Nationwide forestry applications program. Analysis of forest classification accuracy
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)
1981-01-01
The development of LANDSAT classification accuracy assessment techniques and of a computerized system for assessing wildlife habitat from land cover maps is considered. A literature review of accuracy assessment techniques and an explanation of the techniques developed under both projects are included, along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers published on the results of this project are appended.
Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment
NASA Technical Reports Server (NTRS)
Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.
2012-01-01
Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to have: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.
ACCURACY AND TRACE ORGANIC ANALYSES
Accuracy in trace organic analysis presents a formidable problem to the residue chemist. He is confronted with the analysis of a large number and variety of compounds present in a multiplicity of substrates at levels as low as parts-per-trillion. At these levels, collection, isol...
The hidden KPI registration accuracy.
Shorrosh, Paul
2011-09-01
Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually. PMID:21923052
Psychology Textbooks: Examining Their Accuracy
ERIC Educational Resources Information Center
Steuer, Faye B.; Ham, K. Whitfield, II
2008-01-01
Sales figures and recollections of psychologists indicate textbooks play a central role in psychology students' education, yet instructors typically must select texts under time pressure and with incomplete information. Although selection aids are available, none adequately address the accuracy of texts. We describe a technique for sampling…
ERIC Educational Resources Information Center
Soltesz, Fruzsina; Goswami, Usha; White, Sonia; Szucs, Denes
2011-01-01
Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the…
NASA Astrophysics Data System (ADS)
Yang, Ping; Feng, Xue-Wen; Liang, Wen-Jun; Wu, Kai-Su
2015-02-01
The main aim of this paper is to investigate numerical solutions of the inverse black body radiation problem, which is ill-posed. Using Gauss-Laguerre quadrature, a high-accuracy numerical integration formula requiring few nodes, to approximate the integral term of the black body radiation equation, the equation is converted into a low-dimensional system of algebraic equations. Solving this system only requires standard Tikhonov regularization, with the regularization parameter chosen by the L-curve method. Our method reduces the complexity of the algorithm and is thus easier to apply. Numerical results show that the algorithm is simple and effective while also achieving good calculation accuracy.
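The discretize-then-regularize recipe can be sketched on a toy kernel (not the Planck kernel of the paper): a first-kind Fredholm equation g(s) = ∫₀^∞ K(s,t) f(t) dt is sampled with Gauss-Laguerre quadrature, and the resulting small, ill-conditioned linear system is solved by zeroth-order Tikhonov regularization. The paper picks the regularization parameter by the L-curve; here it is simply fixed.

```python
import numpy as np

n = 16
t, w = np.polynomial.laguerre.laggauss(n)   # nodes/weights for int_0^inf e^{-t} h(t) dt

s = np.linspace(0.05, 1.0, 20)
K = np.exp(-np.outer(s, t))                 # toy kernel K(s, t) = exp(-s t)
A = K * (w * np.exp(t))                     # e^{t} factor undoes the Laguerre weight

f_true = np.exp(-t)                         # true "source" sampled at the nodes
g = A @ f_true                              # synthetic data; analytically 1/(1+s)

lam = 1e-8                                  # fixed here; L-curve in the paper
f_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)
resid = np.linalg.norm(A @ f_rec - g) / np.linalg.norm(g)   # small data misfit
```

The quadrature reduces the integral equation to a 20x16 matrix problem, after which the Tikhonov solve is a single regularized normal-equations step.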
Wang, Heng; Wu, Jianan; Zhuo, Zihan; Tang, Jintian
2016-04-29
In order to ensure the safety and effectiveness of magnetic induction hyperthermia in clinical applications, numerical simulations on the temperature distributions and extent of thermal damage to the targeted regions must be conducted in the preoperative treatment planning system. In this paper, three models, including a thermoseed thermogenesis model, tissue heat transfer model, and tissue thermal damage model, were established based on the four-dimensional energy field, temperature field, and thermal damage field distributions exhibited during hyperthermia. In addition, a numerical simulation study was conducted using the Finite Volume Method (FVM), and the accuracy and reliability of the magnetic induction hyperthermia model and its numerical calculations were verified using computer simulations and experimental results. Thus, this study promoted the application of computing methods to magnetic induction therapy and conformal hyperthermia, and improved the accuracy of the temperature field and tissue thermal damage distribution predictions. PMID:27198462
Improved accuracies for satellite tracking
NASA Technical Reports Server (NTRS)
Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.
1991-01-01
A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so that the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations with an rms error of 0.5 arc second, or 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight to such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple-station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.
MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS
Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...
Spatial and numerical processing in children with high and low visuospatial abilities.
Crollen, Virginie; Noël, Marie-Pascale
2015-04-01
In the literature on numerical cognition, a strong association between numbers and space has been repeatedly demonstrated. However, only a few recent studies have been devoted to examine the consequences of low visuospatial abilities on calculation processing. In this study, we wanted to investigate whether visuospatial weakness may affect pure spatial processing as well as basic numerical reasoning. To do so, the performances of children with high and low visuospatial abilities were directly compared on different spatial tasks (the line bisection and Simon tasks) and numerical tasks (the number bisection, number-to-position, and numerical comparison tasks). Children from the low visuospatial group presented the classic Simon and SNARC (spatial numerical association of response codes) effects but showed larger deviation errors as compared with the high visuospatial group. Our results, therefore, demonstrated that low visuospatial abilities did not change the nature of the mental number line but rather led to a decrease in its accuracy. PMID:25618380
Chang, Hung-Tzu; Cheng, Yuan-Chung; Zhang, Pan-Pan
2013-12-14
The small polaron quantum master equation (SPQME) proposed by Jang et al. [J. Chem. Phys. 129, 101104 (2008)] is a promising approach to describe coherent excitation energy transfer dynamics in complex molecular systems. To determine the applicable regime of the SPQME approach, we perform a comprehensive investigation of its accuracy by comparing its simulated population dynamics with numerically exact quasi-adiabatic path integral calculations. We demonstrate that the SPQME method yields accurate dynamics in a wide parameter range. Furthermore, our results show that the accuracy of polaron theory depends strongly upon the degree of exciton delocalization and timescale of polaron formation. Finally, we propose a simple criterion to assess the applicability of the SPQME theory that ensures the reliability of practical simulations of energy transfer dynamics with SPQME in light-harvesting systems.
On the accuracy of the Padé-resummed master equation approach to dissipative quantum dynamics
NASA Astrophysics Data System (ADS)
Chen, Hsing-Ta; Berkelbach, Timothy C.; Reichman, David R.
2016-04-01
Well-defined criteria are proposed for assessing the accuracy of quantum master equations whose memory functions are approximated by Padé resummation of the first two moments in the electronic coupling. These criteria partition the parameter space into distinct levels of expected accuracy, ranging from quantitatively accurate regimes to regions of parameter space where the approach is not expected to be applicable. Extensive comparison of Padé-resummed master equations with numerically exact results in the context of the spin-boson model demonstrates that the proposed criteria correctly demarcate the regions of parameter space where the Padé approximation is reliable. The applicability analysis we present is not confined to the specifics of the Hamiltonian under consideration and should provide guidelines for other classes of resummation techniques. PMID:27389208
Accuracy of a bistatic scattering substitution technique for calibration of focused receivers
Rich, Kyle T.; Mast, T. Douglas
2015-01-01
A recent method for calibrating single-element, focused passive cavitation detectors (PCD) compares bistatic scattering measurements by the PCD and a reference hydrophone. Here, effects of scatterer properties and PCD size on frequency-dependent receive calibration accuracy are investigated. Simulated scattering from silica and polystyrene spheres was compared for small hydrophone and spherically focused PCD receivers to assess the achievable calibration accuracy as a function of frequency, scatterer size, and PCD size. Good agreement between measurements was found when the scatterer diameter was sufficiently smaller than the focal beamwidth of the PCD; this relationship was dependent on the scatterer material. For conditions that result in significant disagreement between measurements, the numerical methods described here can be used to correct experimental calibrations. PMID:26627816
A novel ZePoC encoder for sinusoidal signals with a predictable accuracy for an AC power standard
NASA Astrophysics Data System (ADS)
Vennemann, T.; Frye, T.; Liu, Z.; Kahmann, M.; Mathis, W.
2015-11-01
In this paper we present an analytical formulation of a Zero Position Coding (ZePoC) encoder for an AC power standard based on class-D topologies. For controlling a class-D power stage, a binary signal with special spectral characteristics will be generated by this ZePoC encoder for sinusoidal signals. These spectral characteristics have a predictable accuracy within a separated baseband to keep the noise floor below a specified level. Simulation results will validate the accuracy of this novel ZePoC encoder. For a real-time implementation of the encoder on a DSP/FPGA hardware architecture, a trade-off between accuracy and speed of the ZePoC algorithm has to be made; therefore, the numerical effects of different floating-point formats will be analyzed.
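The floating-point trade-off can be illustrated with a synthetic experiment (purely illustrative, not the paper's encoder): the arithmetic noise floor that a reduced-precision pipeline leaves on a unit sinusoid, which is the kind of accuracy-versus-speed question a DSP/FPGA realization must weigh.

```python
import numpy as np

n = 1 << 14
t64 = np.arange(n) / n
x64 = np.sin(2 * np.pi * 64 * t64)                       # float64 reference tone
x32 = np.sin((2 * np.pi * 64 * t64).astype(np.float32))  # float32 pipeline

err = x64 - x32.astype(np.float64)                       # precision-induced error
floor_db = 20 * np.log10(np.max(np.abs(err)))            # well above the float64 floor
```

For this tone the single-precision error sits somewhere around the -90 dB region, orders of magnitude above double precision; whether that is acceptable depends on the noise-floor specification of the separated baseband.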
The numerical analysis of a turbulent compressible jet
NASA Astrophysics Data System (ADS)
Debonis, James Raymond
2000-10-01
A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation (LES) was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed, including a total energy form of the energy equation. Sub-grid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems, and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions, and sub-grid scale modeling on the solution, and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that the LES accurately captures the physics of the turbulent flow. The agreement with experimental data is relatively good and much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic, and this information could lend itself to the development of improved sub-grid scale models for LES and turbulence models for RANS simulations. A two-point correlation technique was used to quantify the turbulent structures. Two-point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately one-half the jet diameter Dj. Two-point space-time correlations were used to obtain the convection velocity of the turbulent structures, which ranged from 0.57 to 0.71 Uj.
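The two-point space-time correlation technique mentioned above can be sketched on synthetic signals (not the jet LES data): a structure convecting at speed Uc reaches the downstream probe after a delay dx/Uc, so the lag that maximizes the cross-correlation between two probes yields the convection velocity.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-4, 4000
# Smoothed white noise stands in for a turbulent probe signal:
base = np.convolve(rng.normal(size=n), np.ones(10) / 10, mode="same")

dx, lag = 0.01, 2                       # probe separation (m), delay in samples
Uc_true = dx / (lag * dt)               # 50 m/s by construction
up = base[lag:]                         # upstream probe
down = base[:-lag]                      # downstream probe sees the signal later

c = np.correlate(down - down.mean(), up - up.mean(), mode="full")
best_lag = int(c.argmax()) - (len(up) - 1)
Uc_est = dx / (best_lag * dt)           # recovered convection velocity
```

The same argmax-of-correlation idea, applied to velocity probes separated in the streamwise direction, gives the 0.57-0.71 Uj range quoted in the abstract.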
NASA Technical Reports Server (NTRS)
Chakravarthy, S. R.; Osher, S.
1985-01-01
A new family of high accuracy Total Variation Diminishing (TVD) schemes has been developed. Members of the family include the conventional second-order TVD upwind scheme, various other second-order accurate TVD schemes with lower truncation error, and even a third-order accurate TVD approximation. All the schemes are defined with a five-point grid bandwidth. In this paper, the new algorithms are described for scalar equations, systems, and arbitrary coordinates. Selected numerical results are provided to illustrate the new algorithms and their properties.
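A minimal member of the TVD family discussed above can be sketched as follows. This is a standard minmod-limited, second-order upwind step from the textbook literature (an assumption, not necessarily one of the paper's new schemes); the defining property is that the total variation of the solution never grows, so no spurious oscillations appear at discontinuities.

```python
import numpy as np

def minmod(p, q):
    """Minmod slope limiter: the smaller-magnitude argument when signs agree, else 0."""
    return np.where(p * q > 0.0, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

def tvd_step(u, nu):
    """One TVD update for u_t + a u_x = 0 (a > 0), Courant number nu = a dt/dx
    in (0, 1], on a periodic grid."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slope
    face = u + 0.5 * (1.0 - nu) * slope                     # interface value at i+1/2
    return u - nu * (face - np.roll(face, 1))

def total_variation(u):
    return np.sum(np.abs(np.roll(u, -1) - u))

u = np.where(np.arange(200) < 100, 1.0, 0.0)    # step profile, TV = 2
tv0 = total_variation(u)
for _ in range(100):
    u = tvd_step(u, 0.5)
tv1 = total_variation(u)                         # still <= 2: no new extrema
```

With the limiter switched off the scheme reduces to Lax-Wendroff and would overshoot at the step; the minmod limiter is what enforces the TVD property.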
Do saccharide doped PAGAT dosimeters increase accuracy?
NASA Astrophysics Data System (ADS)
Berndt, B.; Skyt, P. S.; Holloway, L.; Hill, R.; Sankar, A.; De Deene, Y.
2015-01-01
To improve the dosimetric accuracy of normoxic polyacrylamide gelatin (PAGAT) gel dosimeters, the addition of saccharides (glucose and sucrose) has been suggested. An increase in R2-response sensitivity upon irradiation will result in smaller uncertainties in the derived dose if all other uncertainties are conserved. However, temperature variations during the magnetic resonance scanning of polymer gels result in one of the highest contributions to dosimetric uncertainties. The purpose of this project was to study the dose sensitivity against the temperature sensitivity. The overall dose uncertainty of PAGAT gel dosimeters with different concentrations of saccharides (0, 10 and 20%) was investigated. For high concentrations of glucose or sucrose, a clear improvement of the dose sensitivity was observed. For doses up to 6 Gy, the overall dose uncertainty was reduced up to 0.3 Gy for all saccharide loaded gels compared to PAGAT gel. Higher concentrations of glucose and sucrose deteriorate the accuracy of PAGAT dosimeters for doses above 9 Gy.
Empirical Accuracies of U.S. Space Surveillance Network Reentry Predictions
NASA Technical Reports Server (NTRS)
Johnson, Nicholas L.
2008-01-01
The U.S. Space Surveillance Network (SSN) issues formal satellite reentry predictions for objects which have the potential for generating debris which could pose a hazard to people or property on Earth. These prognostications, known as Tracking and Impact Prediction (TIP) messages, are nominally distributed at daily intervals beginning four days prior to the anticipated reentry and several times during the final 24 hours in orbit. The accuracy of these messages depends on the nature of the satellite's orbit, the characteristics of the space vehicle, solar activity, and many other factors. Despite the many influences on the time and the location of reentry, a useful assessment of the accuracies of TIP messages can be derived and compared with the official accuracies included with each TIP message. This paper summarizes the results of a study of numerous uncontrolled reentries of spacecraft and rocket bodies from nearly circular orbits over a span of several years. Insights are provided into the empirical accuracies and utility of SSN TIP messages.
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
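The third-order Runge-Kutta discussion above can be illustrated with Kutta's classical three-stage method and an empirical order check (a generic textbook scheme, assumed here; not necessarily one of the report's five derived examples): halving the step size should reduce the global error by roughly a factor of eight.

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h, y + h * (-k1 + 2 * k2))
    return y + h * (k1 + 4 * k2 + k3) / 6

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n uniform RK3 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# Empirical order check on y' = -y, y(0) = 1, exact y(1) = e^{-1}:
f = lambda t, y: -y
err_h  = abs(integrate(f, 0.0, 1.0, 1.0, 50)  - math.exp(-1))
err_h2 = abs(integrate(f, 0.0, 1.0, 1.0, 100) - math.exp(-1))
order = math.log2(err_h / err_h2)    # ~3 for a third-order scheme
```

The same error-halving experiment is the standard way to confirm that a derived Runge-Kutta tableau actually achieves its design order.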
Differential effects of self-monitoring attention, accuracy, and productivity.
Maag, J W; Reid, R; DiGangi, S A
1993-01-01
Effects of self-monitoring on-task behavior, academic productivity, and academic accuracy were assessed with 6 elementary-school students with learning disabilities in their general education classroom using a mathematics task. Following baseline, the three self-monitoring conditions were introduced using a multiple schedule design during independent practice sessions. Although all three interventions yielded improvements in either arithmetic productivity, accuracy, or on-task behavior, self-monitoring academic productivity or accuracy was generally superior. Differential results were obtained across age groups: fourth graders' mathematics performance improved most when self-monitoring productivity, whereas sixth graders' performance improved most when self-monitoring accuracy. PMID:8407682
Thermocouple Calibration and Accuracy in a Materials Testing Laboratory
NASA Technical Reports Server (NTRS)
Lerch, B. A.; Nathal, M. V.; Keller, D. J.
2002-01-01
A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturers' tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the gain in accuracy can be as great as 6 °C, roughly a fivefold improvement over relying on manufacturers' tolerances. The results emphasize strict adherence to the defined testing protocol and the need to establish recalibration frequencies in order to maintain these levels of accuracy.
On the Accuracy of Genomic Selection
Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte
2016-01-01
Genomic selection is focused on prediction of breeding values of selection candidates by means of high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configurations of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
COMPARING NUMERICAL METHODS FOR ISOTHERMAL MAGNETIZED SUPERSONIC TURBULENCE
Kritsuk, Alexei G.; Collins, David; Norman, Michael L.; Xu Hao E-mail: dccollins@lanl.gov
2011-08-10
Many astrophysical applications involve magnetized turbulent flows with shock waves. Ab initio star formation simulations require a robust representation of supersonic turbulence in molecular clouds on a wide range of scales imposing stringent demands on the quality of numerical algorithms. We employ simulations of supersonic super-Alfvenic turbulence decay as a benchmark test problem to assess and compare the performance of nine popular astrophysical MHD methods actively used to model star formation. The set of nine codes includes: ENZO, FLASH, KT-MHD, LL-MHD, PLUTO, PPML, RAMSES, STAGGER, and ZEUS. These applications employ a variety of numerical approaches, including both split and unsplit, finite difference and finite volume, divergence preserving and divergence cleaning, a variety of Riemann solvers, and a range of spatial reconstruction and time integration techniques. We present a comprehensive set of statistical measures designed to quantify the effects of numerical dissipation in these MHD solvers. We compare power spectra for basic fields to determine the effective spectral bandwidth of the methods and rank them based on their relative effective Reynolds numbers. We also compare numerical dissipation for solenoidal and dilatational velocity components to check for possible impacts of the numerics on small-scale density statistics. Finally, we discuss the convergence of various characteristics for the turbulence decay test and the impact of various components of numerical schemes on the accuracy of solutions. The nine codes gave qualitatively the same results, implying that they are all performing reasonably well and are useful for scientific applications. We show that the best performing codes employ a consistently high order of accuracy for spatial reconstruction of the evolved fields, transverse gradient interpolation, conservation law update step, and Lorentz force computation. The best results are achieved with divergence-free evolution of the
NASA Astrophysics Data System (ADS)
Bokhove, H.
The High Accuracy Sun Sensor (HASS) is described, concentrating on measurement principle, the CCD detector used, the construction of the sensorhead and the operation of the sensor electronics. Tests on a development model show that the main aim of a 0.01-arcsec rms stability over a 10-minute period is closely approached. Remaining problem areas are associated with the sensor sensitivity to illumination level variations, the shielding of the detector, and the test and calibration equipment.
Naulleau, P P; Goldberg, K A; Lee, S H; Chang, C; Attwood, D; Bokor, J
1999-12-11
The phase-shifting point-diffraction interferometer (PS/PDI) was recently developed and implemented at Lawrence Berkeley National Laboratory to characterize extreme-ultraviolet (EUV) projection optical systems for lithography. Here we quantitatively characterize the accuracy and precision of the PS/PDI. Experimental measurements are compared with theoretical results. Two major classes of errors affect the accuracy of the interferometer: systematic effects arising from measurement geometry and systematic and random errors due to an imperfect reference wave. To characterize these effects, and hence to calibrate the interferometer, a null test is used. This null test also serves as a measure of the accuracy of the interferometer. We show the EUV PS/PDI, as currently implemented, to have a systematic error-limited reference-wave accuracy of 0.0028 waves (lambda/357 or 0.038 nm at lambda = 13.5 nm) within a numerical aperture of 0.082. PMID:18324274
Estimating Classification Accuracy for Complex Decision Rules Based on Multiple Scores
ERIC Educational Resources Information Center
Douglas, Karen M.; Mislevy, Robert J.
2010-01-01
Important decisions about students are made by combining multiple measures using complex decision rules. Although methods for characterizing the accuracy of decisions based on a single measure have been suggested by numerous researchers, such methods are not useful for estimating the accuracy of decisions based on multiple measures. This study…
Numerical Simulation of Time-Dependent Wave Propagation Using Nonreflective Boundary Conditions
NASA Astrophysics Data System (ADS)
Ionescu, D.; Muehlhaus, H.
2003-12-01
Solving numerically the wave equation for modelling wave propagation on an unbounded domain with complex geometry requires a truncation of the domain, to fit the infinite region on a finite computer. Minimizing the amount of spurious reflections requires in many cases the introduction of an artificial boundary and of associated nonreflecting boundary conditions. Here, a question arises, namely which boundary condition guarantees that the solution of the time dependent problem inside the artificial boundary coincides with the solution of the original problem in the infinite region. Recent investigations have shown that the accuracy and performance of numerical algorithms and the interpretation of the results critically depend on the proper treatment of external boundaries. Despite the computational speed of finite difference schemes and the robustness of finite elements in handling complex geometries the resulting numerical error consists of two independent contributions: the discretization error of the numerical method used and the spurious reflection generated at the artificial boundary. This spurious contribution travels back and substantially degrades the accuracy of the solution everywhere in the computational domain. Unless both error components are reduced systematically, the numerical solution does not converge to the solution of the original problem in the infinite region. In the present study we present and discuss absorbing boundary condition techniques for the time-dependent scalar wave equation in three spatial dimensions. In particular, exact conditions that annihilate wave harmonics on a spherical artificial boundary up to a given order are obtained and subsequently applied in numerical simulations by employing a finite differences implementation.
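The effect described in this abstract can be sketched in a much simpler setting than the paper's 3-D spherical-harmonic conditions: a 1-D finite-difference wave equation with a first-order one-way (Mur-type) absorbing condition at the edges versus rigid walls. All grid parameters below are assumptions made for the illustration.

```python
import numpy as np

def propagate(n=400, steps=600, c=1.0, absorbing=True):
    """1-D wave equation on [0, 1]: leapfrog in time, centred in space,
    with either first-order (Mur-type) absorbing or rigid boundaries."""
    dx = 1.0 / n
    dt = 0.9 * dx / c                            # CFL-stable time step
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.exp(-((x - 0.5) / 0.05) ** 2)         # Gaussian pulse, zero velocity
    u_prev = u.copy()
    r = (c * dt / dx) ** 2
    k = (c * dt - dx) / (c * dt + dx)            # Mur boundary coefficient
    for _ in range(steps):
        u_next = np.empty_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r * (u[2:] - 2 * u[1:-1] + u[:-2]))
        if absorbing:
            # discrete one-way conditions u_t -+ c u_x = 0 at each edge
            u_next[0] = u[1] + k * (u_next[1] - u[0])
            u_next[-1] = u[-2] + k * (u_next[-2] - u[-1])
        else:
            u_next[0] = u_next[-1] = 0.0         # rigid, fully reflecting walls
        u_prev, u = u, u_next
    return u

# After the pulse has had time to leave the domain, the absorbing run should
# retain far less amplitude than the reflecting run.
residual_abs = np.abs(propagate(absorbing=True)).max()
residual_ref = np.abs(propagate(absorbing=False)).max()
```

The residual left by the absorbing condition is exactly the "spurious reflection" error component the abstract distinguishes from the discretization error; higher-order conditions shrink it further.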
Accuracy of Reduced and Extended Thin-Wire Kernels
Burke, G J
2008-11-24
Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function and results are shown for simple wire structures.
Positional Accuracy Assessment of Googleearth in Riyadh
NASA Astrophysics Data System (ADS)
Farah, Ashraf; Algarni, Dafer
2014-06-01
Google Earth is a virtual globe, map and geographical information program operated by Google. It maps the Earth by superimposing images obtained from satellite imagery, aerial photography and GIS onto a 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m for the horizontal coordinates and 1.51 m for the height coordinates.
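For illustration, horizontal and height RMSE figures like those quoted above can be computed from coordinate differences at surveyed check points. The offsets below are invented for the sketch and are not the study's data.

```python
import numpy as np

# Hypothetical check-point offsets (metres) between GoogleEarth-derived and
# surveyed coordinates: columns are easting, northing and height differences.
offsets = np.array([
    [ 1.2, -0.8,  0.5],
    [-2.1,  1.5, -1.0],
    [ 0.4,  2.2,  1.8],
    [-1.0, -1.6,  0.3],
    [ 2.5,  0.9, -0.7],
])
d_e, d_n, d_h = offsets.T
rmse_horizontal = np.sqrt(np.mean(d_e**2 + d_n**2))   # ~2.21 m for this data
rmse_height = np.sqrt(np.mean(d_h**2))                # ~1.01 m for this data
```

Note that the horizontal figure combines both plan components in a single radial error, which is the usual convention in positional accuracy standards.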
Accuracy control in Monte Carlo radiative calculations
NASA Technical Reports Server (NTRS)
Almazan, P. Planas
1993-01-01
The general accuracy laws that govern the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft are presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is then applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
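A minimal sketch of the single-source case: a configuration factor from a differential planar element to a coaxial parallel disk (analytic value R²/(R²+H²)) estimated by cosine-weighted diffuse ray sampling, whose relative error shrinks as 1/sqrt(N). The geometry is an assumption for the illustration, not the paper's test case.

```python
import numpy as np

def config_factor_mc(R, H, n_rays, seed=0):
    """Monte Carlo configuration factor from a differential planar element
    to a coaxial parallel disk of radius R at distance H."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_rays)
    # cosine-weighted diffuse emission makes sin^2(theta) uniform on [0, 1)
    tan2_theta = u / (1.0 - u)                   # tan^2 of the polar angle
    hits = tan2_theta <= (R / H) ** 2            # ray lands inside the disk
    return hits.mean()

exact = 1.0**2 / (1.0**2 + 1.0**2)               # analytic value: 0.5
estimate = config_factor_mc(1.0, 1.0, 100_000)
# The hit fraction is a binomial estimate, so its relative error scales as
# sqrt((1 - F) / (F * n_rays)) -- the kind of accuracy law the paper derives.
```

Halving the target relative error therefore costs four times as many rays, which is why a priori accuracy control matters for large thermal models.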
Accuracy of forecasts in strategic intelligence
Mandel, David R.; Barnes, Alan
2014-01-01
The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of the forecasts were very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4–0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement. PMID:25024176
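Calibration and discrimination of probabilistic forecasts are commonly quantified via the Murphy decomposition of the Brier score. The sketch below, with invented toy forecasts, shows one standard way to compute these quantities; it is not the paper's exact procedure.

```python
import numpy as np

def brier_decomposition(probs, outcomes, n_bins=10):
    """Murphy decomposition of the Brier score into reliability
    (miscalibration), resolution (discrimination) and outcome uncertainty."""
    probs = np.asarray(probs, float)
    outcomes = np.asarray(outcomes, float)
    base_rate = outcomes.mean()
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    reliability = resolution = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            weight = mask.mean()
            p_bar, o_bar = probs[mask].mean(), outcomes[mask].mean()
            reliability += weight * (p_bar - o_bar) ** 2   # lower is better
            resolution += weight * (o_bar - base_rate) ** 2  # higher is better
    uncertainty = base_rate * (1.0 - base_rate)
    return reliability, resolution, uncertainty

# Toy forecasts: 0.8 assigned ten times with eight positive outcomes is
# perfectly calibrated within its bin, so reliability is zero.
probs = np.full(10, 0.8)
outcomes = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
rel, res, unc = brier_decomposition(probs, outcomes)
```

The identity Brier = reliability - resolution + uncertainty makes the trade-off explicit: underconfidence of the kind reported above shows up as reliability mass that recalibration can remove without touching resolution.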
Accuracy of NHANES periodontal examination protocols.
Eke, P I; Thornton-Evans, G O; Wei, L; Borgnakke, W S; Dye, B A
2010-11-01
This study evaluates the accuracy of periodontitis prevalence determined by the National Health and Nutrition Examination Survey (NHANES) partial-mouth periodontal examination protocols. True periodontitis prevalence was determined in a new convenience sample of 454 adults ≥ 35 years old, by a full-mouth "gold standard" periodontal examination. This actual prevalence was compared with prevalence resulting from analysis of the data according to the protocols of NHANES III and NHANES 2001-2004, respectively. Both NHANES protocols substantially underestimated the prevalence of periodontitis by 50% or more, depending on the periodontitis case definition used, and thus performed below threshold levels for moderate-to-high levels of validity for surveillance. Adding measurements from lingual or interproximal sites to the NHANES 2001-2004 protocol did not improve the accuracy sufficiently to reach acceptable sensitivity thresholds. These findings suggest that NHANES protocols produce high levels of misclassification of periodontitis cases and thus have low validity for surveillance and research. PMID:20858782
Piezoresistive position microsensors with ppm-accuracy
NASA Astrophysics Data System (ADS)
Stavrov, Vladimir; Shulev, Assen; Stavreva, Galina; Todorov, Vencislav
2015-05-01
In this article, the relation between position accuracy and the number of simultaneously measured values, such as coordinates, is analyzed. On this basis, a conceptual layout of MEMS devices (microsensors) for multidimensional position monitoring, comprising a single anchored part and a single actuated part, has been developed. The two parts are connected by a plurality of micromechanical flexures, and each flexure includes position-detecting cantilevers. Microsensors with detecting cantilevers oriented in the X and Y directions have been designed and prototyped. Experimentally measured results from the characterization of 1D, 2D and 3D position microsensors are reported as well. Exploiting different flexure layouts, travel ranges between 50 μm and 1.8 mm and sensor sensitivities between 30 μV/μm and 5 mV/μm at a 1 V DC supply voltage have been demonstrated. A method for the accurate calculation of all three Cartesian coordinates, based on the measurement of at least three microsensor signals, is also described. The analysis of the experimental results proves the capability of position monitoring with ppm (parts per million) accuracy. The technology for fabricating MEMS devices with sidewall-embedded piezoresistors removes restrictions on improving their usability for high-accuracy position sensing. The present study is also part of a broader strategy for developing a novel MEMS-based platform for the simultaneous, accurate measurement of various physical values transduced into a change of position.
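The described recovery of three Cartesian coordinates from at least three sensor signals can be sketched as a small linear solve. The sensing axes, sensitivity value, and displacement below are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

# Hypothetical sensing axes (unit vectors) for three microsensors; each
# signal is modelled as sensitivity times the projection of the stage
# displacement onto the sensor's axis.
axes = np.array([
    [1.0,   0.0,   0.0],             # X-oriented cantilever
    [0.0,   1.0,   0.0],             # Y-oriented cantilever
    [0.577, 0.577, 0.577],           # oblique sensor contributes Z information
])
sensitivity = 5.0e-3                 # V per micrometre (illustrative value)

true_displacement = np.array([12.0, -7.5, 3.2])        # micrometres
signals = sensitivity * (axes @ true_displacement)     # simulated readout

# Recover all three Cartesian coordinates from the three signals.
recovered = np.linalg.solve(axes, signals / sensitivity)
```

With more than three sensors the same model becomes an overdetermined least-squares fit, which is one way redundant readings can push the position estimate toward ppm-level accuracy.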