Accuracy improvement in digital holographic microtomography by multiple numerical reconstructions
NASA Astrophysics Data System (ADS)
Ma, Xichao; Xiao, Wen; Pan, Feng
2016-11-01
In this paper, we describe a method to improve the accuracy in digital holographic microtomography (DHMT) for measurement of thick samples. Two key factors impairing the accuracy, the deficiency of depth of focus and the rotational error, are considered and addressed simultaneously. The hologram is propagated to a series of distances by multiple numerical reconstructions so as to extend the depth of focus. The correction of the rotational error, implemented by numerical refocusing and image realigning, is merged into the computational process. The method is validated by tomographic results of a four-core optical fiber and a large mode optical crystal fiber. A sample as thick as 258 μm is accurately reconstructed and the quantitative three-dimensional distribution of refractive index is demonstrated.
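The multi-distance reconstruction described above can be sketched with the angular spectrum method, a standard way to numerically propagate a hologram to a series of distances. This is an illustrative sketch, not the authors' code: the hologram values, wavelength, pixel pitch, and distances below are placeholder assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function in the spatial-frequency domain; evanescent
    # components (arg <= 0) are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Reconstruct at several distances to extend the effective depth of focus,
# as in the multiple-numerical-reconstruction idea above.
hologram = np.ones((64, 64), dtype=complex)   # placeholder hologram
stack = [angular_spectrum_propagate(hologram, 0.633e-6, 2e-6, z)
         for z in np.linspace(0.0, 258e-6, 5)]
```

Each element of `stack` is the field refocused at one depth; a focus metric evaluated slice by slice would then select the in-focus plane for each part of a thick sample.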
Results from Numerical General Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.
2011-01-01
For several years numerical simulations have been revealing the details of general relativity's predictions for the dynamical interactions of merging black holes. I will review what has been learned of the rich phenomenology of these mergers and the resulting gravitational wave signatures. These wave forms provide a potentially observable record of the powerful astronomical events, a central target of gravitational wave astronomy. Asymmetric radiation can produce a thrust on the system which may accelerate the single black hole resulting from the merger to high relative velocity.
Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers
Thompson, Clarissa A.; Opfer, John E.
2016-01-01
Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688
Halo abundance matching: accuracy and conditions for numerical convergence
NASA Astrophysics Data System (ADS)
Klypin, Anatoly; Prada, Francisco; Yepes, Gustavo; Heß, Steffen; Gottlöber, Stefan
2015-03-01
Accurate predictions of the abundance and clustering of dark matter haloes play a key role in testing the standard cosmological model. Here, we investigate the accuracy of one of the leading methods of connecting the simulated dark matter haloes with observed galaxies - the halo abundance matching (HAM) technique. We show how to choose the optimal values of the mass and force resolution in large-volume N-body simulations so that they provide accurate estimates of correlation functions and circular velocities for haloes and their subhaloes - crucial ingredients of the HAM method. At the 10 per cent accuracy level, results converge for ˜50 particles for haloes and ˜150 particles for progenitors of subhaloes. In order to achieve this level of accuracy a number of conditions should be satisfied. The force resolution for the smallest resolved (sub)haloes should be in the range (0.1-0.3)rs, where rs is the scale radius of (sub)haloes. The number of particles for progenitors of subhaloes should be ˜150. We also demonstrate that two-body scattering plays a minor role in the accuracy of N-body simulations, thanks to the relatively small number of crossing times of dark matter in haloes and the limited force resolution of cosmological simulations.
NASA Astrophysics Data System (ADS)
Bailey, Brian N.
2016-07-01
When Lagrangian stochastic models for turbulent dispersion are applied to complex atmospheric flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behaviour in the numerical solution. Here we discuss numerical strategies for solving the non-linear Langevin-based particle velocity evolution equation that eliminate such unphysical behaviour in both Reynolds-averaged and large-eddy simulation applications. Extremely large or `rogue' particle velocities are caused when the numerical integration scheme becomes unstable. Such instabilities can be eliminated by using a sufficiently small integration timestep, or in cases where the required timestep is unrealistically small, an unconditionally stable implicit integration scheme can be used. When the generalized anisotropic turbulence model is used, it is critical that the input velocity covariance tensor be realizable, otherwise unphysical behaviour can become problematic regardless of the integration scheme or size of the timestep. A method is presented to ensure realizability, and thus eliminate such behaviour. It was also found that the numerical accuracy of the integration scheme determined the degree to which the second law of thermodynamics or `well-mixed condition' was satisfied. Perhaps more importantly, it also determined the degree to which modelled Eulerian particle velocity statistics matched the specified Eulerian distributions (which is the ultimate goal of the numerical solution). It is recommended that future models be verified by not only checking the well-mixed condition, but perhaps more importantly by checking that computed Eulerian statistics match the Eulerian statistics specified as inputs.
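The timestep-stability point above can be illustrated with a minimal 1-D example (not Bailey's actual turbulence model): an Ornstein-Uhlenbeck velocity equation integrated with an explicit versus a drift-implicit Euler-Maruyama step. The values of `tau`, `sigma2`, and `dt` are arbitrary illustrative choices.

```python
import numpy as np

def simulate_ou(tau, sigma2, dt, n_steps, implicit, seed=0):
    """Integrate du = -(u/tau) dt + sqrt(2*sigma2/tau) dW by Euler-Maruyama.

    The explicit drift update is unstable for dt > 2*tau (the 'rogue
    velocity' mechanism); the drift-implicit update is unconditionally
    stable.
    """
    rng = np.random.default_rng(seed)
    b = np.sqrt(2.0 * sigma2 / tau)
    u = 0.0
    out = np.empty(n_steps)
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        if implicit:
            u = (u + b * dW) / (1.0 + dt / tau)   # backward Euler in the drift
        else:
            u = u - (u / tau) * dt + b * dW       # forward Euler in the drift
        out[i] = u
    return out

# dt = 3*tau: the explicit path blows up, the implicit one stays bounded.
explicit = simulate_ou(tau=1.0, sigma2=1.0, dt=3.0, n_steps=200, implicit=False)
implicit = simulate_ou(tau=1.0, sigma2=1.0, dt=3.0, n_steps=200, implicit=True)
```

The implicit step's amplification factor is 1/(1 + dt/tau) < 1 for any positive timestep, which is why it cannot generate rogue velocities regardless of dt.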
Assessing Accuracy of Waveform Models against Numerical Relativity Waveforms
NASA Astrophysics Data System (ADS)
Pürrer, Michael; LVC Collaboration
2016-03-01
We compare currently available phenomenological and effective-one-body inspiral-merger-ringdown models for gravitational waves (GW) emitted from coalescing black hole binaries against a set of numerical relativity waveforms from the SXS collaboration. Simplifications are used in the construction of some waveform models, such as restriction to spins aligned with the orbital angular momentum, no inclusion of higher harmonics in the GW radiation, no modeling of eccentricity and the use of effective parameters to describe spin precession. In contrast, NR waveforms provide us with a high fidelity representation of the ``true'' waveform modulo small numerical errors. To focus on systematics we inject NR waveforms into zero noise for early advanced LIGO detector sensitivity at a moderately optimistic signal-to-noise ratio. We discuss where in the parameter space the above modeling assumptions lead to noticeable biases in recovered parameters.
NASA Technical Reports Server (NTRS)
VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.
2000-01-01
The tip clearance flows of transonic compressor rotors have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because, in high-speed machines, measurements acquired in sufficient detail to be useful are rare. Researchers at the NASA Glenn Research Center at Lewis Field compared measured tip clearance flow details (e.g., trajectory and radial extent) of the NASA Rotor 35 with results obtained from a numerical simulation. Previous investigations had focused on capturing the detailed development of the jetlike flow leaking through the clearance gap between the rotating blade tip and the stationary compressor shroud. However, we discovered that the simulation accuracy depends primarily on capturing the detailed development of a wall-bounded shear layer formed by the relative motion between the leakage jet and the shroud.
Numerical taxonomy on data: Experimental results
Cohen, J.; Farach, M.
1997-12-01
The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. In earlier work, the first positive result for numerical taxonomy was presented: if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T [L∞(T − D)], then it is possible to construct a tree T such that L∞(T − D) ≤ 3e; that is, a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.
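The SP heuristic itself is not reproduced in this excerpt. As a related sketch, the following computes the subdominant (single-linkage) ultrametric of a small distance matrix via minimax path distances and reports its L∞ fit error, the quantity that the 3-approximation guarantee above bounds. The distance matrix is an invented example.

```python
import numpy as np

def subdominant_ultrametric(D):
    """Largest ultrametric below D: minimax path distances over the
    complete graph (equivalent to single-linkage clustering heights)."""
    U = D.astype(float).copy()
    n = len(U)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # On a path through k, the ultrametric value is the largest
                # edge; take the minimum over all such paths.
                U[i, j] = min(U[i, j], max(U[i, k], U[k, j]))
    return U

# Small example: D is a slightly perturbed ultrametric.
D = np.array([[0, 2, 5, 6],
              [2, 0, 6, 6],
              [5, 6, 0, 3],
              [6, 6, 3, 0]], dtype=float)
U = subdominant_ultrametric(D)
err = np.max(np.abs(U - D))   # L-infinity fit error of this heuristic tree
```

Here `U` replaces the three inconsistent cross-cluster distances (5, 6, 6) by their minimax value 5, giving an L∞ error of 1.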
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which are calculated as in the previous method. Generally, a small number of arithmetic processes, resulting in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
Sheet Hydroforming Process Numerical Model Improvement Through Experimental Results Analysis
NASA Astrophysics Data System (ADS)
Gabriele, Papadia; Antonio, Del Prete; Alfredo, Anglani
2010-06-01
The increasing application of numerical simulation in the metal forming field has helped engineers to solve problems one after another to manufacture a qualified formed product in reduced time [1]. Accurate simulation results are fundamental for tooling and product design. The wide application of numerical simulation is encouraging the development of highly accurate simulation procedures to meet industrial requirements. Many factors can influence the final simulation results, and many studies have been carried out on materials [2], yield criteria [3], plastic deformation [4,5], process parameters [6] and their optimization. In order to develop a reliable hydromechanical deep drawing (HDD) numerical model, the authors have carried out specific activities based on the evaluation of the effective stiffness of the blankholder structure [7]. In this paper, after an appropriate tuning phase of the blankholder force distribution, experimental activity has been taken into account to improve the accuracy of the numerical model. In the first phase, the effective capability of the blankholder structure to transfer the load applied by the hydraulic actuators to the blank has been explored. This phase ended with the definition of an appropriate subdivision of the blankholder active surface in order to take into account the effective pressure map obtained for the given load configuration. In the second phase, the numerical results obtained with the developed subdivision have been compared with the experimental data of the studied model. The numerical model has then been improved, finding the best solution for the blankholder force distribution.
Numerical simulations of catastrophic disruption: Recent results
NASA Astrophysics Data System (ADS)
Benz, W.; Asphaug, E.; Ryan, E. V.
1994-12-01
Numerical simulations have been used to study high velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.
Prandtl's Equations: Numerical Results about Singularity Formation and a New Numerical Method
NASA Astrophysics Data System (ADS)
Puppo, Gabriella
1990-01-01
In this work, new numerical results about singularity formation for unsteady Prandtl's equations are presented. Extensive computations with a Lax Wendroff scheme for the impulsively started circular cylinder show that the gradient of the velocity becomes infinite in a finite time. The accuracy and the simplicity of the Lax Wendroff scheme allow us to couple the resolution given by second order accuracy in space with the detail of an extremely fine grid. Thus, while these computations confirm previous results about singularity formation (Van Dommelen and Shen, Cebeci, Wang), they differ in other respects. In fact the peak in the velocity gradient appears to be located upstream of the region of reversed flow and away from the zero vorticity line. Some analytic arguments are also presented to support these conclusions, independently of the computations. In the second part of this work another new numerical method to solve the unsteady Prandtl equations is proposed. This numerical scheme derives from Chorin's Vortex Sheet method. The equations are also solved with operator splitting, but, unlike Chorin's, this scheme is deterministic. This feature is achieved using a Lagrangian particle formulation for the convective step and solving the diffusion step with finite differences on an Eulerian mesh. Finally, a numerical convergence proof is presented.
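A minimal sketch of the Lax-Wendroff scheme referred to above, applied here to linear advection on a periodic grid rather than the Prandtl equations; the grid size and CFL number are illustrative choices.

```python
import numpy as np

def lax_wendroff_step(u, c):
    """One Lax-Wendroff step for u_t + a*u_x = 0; c = a*dt/dx (CFL number)."""
    up = np.roll(u, -1)   # u_{i+1} (periodic)
    um = np.roll(u, 1)    # u_{i-1} (periodic)
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2.0 * np.pi * x)
c = 0.5                   # stable for |c| <= 1
for _ in range(2 * n):    # advect the profile exactly once around the domain
    u = lax_wendroff_step(u, c)
error = np.max(np.abs(u - np.sin(2.0 * np.pi * x)))
```

After one full circuit of the periodic domain the exact solution returns to the initial sine, so `error` isolates the scheme's (second-order) dispersion and dissipation error.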
NASA Astrophysics Data System (ADS)
Zhao, Y.; Zimmermann, E.; Huisman, J. A.; Treichel, A.; Wolters, B.; van Waasen, S.; Kemna, A.
2013-08-01
Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest in accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now
Initial weather regimes as predictors of numerical 30-day mean forecast accuracy
NASA Technical Reports Server (NTRS)
Colucci, Stephen J.; Baumhefner, David P.
1992-01-01
Thirty 30-day mean 500-mb-height anomaly forecasts generated by the NCAR Community Climate Model (CCM) for the year 1978 are examined in order to determine if the forecast accuracy can be estimated with the initial conditions. The initial weather regimes were defined in such a way that the regimes could discriminate between the best and the worst 30-day mean forecasts run from the initial fields in this data set. On the basis of the CCM experiments, it is suggested that the accuracy of numerical 30-day mean forecasts may depend upon the accuracy with which the cyclones and their interactions with the planetary scale are predicted early in the forecast cycle, and that this accuracy may depend upon the initial conditions.
Maximizing the accuracy of field-derived numeric nutrient criteria in water quality regulations.
McLaughlin, Douglas B
2014-01-01
High levels of the nutrients nitrogen and phosphorus can cause unhealthy biological or ecological conditions in surface waters and prevent the attainment of their designated uses. Regulatory agencies are developing numeric criteria for these nutrients in an effort to ensure that the surface waters in their jurisdictions remain healthy and productive, and that water quality standards are met. These criteria are often derived using field measurements that relate nutrient concentrations and other water quality conditions to expected biological responses such as undesirable growth or changes in aquatic plant and animal communities. Ideally, these numeric criteria can be used to accurately "diagnose" ecosystem health and guide management decisions. However, the degree to which numeric nutrient criteria are useful for decision making depends on how accurately they reflect the status or risk of nutrient-related biological impairments. Numeric criteria that have little predictive value are not likely to be useful for managing nutrient concerns. This paper presents information on the role of numeric nutrient criteria as biological health indicators, and the potential benefits of sufficiently accurate criteria for nutrient management. In addition, it describes approaches being proposed or adopted in states such as Florida and Maine to improve the accuracy of numeric criteria and criteria-based decisions. This includes a preference for developing site-specific criteria in cases where sufficient data are available, and the use of nutrient concentration and biological response criteria together in a framework to support designated use attainment decisions. Together with systematic planning during criteria development, the accuracy of field-derived numeric nutrient criteria can be assessed and maximized as a part of an overall effort to manage nutrient water quality concerns. PMID:24123826
Poor Metacomprehension Accuracy as a Result of Inappropriate Cue Use
ERIC Educational Resources Information Center
Thiede, Keith W.; Griffin, Thomas D.; Wiley, Jennifer; Anderson, Mary C. M.
2010-01-01
Two studies attempt to determine the causes of poor metacomprehension accuracy and then, in turn, to identify interventions that circumvent these difficulties to support effective comprehension monitoring performance. The first study explored the cues that both at-risk and typical college readers use as a basis for their metacomprehension…
On the use of Numerical Weather Models for improving SAR geolocation accuracy
NASA Astrophysics Data System (ADS)
Nitti, D. O.; Chiaradia, M.; Nutricato, R.; Bovenga, F.; Refice, A.; Bruno, M. F.; Petrillo, A. F.; Guerriero, L.
2013-12-01
Precise estimation and correction of the Atmospheric Path Delay (APD) is needed to ensure sub-pixel accuracy of geocoded Synthetic Aperture Radar (SAR) products, in particular for the new generation of high resolution side-looking SAR satellite sensors (TerraSAR-X, COSMO/SkyMED). The present work aims to assess the performances of operational Numerical Weather Prediction (NWP) Models as tools to routinely estimate the APD contribution, according to the specific acquisition beam of the SAR sensor for the selected scene on ground. The Regional Atmospheric Modeling System (RAMS) has been selected for this purpose. It is a finite-difference, primitive equation, three-dimensional non-hydrostatic mesoscale model, originally developed at Colorado State University [1]. In order to appreciate the improvement in target geolocation when accounting for APD, we need to rely on the SAR sensor orbital information. In particular, TerraSAR-X data are well-suited for this experiment, since recent studies have confirmed the few centimeter accuracy of their annotated orbital records (Science level data) [2]. A consistent dataset of TerraSAR-X stripmap images (Pol.:VV; Look side: Right; Pass Direction: Ascending; Incidence Angle: 34.0÷36.6 deg) acquired in Daunia in Southern Italy has been hence selected for this study, thanks also to the availability of six trihedral corner reflectors (CR) recently installed in the area covered by the imaged scenes and properly directed towards the TerraSAR-X satellite platform. The geolocation of CR phase centers is surveyed with cm-level accuracy using differential GPS (DGPS). The results of the analysis are shown and discussed. Moreover, the quality of the APD values estimated through NWP models will be further compared to those annotated in the geolocation grid (GEOREF.xml), in order to evaluate whether annotated corrections are sufficient for sub-pixel geolocation quality or not. Finally, the analysis will be extended to a limited number of
A two-zone method with an enhanced accuracy for a numerical solution of the diffusion equation
NASA Astrophysics Data System (ADS)
Cheon, Jin-Sik; Koo, Yang-Hyun; Lee, Byung-Ho; Oh, Je-Yong; Sohn, Dong-Seong
2006-12-01
A variational principle is applied to the diffusion equation to numerically obtain the fission gas release from a spherical grain. The two-zone method, originally proposed by Matthews and Wood, is modified to overcome its insufficient accuracy for a low release. The results of the variational approaches are examined by observing the gas concentration along the grain radius. At the early stage, the concentration near the grain boundary is higher than that at the inner points of the grain in the cases of the two-zone method as well as the finite element analysis with the number of the elements at as many as 10. The accuracy of the two-zone method is considerably enhanced by relocating the nodal points of the two zones. The trial functions are derived as a function of the released fraction. During the calculations, the number of degrees of freedom needs to be reduced to guarantee physically admissible concentration profiles. Numerical verifications are performed extensively. By taking a computational time comparable to the algorithm by Forsberg and Massih, the present method provides a solution with reasonable accuracy in the whole range of the released fraction.
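For context, the benchmark such two-zone schemes are checked against is the direct numerical solution of diffusion from a spherical grain. The sketch below is not the Matthews-Wood method itself: it solves the problem by finite differences using the standard substitution u = r·c, and compares the released fraction with the exact series solution. Diffusivity, grain radius, and grid sizes are illustrative assumptions.

```python
import numpy as np

def release_fraction_fd(D, a, t_end, nr=200, nt=20000):
    """Fractional release from a sphere with uniform initial concentration
    and a perfect-sink boundary c(a)=0, via u = r*c (1-D heat equation)."""
    r = np.linspace(0.0, a, nr + 1)
    dr = r[1] - r[0]
    dt = t_end / nt
    assert D * dt / dr**2 <= 0.5        # explicit-scheme stability limit
    u = r.copy()                        # c0 = 1  =>  u = r
    u[-1] = 0.0                         # boundary condition from t = 0+
    for _ in range(nt):
        u[1:-1] += D * dt / dr**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u[0] = 0.0                      # regularity at the centre
        u[-1] = 0.0                     # perfect-sink grain boundary
    # retained amount: integral of c*r^2 dr = integral of u*r dr
    retained = np.sum(u * r) * dr * 3.0 / a**3
    return 1.0 - retained

def release_fraction_exact(D, a, t, n_terms=200):
    """Classical series solution for the released fraction."""
    n = np.arange(1, n_terms + 1)
    return 1.0 - (6.0 / np.pi**2) * np.sum(
        np.exp(-(n * np.pi / a)**2 * D * t) / n**2)

f_fd = release_fraction_fd(D=1e-2, a=1.0, t_end=1.0)
f_exact = release_fraction_exact(D=1e-2, a=1.0, t=1.0)
```

Comparing `f_fd` against `f_exact` over the whole release range is exactly the kind of verification the abstract describes for the enhanced two-zone nodal placement.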
NASA Astrophysics Data System (ADS)
Moczo, P.; Kristek, J.; Galis, M.; Chaljub, E.; Chen, X.; Zhang, Z.
2012-04-01
Numerical modeling of earthquake ground motion in sedimentary basins and valleys often has to account for the P-wave to S-wave speed ratios (VP/VS) as large as five and even larger, mainly in sediments below groundwater level. The ratio can attain values larger than 10 - the unconsolidated lake sediments in Ciudad de México are a good example. At the same time, accuracy of the numerical schemes with respect to VP/VS has not been sufficiently analyzed. The numerical schemes are often applied without adequate check of the accuracy. We present theoretical analysis and numerical comparison of 18 3D numerical time-domain explicit schemes for modeling seismic motion for their accuracy with the varying VP/VS. The schemes are based on the finite-difference, spectral-element, finite-element and discontinuous-Galerkin methods. All schemes are presented in a unified form. Theoretical analysis compares accuracy of the schemes in terms of local errors in amplitude and vector difference. In addition to the analysis we compare numerically simulated seismograms with exact solutions for canonical configurations. We compare accuracy of the schemes in terms of the local errors, grid dispersion and full wavefield simulations with respect to the structure of the numerical schemes.
NASA Astrophysics Data System (ADS)
Cannon, Kipp; Emberson, J. D.; Hanna, Chad; Keppel, Drew; Pfeiffer, Harald P.
2013-02-01
Matched filtering for the identification of compact object mergers in gravitational wave antenna data involves the comparison of the data stream to a bank of template gravitational waveforms. Typically the template bank is constructed from phenomenological waveform models, since these can be evaluated for an arbitrary choice of physical parameters. Recently it has been proposed that singular value decomposition (SVD) can be used to reduce the number of templates required for detection. As we show here, another benefit of SVD is its removal of biases from the phenomenological templates along with a corresponding improvement in their ability to represent waveform signals obtained from numerical relativity (NR) simulations. Using these ideas, we present a method that calibrates a reduced SVD basis of phenomenological waveforms against NR waveforms in order to construct a new waveform approximant with improved accuracy and faithfulness compared to the original phenomenological model. The new waveform family is given numerically through the interpolation of the projection coefficients of NR waveforms expanded onto the reduced basis and provides a generalized scheme for enhancing phenomenological models.
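The SVD reduced-basis idea can be sketched schematically. The "template bank" below is a toy family of chirp-like sinusoids, not a physical waveform model: build an orthonormal basis from the bank's SVD, then represent an off-grid signal by its projection coefficients, as in the interpolation scheme described above.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 512)

# Toy "template bank": chirp-like waveforms parametrized by a frequency slope.
bank = np.array([np.sin(2.0 * np.pi * (10.0 + s) * t * t)
                 for s in np.linspace(0.0, 5.0, 40)])

# Reduced SVD basis capturing nearly all of the bank's variance.
U, S, Vt = np.linalg.svd(bank, full_matrices=False)
k = np.searchsorted(np.cumsum(S**2) / np.sum(S**2), 0.9999) + 1
basis = Vt[:k]                          # k orthonormal basis waveforms

# Represent an off-grid waveform by its projection coefficients.
target = np.sin(2.0 * np.pi * 12.3 * t * t)
coeffs = basis @ target                 # projection coefficients
recon = coeffs @ basis                  # reduced-basis reconstruction
mismatch = 1.0 - (recon @ target) / (np.linalg.norm(recon) *
                                     np.linalg.norm(target))
```

Interpolating `coeffs` over the physical parameters, rather than storing full waveforms, is the mechanism by which the calibrated approximant in the abstract is evaluated at arbitrary parameter values.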
Propagation of MHD disturbance in numerical modelling: Accuracy issues and condition
NASA Astrophysics Data System (ADS)
Kim, Kyung-Im; Lee, Dong-Hun; Jang, Jae-Jin; Kim, Jung-Hoon; Kim, Jaehun
2016-07-01
In space weather studies, MHD numerical models are often used for time-dependent simulations over relatively long time periods and large spatial domains, with many examples ranging from the solar origin to the Earth impact in the heliosphere. There have been rising questions on whether the many different numerical codes are consistent with each other and how we can confirm the validity of simulation results for a given event. In this study, we first introduce a class of exact analytic solutions of MHD when the boundary is driven by certain impulsive impacts. Second, we test and compare MHD numerical models against this exact full MHD solution to check whether the simulations are sufficiently accurate. Our results show 1) that numerical errors are very significant in problems of MHD disturbance propagation in interplanetary space, 2) that typical spatial and temporal resolutions, which are widely used in numerical modelling, can easily produce errors of a few hours up to 10 hours in arrival timing at near-Earth space, and 3) how we can avoid serious errors by optimizing the model parameters in advance via study with an exact solution.
Improved Accuracy of the Gravity Probe B Science Results
NASA Astrophysics Data System (ADS)
Conklin, John; Adams, M.; Aljadaan, A.; Aljibreen, H.; Almeshari, M.; Alsuwaidan, B.; Bencze, W.; Buchman, S.; Clarke, B.; Debra, D. B.; Everitt, C. W. F.; Heifetz, M.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lipa, J.; Lockhart, J. M.; Muhlfelder, B.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Taber, M.; Turneaure, J. P.; Worden, P. W., Jr.
This paper presents the progress in the science data analysis for the Gravity Probe B (GP-B) experiment. GP-B, sponsored by NASA and launched in April of 2004, tests two fundamental predictions of general relativity, the geodetic effect and the frame-dragging effect. The GP-B spacecraft measures the non-Newtonian drift rates of four ultra-precise cryogenic gyroscopes placed in a circular polar Low Earth Orbit. Science data was collected from 28 August 2004 until cryogen depletion on 29 September 2005. The data analysis is complicated by two unexpected phenomena, a) a continually damping gyroscope polhode affecting the calibration of the gyro readout scale factor, and b) two larger than expected classes of Newtonian torque acting on the gyroscopes. Experimental evidence strongly suggests that both effects are caused by non-uniform electric potentials (i.e. the patch effect) on the surfaces of the gyroscope rotor and its housing. At the end of 2008, the data analysis team reported intermediate results showing that the two complications are well understood and are separable from the relativity signal. Since then we have developed the final GP-B data analysis code, the "2-second Filter", which provides the most accurate and precise determination of the non-Newtonian drifts attainable in the presence of the two Newtonian torques and the fundamental instrument noise. This limit is roughly 5
NASA Astrophysics Data System (ADS)
Ko, P.; Kurosawa, S.
2014-03-01
The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
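The paper's modified Rayleigh-Plesset model is not given in this excerpt. As a baseline sketch, the classical inviscid Rayleigh collapse of an empty cavity can be integrated directly and its collapse time checked against the analytical Rayleigh value; the fluid properties below are nominal values for water under 1 atm driving pressure.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, dp, R0 = 1000.0, 101325.0, 1e-3   # water, 1 atm pressure difference, 1 mm bubble

def rayleigh(t, y):
    """Inviscid Rayleigh collapse of an empty cavity:
       R*R'' + 1.5*R'^2 = -dp/rho (no viscosity, no surface tension)."""
    R, Rdot = y
    return [Rdot, (-dp / rho - 1.5 * Rdot**2) / R]

def collapsed(t, y):
    return y[0] - 0.01 * R0             # stop when R falls to 1% of R0
collapsed.terminal = True

sol = solve_ivp(rayleigh, [0.0, 1.0], [R0, 0.0], events=collapsed,
                rtol=1e-10, atol=1e-14, max_step=1e-6)
t_collapse = sol.t_events[0][0]
t_rayleigh = 0.91468 * R0 * np.sqrt(rho / dp)   # classical Rayleigh collapse time
```

Because the interface velocity diverges as R approaches zero, the time to reach 1% of the initial radius is already indistinguishable from the full Rayleigh collapse time, which makes this a convenient verification target for cavitation solvers.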
Feller, David; Peterson, Kirk A
2007-03-21
Current limitations in electronic structure methods are discussed from the perspective of their potential to contribute to inherent uncertainties in predictions of molecular properties, with an emphasis on atomization energies (or heats of formation). The practical difficulties arising from attempts to achieve high accuracy are illustrated via two case studies: the carbon dimer (C2) and the hydroperoxyl radical (HO2). While the HO2 wave function is dominated by a single configuration, the carbon dimer involves considerable multiconfigurational character. In addition to these two molecules, statistical results will be presented for a much larger sample of molecules drawn from the Computational Results Database. The goal of this analysis will be to determine if a combination of coupled cluster theory with large 1-particle basis sets and careful incorporation of several computationally expensive smaller corrections can yield uniform agreement with experiment to better than "chemical accuracy" (+/-1 kcal/mol). In the case of HO2, the best current theoretical estimate of the zero-point-inclusive, spin-orbit corrected atomization energy (SigmaD0=166.0+/-0.3 kcal/mol) and the most recent Active Thermochemical Table (ATcT) value (165.97+/-0.06 kcal/mol) are in excellent agreement. For C2 the agreement is only slightly poorer, with theory (D0=143.7+/-0.3 kcal/mol) almost encompassing the most recent ATcT value (144.03+/-0.13 kcal/mol). For a larger collection of 68 molecules, a mean absolute deviation of 0.3 kcal/mol was found. The same high level of theory that produces good agreement for atomization energies also appears capable of predicting bond lengths to an accuracy of +/-0.001 A. PMID:17381194
NASA Astrophysics Data System (ADS)
Feller, David; Peterson, Kirk A.
2007-03-01
Current limitations in electronic structure methods are discussed from the perspective of their potential to contribute to inherent uncertainties in predictions of molecular properties, with an emphasis on atomization energies (or heats of formation). The practical difficulties arising from attempts to achieve high accuracy are illustrated via two case studies: the carbon dimer (C2) and the hydroperoxyl radical (HO2). While the HO2 wave function is dominated by a single configuration, the carbon dimer involves considerable multiconfigurational character. In addition to these two molecules, statistical results will be presented for a much larger sample of molecules drawn from the Computational Results Database. The goal of this analysis will be to determine if a combination of coupled cluster theory with large 1-particle basis sets and careful incorporation of several computationally expensive smaller corrections can yield uniform agreement with experiment to better than "chemical accuracy" (±1 kcal/mol). In the case of HO2, the best current theoretical estimate of the zero-point-inclusive, spin-orbit corrected atomization energy (ΣD0 = 166.0 ± 0.3 kcal/mol) and the most recent Active Thermochemical Table (ATcT) value (165.97 ± 0.06 kcal/mol) are in excellent agreement. For C2 the agreement is only slightly poorer, with theory (D0 = 143.7 ± 0.3 kcal/mol) almost encompassing the most recent ATcT value (144.03 ± 0.13 kcal/mol). For a larger collection of 68 molecules, a mean absolute deviation of 0.3 kcal/mol was found. The same high level of theory that produces good agreement for atomization energies also appears capable of predicting bond lengths to an accuracy of ±0.001 Å.
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracies respectively, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and MACCC (maximum absolute value of cross correlation coefficient), derived from cross correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. In order to apply this proposed method to select appropriate element sizes and time steps, a specialized 2D-FEM program combined with the proposed method is developed. Then, proper element sizes for different element types and proper time steps for different time integration schemes are selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
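The shape index described in this abstract can be sketched in a few lines. The following fragment computes a maximum absolute normalized cross-correlation coefficient between a reference waveform and a time-shifted copy, recovering the shift; this is a generic illustration of the idea with assumed signal parameters, not the authors' implementation.

```python
import math

def gauss_tone(n, center, width, freq):
    # Gaussian-windowed tone burst, a common surrogate for a Lamb-wave excitation.
    return [math.exp(-((i - center) / width) ** 2) * math.sin(freq * i)
            for i in range(n)]

def maccc(ref, sim, max_lag):
    # Maximum absolute normalized cross-correlation coefficient and its lag.
    nref = math.sqrt(sum(v * v for v in ref))
    nsim = math.sqrt(sum(v * v for v in sim))
    best_val, best_lag = 0.0, 0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(ref[i] * sim[i + lag]
                for i in range(len(ref)) if 0 <= i + lag < len(sim))
        c = s / (nref * nsim)
        if abs(c) > abs(best_val):
            best_val, best_lag = c, lag
    return best_val, best_lag

n = 256
ref = gauss_tone(n, 100, 15.0, 0.9)
sim = [ref[i - 20] if i >= 20 else 0.0 for i in range(n)]  # arrives 20 samples late
val, lag = maccc(ref, sim, 40)
# A pure shift yields a coefficient near 1 at the correct lag.
```

The recovered lag, converted to a time delay via the sampling interval, is the ingredient a group-velocity-error (GVE) estimate would use, given the propagation distance; the coefficient itself plays the role of the shape index (MACCC).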
NASA Astrophysics Data System (ADS)
Shimose, Ken-ichi; Ohtake, Hideaki; Fonseca, Joao Gari da Silva; Takashima, Takumi; Oozeki, Takashi; Yamada, Yoshinori
2014-10-01
The impact of aerosols on the forecast accuracy of solar irradiance calculated by a fine-scale, operational, one-day-ahead numerical weather prediction (NWP) model is investigated in this study. In order to investigate the impact of aerosols only, the clear-sky period is chosen, defined as times when there are no clouds in either the observation data or the forecast data. The evaluation of the forecast accuracy of the solar irradiance is done at a single observation point that is sometimes affected by aerosol events. The analysis period is one year, from April 2010 to March 2011. During the clear-sky period, the root mean square errors (RMSE) of the global horizontal irradiance (GHI), direct normal irradiance (DNI), and diffuse horizontal irradiance (DHI) are 40.0 W m-2, 84.0 W m-2, and 47.9 W m-2, respectively. During one extreme event, the RMSEs of the GHI, DNI, and DHI are 70.1 W m-2, 211.6 W m-2, and 141.7 W m-2, respectively. It is revealed that the extreme events were caused by aerosols such as dust or haze. In order to investigate the impact of the aerosols, sensitivity experiments on the aerosol optical depth (AOD) are performed for the extreme events. The best result is obtained by changing the AOD to 2.5 times the original AOD. This changed AOD is consistent with the satellite observation. Thus, we conclude that an accurate aerosol forecast is important for the forecast accuracy of solar irradiance.
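The RMSE figures quoted above follow the standard definition; as a reminder, a minimal sketch with made-up irradiance values (not the paper's data):

```python
import math

def rmse(forecast, observed):
    # Root mean square error between paired forecast/observation samples.
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                     / len(forecast))

ghi_forecast = [500.0, 510.0, 480.0]   # hypothetical GHI forecasts [W m-2]
ghi_observed = [490.0, 505.0, 500.0]   # hypothetical observations  [W m-2]
err = rmse(ghi_forecast, ghi_observed)   # sqrt((100 + 25 + 400) / 3)
```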
NASA Astrophysics Data System (ADS)
Ueyama, Yuki; Miyashita, Eizo
2011-06-01
Each joint is actuated by a pair of muscle groups: agonist and antagonist muscles. Simultaneous activation of agonist and antagonist muscles around a joint, called cocontraction, is thought to increase joint stiffness in order to decelerate hand speed and improve movement accuracy. However, it has not been clear how cocontraction and joint stiffness vary during movements. In this study, muscle activation and joint stiffness in reaching movements were studied under several end-point accuracy requirements using a 2-joint, 6-muscle model and approximately optimal control. Time-varying cocontraction and joint stiffness were shown by the numerical simulation study, which indicated that the strength of cocontraction and the joint stiffness increased synchronously as the required accuracy level increased. We conclude that cocontraction may increase joint stiffness to meet higher movement accuracy requirements.
Forecasting Energy Market Contracts by Ambit Processes: Empirical Study and Numerical Results
Di Persio, Luca; Marchesan, Michele
2014-01-01
In the present paper we exploit the theory of ambit processes to develop a model which is able to effectively forecast prices of forward contracts written on the Italian energy market. Both short-term and medium-term scenarios are considered, and proper calibration procedures as well as related numerical results are provided, showing a high degree of accuracy in the obtained approximations when compared with the empirical time series of interest. PMID:27437500
Gasmi, A.; Sprague, M. A.; Jonkman, J. M.; Jones, W. B.
2013-02-01
In this paper we examine the stability and accuracy of numerical algorithms for coupling time-dependent multi-physics modules relevant to computer-aided engineering (CAE) of wind turbines. This work is motivated by an in-progress major revision of FAST, the National Renewable Energy Laboratory's (NREL's) premier aero-elastic CAE simulation tool. We employ two simple examples as test systems, while algorithm descriptions are kept general. Coupled-system governing equations are framed in monolithic and partitioned representations as differential-algebraic equations. Explicit and implicit loose partition coupling is examined. In explicit coupling, partitions are advanced in time from known information. In implicit coupling, there is dependence on other-partition data at the next time step; coupling is accomplished through a predictor-corrector (PC) approach. Numerical time integration of coupled ordinary-differential equations (ODEs) is accomplished with one of three, fourth-order fixed-time-increment methods: Runge-Kutta (RK), Adams-Bashforth (AB), and Adams-Bashforth-Moulton (ABM). Through numerical experiments it is shown that explicit coupling can be dramatically less stable and less accurate than simulations performed with the monolithic system. However, PC implicit coupling restored stability and fourth-order accuracy for ABM; only second-order accuracy was achieved with RK integration. For systems without constraints, explicit time integration with AB and explicit loose coupling exhibited desired accuracy and stability.
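The fourth-order fixed-increment integrators named above can be checked for order of accuracy on a scalar test problem. The sketch below implements Adams-Bashforth 4 with an RK4 startup on y' = -y and verifies that halving the step cuts the error by roughly 2^4; this is a generic illustration under assumed test settings, not NREL's FAST coupling code.

```python
import math

def rk4_step(f, t, y, h):
    # Classical fourth-order Runge-Kutta step (used here only for startup).
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def ab4(f, y0, t0, t1, n):
    # Fourth-order Adams-Bashforth over n fixed steps, RK4 for the first three.
    h = (t1 - t0) / n
    ts = [t0 + i * h for i in range(n + 1)]
    ys = [y0]
    for i in range(3):
        ys.append(rk4_step(f, ts[i], ys[i], h))
    fs = [f(ts[i], ys[i]) for i in range(4)]
    for i in range(3, n):
        y_next = ys[i] + h / 24 * (55 * fs[i] - 59 * fs[i - 1]
                                   + 37 * fs[i - 2] - 9 * fs[i - 3])
        ys.append(y_next)
        fs.append(f(ts[i + 1], y_next))
    return ys[-1]

f = lambda t, y: -y                       # test problem with exact solution e^{-t}
err1 = abs(ab4(f, 1.0, 0.0, 1.0, 20) - math.exp(-1))
err2 = abs(ab4(f, 1.0, 0.0, 1.0, 40) - math.exp(-1))
ratio = err1 / err2                       # near 2**4 = 16 for a fourth-order method
```

The same kind of step-halving experiment is what reveals the order reduction reported in the abstract: a partitioned predictor-corrector coupling that degrades RK to second order would show an error ratio near 4 instead of 16.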
Hill, M.C.
1989-01-01
Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported.
Cardoso, Ricardo Lopes; Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli
2016-01-01
Previous research supports the claim that graphs are relevant decision aids in tasks related to the interpretation of numerical information. Moreover, the literature shows that different types of graphical information can help or harm the decision-making accuracy of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that, compared to text, column graphs enhanced decision-making accuracy, followed by line graphs. No difference was found between tabular and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample (295 financial analysts, rather than a smaller sample of students) that graphs are relevant decision aids in tasks related to the interpretation of numerical information. Second, it uses text as a baseline comparison to test how different forms of information disclosure (line and column graphs, and tables) can enhance the understandability of information. Third, it brings an internal factor into this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper, several research paths are highlighted for further study of the effect of internal factors (personal traits) on financial analysts' decision-making accuracy regarding numerical information presented in graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Kameyama, M.; Kageyama, A.
2007-12-01
Reproducing a realistic plate tectonics with mantle convection simulation is one of the greatest challenges in computational geophysics. We have developed a three dimensional Eulerian numerical procedure toward plate-mantle simulation, which includes a finite deformation of the plate in the mantle convection. Our method, combined with CIP-CSLR (Constrained Interpolation Profile method-Conservative Semi-Lagrangian advection scheme with Rational function) and ACuTE method, enables us to solve advection and force balance equations even with a large and sharp viscosity jump, which marks the interface between the plates and surrounding upper mantle materials. One of the typical phenomena represented by our method is a fluid rope coiling event, where a stream of viscous fluid is poured onto the bottom plane from a certain height. This coiling motion is due to delicate balances between bending, twisting and stretching motions of fluid rope. In the framework of the Eulerian scheme, the fluid rope and surrounding air are treated as a viscosity profile which differs by several orders of magnitude. Our method solves the complex force balances of the fluid rope and air, by a multigrid iteration technique of ACuTE algorithm. In addition, the CIP-CSLR advection scheme allows us to obtain a deforming shape of the fluid rope, as a low diffusive solution in the Eulerian frame of reference. In this presentation, we will show the simulation result of the fluid rope coiling as an accuracy test for our simulation scheme, by comparing with the simplified numerical solution for thin viscous jet.
NASA Technical Reports Server (NTRS)
Radhadrishnan, Krishnan
1993-01-01
A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
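Stiffness, the property these codes are built to handle, can be demonstrated in a few lines: on y' = -1000y, the explicit Euler method diverges at a step size where the implicit (backward) Euler method remains stable. This is a minimal sketch of the phenomenon, unrelated to the actual EPISODE/LSODE internals.

```python
# Stiff scalar test problem y' = lam * y with lam = -1000.
# Explicit Euler is stable only for h < 2/|lam| = 0.002; h = 0.01 is far outside.
lam, h, steps = -1000.0, 0.01, 10

y_explicit, y_implicit = 1.0, 1.0
for _ in range(steps):
    y_explicit = y_explicit + h * lam * y_explicit   # amplification factor 1 + h*lam = -9
    y_implicit = y_implicit / (1.0 - h * lam)        # amplification factor 1/11

# Explicit Euler blows up geometrically; backward Euler decays toward 0,
# mirroring why production kinetics solvers use implicit (BDF-type) methods.
```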
Comparisons between physical model and numerical model results
Sagasta, P.F.
1986-04-01
Physical modeling scaling laws provide the opportunity to compare results among numerical modeling programs, including two- and three-dimensional interactive-raytracing and more sophisticated wave-equation-approximation methods, and seismic data collected over a known, three-dimensional model in a water tank. The sixfold closely spaced common-midpoint water-tank data modeled for this study simulate a standard marine three-dimensional survey shot over a three-layered physical model (a structured upper layer overlying two flat layers). Using modeling theory, the physical-tank model dimensions scale to realistic exploration dimensions, and the ultrasonic frequencies scale to seismic frequencies of 2-60 Hz. A comparison of P and converted-S events and amplitudes among these physical tank data and numerical modeling results illustrates many of the advantages and limitations of modeling methods available to the exploration geophysicist. The ability of three-dimensional raytracing to model off-line events and more closely predict waveform phase due to geometric effects shows the greater usefulness of three-dimensional modeling methods over two-dimensional methods in seismic interpretation. Forward modeling of P to Sv-converted events and multiples predicts their presence in the seismic data. The geometry of the physical model leads to examples where raytracing approximations are limited and the more time-consuming finite-element technique is useful to better understand wave propagation within the physical model. All of the numerical modeling programs used show limitations in matching the amplitudes and phase of events in the physical-model seismic data.
On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology
NASA Astrophysics Data System (ADS)
Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-08-01
We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to a spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
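The noise-weighted mismatch used in this error analysis can be illustrated under the simplifying assumption of a flat (white) noise spectrum, in which case the inner product reduces to a time-domain dot product. The sketch below uses synthetic signals and omits the maximization over time and phase shifts that a real waveform comparison would include.

```python
import math

def inner(a, b):
    # White-noise (flat PSD) inner product: a plain time-domain dot product.
    # Real analyses weight by the detector noise PSD in the frequency domain.
    return sum(x * y for x, y in zip(a, b))

def mismatch(h1, h2):
    # 1 minus the normalized overlap of two waveforms.
    return 1.0 - inner(h1, h2) / math.sqrt(inner(h1, h1) * inner(h2, h2))

n, k = 64, 5
h1 = [math.sin(2 * math.pi * k * i / n) for i in range(n)]
g  = [math.cos(2 * math.pi * k * i / n) for i in range(n)]  # orthogonal perturbation
h2 = [a + 0.01 * b for a, b in zip(h1, g)]                  # slightly "wrong" waveform
m = mismatch(h1, h2)   # small positive number, roughly (0.01**2) / 2
```

An orthogonal perturbation of relative amplitude ε produces a mismatch of about ε²/2, which is why mismatches of ~3 × 10^-4 correspond to waveform differences at the few-percent level.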
NASA Astrophysics Data System (ADS)
Taylor, Charles R.; Dolloff, John T.; Lofy, Brian A.; Luker, Steve A.
2003-08-01
BAE SYSTEMS is developing a "4D Registration" capability for DARPA's Dynamic Tactical Targeting program. This will further advance our automatic image registration capability to use moving objects for image registration, and extend our current capability to include the registration of non-imaging sensors. Moving objects produce signals that are identifiable across multiple sensors such as radar moving target indicators, unattended ground sensors, and imaging sensors. Correspondences of those signals across sensor types make it possible to improve the support data accuracy for each of the sensors involved in the correspondence. The amount of accuracy improvement possible, and the effects of the accuracy improvement on geopositioning with the sensors, is a complex problem. The main factors that contribute to the complexity are the sensor-to-target geometry, the a priori sensor support data accuracy, sensor measurement accuracy, the distribution of identified objects in ground space, and the motion and motion uncertainty of the identified objects. As part of the 4D Registration effort, BAE SYSTEMS is conducting a sensitivity study to investigate the complexities and benefits of multisensor registration with moving objects. The results of the study will be summarized.
Accuracy of Student Recall of Strong Interest Inventory Results 1 Year after Interpretation.
ERIC Educational Resources Information Center
Hansen, Jo-Ida C.; And Others
1994-01-01
Examined how accurately college students (n=87) recalled information from their Strong Interest Inventory (SII) profiles one year later. Significant number of participants recalled at least one profile result, but accuracy of recall varied by type of scale and percentage of participants who first remembered something and then remembered it…
Cullum, J.
1994-12-31
Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.
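One widely quoted form of the peak/plateau relationship for such a Galerkin/norm-minimizing pair (e.g., Arnoldi-FOM/GMRES) links the two residual norms directly; the following is a sketch of that identity as it commonly appears in the literature, with assumed notation (r_k^G for the Galerkin residual, r_k^M for the norm-minimizing one), not a quotation of this paper's derivation:

```latex
\|r_k^{G}\| \;=\; \frac{\|r_k^{M}\|}{\sqrt{\,1-\left(\|r_k^{M}\|/\|r_{k-1}^{M}\|\right)^{2}\,}}
```

When the norm-minimizing residual stagnates (a plateau, with the ratio approaching 1), the denominator approaches zero and the Galerkin residual spikes (a peak), which is the qualitative correspondence the paper's numerical experiments probe.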
Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments
NASA Astrophysics Data System (ADS)
Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang
2016-06-01
Accelerator grid structural and electron backstreaming failures are the most important factors affecting an ion thruster's lifetime. During the thruster's operation, Charge Exchange Xenon (CEX) ions are generated from collisions between plasma and neutral atoms. Those CEX ions frequently bombard the accelerator grid's barrel and wall, which causes the failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies the application requirement of China's communication satellite platform for North-South Station Keeping (NSSK), this study analyzed the measured pit/groove depth on the accelerator grid's wall and the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Different from the previous method, in this paper the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster are presented first. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200. The results obtained allow a more accurate calculation of the reliability and analysis of the failure modes of the ion thruster. The results indicated that the predicted lifetime of LIPS-200 was about 13218.1 h, which satisfies the required lifetime of 11000 h very well.
The effect of accuracy, conservation and filtering on numerical weather forecasting
NASA Technical Reports Server (NTRS)
Kalnay-Rivas, E.; Hoitsma, D.
1979-01-01
Considerations leading to the numerical design of the GLAS fourth-order global atmospheric model are discussed, including changes recently introduced into the model. The computation time and memory requirements for the fourth-order model are similar to those of the present second-order GLAS model with the same 4 deg latitude, 5 deg longitude, and 9 vertical-level resolution. However, the fourth-order model forecast skill is significantly better than that of the current GLAS model, and after three days it is comparable to the 2.5 by 3 deg version of the GLAS model in the sea level pressure maps, and has less phase errors in the 500 mb maps.
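The second- versus fourth-order distinction drawn here can be made concrete with a one-dimensional differencing experiment. The sketch below (generic centered stencils, not the GLAS model's actual discretization) verifies the convergence orders by halving the grid spacing:

```python
import math

def d1_o2(f, x, h):
    # Second-order centered difference for f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_o4(f, x, h):
    # Fourth-order centered difference for f'(x).
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, exact = 1.0, math.cos(1.0)          # d/dx sin(x) = cos(x)
e2 = [abs(d1_o2(math.sin, x, h) - exact) for h in (0.1, 0.05)]
e4 = [abs(d1_o4(math.sin, x, h) - exact) for h in (0.1, 0.05)]
r2 = e2[0] / e2[1]   # near 4: halving h quarters the second-order error
r4 = e4[0] / e4[1]   # near 16: the fourth-order error drops by 2**4
```

At a fixed grid resolution the fourth-order stencil is dramatically more accurate, which is the basis for the abstract's claim that the fourth-order model matches a much finer second-order grid at similar cost.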
NASA Astrophysics Data System (ADS)
Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.
2016-08-01
This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation. The models therefore only differ in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousands of vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows a more monotonous convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, which is due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and the better defined boundary condition.
NASA Astrophysics Data System (ADS)
Guerra, J. E.; Ullrich, P. A.
2014-12-01
Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements made in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.
Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories
NASA Technical Reports Server (NTRS)
Green, S.; Grace, M.; Williams, D.
1999-01-01
The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace, under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). The major
Numerical Results of 3-D Modeling of Moon Accumulation
NASA Astrophysics Data System (ADS)
Khachay, Yurie; Anfilogov, Vsevolod; Antipin, Alexandr
2014-05-01
Until recently, the favored model of the Moon's origin was the mega-impact model, in which the formation of the Earth and its satellite resulted from the Earth's collision with a body of roughly Mercury's mass. However, all dynamical models of the Earth's accumulation, together with estimates from the Pb-Pb system, lead to the conclusion that the accumulation of the planet lasted about 1 billion years, whereas isotopic results from the W-Hf system testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,2] it is shown that the energy released by the decay of short-lived radioactive elements, first of all Al-26, is sufficient to heat even small bodies with dimensions of about (50-100) km up to the iron melting temperature, so that a principally new differentiation mechanism can be realized. The melted inner parts of the preplanetary bodies, which are mainly of iron composition, can merge, while the cold silicate fragments return to the supply zone and additionally shift the composition of the Moon-forming material toward silicates. Only after the increase of the Earth's gravitational radius can the growing region of the future Earth's core also retain the silicate envelope fragments [3]. For understanding the further evolution of the Earth-Moon system it is important to trace the origin and evolution of the heterogeneities that arise during the accumulation stage. In this paper we model the evolution of temperature, pressure, and matter flow velocity in a block of a 3D spherical body with a growing radius. The boundary problem is solved by the finite-difference method for a system of equations that includes the Safronov equation describing the accumulation process, the impulse-balance (Navier-Stokes) equation, the equation for the above-lithostatic pressure, and the heat-conduction equation, formulated in velocity-pressure variables using the Boussinesq approximation. The numerical algorithm of the problem solution in velocity
Busted Butte: Achieving the Objectives and Numerical Modeling Results
W.E. Soll; M. Kearney; P. Stauffer; P. Tseng; H.J. Turin; Z. Lu
2002-10-07
The Unsaturated Zone Transport Test (UZTT) at Busted Butte is a mesoscale field/laboratory/modeling investigation designed to address uncertainties associated with flow and transport in the UZ site-process models for Yucca Mountain. The UZTT test facility is located approximately 8 km southeast of the potential Yucca Mountain repository area. The UZTT was designed in two phases, to address five specific objectives in the UZ: the effect of heterogeneities, flow and transport (F&T) behavior at permeability contrast boundaries, migration of colloids, transport models of sorbing tracers, and scaling issues in moving from laboratory scale to field scale. Phase 1A was designed to assess the influence of permeability contrast boundaries in the hydrologic Calico Hills. Visualization of fluorescein movement, mineback rock analyses, and comparison with numerical models demonstrated that F&T are capillary dominated with permeability contrast boundaries distorting the capillary flow. Phase 1B was designed to assess the influence of fractures on F&T and colloid movement. The injector in Phase 1B was located at a fracture, while the collector, 30 cm below, was placed at what was assumed to be the same fracture. Numerical simulations of nonreactive (Br) and reactive (Li) tracers show the experimental data are best explained by a combination of molecular diffusion and advective flux. For Phase 2, a numerical model with homogeneous unit descriptions was able to qualitatively capture the general characteristics of the system. Numerical simulations and field observations revealed a capillary dominated flow field. Although the tracers showed heterogeneity in the test block, simulation using heterogeneous fields did not significantly improve the data fit over homogeneous field simulations. In terms of scaling, simulations of field tracer data indicate a hydraulic conductivity two orders of magnitude higher than measured in the laboratory. Simulations of Li, a weakly sorbing tracer
Results of a remote multiplexer/digitizer unit accuracy and environmental study
NASA Technical Reports Server (NTRS)
Wilner, D. O.
1977-01-01
A remote multiplexer/digitizer unit (RMDU), a part of the airborne integrated flight test data system, was subjected to an accuracy study. The study was designed to show the effects of temperature, altitude, and vibration on the RMDU. The RMDU was subjected to tests at temperatures from -54 C (-65 F) to 71 C (160 F), and the resulting data are presented here, along with a complete analysis of the effects. The methods and means used for obtaining correctable data and correcting the data are also discussed.
Improving the trust in results of numerical simulations and scientific data analytics
Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan
2015-04-30
This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results’ integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general
Creating a Standard Set of Metrics to Assess Accuracy of Solar Forecasts: Preliminary Results
NASA Astrophysics Data System (ADS)
Banunarayanan, V.; Brockway, A.; Marquis, M.; Haupt, S. E.; Brown, B.; Fowler, T.; Jensen, T.; Hamann, H.; Lu, S.; Hodge, B.; Zhang, J.; Florita, A.
2013-12-01
The U.S. Department of Energy (DOE) SunShot Initiative, launched in 2011, seeks to reduce the cost of solar energy systems by 75% from 2010 to 2020. In support of the SunShot Initiative, the DOE Office of Energy Efficiency and Renewable Energy (EERE) is partnering with the National Oceanic and Atmospheric Administration (NOAA) and solar energy stakeholders to improve solar forecasting. Through a funding opportunity announcement issued in April 2012, DOE is funding two teams - led by the National Center for Atmospheric Research (NCAR) and by IBM - to perform three key activities in order to improve solar forecasts. The teams will: (1) With DOE and NOAA's leadership and significant stakeholder input, develop a standardized set of metrics to evaluate forecast accuracy, and determine the baseline and target values for these metrics; (2) Conduct research that yields a transformational improvement in weather models and methods for forecasting solar irradiance and power; and (3) Incorporate solar forecasts into the system operations of the electric power grid, and evaluate the impact of forecast accuracy on the economics and reliability of operations using the defined, standard metrics. This paper will present preliminary results on the first activity: the development of a standardized set of metrics, baselines and target values. The results will include a proposed framework for metrics development, key categories of metrics, descriptions of each of the proposed set of specific metrics to measure forecast accuracy, feedback gathered from a range of stakeholders on the metrics, and processes to determine baselines and target values for each metric. The paper will also analyze the temporal and spatial resolutions under which these metrics would apply, and conclude with a summary of the work in progress on solar forecasting activities funded by DOE.
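The abstract does not define the standardized metric set itself; as a hypothetical illustration of the kind of accuracy metrics commonly proposed for solar forecasts, a minimal sketch of mean bias error, mean absolute error, and root-mean-square error follows (function name and values are made up):

```python
import math

def forecast_metrics(forecast, observed):
    """Common forecast-accuracy metrics: mean bias error (MBE),
    mean absolute error (MAE), and root-mean-square error (RMSE)."""
    residuals = [f - o for f, o in zip(forecast, observed)]
    n = len(residuals)
    return {
        "MBE": sum(residuals) / n,                              # systematic over/under-forecast
        "MAE": sum(abs(r) for r in residuals) / n,              # typical error magnitude
        "RMSE": math.sqrt(sum(r * r for r in residuals) / n),   # penalizes large errors
    }

# Hypothetical hourly irradiance values (W/m^2), not measured data.
fc = [100.0, 420.0, 650.0, 300.0]
ob = [120.0, 400.0, 600.0, 310.0]
m = forecast_metrics(fc, ob)
print(round(m["MBE"], 1), round(m["MAE"], 1), round(m["RMSE"], 1))  # -> 10.0 25.0 29.2
```

A standard set would also have to fix the temporal and spatial aggregation (hourly vs. day-ahead, site vs. region) before such numbers are comparable across forecast providers, which is precisely the framing question the paper addresses.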
NASA Astrophysics Data System (ADS)
Wang, Shi-tai; Peng, Jun-huan
2015-12-01
The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the receiver hardware delay bias, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias is canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.
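The cancellation of the receiver hardware delay follows from its being common to both slant delay estimates of the same receiver; a schematic sketch (the bias and delay values below are made up for illustration):

```python
def satellite_difference(est_delay_sat1, est_delay_sat2):
    """Between-satellite difference of two estimated slant ionosphere
    delays from the same receiver. A hardware delay bias common to
    both estimates cancels in the difference."""
    return est_delay_sat1 - est_delay_sat2

# Made-up numbers (meters): true slant delays plus one common receiver bias.
receiver_bias = 0.35
d1 = 2.10 + receiver_bias
d2 = 1.40 + receiver_bias
diff = satellite_difference(d1, d2)  # equals the true difference, 0.70 m
```

This is why the abstract reports the interpolation accuracy for the satellite-differenced delay rather than for the raw estimated delay, which still carries the receiver bias.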
Numerical Results of Earth's Core Accumulation 3-D Modelling
NASA Astrophysics Data System (ADS)
Khachay, Yurie; Anfilogov, Vsevolod
2013-04-01
For a long time the most convenient model was the mega-impact model, in which the early formation of the Earth's core and mantle was the consequence of a collision between the formed protoplanet and a body of roughly Mercury's mass. However, all dynamical models of the Earth's accumulation, together with estimates from the Pb-Pb system, lead to the conclusion that the accumulation of the planet lasted about 1 billion years, whereas isotopic results from the W-Hf system testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,3] it is shown that the energy released by the decay of short-lived radioactive elements, first of all Al, is sufficient to heat even small bodies with dimensions of about (50-100) km up to the iron melting temperature, so that a principally new differentiation mechanism can be realized. The melted inner parts of the preplanetary bodies, which are mainly of iron composition, can merge, while the cold silicate fragments return to the supply zone. Only after the increase of the gravitational radius can the growing region of the future core also retain the silicate envelope fragments. All existing dynamical accumulation models are constructed as spherically symmetric. Hence, for understanding the further evolution of the planet it is important to trace the origin and evolution of the heterogeneities that arise during the accumulation stage. In this paper we model the distributions of temperature, pressure, and matter flow velocity in a block of a 3D spherical body with a growing radius. The boundary problem is solved by the finite-difference method for a system of equations that includes the Safronov equation describing the accumulation process, the impulse-balance (Navier-Stokes) equation, the equation for the above-lithostatic pressure, and the heat-conduction equation, formulated in velocity-pressure variables using the Boussinesq approximation. The numerical algorithm of the problem solution in
Numerical calculations of high-altitude differential charging: Preliminary results
NASA Technical Reports Server (NTRS)
Laframboise, J. G.; Godard, R.; Prokopenko, S. M. L.
1979-01-01
A two-dimensional simulation program was constructed in order to obtain theoretical predictions of floating potential distributions on geostationary spacecraft. The geometry was infinite-cylindrical with angle dependence. Effects of finite spacecraft length on sheath potential profiles can be included in an approximate way. The program can treat either steady-state conditions or slowly time-varying situations, involving external time scales much larger than particle transit times. Approximate, locally dependent expressions were used to provide space-charge density profiles, but numerical orbit-following was used to calculate surface currents. Ambient velocity distributions were assumed to be isotropic, beam-like, or some superposition of these.
Numerical computation of the effective-one-body potential q using self-force results
NASA Astrophysics Data System (ADS)
Akcay, Sarp; van de Meent, Maarten
2016-03-01
The effective-one-body theory (EOB) describes the conservative dynamics of compact binary systems in terms of an effective Hamiltonian approach. The Hamiltonian for moderately eccentric motion of two nonspinning compact objects in the extreme mass-ratio limit is given in terms of three potentials: a(v), d̄(v), q(v). By generalizing the first law of mechanics for (nonspinning) black hole binaries to eccentric orbits, [A. Le Tiec, Phys. Rev. D 92, 084021 (2015)] recently obtained new expressions for d̄(v) and q(v) in terms of quantities that can be readily computed using the gravitational self-force approach. Using these expressions we present a new computation of the EOB potential q(v) by combining results from two independent numerical self-force codes. We determine q(v) for inverse binary separations in the range 1/1200 ≤ v ≲ 1/6. Our computation thus provides the first-ever strong-field results for q(v). We also obtain d̄(v) in our entire domain to a fractional accuracy of ≳ 10^-8. We find that our results are compatible with the known post-Newtonian expansions for d̄(v) and q(v) in the weak field, and agree with previous (less accurate) numerical results for d̄(v) in the strong field.
Non-Shock Initiation Model for Explosive Families: Numerical Results
NASA Astrophysics Data System (ADS)
Todd, S. N.; Anderson, M. U.; Caipen, T. L.; Grady, D. E.
2009-12-01
A damage initiated reaction (DMGIR) computational model is being developed for the CTH shock physics code to predict the response of an explosive to non-shock mechanical insults. The distinguishing feature of this model is the introduction of a damage variable, which relates the evolution of damage to the initiation of reaction in the explosive, and its growth to detonation. The DMGIR model is a complement to the History Variable Reactive Burn (HVRB) model embedded in the current CTH code. Specifically designed experiments are supporting the development, implementation, and validation of the DMGIR numerical approach. PBXN-5 was the initial explosive material used experimentally to develop the DMGIR model. This explosive represents a family of plastically bonded explosives with good mechanical strength and rigid body properties. The model has been extended to cast explosives represented by Composition B.
Spurious frequencies as a result of numerical boundary treatments
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Gottlieb, David
1990-01-01
The stability theory for finite difference initial boundary-value approximations to systems of hyperbolic partial differential equations states that the exclusion of eigenvalues and generalized eigenvalues is a sufficient condition for stability. The theory, however, does not discuss the nature of numerical approximations in the presence of such eigenvalues. In fact, as was shown previously, for the problem of vortex shedding by a 2-D cylinder in subsonic flow, stating boundary conditions in terms of the primitive (non-characteristic) variables may lead to such eigenvalues, causing perturbations that decay slowly in space and remain periodic in time. Characteristic formulation of the boundary conditions avoided this problem. A more systematic study of the behavior of the (linearized) one-dimensional gas dynamic equations under various sets of oscillation-inducing legal boundary conditions is reported.
NASA Astrophysics Data System (ADS)
Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian
2016-06-01
Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modeling 410/660 km discontinuity and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of water column approximation on amplitude and phase shift of the PP waves. We also study the effects of water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error of PP amplitude and phase shift is less than 5% and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate up to 10% in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1% at periods greater than 30 s in most oceanic regions, but the error is up to 2% for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and it needs to be improved at shorter periods.
Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J
2015-06-15
Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV.
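The tailoring of post-test probabilities to the prevalence in a new population rests on the standard Bayes relations for PPV and NPV given sensitivity and specificity; a minimal sketch follows (the sensitivity, specificity, and prevalence values are hypothetical, not those of the paper's clinical examples):

```python
def post_test_probabilities(sens, spec, prev):
    """Positive and negative predictive values derived from sensitivity,
    specificity, and disease prevalence via Bayes' theorem."""
    tp = sens * prev               # true positives
    fp = (1 - spec) * (1 - prev)   # false positives
    tn = spec * (1 - prev)         # true negatives
    fn = (1 - sens) * prev         # false negatives
    ppv = tp / (tp + fp)           # P(disease | positive test)
    npv = tn / (tn + fn)           # P(no disease | negative test)
    return ppv, npv

# Hypothetical test: 80% sensitive, 90% specific, 10% prevalence.
ppv, npv = post_test_probabilities(0.80, 0.90, 0.10)
print(round(ppv, 3), round(npv, 3))  # -> 0.471 0.976
```

Re-running the same calculation at a different prevalence shows why a summary PPV/NPV from a meta-analysis may not transfer directly to a new population, which is the calibration issue the authors' cross-validation approach is designed to examine.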
Accuracy of relative positioning by interferometry with GPS Double-blind test results
NASA Technical Reports Server (NTRS)
Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.
1983-01-01
MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Counselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.
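The per-component repeatability quoted above is an rms difference between the two daily determinations of each baseline vector; a sketch of that computation with made-up millimeter-level values (not the MITES data):

```python
import math

def rms_component_differences(day1, day2):
    """RMS difference per component (north, east, up) between two sets of
    baseline-vector determinations, each given as (N, E, U) tuples in mm."""
    per_component = [[a[i] - b[i] for a, b in zip(day1, day2)] for i in range(3)]
    return [math.sqrt(sum(d * d for d in comp) / len(comp)) for comp in per_component]

# Hypothetical two-day determinations of three baselines (mm).
day1 = [(10.0, 5.0, 2.0), (3.0, 4.0, 1.0), (6.0, 2.0, 8.0)]
day2 = [(8.0, 5.0, 3.0), (3.0, 1.0, 1.0), (7.0, 2.0, 6.0)]
print([round(v, 2) for v in rms_component_differences(day1, day2)])  # -> [1.29, 1.73, 1.29]
```

Day-to-day agreement of this kind measures precision (repeatability) rather than absolute accuracy, which is why the abstract phrases its conclusion in terms of reliable subcentimeter relative positioning.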
Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results
Kujawska, Tamara; Wojcik, Janusz; Nowicki, Andrzej
2010-03-09
the theoretical and measurement results for all cases considered have verified the validity and accuracy of our numerical model. Quantitative analysis of the obtained results made it possible to determine how the ultrasound-induced temperature rises in the rat liver could be controlled by adjusting the source parameters and exposure time.
Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.
Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa
2015-09-01
Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purposes of this study were to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized in which A represented baseline, and B and C consisted of either function-based or non-function-based interventions counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children.
Croyle, Robert T; Loftus, Elizabeth F; Barger, Steven D; Sun, Yi-Chun; Hart, Marybeth; Gettig, JoAnn
2006-05-01
The authors conducted a community-based cholesterol screening study to examine accuracy of recall for self-relevant health information in long-term autobiographical memory. Adult community residents (N = 496) were recruited to participate in a laboratory-based cholesterol screening and were also provided cholesterol counseling in accordance with national guidelines. Participants were subsequently interviewed 1, 3, or 6 months later to assess their memory for their test results. Participants recalled their exact cholesterol levels inaccurately (38.0% correct) but their cardiovascular risk category comparatively well (88.7% correct). Recall errors showed a systematic bias: Individuals who received the most undesirable test results were most likely to remember their cholesterol scores and cardiovascular risk categories as lower (i.e., healthier) than those actually received. Recall bias was unrelated to age, education, knowledge, self-rated health status, and self-reported efforts to reduce cholesterol. The findings provide evidence that recall of self-relevant health information is susceptible to self-enhancement bias.
Oussalah, Abderrahim; Ferrand, Janina; Filhine-Tresarrieu, Pierre; Aissa, Nejla; Aimone-Gastin, Isabelle; Namour, Fares; Garcia, Matthieu; Lozniewski, Alain; Guéant, Jean-Louis
2015-01-01
Previous studies have suggested that procalcitonin is a reliable marker for predicting bacteremia. However, these studies have had relatively small sample sizes or focused on a single clinical entity. The primary endpoint of this study was to investigate the diagnostic accuracy of procalcitonin for predicting or excluding clinically relevant pathogen categories in patients with suspected bloodstream infections. The secondary endpoint was to look for organisms significantly associated with internationally validated procalcitonin intervals. We performed a cross-sectional study that included 35,343 consecutive patients who underwent concomitant procalcitonin assays and blood cultures for suspected bloodstream infections. Biochemical and microbiological data were systematically collected in an electronic database and extracted for purposes of this study. Depending on blood culture results, patients were classified into 1 of the 5 following groups: negative blood culture, Gram-positive bacteremia, Gram-negative bacteremia, fungi, and potential contaminants found in blood cultures (PCBCs). The highest procalcitonin concentration was observed in patients with blood cultures growing Gram-negative bacteria (median 2.2 ng/mL [IQR 0.6–12.2]), and the lowest procalcitonin concentration was observed in patients with negative blood cultures (median 0.3 ng/mL [IQR 0.1–1.1]). With optimal thresholds ranging from ≤0.4 to ≤0.75 ng/mL, procalcitonin had a high diagnostic accuracy for excluding all pathogen categories with the following negative predictive values: Gram-negative bacteria (98.9%) (including enterobacteria [99.2%], nonfermenting Gram-negative bacilli [99.7%], and anaerobic bacteria [99.9%]), Gram-positive bacteria (98.4%), and fungi (99.6%). A procalcitonin concentration ≥10 ng/mL was associated with a high risk of Gram-negative (odds ratio 5.98; 95% CI, 5.20–6.88) or Gram-positive (odds ratio 3.64; 95% CI, 3.11–4.26) bacteremia but
Castro, A. P. G.; Paul, C. P. L.; Detiger, S. E. L.; Smit, T. H.; van Royen, B. J.; Pimenta Claro, J. C.; Mullender, M. G.; Alves, J. L.
2014-01-01
The loaded disk culture system is an intervertebral disk (IVD)-oriented bioreactor developed by the VU Medical Center (VUmc, Amsterdam, The Netherlands), which has the capacity to maintain up to 12 IVDs in culture for approximately 3 weeks after extraction. Using this system, eight goat IVDs were provided with the essential nutrients and subjected to compression tests for 22 days without losing their biomechanical and physiological properties. Based on previous reports (Paul et al., 2012, 2013; Detiger et al., 2013), four of these IVDs were kept in physiological condition (control) and the other four were previously injected with chondroitinase ABC (CABC) in order to promote degenerative disk disease (DDD). The loading profile alternated 16 h of activity loading with 8 h of loading recovery to reproduce the standard circadian variations. The displacement behavior of these eight IVDs over the first 2 days of the experiment was numerically reproduced using an osmo-poro-hyper-viscoelastic and fiber-reinforced IVD finite element (FE) model. The simulations were run on a custom FE solver (Castro et al., 2014). Analysis of the experimental results led to the conclusion that the effect of the CABC injection was significant in only two of the four IVDs. The four control IVDs showed no signs of degeneration, as expected. As regards the numerical simulations, the IVD FE model was able to reproduce the generic behavior of the two groups of goat IVDs (control and injected). However, some discrepancies were still noticed in the comparison between the injected IVDs and the numerical simulations, namely in the recovery periods. This may be explained by the complexity of the pathways of DDD, associated with the multiplicity of physiological responses to each direct or indirect stimulus. Nevertheless, one could conclude that ligaments, muscles, and IVD covering membranes could be added to the FE model, in order to improve its accuracy and properly
Sediment Pathways Across Trench Slopes: Results From Numerical Modeling
NASA Astrophysics Data System (ADS)
Cormier, M. H.; Seeber, L.; McHugh, C. M.; Fujiwara, T.; Kanamatsu, T.; King, J. W.
2015-12-01
Until the 2011 Mw9.0 Tohoku earthquake, the role of earthquakes as agents of sediment dispersal and deposition at erosional trenches was largely under-appreciated. A series of cruises carried out after the 2011 event has revealed a variety of unsuspected sediment transport mechanisms, such as tsunami-triggered sheet turbidites, suggesting that great earthquakes may in fact be important agents for dispersing sediments across trench slopes. To complement these observational data, we have modeled the pathways of sediments across the trench slope based on bathymetric grids. Our approach assumes that transport direction is controlled by slope azimuth only, and ignores obstacles smaller than 0.6-1 km; these constraints are meant to approximate the behavior of turbidites. Results indicate that (1) most pathways issued from the upper slope terminate near the top of the small frontal wedge, and thus do not reach the trench axis; (2) in turn, sediments transported to the trench axis are likely derived from the small frontal wedge or from the subducting Pacific plate. These results are consistent with the stratigraphy imaged in seismic profiles, which reveals that the slope apron does not extend as far as the frontal wedge, and that the thickness of sediments at the trench axis is similar to that of the incoming Pacific plate. We further applied this modeling technique to the Cascadia, Nankai, Middle-America, and Sumatra trenches. Where well-defined canyons carve the trench slopes, sediments from the upper slope may routinely reach the trench axis (e.g., off Costa Rica and Cascadia). Conversely, slope basins that are isolated from the canyons' drainage systems must mainly accumulate locally derived sediments. Therefore, their turbiditic infill may be diagnostic of seismic activity only, and not of storm or flood activity. If correct, this would make isolated slope basins ideal targets for paleoseismological investigation.
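The routing rule described above (transport direction set by slope azimuth alone) amounts to steepest-descent path tracing on a bathymetric grid. A minimal sketch of that idea on a toy grid follows; the function name and the 8-neighbour step are illustrative choices, and a faithful version would also smooth the grid to suppress obstacles below the 0.6-1 km scale the authors mention.

```python
import numpy as np

def steepest_descent_path(depth, start, max_steps=10_000):
    """Trace a sediment pathway downslope on a bathymetric grid (depth
    positive downward): step to the deepest of the 8 neighbours,
    stopping at a local depression or closed low."""
    path = [start]
    r, c = start
    nrows, ncols = depth.shape
    for _ in range(max_steps):
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < nrows and 0 <= c + dc < ncols]
        nxt = max(neighbours, key=lambda ij: depth[ij])
        if depth[nxt] <= depth[r, c]:  # no deeper neighbour: pathway terminates
            break
        r, c = nxt
        path.append((r, c))
    return path

# toy bathymetry: depth increases linearly toward the "trench axis" (last column)
depth = np.tile(np.arange(5.0), (5, 1))
print(steepest_descent_path(depth, (2, 0))[-1])  # ends in the deepest column
```

On real grids, whether a pathway launched on the upper slope reaches the last column or stalls at an intermediate low is exactly the distinction the abstract draws between slope-apron and trench-axis deposition.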
Numerical simulation results in the Carthage Cotton Valley field
Meehan, D.N.; Pennington, B.F.
1982-01-01
By coordinating three-dimensional reservoir simulations with pressure-transient tests, core analyses, open-hole and production logs, evaluations of tracer data during hydraulic fracturing, and geologic mapping, Champlin Petroleum obtained better predictions of the reserves and the long-term deliverability of the very tight (less than 0.1-md) Cotton Valley gas reservoir in east Texas. The simulation model that was developed proved capable of optimizing the well spacing and the fracture length. The final history match with the simulator indicated that the formation permeability of the very tight producing zones is substantially lower than suggested by conventional core analysis, 640-acre well spacing will not drain this reservoir efficiently in a reasonable time, and reserves are higher than presimulation estimates. Other results showed that even very long-term pressure buildups in this multilayer reservoir may not reach the straight line required in the conventional Horner pressure-transient analysis, type curves reflecting finite fracture flow capacity can be very useful, and pressure-drawdown analyses from well flow rates and flowing tubing pressure can provide good initial estimates of reservoir and fracture properties for detailed reservoir simulation without requiring expensive, long-term shut-ins of the well.
NASA Astrophysics Data System (ADS)
Ingalls, James G.; Krick, Jessica E.; Carey, Sean J.; Stauffer, John R.; Grillmair, Carl J.; Lowrance, Patrick
2016-06-01
We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. At infrared wavelengths secondary eclipses and phase curves are powerful tools for studying a planet’s atmosphere. Extracting information about atmospheres, however, is extremely challenging due to the small differential signals, which are often at the level of 100 parts per million (ppm) or smaller, and require the removal of significant instrumental systematics. For the IRAC 3.6 and 4.5μm InSb detectors that remain active on post-cryogenic Spitzer, the interplay of residual telescope pointing fluctuations with intrapixel gain variations in the moderately undersampled camera is the largest source of time-correlated noise. Over the past decade, a suite of techniques for removing this noise from IRAC data has been developed independently by various investigators. In summer 2015, the Spitzer Science Center hosted a Data Challenge in which seven exoplanet expert teams, each using a different noise-removal method, were invited to analyze 10 eclipse measurements of the hot Jupiter XO-3 b, as well as a complementary set of 10 simulated measurements. In this contribution we review the results of the Challenge. We describe statistical tools to assess the repeatability, reliability, and validity of data reduction techniques, and to compare and (perhaps) choose between techniques.
Selle, L.; Ferret, B.; Poinsot, T.
2011-01-15
Measuring the velocities of premixed laminar flames with precision remains a controversial issue in the combustion community. This paper studies the accuracy of such measurements in two-dimensional slot burners and shows that while methane/air flame speeds can be measured with reasonable accuracy, the method may lack precision for other mixtures such as hydrogen/air. Curvature at the flame tip, strain on the flame sides and local quenching at the flame base can modify local flame speeds and require corrections which are studied using two-dimensional DNS. Numerical simulations also provide stretch, displacement and consumption flame speeds along the flame front. For methane/air flames, DNS show that the local stretch remains small so that the local consumption speed is very close to the unstretched premixed flame speed. The only correction needed to correctly predict flame speeds in this case is due to the finite aspect ratio of the slot used to inject the premixed gases, which induces a flow acceleration in the measurement region (this correction can be evaluated from velocity measurement in the slot section or from an analytical solution). The method is applied to methane/air flames with and without water addition and results are compared to experimental data found in the literature. The paper then discusses the limitations of the slot-burner method to measure flame speeds for other mixtures and shows that it is not well adapted to mixtures with a Lewis number far from unity, such as hydrogen/air flames.
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2016-09-01
A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. PMID:27352918
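The reported overall rate can be checked directly from the counts given in the abstract (360 DNA-confirmed cases, 19 incorrect estimates):

```python
total_cases = 360
incorrect = 19
correct = total_cases - incorrect

accuracy = correct / total_cases
print(correct, f"{accuracy:.1%}")  # 341 94.7%
```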
Stacey, Peter; Revell, Graham; Tylee, Barry
2002-11-01
Gravimetric analysis is a fundamental technique frequently used in occupational hygiene assessments, but few studies have investigated its repeatability and reproducibility. Four inter-laboratory comparisons are discussed in this paper. The first involved 32 laboratories weighing 25 mm diameter glassfibre filters, the second involved 11 laboratories weighing 25 mm diameter PVC filters and the third involved eight laboratories weighing plastic IOM heads with 25 mm diameter glassfibre filters. Data from the third study found that measurements using this type of IOM head were unreliable. A fourth study, to ascertain if laboratories could improve their performance, involved a selected sub-group of 10 laboratories from the first exercise that analysed the 25 mm diameter glassfibre filters. The studies tested the analytical measurement process and not just the variation in weighings obtained on blank filters, as previous studies have done. Graphs of data from the first and second exercises suggest that a power curve relationship exists between reproducibility and loading, and between repeatability and loading. The relationship for reproducibility in the first study followed the equation log s(R) = -0.62 log m + 0.86 and in the second study log s(R) = -0.64 log m + 0.57, where s(R) is the reproducibility in terms of per cent relative standard deviation (%RSD) and m is the weight of loading in milligrams. The equation for glassfibre filters from the first exercise suggested that at a measurement of 0.4 mg (about a tenth of the United Kingdom legislative definition of a hazardous substance for a respirable dust for an 8 h sample), the measurement reproducibility is more than +/-25% (2sigma). The results from PVC filters had better repeatability estimates than the glassfibre filters, but overall they had similar estimates of reproducibility. An improvement in both the reproducibility and repeatability for glassfibre filters was observed in the fourth study. This improvement reduced
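The quoted +/-25% (2sigma) figure at a 0.4 mg loading can be reproduced from the first study's fitted equation, log s(R) = -0.62 log m + 0.86 (base-10 logarithms assumed):

```python
import math

def reproducibility_rsd(m_mg, slope=-0.62, intercept=0.86):
    """Reproducibility (%RSD) from the first study's power-law fit:
    log10(sR) = slope * log10(m) + intercept, with m in milligrams."""
    return 10 ** (slope * math.log10(m_mg) + intercept)

# at the 0.4 mg loading discussed in the abstract:
s_r = reproducibility_rsd(0.4)
print(round(s_r, 1))      # 12.8 %RSD
print(round(2 * s_r, 1))  # 25.6, i.e. more than +/-25% at 2 sigma
```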
Kreuzmair, Christina; Siegrist, Michael; Keller, Carmen
2016-08-01
In two experiments, we used an eye-tracker to investigate how numeracy influences individuals' processing of pictographs. In two conditions, participants from the general population were presented with a scenario depicting the risk of having cancer and were asked to indicate their perceived risk. The risk level was high (63%) in experiment 1 (N = 70) and low (6%) in experiment 2 (N = 69). In the default condition, participants were free to use their default strategy for information processing. In the guiding-toward-the-number condition, they were prompted to count icons in the pictograph by answering with an explicit number. We used eye-tracking parameters related to the distance between sequential fixations to analyze participants' strategies for processing numerical information. In the default condition, the higher the numeracy was, the shorter the distances traversed in the pictograph were, indicating that participants counted the icons. People lower in numeracy performed more large-area processing, comparing highlighted and nonhighlighted parts of the pictograph. In the guiding-toward-the-number condition, participants used short distances regardless of their numeracy, supporting the notion that short distances represent counting. Despite the different default processing strategies, participants processed the pictograph with a similar depth and derived similar risk perceptions. The results show that pictographs are beneficial for communicating medical risk. Pictographs make the gist salient by making the part-to-whole relationship visually available, and they facilitate low numerates' non-numeric processing of numerical information. Contemporaneously, pictographs allow high numerates to numerically process and rely on the number depicted in the pictograph.
NASA Astrophysics Data System (ADS)
Sprenger, Lisa; Lange, Adrian; Odenbach, Stefan
2014-02-01
Ferrofluids consist of magnetic nanoparticles dispersed in a carrier liquid. Their strong thermodiffusive behaviour, characterised by the Soret coefficient, coupled with the dependency of the fluid's parameters on magnetic fields is dealt with in this work. It is known from former experimental investigations on the one hand that the Soret coefficient itself is magnetic field dependent and on the other hand that the accuracy of the coefficient's experimental determination highly depends on the volume concentration of the fluid. The thermally driven separation of particles and carrier liquid is carried out with a concentrated ferrofluid (φ = 0.087) in a horizontal thermodiffusion cell and is compared to equally detected former measurement data. The temperature gradient (1 K/mm) is applied perpendicular to the separation layer. The magnetic field is either applied parallel or perpendicular to the temperature difference. For three different magnetic field strengths (40 kA/m, 100 kA/m, 320 kA/m) the diffusive separation is detected. It reveals a sign change of the Soret coefficient with rising field strength for both field directions which stands for a change in the direction of motion of the particles. This behaviour contradicts former experimental results with a dilute magnetic fluid, in which a change in the coefficient's sign could only be detected for the parallel setup. An anisotropic behaviour in the current data is measured referring to the intensity of the separation being more intense in the perpendicular position of the magnetic field: ST‖ = -0.152 K-1 and ST⊥ = -0.257 K-1 at H = 320 kA/m. The ferrofluiddynamics-theory (FFD-theory) describes the thermodiffusive processes thermodynamically and a numerical simulation of the fluid's separation depending on the two transport parameters ξ‖ and ξ⊥ used within the FFD-theory can be implemented. In the case of a parallel aligned magnetic field, the parameter can be determined to ξ‖ = {2.8; 9.1; 11.2}
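For orientation, the magnitude of separation implied by a Soret coefficient can be sketched from the standard zero-flux steady-state balance of thermodiffusion (the textbook relation, not the FFD-theory model the authors use), with the abstract's numbers:

```python
phi = 0.087   # particle volume fraction of the concentrated ferrofluid
S_T = -0.257  # Soret coefficient in 1/K (perpendicular field, H = 320 kA/m)
grad_T = 1.0  # applied temperature gradient in K/mm

# zero-flux steady state of J = -D*grad_phi - D_T*phi*(1-phi)*grad_T,
# with S_T = D_T / D:
grad_phi = -S_T * phi * (1.0 - phi) * grad_T
print(round(grad_phi, 4))  # 0.0204 per mm: particles drift toward the warm side
```

The sign convention makes the abstract's point visible: a negative S_T means the concentration gradient points up the temperature gradient, i.e. particles accumulate on the warm side.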
NASA Astrophysics Data System (ADS)
Motheau, E.; Abraham, J.
2016-05-01
A novel and efficient algorithm is presented in this paper to deal with DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and a quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition method of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine precision accuracy. It is demonstrated that a second-order accuracy is reached in time, while the spatial accuracy ranges from fourth-order to sixth-order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to deal with DNS of reacting flows to understand complex turbulent and chemical phenomena in flames.
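The constant-coefficient spectral Poisson solve at the heart of such algorithms is easy to illustrate in one dimension. The sketch below is a generic FFT-based solver for a periodic domain, not the HOLOMAC implementation (which wraps solves of this kind in a pressure-correction iteration to handle the variable-coefficient case):

```python
import numpy as np

def solve_poisson_periodic(f, L=2 * np.pi):
    """Solve u'' = f spectrally on a periodic domain of length L.
    The k = 0 mode is set to zero (u is defined up to a constant)."""
    n = f.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    fh = np.fft.fft(f)
    uh = np.zeros_like(fh)
    nz = k != 0
    uh[nz] = -fh[nz] / k[nz] ** 2               # -k^2 * uh = fh
    return np.fft.ifft(uh).real

# manufactured solution: u = sin(x) satisfies u'' = -sin(x)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = solve_poisson_periodic(-np.sin(x))
print(np.max(np.abs(u - np.sin(x))) < 1e-12)  # True
```

Because the right-hand side here is a single Fourier mode, the spectral solve recovers it to machine precision, which is the sense in which such solvers are "quasi-spectral" building blocks.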
Numerical prediction of freezing fronts in cryosurgery: comparison with experimental results.
Fortin, André; Belhamadia, Youssef
2005-08-01
Recent developments in scientific computing now make it possible to consider realistic applications of numerical modelling to medicine. In this work, a numerical method is presented for the simulation of phase change occurring in cryosurgery applications. The ultimate goal of these simulations is to accurately predict the freezing front position and the thermal history inside the ice ball, which is essential to determine if cancerous cells have been completely destroyed. A semi-phase field formulation including blood flow considerations is employed for the simulations. Numerical results are enhanced by the introduction of an anisotropic remeshing strategy. The numerical procedure is validated by comparing the predictions of the model with experimental results. PMID:16298846
NASA Astrophysics Data System (ADS)
Venugopal, R.; Barash, M. M.; Liu, C. R.
1985-10-01
Thermal effects on the accuracy of numerically controlled machine tools are especially important in the context of unmanned manufacture or under conditions of precision metal cutting. Removal of the operator from the direct control of the metal cutting process has created problems in terms of maintaining accuracy. The objective of this research is to study thermal effects on the accuracy of numerically controlled machine tools. The initial part of the research report is concerned with the analysis of a hypothetical machine. The thermal characteristics of this machine are studied. Numerical methods for evaluating the errors exhibited by the slides of the machine are proposed and the possibility of predicting thermally induced errors by the use of regression equations is investigated. A method for computing the workspace error is also presented. The final part is concerned with the actual measurement of errors on a modern CNC machining center. Thermal influences on the errors are the main objective of the experimental work. Thermal influences on the errors of machine tools are predictable, and techniques for determining thermal effects on machine tools at a design stage are also presented.
Scholl, M.A.
2000-01-01
Numerical simulations were used to examine the effects of heterogeneity in hydraulic conductivity (K) and intrinsic biodegradation rate on the accuracy of contaminant plume-scale biodegradation rates obtained from field data. The simulations were based on steady-state biodegradation of a BTEX contaminant plume under sulfate-reducing conditions, with the electron acceptor in excess. Biomass was either uniform or correlated with K to model spatially variable intrinsic biodegradation rates. A hydraulic conductivity data set from an alluvial aquifer was used to generate three sets of 10 realizations with different degrees of heterogeneity, and contaminant transport with biodegradation was simulated with BIOMOC. Biodegradation rates were calculated from the steady-state contaminant plumes using decreases in concentration with distance downgradient and a single flow velocity estimate, as is commonly done in site characterization to support the interpretation of natural attenuation. The observed rates were found to underestimate the actual rate specified in the heterogeneous model in all cases. The discrepancy between the observed rate and the 'true' rate depended on the ground water flow velocity estimate, and increased with increasing heterogeneity in the aquifer. For a lognormal K distribution with variance of 0.46, the estimate was no more than a factor of 1.4 slower than the true rate. For an aquifer with 20% silt/clay lenses, the rate estimate was as much as nine times slower than the true rate. Homogeneous-permeability, uniform-degradation-rate simulations were used to generate predictions of remediation time with the rates estimated from heterogeneous models. The homogeneous models generally overestimated the extent of remediation or underestimated remediation time, due to delayed degradation of contaminants in the low-K areas. Results suggest that aquifer characterization for natural attenuation at contaminated sites should include assessment of the presence
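The field procedure the abstract describes (concentration decline with downgradient distance plus a single velocity estimate) is the standard first-order point-decline calculation. A sketch with hypothetical numbers shows how directly the estimated rate inherits any error in the velocity:

```python
import math

def first_order_rate(c_up, c_down, distance_m, velocity_m_per_d):
    """Plume-scale first-order biodegradation rate (1/day), assuming
    C(x) = C0 * exp(-k * x / v) along the flow path."""
    travel_time_d = distance_m / velocity_m_per_d
    return math.log(c_up / c_down) / travel_time_d

# hypothetical numbers: 10 mg/L declining to 1 mg/L over 100 m at 0.5 m/day
k = first_order_rate(10.0, 1.0, 100.0, 0.5)
print(round(k, 4))  # 0.0115 per day

# a velocity estimate off by 2x biases k by exactly 2x:
print(round(first_order_rate(10.0, 1.0, 100.0, 1.0) / k, 1))  # 2.0
```

This proportionality to the velocity estimate is why the abstract notes that the discrepancy between observed and true rates "depended on the ground water flow velocity estimate".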
NASA Technical Reports Server (NTRS)
Smutek, C.; Bontoux, P.; Roux, B.; Schiroky, G. H.; Hurford, A. C.
1985-01-01
The results of a three-dimensional numerical simulation of Boussinesq free convection in a horizontal differentially heated cylinder are presented. The computation was based on a Samarskii-Andreyev scheme (described by Leong, 1981) and a false-transient advancement in time, with vorticity, velocity, and temperature as dependent variables. Solutions for velocity and temperature distributions were obtained for Rayleigh numbers (based on the radius) Ra = 74-18,700, thus covering the core- and boundary-layer-driven regimes. Numerical solutions are compared with asymptotic analytical solutions and experimental data. The numerical results well represent the complex three-dimensional flows found experimentally.
Manzini, Gianmarco; Cangiani, Andrea; Sutton, Oliver
2014-10-02
This document presents the results of a set of preliminary numerical experiments using several possible conforming virtual element approximations of the convection-reaction-diffusion equation with variable coefficients.
A numerically efficient finite element hydroelastic analysis. Volume 1: Theory and results
NASA Technical Reports Server (NTRS)
Coppolino, R. N.
1976-01-01
Symmetric finite element matrix formulations for compressible and incompressible hydroelasticity are developed on the basis of Toupin's complementary formulation of classical mechanics. Results of implementation of the new technique in the NASTRAN structural analysis program are presented which demonstrate accuracy and efficiency.
Comparison of results of experimental research with numerical calculations of a model one-sided seal
NASA Astrophysics Data System (ADS)
Joachimiak, Damian; Krzyślak, Piotr
2015-06-01
This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculations, the size of the mesh defined by the parameter y+ has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained during the measurement and calculated numerically in a model seal segment at different levels of wear.
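Sizing the near-wall mesh by the parameter y+ follows from its definition, y+ = y·u_τ/ν. A minimal helper (input values hypothetical, not from the paper) inverts it to get the first-cell height:

```python
def first_cell_height(y_plus, u_tau, nu):
    """Wall-normal height (m) of the first mesh cell for a target y+,
    from the definition y+ = y * u_tau / nu."""
    return y_plus * nu / u_tau

# hypothetical near-wall conditions: nu = 1e-6 m^2/s, u_tau = 5 m/s
h = first_cell_height(1.0, 5.0, 1e-6)
print(h)  # 2e-07 m, i.e. 0.2 micron for y+ = 1
```

The target y+ depends on the wall treatment: low-Re turbulence models resolving the viscous sublayer typically need y+ on the order of 1, while wall-function approaches tolerate much larger values.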
Analysis of Factors Influencing Measurement Accuracy of Al Alloy Tensile Test Results
NASA Astrophysics Data System (ADS)
Podgornik, Bojan; Žužek, Borut; Sedlaček, Marko; Kevorkijan, Varužan; Hostej, Boris
2016-02-01
In order to properly use materials in design, a complete understanding of and information on their mechanical properties, such as yield and ultimate tensile strength, must be obtained. Furthermore, as the design of automotive parts is constantly pushed toward higher limits, excessive measuring uncertainty can lead to unexpected premature failure of the component, thus requiring reliable determination of material properties with low uncertainty. The aim of the present work was to evaluate the effect of different metrology factors, including the number of tested samples, specimen machining and surface quality, specimen input diameter, type of testing, and human error, on the tensile test results and measurement uncertainty when performed on 2xxx series Al alloy. Results show that the most significant contribution to measurement uncertainty comes from the number of samples tested, which can even exceed 1%. Furthermore, moving from an experimental laboratory setting to a very intense industrial environment further amplifies measurement uncertainty, where even with automated systems human error cannot be neglected.
Gravity Probe B data analysis status and potential for improved accuracy of scientific results
NASA Astrophysics Data System (ADS)
Everitt, C. W. F.; Adams, M.; Bencze, W.; Buchman, S.; Clarke, B.; Conklin, J.; DeBra, D. B.; Dolphin, M.; Heifetz, M.; Hipkins, D.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lockhart, J. M.; Muhlfelder, B.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Turneaure, J. P.; Worden, P. W., Jr.
2008-06-01
Gravity Probe B (GP-B) is a landmark physics experiment in space designed to yield precise tests of two fundamental predictions of Einstein's theory of general relativity, the geodetic and frame-dragging effects, by means of cryogenic gyroscopes in Earth orbit. Launched on 20 April 2004, data collection began on 28 August 2004 and science operations were completed on 29 September 2005 upon liquid helium depletion. During the course of the experiment, two unexpected and mutually-reinforcing complications were discovered: (1) larger than expected 'misalignment' torques on the gyroscopes producing classical drifts larger than the relativity effects under study and (2) a damped polhode oscillation that complicated the calibration of the instrument's scale factor against the aberration of starlight. Steady progress through 2006 and 2007 established the methods for treating both problems; in particular, an extended effort from January 2007 on 'trapped flux mapping' led in August 2007 to a dramatic breakthrough, resulting in a factor of ~20 reduction in data scatter. This paper reports results up to November 2007. Detailed investigation of a central 85-day segment of the data has yielded robust measurements of both relativity effects. Expansion to the complete science data set, along with anticipated improvements in modeling and in the treatment of systematic errors, may be expected to yield a 3-6% determination of the frame-dragging effect.
Analysis of the Accuracy of Weight Loss Information Search Engine Results on the Internet
Shokar, Navkiran K.; Peñaranda, Eribeth; Nguyen, Norma
2014-01-01
Objectives. We systematically identified and evaluated the quality and comprehensiveness of online information related to weight loss that users were likely to access. Methods. We evaluated the content quality, accessibility of the information, and author credentials for Web sites in 2012 that were identified from weight loss specific queries that we generated. We scored the content with respect to available evidence-based guidelines for weight loss. Results. One hundred three Web sites met our eligibility criteria (21 commercial, 52 news/media, 7 blogs, 14 medical, government, or university, and 9 unclassified sites). The mean content quality score was 3.75 (range = 0–16; SD = 2.48). Approximately 5% (4.85%) of the sites scored greater than 8 (of 12) on nutrition, physical activity, and behavior. Content quality score varied significantly by type of Web site; the medical, government, or university sites (mean = 4.82, SD = 2.27) and blogs (mean = 6.33, SD = 1.99) had the highest scores. Commercial (mean = 2.37, SD = 2.60) or news/media sites (mean = 3.52, SD = 2.31) had the lowest scores (analysis of variance P < .005). Conclusions. The weight loss information that people were likely to access online was often of substandard quality because most comprehensive and quality Web sites ranked too low in search results. PMID:25122030
Gravity Probe B Data Analysis. Status and Potential for Improved Accuracy of Scientific Results
NASA Astrophysics Data System (ADS)
Everitt, C. W. F.; Adams, M.; Bencze, W.; Buchman, S.; Clarke, B.; Conklin, J. W.; Debra, D. B.; Dolphin, M.; Heifetz, M.; Hipkins, D.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lipa, J.; Lockhart, J. M.; Mester, J. C.; Muhlfelder, B.; Ohshima, Y.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Taber, M.; Turneaure, J. P.; Wang, S.; Worden, P. W.
2009-12-01
This is the first of five connected papers detailing progress on the Gravity Probe B (GP-B) Relativity Mission. GP-B, launched 20 April 2004, is a landmark physics experiment in space to test two fundamental predictions of Einstein's general relativity theory, the geodetic and frame-dragging effects, by means of cryogenic gyroscopes in Earth orbit. Data collection began 28 August 2004 and science operations were completed 29 September 2005. The data analysis has proven deeper than expected as a result of two mutually reinforcing complications in gyroscope performance: (1) a changing polhode path affecting the calibration of the gyroscope scale factor C_g against the aberration of starlight and (2) two larger than expected manifestations of a Newtonian gyro torque due to patch potentials on the rotor and housing. In earlier papers, we reported two methods, 'geometric' and 'algebraic', for identifying and removing the first Newtonian effect ('misalignment torque'), and also a preliminary method of treating the second ('roll-polhode resonance torque'). Central to the progress in both torque modeling and C_g determination has been an extended effort on "Trapped Flux Mapping" commenced in November 2006. A turning point came in August 2008 when it became possible to include a detailed history of the resonance torques into the computation. The East-West (frame-dragging) effect is now plainly visible in the processed data. The current statistical uncertainty from an analysis of 155 days of data is 5.4 marc-s/yr (~14% of the predicted effect), though it must be emphasized that this is a preliminary result requiring rigorous investigation of systematics by methods discussed in the accompanying paper by Muhlfelder et al. A covariance analysis incorporating models of the patch effect torques indicates that a 3-5% determination of frame-dragging is possible with more complete, computationally intensive data analysis.
Numerical modeling of on-orbit propellant motion resulting from an impulsive acceleration
NASA Technical Reports Server (NTRS)
Aydelott, John C.; Mjolsness, Raymond C.; Torrey, Martin D.; Hochstein, John I.
1987-01-01
In-space docking and separation maneuvers of spacecraft that have large fluid mass fractions may cause undesirable spacecraft motion in response to the impulsive-acceleration-induced fluid motion. An example of this potential low gravity fluid management problem arose during the development of the shuttle/Centaur vehicle. Experimentally verified numerical modeling techniques were developed to establish the propellant dynamics, and subsequent vehicle motion, associated with the separation of the Centaur vehicle from the shuttle orbiter cargo bay. Although the shuttle/Centaur development activity was suspended, the numerical modeling techniques are available to predict on-orbit liquid motion resulting from impulsive accelerations for other missions and spacecraft.
Trescott, Peter C.; Pinder, George Francis; Larson, S.P.
1976-01-01
The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition the model includes options for reading input data from a disk and writing intermediate results on a disk.
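The report's solvers (SIP, iterative ADI, LSOR) all iterate on the same finite-difference flow equations. As a hedged illustration of the underlying linear problem, here is a point-SOR solve of steady confined flow with a single pumping well; grid size, transmissivity, and discharge are invented values, and point SOR is a simpler stand-in for the techniques actually documented:

```python
import numpy as np

def solve_head(n=15, T=100.0, q_well=-50.0, h_bc=10.0,
               omega=1.7, tol=1e-6, max_iter=5000):
    """Steady confined flow T * (h_xx + h_yy) + W = 0 on a square grid
    with fixed-head boundaries and one pumping well at the center,
    solved by point successive overrelaxation (a simplified stand-in
    for the SIP / iterative ADI / LSOR solvers in the report)."""
    h = np.full((n, n), h_bc)        # boundary nodes keep h_bc
    W = np.zeros((n, n))
    W[n // 2, n // 2] = q_well       # well discharge (sink) term
    dx = 1.0
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # five-point stencil; in-place updates give Gauss-Seidel,
                # omega > 1 accelerates it to SOR
                h_new = 0.25 * (h[i+1, j] + h[i-1, j] + h[i, j+1] + h[i, j-1]
                                + dx * dx * W[i, j] / T)
                change = omega * (h_new - h[i, j])
                h[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return h

h = solve_head()
print(h[7, 7] < h[0, 0])  # drawdown at the well relative to the boundary head
```

The documented solvers differ in how they factor or sweep this same system of simultaneous equations, not in the equations themselves.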
Numerical time-step restrictions as a result of capillary waves
NASA Astrophysics Data System (ADS)
Denner, Fabian; van Wachem, Berend G. M.
2015-03-01
The propagation of capillary waves on material interfaces between two fluids imposes a strict constraint on the numerical time-step applied to solve the equations governing this problem and is directly associated with the stability of interfacial flow simulations. The explicit implementation of surface tension is the generally accepted reason for the restrictions on the temporal resolution caused by capillary waves. In this article, a fully-coupled numerical framework with an implicit treatment of surface tension is proposed and applied, demonstrating that the capillary time-step constraint is in fact a constraint imposed by the temporal sampling of capillary waves, irrespective of the type of implementation. The presented results show that the capillary time-step constraint can be exceeded by several orders of magnitude, with the explicit as well as the implicit treatment of surface tension, if capillary waves are absent. Furthermore, a revised capillary time-step constraint is derived by studying the temporal resolution of capillary waves based on numerical stability and signal processing theory, including the Doppler shift caused by an underlying fluid motion. The revised capillary time-step constraint assures a robust, aliasing-free result, as demonstrated by representative numerical experiments, and is in the static case less restrictive than previously proposed time-step limits associated with capillary waves.
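For reference, the classic explicit capillary constraint that this paper revisits is usually written Δt ≤ √((ρ₁+ρ₂)Δx³/(4πσ)) (after Brackbill et al.); a small sketch evaluating it for nominal water/air properties (the coefficient and fluid values are standard textbook ones, not taken from this paper):

```python
import math

def capillary_dt(rho1, rho2, sigma, dx):
    """Classic capillary time-step limit:
    dt <= sqrt((rho1 + rho2) * dx**3 / (4 * pi * sigma)).
    The paper argues this is really a temporal-sampling constraint on
    the shortest resolved capillary wave, not an artifact of the
    explicit coupling of surface tension."""
    return math.sqrt((rho1 + rho2) * dx ** 3 / (4.0 * math.pi * sigma))

# water/air at a 0.1 mm mesh spacing (approximate properties, SI units)
dt = capillary_dt(rho1=1000.0, rho2=1.2, sigma=0.072, dx=1e-4)
print(f"{dt:.2e} s")
```

For these values the limit is on the order of 3×10⁻⁵ s; the paper's revised constraint additionally accounts for the Doppler shift from the underlying fluid motion.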
Zachry, Tiffany; Wulf, Gabriele; Mercer, John; Bezodis, Neil
2005-10-30
The performance and learning of motor skills have been shown to be enhanced if the performer adopts an external focus of attention (focus on the movement effect) compared to an internal focus (focus on the movements themselves) [G. Wulf, W. Prinz, Directing attention to movement effects enhances learning: a review, Psychon. Bull. Rev. 8 (2001) 648-660]. While most previous studies examining attentional focus effects have exclusively used performance outcome (e.g., accuracy) measures, in the present study electromyography (EMG) was used to determine neuromuscular correlates of external versus internal focus differences in movement outcome. Participants performed basketball free throws under both internal focus (wrist motion) and external focus (basket) conditions. EMG activity was recorded for m. flexor carpi radialis, m. biceps brachii, m. triceps brachii, and m. deltoid of each participant's shooting arm. The results showed that free throw accuracy was greater when participants adopted an external compared to an internal focus. In addition, EMG activity of the biceps and triceps muscles was lower with an external relative to an internal focus. This suggests that an external focus of attention enhances movement economy, and presumably reduces "noise" in the motor system that hampers fine movement control and makes the outcome of the movement less reliable.
Ambrus, Árpád; Buczkó, Judit; Hamow, Kamirán Á; Juhász, Viktor; Solymosné Majzik, Etelka; Szemánné Dobrik, Henriett; Szitás, Róbert
2016-08-10
Significant reduction in the concentration of some pesticide residues and a substantial increase in the uncertainty of the results caused by the homogenization of sample materials have long been reported in the scientific literature. Nevertheless, the performance of methods is frequently evaluated on the basis of recovery tests alone, which exclude sample processing. We studied the effect of sample processing on the accuracy and uncertainty of the measured residue values with lettuce, tomato, and maize grain samples, applying mixtures of selected pesticides. The results indicate that the method is simple and robust and applicable in any pesticide residue laboratory. The analytes remaining in the final extract are influenced by their physical-chemical properties, the nature of the sample material, the temperature of comminution of the sample, and the mass of the test portion extracted. Consequently, validation protocols should include testing the effect of sample processing, and the performance of the complete method should be regularly checked within internal quality control. PMID:26755282
NASA Astrophysics Data System (ADS)
Bozzoli, F.; Cattani, L.; Rainieri, S.; Zachár, A.
2015-11-01
In recent years, the attention of heat transfer equipment manufacturers has turned toward helically coiled-tube heat exchangers, especially for applications involving viscous and/or particulate products. Recent progress in numerical simulation has motivated many research groups to develop numerical models of this kind of apparatus. These models, intended both to improve understanding of the fundamental heat transfer mechanisms in curved geometries and to support the industrial design of such devices, are usually validated through comparison with theoretical or experimental evidence in terms of average heat transfer performance. However, this approach does not guarantee that the validated models reproduce local effects in detail, which are important in this kind of non-standard geometry. In the present paper a numerical model of convective heat transfer in coiled tubes in the laminar flow regime is formulated and discussed. Its validity was checked through comparison with the latest experimental results of Bozzoli et al. [1] in terms of the convective heat flux distribution along the boundary of the duct, ensuring the effectiveness of the model also in describing local behavior. Although the present paper reports only preliminary results of this simulation/validation process, it should be of interest to the research community because it proposes a novel approach that could be used to validate many numerical models for non-standard geometries.
A method for data handling numerical results in parallel OpenFOAM simulations
Anton, Alin; Muntean, Sebastian
2015-12-31
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
Comparison of numerical and experimental results of the flow in the U9 Kaplan turbine model
NASA Astrophysics Data System (ADS)
Petit, O.; Mulu, B.; Nilsson, H.; Cervantes, M.
2010-08-01
The present work compares simulations made using the OpenFOAM CFD code with experimental measurements of the flow in the U9 Kaplan turbine model. Comparisons of the velocity profiles in the spiral casing and in the draft tube are presented. The U9 Kaplan turbine prototype located in Porjus and its model, located in Älvkarleby, Sweden, have curved inlet pipes that lead the flow to the spiral casing. At present, this curved pipe and its effect on the flow in the turbine are not taken into account when numerical simulations are performed at the design stage. To study the impact of the inlet pipe curvature on the flow in the turbine, and to get a better overview of the flow in the whole system, measurements were made on the 1:3.1 model of the U9 turbine. Previously published measurements were taken at the inlet of the spiral casing and just before the guide vanes, using the laser Doppler anemometry (LDA) technique. In the draft tube, a number of velocity profiles were measured using the LDA technique. The present work extends the experimental investigation with a horizontal section at the inlet of the draft tube. The experimental results are used to specify the inlet boundary condition for the numerical simulations in the draft tube, and to validate the computational results in both the spiral casing and the draft tube. The numerical simulations were performed using the standard k-ε model and a block-structured hexahedral wall-function mesh.
NASA Technical Reports Server (NTRS)
Pline, Alexander D.; Wernet, Mark P.; Hsieh, Kwang-Chung
1991-01-01
The Surface Tension Driven Convection Experiment (STDCE) is a Space Transportation System flight experiment to study both transient and steady thermocapillary fluid flows aboard the United States Microgravity Laboratory-1 (USML-1) Spacelab mission planned for June, 1992. One of the components of data collected during the experiment is a video record of the flow field. This qualitative data is then quantified using an all electric, two dimensional Particle Image Velocimetry (PIV) technique called Particle Displacement Tracking (PDT), which uses a simple space domain particle tracking algorithm. Results using the ground based STDCE hardware, with a radiant flux heating mode, and the PDT system are compared to numerical solutions obtained by solving the axisymmetric Navier Stokes equations with a deformable free surface. The PDT technique is successful in producing a velocity vector field and corresponding stream function from the raw video data which satisfactorily represents the physical flow. A numerical program is used to compute the velocity field and corresponding stream function under identical conditions. Both the PDT system and numerical results were compared to a streak photograph, used as a benchmark, with good correlation.
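The "simple space domain particle tracking algorithm" behind PDT is not spelled out in the abstract; a hedged nearest-neighbour sketch of the general idea (the match threshold and particle coordinates are invented for illustration, and the flight algorithm's actual matching rules may differ):

```python
import numpy as np

def track_displacements(frame_a, frame_b, max_disp=5.0):
    """Minimal space-domain particle tracking in the spirit of PDT:
    match each particle centroid in frame A to its nearest neighbour
    in frame B and report the displacement vectors. Matches farther
    than max_disp are rejected as likely mis-pairings."""
    vectors = []
    for p in frame_a:
        d = np.linalg.norm(frame_b - p, axis=1)  # distances to all B particles
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            vectors.append((p, frame_b[j] - p))
    return vectors

# two invented particle positions seen in consecutive video frames
a = np.array([[0.0, 0.0], [10.0, 10.0]])
b = np.array([[1.0, 0.5], [11.0, 9.5]])
for pos, vec in track_displacements(a, b):
    print(pos, vec)
```

A velocity field follows by dividing each displacement by the interframe time; the stream function reported in the experiment is then integrated from that field.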
Recent Analytical and Numerical Results for The Navier-Stokes-Voigt Model and Related Models
NASA Astrophysics Data System (ADS)
Larios, Adam; Titi, Edriss; Petersen, Mark; Wingate, Beth
2010-11-01
The equations which govern the motions of fluids are notoriously difficult to handle both mathematically and computationally. Recently, a new approach to these equations, known as the Voigt-regularization, has been investigated as both a numerical and analytical regularization for the 3D Navier-Stokes equations, the Euler equations, and related fluid models. This inviscid regularization is related to the alpha-models of turbulent flow; however, it overcomes many of the problems present in those models. I will discuss recent work on the Voigt-regularization, as well as a new criterion for the finite-time blow-up of the Euler equations based on their Voigt-regularization. Time permitting, I will discuss some numerical results, as well as applications of this technique to the Magnetohydrodynamic (MHD) equations and various equations of ocean dynamics.
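For context, the Voigt regularization adds a −α²Δ∂ₜu term to the momentum equation. A sketch of the 3D Navier-Stokes-Voigt system as it usually appears in the literature (α > 0 is the regularization length scale; the exact form used in the talk may differ):

```latex
% 3D Navier--Stokes--Voigt: \alpha > 0 is the regularization length scale,
% \nu the kinematic viscosity; \nu = 0 gives the inviscid Euler--Voigt model.
\begin{aligned}
  -\alpha^{2}\,\Delta\partial_{t}u + \partial_{t}u - \nu\,\Delta u
    + (u\cdot\nabla)\,u + \nabla p &= f,\\
  \nabla\cdot u &= 0.
\end{aligned}
```

Unlike the viscous term, the Voigt term does not dissipate energy, which is why it is described as an inviscid regularization.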
Experimental and numerical results on the fluid flow driven by a traveling magnetic field
NASA Astrophysics Data System (ADS)
Lantzsch, R.; Galindo, V.; Grants, I.; Zhang, C.; Pätzold, O.; Gerbeth, G.; Stelter, M.
2007-07-01
A traveling magnetic field (TMF) driven flow and its transition from a laminar to a time-dependent flow is studied by means of ultrasonic Doppler velocimetry and numerical simulations. The experimental setup comprises a cylindrical cavity containing the electrically conducting model fluid GaInSn and a system of six equidistant coils, which are fed by an out-of-phase current to create an up- or downward directed TMF. Hence, a Lorentz force is induced in the melt which leads to meridional flow patterns. For numerical simulations commercial codes (Opera/Fidap) and a spectral code are used. The characteristic parameters of the magnetohydrodynamic model system are chosen close to the conditions used for vertical gradient freeze (VGF) crystal growth. The axisymmetric basic flow and its dependence on the dimensionless shielding parameter S are examined. It is shown that, for S>10, the flow velocity decreases significantly, whereas almost no influence is found for a smaller shielding parameter. The critical Reynolds number for the onset of instability is found in the range of 300-450. Good agreement between experimental results and the numerical simulations is achieved.
NASA Astrophysics Data System (ADS)
Zueco, Joaquín; López-González, Luis María
2016-04-01
We have studied decompression processes, in which pressure changes take place in blood and tissues, using a numerical technique based on an electrical analogy of the parameters involved in the problem. The particular problem analyzed is the dynamic behavior of extravascular bubbles formed in the intercellular cavities of a hypothetical tissue undergoing decompression. Numerical solutions are given for a system of equations simulating the gas exchange of bubbles after decompression, with particular attention paid to the effect of bubble size, nitrogen tension, nitrogen diffusivity in the intercellular fluid and in the tissue cell layer in the radial direction, nitrogen solubility, ambient pressure, and specific blood flow through the tissue on the different molar diffusion fluxes of nitrogen per unit time (through the bubble surface, between the intercellular fluid layer and blood, and between the intercellular fluid layer and the tissue cell layer). The system of nonlinear equations is solved using the Network Simulation Method, in which the electrical analogy is applied to convert these equations into a network-electrical model run in a circuit simulator (Pspice). In this paper, new numerical results, together with a network model improved through interdisciplinary electrical analogies, are provided.
Laboratory simulations of lidar returns from clouds: experimental and numerical results.
Zaccanti, G; Bruscaglioni, P; Gurioli, M; Sansoni, P
1993-03-20
The experimental results of laboratory simulations of lidar returns from clouds are presented. Measurements were carried out on laboratory-scaled cloud models by using a picosecond laser and a streak-camera system. The turbid structures simulating clouds were suspensions of polystyrene spheres in water. The geometrical situation was similar to that of an actual lidar sounding a cloud 1000 m distant and with a thickness of 300 m. Measurements were repeated for different concentrations and different sizes of spheres. The results show how the effect of multiple scattering depends on the scattering coefficient and on the phase function of the diffusers. The depolarization introduced by multiple scattering was also investigated. The results were also compared with numerical results obtained by Monte Carlo simulations. Substantially good agreement between numerical and experimental results was found. The measurements showed the adequacy of modern electro-optical systems to study the features of multiple-scattering effects on lidar echoes from atmosphere or ocean by means of experiments on well-controlled laboratory-scaled models. This adequacy provides the possibility of studying the influence of different effects in the laboratory in well-controlled situations.
Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei
2015-05-01
Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and its subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms such as multiplication or immigration. Identification of the true factor causing an increase in trap count is important as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem where different types of boundary forcing describe different scenarios of pest insect immigration and reveal the corresponding patterns in the trap count growth. PMID:25744607
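A hedged 1D sketch of the mean-field idea: diffusing insects are absorbed at a trap wall, and the trap count is the accumulated diffusive flux through it. The paper's 2D flux approximations are more involved; all parameter values here are illustrative:

```python
import numpy as np

def trap_count_1d(D=1.0, u0=1.0, L=10.0, nx=201, dt=2e-4, t_end=1.0):
    """Mean-field trap model sketch: diffusion u_t = D u_xx on [0, L],
    absorbing trap at x = 0 (u = 0), no-flux far wall at x = L.
    The trap count is the accumulated diffusive flux through x = 0.
    Explicit finite differences; requires D*dt/dx**2 <= 0.5."""
    dx = L / (nx - 1)
    u = np.full(nx, u0)
    u[0] = 0.0                    # absorbing boundary (the trap)
    r = D * dt / dx ** 2          # stability number, 0.08 for the defaults
    caught = 0.0
    for _ in range(int(t_end / dt)):
        caught += D * (u[1] - u[0]) / dx * dt   # flux into the trap
        un = u.copy()
        un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
        un[-1] = un[-2]           # no-flux far boundary
        un[0] = 0.0
        u = un
    return caught

caught = trap_count_1d()
print(f"trap count: {caught:.3f}")
```

For a semi-infinite medium the analytic count is 2u₀√(Dt/π) ≈ 1.13 at t = 1, which the sketch should approximate; the point the paper makes is that getting these boundary fluxes sufficiently accurate is the hard part, especially in 2D.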
O'Brien, James Edward; Sohal, Manohar Singh; Huff, George Albert
2002-08-01
A combined experimental and numerical investigation is under way to investigate heat transfer enhancement techniques that may be applicable to large-scale air-cooled condensers such as those used in geothermal power applications. The research is focused on whether air-side heat transfer can be improved through the use of fin-surface vortex generators (winglets), while maintaining low heat exchanger pressure drop. A transient heat transfer visualization and measurement technique has been employed in order to obtain detailed distributions of local heat transfer coefficients on model fin surfaces. Pressure drop measurements have also been acquired in a separate multiple-tube-row apparatus. In addition, numerical modeling techniques have been developed to allow prediction of local and average heat transfer for these low-Reynolds-number flows with and without winglets. Representative experimental and numerical results presented in this paper reveal quantitative details of local fin-surface heat transfer in the vicinity of a circular tube with a single delta winglet pair downstream of the cylinder. The winglets were triangular (delta) with a 1:2 height/length aspect ratio and a height equal to 90% of the channel height. Overall mean fin-surface Nusselt-number results indicate a significant level of heat transfer enhancement (average enhancement ratio 35%) associated with the deployment of the winglets with oval tubes. Pressure drop measurements have also been obtained for a variety of tube and winglet configurations using a single-channel flow apparatus that includes four tube rows in a staggered array. Comparisons of heat transfer and pressure drop results for the elliptical tube versus a circular tube with and without winglets are provided. Heat transfer and pressure-drop results have been obtained for flow Reynolds numbers based on channel height and mean flow velocity ranging from 700 to 6500.
2016-01-01
Abstract Objective: This study was designed to evaluate accuracy, performance, and safety of the Dexcom (San Diego, CA) G4® Platinum continuous glucose monitoring (CGM) system (G4P) compared with the Dexcom G4 Platinum with Software 505 algorithm (SW505) when used as adjunctive management to blood glucose (BG) monitoring over a 7-day period in youth, 2–17 years of age, with diabetes. Research Design and Methods: Youth wore either one or two sensors placed on the abdomen or upper buttocks for 7 days, calibrating the device twice daily with a uniform BG meter. Participants had one in-clinic session on Day 1, 4, or 7, during which fingerstick BG measurements (self-monitoring of blood glucose [SMBG]) were obtained every 30 ± 5 min for comparison with CGM, and in youth 6–17 years of age, reference YSI glucose measurements were obtained from arterialized venous blood collected every 15 ± 5 min for comparison with CGM. The sensor was removed by the participant/family after 7 days. Results: In comparison of 2,922 temporally paired points of CGM with the reference YSI measurement for G4P and 2,262 paired points for SW505, the mean absolute relative difference (MARD) was 17% for G4P versus 10% for SW505 (P < 0.0001). In comparison of 16,318 temporally paired points of CGM with SMBG for G4P and 4,264 paired points for SW505, MARD was 15% for G4P versus 13% for SW505 (P < 0.0001). Similarly, error grid analyses indicated superior performance with SW505 compared with G4P in comparison of CGM with YSI and CGM with SMBG results, with greater percentages of SW505 results falling within error grid Zone A or the combined Zones A plus B. There were no serious adverse events or device-related serious adverse events for either the G4P or the SW505, and there was no sensor breakoff. Conclusions: The updated algorithm offers substantial improvements in accuracy and performance in pediatric patients with diabetes. Use of CGM with improved performance has
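The headline accuracy metric in this study, MARD, is straightforward to compute from temporally paired sensor/reference readings; a sketch with invented example values (not data from the study):

```python
def mard(cgm, reference):
    """Mean absolute relative difference (MARD), the headline accuracy
    metric for CGM systems: mean of |cgm - ref| / ref, in percent.
    Lower is better; the study reports 10% (SW505) vs. 17% (G4P)
    against the YSI reference."""
    assert len(cgm) == len(reference)
    return 100.0 * sum(abs(c - r) / r for c, r in zip(cgm, reference)) / len(cgm)

# hypothetical paired readings (mg/dL): sensor vs. YSI reference
sensor = [110, 145, 98, 200, 87]
ysi    = [100, 150, 105, 210, 90]
print(round(mard(sensor, ysi), 1))  # prints 5.6
```

Error grid analysis, also used in the study, adds a clinical-risk dimension that MARD alone does not capture.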
Zhang, Zhe; Ober, Ulrike; Erbe, Malena; Zhang, Hao; Gao, Ning; He, Jinlong; Li, Jiaqi; Simianer, Henner
2014-01-01
Utilizing the whole genomic variation of complex traits to predict the yet-to-be observed phenotypes or unobserved genetic values via whole genome prediction (WGP) and to infer the underlying genetic architecture via genome wide association study (GWAS) is an interesting and fast developing area in the context of human disease studies as well as in animal and plant breeding. Though thousands of significant loci for several species were detected via GWAS in the past decade, they were not used directly to improve WGP due to lack of proper models. Here, we propose a generalized way of building trait-specific genomic relationship matrices which can exploit GWAS results in WGP via a best linear unbiased prediction (BLUP) model for which we suggest the name BLUP|GA. Results from two illustrative examples show that using already existing GWAS results from public databases in BLUP|GA improved the accuracy of WGP for two out of the three model traits in a dairy cattle data set, and for nine out of the 11 traits in a rice diversity data set, compared to the reference methods GBLUP and BayesB. While BLUP|GA outperforms BayesB, its required computing time is comparable to GBLUP. Further simulation results suggest that accounting for publicly available GWAS results is potentially more useful for WGP utilizing smaller data sets and/or traits of low heritability, depending on the genetic architecture of the trait under consideration. To our knowledge, this is the first study incorporating public GWAS results formally into the standard GBLUP model and we think that the BLUP|GA approach deserves further investigations in animal breeding, plant breeding as well as human genetics. PMID:24663104
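A hedged sketch of the core idea behind a trait-specific genomic relationship matrix: blend an ordinary VanRaden-style GRM with a marker-weighted one, where the weights come from GWAS evidence. The blending parameter and weight normalization below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gblup_grm(M):
    """Simplified VanRaden-style genomic relationship matrix from a
    marker matrix M (individuals x markers); scaling simplified to
    the marker count rather than 2*sum(p*(1-p))."""
    P = M - M.mean(axis=0)          # center each marker column
    return P @ P.T / M.shape[1]

def trait_specific_grm(M, gwas_weights, omega=0.5):
    """BLUP|GA-style blend: omega * (GWAS-weighted GRM) +
    (1 - omega) * (ordinary GRM). Weights are normalized to mean 1,
    so uniform weights recover the ordinary GRM."""
    P = M - M.mean(axis=0)
    w = np.asarray(gwas_weights, dtype=float)
    w = w / w.sum() * len(w)        # mean weight = 1
    G = P @ P.T / M.shape[1]
    G_ga = (P * w) @ P.T / M.shape[1]
    return omega * G_ga + (1 - omega) * G

M = np.array([[0., 1., 2.], [1., 1., 0.], [2., 0., 1.]])
G = trait_specific_grm(M, gwas_weights=[3.0, 1.0, 1.0], omega=0.5)
print(np.allclose(G, G.T))  # prints True: the blend stays symmetric
```

The resulting matrix drops into the standard GBLUP mixed-model equations unchanged, which is why the authors report computing time comparable to GBLUP.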
NASA Technical Reports Server (NTRS)
Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.
2004-01-01
Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset [Thompson et al., JGR, 108, 8238, 2003], we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined here in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).
NASA Astrophysics Data System (ADS)
Mori, Takuro; Nakatani, Makoto; Tesfamariam, Solomon
2015-12-01
This paper presents analytical and numerical models for a semirigid timber frame with Lagscrewbolt (LSB) connections. A series of static and reverse cyclic experimental tests were carried out for different beam sizes (400, 500, and 600 mm depth) and column-base connections with different numbers of LSBs (4, 5, 8). For the beam-column connections, moment resistance and stiffness increased with beam depth, while the ductility factor decreased. For the column-base connection, strength, stiffness, and ductility increased with the number of LSBs. A material model available in OpenSees, the Pinching4 hysteretic model, was calibrated against all connection test results. Finally, an analytical model of the portal frame was developed and compared with the experimental test results. Overall, there was good agreement with the experimental test results, and the Pinching4 hysteretic model can readily be used for full-scale structural models.
Numerical approach to constructing the lunar physical libration: results of the initial stage
NASA Astrophysics Data System (ADS)
Zagidullin, A.; Petrova, N.; Nefediev, Yu.; Usanin, V.; Glushkov, M.
2015-10-01
The so-called "main problem" is taken as the model for developing a numerical approach to the theory of lunar physical libration. For the chosen model there are both a good methodological basis and results obtained at Kazan University as an outcome of the construction of the analytical theory. Results of the first stage of the numerical approach are presented in this report. Three main limitations define the main problem: (1) the orbital and rotational motions of the Moon are considered independently; (2) the Moon is treated as a rigid body whose dynamical figure is described by the inertia ellipsoid, which specifies the mass distribution inside the Moon; (3) only gravitational interaction with the Earth and the Sun is considered. At this stage the expansion of the selenopotential is limited to the second harmonic only; inclusion of the 3rd- and 4th-order harmonics is the nearest task for the next stage. The full solution of the libration problem consists of removing the limitations listed above: consideration of the fine effects caused by planetary perturbations, by the visco-elastic properties of the lunar body, by the presence of a two-layer lunar core, by the Earth's obliquity, and by the rotation of the ecliptic, if it is taken as the reference plane.
NASA Astrophysics Data System (ADS)
Lahaye, Noé; Paci, Alexandre; Smith, Stefan Llewellyn
2016-04-01
We examine the instability of lenticular vortices -- or lenses -- in a stratified rotating fluid. The simplest configuration is one in which the lenses overlay a deep layer and have a free surface, and this can be studied using a two-layer rotating shallow water model. We report results from laboratory experiments and high-resolution direct numerical simulations of the destabilization of vortices with constant potential vorticity, and compare these to a linear stability analysis. The stability properties of the system are governed by two parameters: the typical upper-layer potential vorticity and the size (depth) of the vortex. Good agreement is found between analytical, numerical and experimental results for the growth rate and wavenumber of the instability. The nonlinear saturation of the instability is associated with conversion from potential to kinetic energy and weak emission of gravity waves, giving rise to the formation of coherent vortex multipoles with trapped waves. The impact of flow in the lower layer is examined. In particular, it is shown that the growth rate can be strongly affected and the instability can be suppressed for certain types of weak co-rotating flow.
Noninvasive assessment of mitral inertness: clinical results with numerical model validation
NASA Technical Reports Server (NTRS)
Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.
2001-01-01
Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
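The estimation idea can be sketched as follows: the convective part of the transmitral gradient follows from the Doppler velocity via the Bernoulli equation, and the inertial part (Mdv/dt) is what remains of the measured catheter gradient. This is a hedged illustration only; the constants, unit conversions, and array-based interface are assumptions, not taken from the study.

```python
import numpy as np

RHO = 1060.0              # blood density, kg/m^3 (illustrative)
MMHG_PER_PA = 1.0 / 133.322

def inertial_component(tm_gradient_mmhg, v_mitral):
    """Split a measured transmitral gradient into a convective
    Bernoulli part, 0.5*rho*v^2, and an inertial residual (Mdv/dt).
    tm_gradient_mmhg: measured transmitral gradient (mmHg), array
    v_mitral: Doppler transmitral velocity (m/s), array."""
    convective = 0.5 * RHO * np.asarray(v_mitral) ** 2 * MMHG_PER_PA
    inertial = np.asarray(tm_gradient_mmhg) - convective
    return convective, inertial
```

By construction the two parts sum back to the measured gradient, mirroring the abstract's finding that roughly three quarters of the gradient is inertial when velocities are modest.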
Numerical modelling of radon-222 entry into houses: an outline of techniques and results.
Andersen, C E
2001-05-14
Numerical modelling is a powerful tool for studies of soil gas and radon-222 entry into houses. It is the purpose of this paper to review some main techniques and results. In the past, modelling has focused on Darcy flow of soil gas (driven by indoor-outdoor pressure differences) and combined diffusive and advective transport of radon. Models of different complexity have been used. The simpler ones are finite-difference models with one or two spatial dimensions. The more complex models allow for full three-dimensionality and time dependence. Advanced features include: soil heterogeneity, anisotropy, fractures, moisture, non-uniform soil temperature, non-Darcy flow of gas, and flow caused by changes in the atmospheric pressure. Numerical models can be used to estimate the importance of specific factors for radon entry. Models are also helpful when results obtained in special laboratory or test structure experiments need to be extrapolated to more general situations (e.g. to real houses or even to other soil-gas pollutants). Finally, models provide a cost-effective test bench for improved designs of radon prevention systems. The paper includes a summary of transport equations and boundary conditions. As an illustrative example, radon entry is calculated for a standard slab-on-grade house.
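As a minimal illustration of the transport such models solve, the diffusion-only, steady-state radon profile below an open soil surface has a closed form that follows from the balance of generation, decay, and diffusion. The parameter values below are generic soil numbers chosen for illustration, not results from the paper.

```python
import math

def radon_depth_profile(z, D=2e-6, lam=2.1e-6, G=1e-2):
    """Steady-state, diffusion-only radon concentration (Bq/m^3) at
    depth z (m) below an open soil surface:
        c(z) = (G/lam) * (1 - exp(-z/l)),   l = sqrt(D/lam)
    D:   bulk diffusivity (m^2/s)
    lam: Rn-222 decay constant (1/s)
    G:   generation rate (Bq m^-3 s^-1)
    All defaults are illustrative soil values."""
    l = math.sqrt(D / lam)          # diffusion length, ~1 m here
    c_inf = G / lam                 # deep-soil equilibrium value
    return c_inf * (1.0 - math.exp(-z / l))
```

Advective (Darcy) entry driven by indoor-outdoor pressure differences adds a first-order derivative term to this balance, which is where the finite-difference machinery reviewed in the paper comes in.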
Re-Computation of Numerical Results Contained in NACA Report No. 496
NASA Technical Reports Server (NTRS)
Perry, Boyd, III
2015-01-01
An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a Matlab(Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.
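Central to the NACA 496 computations is Theodorsen's circulation function C(k), exactly a ratio of Hankel functions of the reduced frequency k. A common way to evaluate it without special functions is R. T. Jones' two-pole rational approximation, sketched below; this is an approximation for illustration, not the exact function used in NACA 496 or in the re-computation.

```python
def theodorsen_C_jones(k):
    """R. T. Jones' rational approximation to Theodorsen's function
    C(k) = F(k) + i*G(k), the circulation lag at the heart of the
    NACA 496 flutter theory:
        C(k) ~= 1 - 0.165/(1 - 0.0455j/k) - 0.335/(1 - 0.30j/k)
    Valid for reduced frequency k > 0; C -> 1 as k -> 0 and
    C -> 0.5 as k -> infinity."""
    return 1.0 - 0.165 / (1.0 - 0.0455j / k) - 0.335 / (1.0 - 0.30j / k)
```

The exact function, C(k) = H1(k)/(H1(k) + i*H0(k)) with Hankel functions of the second kind, can be evaluated with a special-function library when higher fidelity is needed.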
Cuzick, Jack; Pierotti, Paola; Cariaggi, Maria Paola; Palma, Paolo Dalla; Naldoni, Carlo; Ghiringhello, Bruno; Giorgi-Rossi, Paolo; Minucci, Daria; Parisio, Franca; Pojer, Ada; Schiboni, Maria Luisa; Sintoni, Catia; Zorzi, Manuel; Segnan, Nereo; Confortini, Massimo
2007-01-01
Objective To compare the accuracy of conventional cytology with liquid based cytology for primary screening of cervical cancer. Design Randomised controlled trial. Setting Nine screening programmes in Italy. Participants Women aged 25-60 attending for a new screening round: 22 466 were assigned to the conventional arm and 22 708 were assigned to the experimental arm. Interventions Conventional cytology compared with liquid based cytology and testing for human papillomavirus. Main outcome measure Relative sensitivity for cervical intraepithelial neoplasia of grade 2 or more at blindly reviewed histology, with atypical cells of undetermined significance or more severe cytology considered a positive result. Results In an intention to screen analysis liquid based cytology showed no significant increase in sensitivity for cervical intraepithelial neoplasia of grade 2 or more (relative sensitivity 1.17, 95% confidence interval 0.87 to 1.56) whereas the positive predictive value was reduced (relative positive predictive value v conventional cytology 0.58, 0.44 to 0.77). Liquid based cytology detected more lesions of grade 1 or more (relative sensitivity 1.68, 1.40 to 2.02), with a larger increase among women aged 25-34 (P for heterogeneity 0.0006), but did not detect more lesions of grade 3 or more (relative sensitivity 0.84, 0.56 to 1.25). Results were similar when only low grade intraepithelial lesions or more severe cytology were considered a positive result. No evidence was found of heterogeneity between centres or of improvement with increasing time from start of the study. The relative frequency of women with at least one unsatisfactory result was lower with liquid based cytology (0.62, 0.56 to 0.69). Conclusion Liquid based cytology showed no statistically significant difference in sensitivity to conventional cytology for detection of cervical intraepithelial neoplasia of grade 2 or more. More positive results were found, however, leading to a lower positive predictive value.
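The relative sensitivities quoted above are ratios of detection rates between the two arms. A minimal sketch of such a rate ratio with a log-scale confidence interval is given below; this is a standard approximation for two independent rates, and the trial's exact interval method (not stated in the abstract) may differ.

```python
import math

def relative_sensitivity(d1, n1, d0, n0, z=1.96):
    """Ratio of detection rates between two screening arms with an
    approximate 95% CI computed on the log scale.
    d1, n1: lesions detected / women screened, experimental arm
    d0, n0: same for the conventional (reference) arm."""
    rr = (d1 / n1) / (d0 / n0)
    # standard error of log(rate ratio) for binomial counts
    se = math.sqrt(1.0/d1 - 1.0/n1 + 1.0/d0 - 1.0/n0)
    lo = rr * math.exp(-z * se)
    hi = rr * math.exp(z * se)
    return rr, lo, hi
```

With counts of this trial's magnitude, an interval like 0.87 to 1.56 straddling 1 is exactly what "no significant increase in sensitivity" means.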
Interpretation of high-dimensional numerical results for the Anderson transition
Suslov, I. M.
2014-12-15
The existence of the upper critical dimension d_c2 = 4 for the Anderson transition is a rigorous consequence of the Bogoliubov theorem on renormalizability of φ^4 theory. For d ≥ 4 dimensions, one-parameter scaling does not hold and all existing numerical data should be reinterpreted. These data are exhausted by the results for d = 4, 5 from scaling in quasi-one-dimensional systems and the results for d = 4, 5, 6 from level statistics. All these data are compatible with the theoretical scaling dependences obtained from Vollhardt and Wolfle's self-consistent theory of localization. The widespread viewpoint that d_c2 = ∞ is critically discussed.
Chaoticity threshold in magnetized plasmas: Numerical results in the weak coupling regime
Carati, A.; Benfenati, F.; Maiocchi, A.; Galgani, L.; Zuin, M.
2014-03-15
The present paper is a numerical counterpart to the theoretical work [Carati et al., Chaos 22, 033124 (2012)]. We are concerned with the transition from order to chaos in a one-component plasma (a system of point electrons with mutual Coulomb interactions, in a uniform neutralizing background), the plasma being immersed in a uniform stationary magnetic field. In the paper [Carati et al., Chaos 22, 033124 (2012)], it was predicted that a transition should take place when the electron density is increased or the field decreased in such a way that the ratio ω_p/ω_c between plasma and cyclotron frequencies becomes of order 1, irrespective of the value of the so-called Coulomb coupling parameter Γ. Here, we perform numerical computations for a first principles model of N point electrons in a periodic box, with mutual Coulomb interactions, using as a probe for chaoticity the time-autocorrelation function of magnetization. We consider two values of Γ (0.04 and 0.016) in the weak coupling regime Γ ≪ 1, with N up to 512. A transition is found to occur for ω_p/ω_c in the range between 0.25 and 2, in fairly good agreement with the theoretical prediction. These results might be of interest for the problem of the breakdown of plasma confinement in fusion machines.
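The chaoticity probe used here, the normalized time-autocorrelation of the magnetization time series, can be sketched in a few lines: slowly decaying or oscillatory correlations indicate ordered motion, while rapid decay indicates chaos. The implementation below is a generic estimator, not the paper's code.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized time-autocorrelation of a scalar signal x for lags
    0..max_lag-1: C(k) = <x(t) x(t+k)> / <x^2>, with the mean removed.
    Returns an array with C(0) = 1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    var = np.dot(x, x) / n
    return np.array([np.dot(x[:n - k], x[k:]) / ((n - k) * var)
                     for k in range(max_lag)])
```

Applied to a magnetization series from molecular dynamics, a correlation time can be read off this curve and tracked as ω_p/ω_c is varied across the predicted transition.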
Verification of Numerical Weather Prediction Model Results for Energy Applications in Latvia
NASA Astrophysics Data System (ADS)
Sīle, Tija; Cepite-Frisfelde, Daiga; Sennikovs, Juris; Bethers, Uldis
2014-05-01
A resolution to increase the production and consumption of renewable energy has been made by EU governments. Most of the renewable energy in Latvia is produced by Hydroelectric Power Plants (HPP), followed by bio-gas, wind power and bio-mass energy production. Wind and HPP power production is sensitive to meteorological conditions. Currently the basis of weather forecasting is Numerical Weather Prediction (NWP) models. There are numerous methodologies concerning the evaluation of quality of NWP results (Wilks 2011) and their application can be conditional on the forecast end user. The goal of this study is to evaluate the performance of Weather Research and Forecast model (Skamarock 2008) implementation over the territory of Latvia, focusing on forecasting of wind speed and quantitative precipitation forecasts. The target spatial resolution is 3 km. Observational data from Latvian Environment, Geology and Meteorology Centre are used. A number of standard verification metrics are calculated. The sensitivity to the model output interpretation (output spatial interpolation versus nearest gridpoint) is investigated. For the precipitation verification the dichotomous verification metrics are used. Sensitivity to different precipitation accumulation intervals is examined. Skamarock, William C. and Klemp, Joseph B. A time-split nonhydrostatic atmospheric model for weather research and forecasting applications. Journal of Computational Physics. 227, 2008, pp. 3465-3485. Wilks, Daniel S. Statistical Methods in the Atmospheric Sciences. Third Edition. Academic Press, 2011.
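The dichotomous verification of precipitation forecasts mentioned above reduces each threshold exceedance to a 2x2 contingency table of hits, misses, false alarms, and correct negatives. The metric set below (POD, FAR, frequency bias, ETS) follows the standard definitions in Wilks (2011); the study's exact selection of metrics is not listed in the abstract.

```python
def dichotomous_scores(hits, misses, false_alarms, correct_negatives):
    """Standard yes/no verification metrics from a 2x2 contingency
    table (Wilks 2011): probability of detection (POD), false alarm
    ratio (FAR), frequency bias, and equitable threat score (ETS)."""
    a, c, b, d = hits, misses, false_alarms, correct_negatives
    n = a + b + c + d
    pod = a / (a + c)                     # fraction of events forecast
    far = b / (a + b)                     # fraction of forecasts that failed
    bias = (a + b) / (a + c)              # over/under-forecasting ratio
    a_random = (a + b) * (a + c) / n      # hits expected by chance
    ets = (a - a_random) / (a + b + c - a_random)
    return {"POD": pod, "FAR": far, "bias": bias, "ETS": ets}
```

Recomputing these scores for several precipitation accumulation intervals, as the study does, shows how apparent skill depends on the chosen threshold and accumulation window.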
NASA Astrophysics Data System (ADS)
Carrano, Charles S.; Rino, Charles L.
2016-06-01
We extend the power law phase screen theory for ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component inverse power law spectrum. The two-component model includes, as special cases, an unmodified power law and a modified power law with spectral break that may assume the role of an outer scale, intermediate break scale, or inner scale. As such, it provides a framework for investigating the effects of a spectral break on the scintillation statistics. Using this spectral model, we solve the fourth moment equation governing intensity variations following propagation through two-dimensional field-aligned irregularities in the ionosphere. A specific normalization is invoked that exploits self-similar properties of the structure to achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. The numerical algorithm is validated using new theoretical predictions for the behavior of the scintillation index and intensity correlation length under strong scatter conditions. A series of numerical experiments are conducted to investigate the morphologies of the intensity spectrum, scintillation index, and intensity correlation length as functions of the spectral indices and strength of scatter; retrieve phase screen parameters from intensity scintillation observations; explore the relative contributions to the scintillation due to large- and small-scale ionospheric structures; and quantify the conditions under which a general spectral break will influence the scintillation statistics.
Barbati, Alexander C; Kirby, Brian J
2016-07-01
We derive an approximate analytical representation of the conductivity for a 1D system with porous and charged layers grafted onto parallel plates. Our theory improves on prior work by developing approximate analytical expressions applicable over an arbitrary range of potentials, both large and small as compared to the thermal voltage (RT/F). Further, we describe these results in a framework of simplifying nondimensional parameters, indicating the relative dominance of various physicochemical processes. We demonstrate the efficacy of our approximate expression with comparisons to numerical representations of the exact analytical conductivity. Finally, we utilize this conductivity expression, in concert with other components of the electrokinetic coupling matrix, to describe the streaming potential and electroviscous effect in systems with porous and charged layers.
Interacting steps with finite-range interactions: Analytical approximation and numerical results
NASA Astrophysics Data System (ADS)
Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.
2013-05-01
We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
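For context, terrace-width distributions in the nearest-neighbor limit are commonly compared against the generalized Wigner surmise with unit mean width; the paper's finite-range model generalizes beyond this form. The sketch below fixes the two constants by normalization and unit mean, which is the standard convention in the TWD literature.

```python
import math

def generalized_wigner(s, rho):
    """Generalized Wigner surmise for the terrace-width distribution,
        P(s) = a * s**rho * exp(-b * s**2),
    with a and b chosen so that P integrates to 1 and has unit mean.
    rho encodes the effective step-step repulsion strength
    (rho = 2 for the free-fermion case)."""
    b = (math.gamma((rho + 2) / 2) / math.gamma((rho + 1) / 2)) ** 2
    a = 2.0 * b ** ((rho + 1) / 2) / math.gamma((rho + 1) / 2)
    return a * s ** rho * math.exp(-b * s * s)
```

Fitting rho to measured or simulated P(s) is one standard way to extract the scale-setting interaction term the abstract refers to, before finite-range corrections are considered.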
Solar flare model: Comparison of the results of numerical simulations and observations
NASA Astrophysics Data System (ADS)
Podgorny, I. M.; Vashenyuk, E. V.; Podgorny, A. I.
2009-12-01
The electrodynamic flare model is based on numerical 3D simulations with the real magnetic field of an active region. An energy of ~10^32 erg necessary for a solar flare is shown to accumulate in the magnetic field of a coronal current sheet. The thermal X-ray source in the corona results from plasma heating in the current sheet upon reconnection. The hard X-ray sources are located on the solar surface at the loop foot-points. They are produced by the precipitation of electron beams accelerated in field-aligned currents. Solar cosmic rays appear upon acceleration in the electric field along a singular magnetic X-type line. The generation mechanism of the delayed cosmic-ray component is also discussed.
NASA Astrophysics Data System (ADS)
Xu, Hengyi; Heinzel, T.; Zozoulenko, I. V.
2011-09-01
We derive analytical expressions for the conductivity of bilayer graphene (BLG) using the Boltzmann approach within the Born approximation for a model of Gaussian disorders describing both short- and long-range impurity scattering. The range of validity of the Born approximation is established by comparing the analytical results to exact tight-binding numerical calculations. A comparison of the obtained density dependencies of the conductivity with experimental data shows that the BLG samples investigated experimentally so far are in the quantum scattering regime where the Fermi wavelength exceeds the effective impurity range. In this regime both short- and long-range scattering lead to the same linear density dependence of the conductivity. Our calculations imply that bilayer and single-layer graphene have the same scattering mechanisms. We also provide an upper limit for the effective, density-dependent spatial extension of the scatterers present in the experiments.
NASA Astrophysics Data System (ADS)
Milošević, M.; Dimitrijević, D. D.; Djordjević, G. S.; Stojanović, M. D.
2016-06-01
The role tachyon fields may play in the evolution of the early universe is discussed in this paper. We consider the evolution of a flat and homogeneous universe governed by a tachyon scalar field with a DBI-type action and calculate the slow-roll parameters of inflation, the scalar spectral index (n), and the tensor-to-scalar ratio (r) for the given potentials. We pay special attention to the inverse power-law potential, first of all V(x) ~ x^{-4}, and compare the available results obtained by analytical and numerical methods with those obtained by observation. It is shown that the computed values of the observational parameters are in good agreement with the observed ones for high values of the constant X_0. The possibility that the influence of the radion field can extend the range of acceptable values of the constant X_0 to the string-theory-motivated sector of its values is briefly considered.
ERIC Educational Resources Information Center
Henle, James M.
This pamphlet consists of 17 brief chapters, each containing a discussion of a numeration system and a set of problems on the use of that system. The numeration systems used include Egyptian fractions, ordinary continued fractions and variants of that method, and systems using positive and negative bases. The book is informal and addressed to…
Lima da Silva, M.; Sauvage, E.; Brun, P.; Gagnoud, A.; Fautrelle, Y.; Riva, R.
2013-07-01
The process of vitrification in a cold crucible heated by direct induction is used in the fusion of oxides. Its distinctive feature is the production of high-purity materials: this melting technique excludes contamination of the charge by the crucible. The aim of the present paper is to analyze the hydrodynamics of the vitrification process by direct induction, with a focus on the effects associated with the interaction between the mechanical stirrer and bubbling. Considering the complexity of the analyzed system and the goal of the present work, we simplified the system by not taking into account thermal and electromagnetic phenomena. Based on the concept of hydraulic similitude, we performed an experimental study and a numerical modeling of the simplified model. The results of these two studies were compared and showed good agreement. The results presented in this paper, in conjunction with previous work, contribute to a better understanding of the hydrodynamic effects resulting from the interaction between the mechanical stirrer and air bubbling in a cold crucible heated by direct induction. Further work will take into account thermal and electromagnetic phenomena in the presence of the mechanical stirrer and air bubbling. (authors)
NASA Astrophysics Data System (ADS)
Dimitropoulos, Costas D.; Beris, Antony N.; Sureshkumar, R.; Handler, Robert A.
1998-11-01
This work continues our attempts to elucidate theoretically the mechanism of polymer-induced drag reduction through direct numerical simulations of turbulent channel flow, using an independently evaluated rheological model for the polymer stress. Using appropriate scaling to accommodate effects due to viscoelasticity reveals that there exists a great consistency in the results for different combinations of the polymer concentration and chain extension. This helps demonstrate that our observations are applicable to very dilute systems, currently not possible to simulate. It also reinforces the hypothesis that one of the prerequisites for the phenomenon of drag reduction is sufficiently enhanced extensional viscosity, corresponding to the level of intensity and duration of extensional rates typically encountered during the turbulent flow. Moreover, these results motivate a study of the turbulence structure at larger Reynolds numbers and for different periodic computational cell sizes. In addition, the Reynolds stress budgets demonstrate that flow elasticity adversely affects the activities represented by the pressure-strain correlations, leading to a redistribution of turbulent kinetic energy amongst all directions. Finally, we discuss the influence of viscoelasticity in reducing the production of streamwise vorticity.
NASA Astrophysics Data System (ADS)
Beniaiche, Ahmed; Ghenaiet, Adel; Carcasci, Carlo; Facchini, Bruno
2016-05-01
This paper presents a numerical validation of the aero-thermal study of a 30:1 scaled model reproducing an innovative trailing edge with one row of enlarged pedestals under stationary and rotating conditions. A CFD analysis was performed by means of the commercial ANSYS-Fluent code, modeling an isothermal air flow with the k-ω SST turbulence model for both static and rotating conditions (Ro up to 0.23). The numerical model is validated first by comparing the numerical velocity profile distributions to those obtained experimentally by means of the PIV technique for Re = 20,000 and Ro = 0-0.23. The second validation is based on a comparison of the numerical 2D HTC maps over the heated plate to TLC experimental data for a smooth surface, for Reynolds numbers of 20,000 and 40,000 and Ro = 0-0.23. Two tip conditions were considered: open tip and closed tip. Results for the average Nusselt number inside the pedestal duct region are presented as well. The obtained results help to visualize the flow field and to evaluate the aero-thermal performance of the studied blade cooling system during the design step.
Liberatore, S.; Jaouen, S.; Tabakhoff, E.; Canaud, B.
2009-04-15
Magnetic Rayleigh-Taylor instability is addressed in compressible hydrostatic media. A full model is presented and compared to numerical results from a linear perturbation code. A perfect agreement between both approaches is obtained in a wide range of parameters. Compressibility effects are examined and substantial deviations from classical Chandrasekhar growth rates are obtained and confirmed by the model and the numerical calculations.
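The "classical Chandrasekhar growth rates" that the compressible results are compared against can be sketched from the standard incompressible, sharp-interface dispersion relation, with the magnetic tension of the field component along the wavevector opposing the buoyancy drive. The formula below is that textbook result; the input values in the test are illustrative, not the paper's parameters.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, SI

def mrt_growth_rate(k, g, rho_heavy, rho_light, B=0.0):
    """Classical incompressible magnetic Rayleigh-Taylor growth rate
    (sharp interface, field B along the wavevector k):
        gamma^2 = g*k*A - 2*(k*B)^2 / (mu0*(rho1 + rho2)),
    A = (rho1 - rho2)/(rho1 + rho2) the Atwood number.
    Returns gamma (1/s), or 0.0 if magnetic tension stabilizes the mode."""
    rho_sum = rho_heavy + rho_light
    A = (rho_heavy - rho_light) / rho_sum
    gamma2 = g * k * A - 2.0 * (k * B) ** 2 / (MU0 * rho_sum)
    return math.sqrt(gamma2) if gamma2 > 0.0 else 0.0
```

Compressibility and hydrostatic stratification, the subject of the paper, modify this rate, which is precisely the deviation the linear perturbation code quantifies.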
Kurihara, M.; Sato, A.; Funatsu, K.; Ouchi, H.; Masuda, Y.; Narita, H.; Collett, T.S.
2011-01-01
Targeting the methane hydrate (MH) bearing units C and D at the Mount Elbert prospect on the Alaska North Slope, four MDT (Modular Dynamic Formation Tester) tests were conducted in February 2007. The C2 MDT test was selected for history-matching simulation in the MH Simulator Code Comparison Study. Through history-matching simulation, the physical and chemical properties of unit C were adjusted, which suggested the most likely reservoir properties of this unit. Based on these tuned properties, numerical models replicating a "Mount Elbert C2 zone like reservoir," a "PBU L-Pad like reservoir," and a "PBU L-Pad down dip like reservoir" were constructed. The long-term production performances of wells in these reservoirs were then forecasted, assuming MH dissociation and production by depressurization, by a combination of depressurization and wellbore heating, and by hot water huff and puff. The predicted cumulative gas production ranges from 2.16 × 10^6 m^3/well to 8.22 × 10^8 m^3/well, depending mainly on the initial temperature of the reservoir and on the production method. This paper describes the details of the modeling and history-matching simulation. It also presents the results of examinations of the effects of reservoir properties on MH dissociation and production performance under the depressurization and thermal methods. © 2010 Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Klapp, J.; Cervantes-Cota, J.; Chauvet, P.
1990-11-01
A common belief in cosmology is that gravitational radiation in considerable quantities is being produced within the galaxies. If gravitational radiation production has been occurring since the galaxy formation epoch, at least, its cosmological effects can be assessed with simplicity and elegance by representing the production of radiation and, therefore, its interaction with ordinary matter phenomenologically through a polytropic equation of state, as shown already elsewhere. We present in this paper the numerical results of such a model. Key words: COSMOLOGY - GRAVITATION
NASA Astrophysics Data System (ADS)
Gorczyk, W.; Vogt, K.; Gerya, T.; Hobbs, B. E.
2012-12-01
It is becoming increasingly apparent that intense deformation, metamorphism and metasomatism occur within continental cratonic blocks far removed from subducting margins. Such changes may occur intra-cratonically, arising from lithospheric thickening and the development of gravitational instabilities, but mostly occur at the boundaries of cratonic blocks. The contact of two cratons is characterized by lateral rheological variations within the mantle lithosphere and overlying crust. Tectonic stresses acting on craton/craton boundaries may lead to thinning or thickening due to delamination of the mantle lithosphere. This is reflected in tectonic deformation, topography evolution, melting and crustal metamorphism. To understand the controls on these processes, a number of 2D coupled petrological-thermo-mechanical numerical experiments have been performed to test the response of a laterally weakened zone to a compressional regime. The results indicate that the presence of water-bearing minerals in the lithosphere and lower crust is essential to initiate melting, which in the later stages may expand to dry melting of the crust and mantle. In the case of anhydrous crust and lithosphere, no melting occurs. Thus a variety of instabilities, melting behaviour and topographic responses occur at the base of the lithosphere, as well as intensive faulting and buckling in the crust, depending on the strength and "water" content of the lithosphere.
Cole, Richard W; Thibault, Marc; Bayles, Carol J; Eason, Brady; Girard, Anne-Marie; Jinadasa, Tushare; Opansky, Cynthia; Schulz, Katherine; Brown, Claire M
2013-12-01
As part of an ongoing effort to increase image reproducibility and fidelity, in addition to improving cross-instrument consistency, we have proposed using four separate instrument quality tests to augment the ones we have previously reported. These four tests assessed the following areas: (1) objective lens quality, (2) resolution, (3) accuracy of the wavelength information from spectral detectors, and (4) the accuracy and quality of spectral separation algorithms. Data were received from 55 laboratories located in 18 countries. The largest source of errors across all tests was user error, which could be subdivided into failure to follow provided protocols and improper use of the microscope. This truly emphasizes the importance of proper, rigorous training and diligence in performing confocal microscopy experiments and equipment evaluations. It should be noted that there was no discernible difference in quality between confocal microscope manufacturers. These tests, as well as others previously reported, will help assess the quality of confocal microscopy equipment and will provide a means to track equipment performance over time. From 62 to 97% of the data sets sent in passed the various tests, demonstrating the usefulness and appropriateness of these tests as part of a larger performance testing regimen.
NASA Astrophysics Data System (ADS)
Chan, P. W.
2009-03-01
The Hong Kong International Airport (HKIA) is situated in an area of complex terrain. Turbulent flow due to terrain disruption could occur in the vicinity of HKIA when winds from east to southwest climb over Lantau Island, a mountainous island to the south of the airport. Low-level turbulence is an aviation hazard to the aircraft flying into and out of HKIA. It is closely monitored using remote-sensing instruments including Doppler LIght Detection And Ranging (LIDAR) systems and wind profilers in the airport area. Forecasting of low-level turbulence by numerical weather prediction models would be useful in the provision of timely turbulence warnings to the pilots. The feasibility of forecasting eddy dissipation rate (EDR), a measure of turbulence intensity adopted in the international civil aviation community, is studied in this paper using the Regional Atmospheric Modelling System (RAMS). Super-high resolution simulation (within the regime of large eddy simulation) is performed with a horizontal grid size down to 50 m for some typical cases of turbulent airflow at HKIA, such as spring-time easterly winds in a stable boundary layer and gale-force southeasterly winds associated with a typhoon. Sensitivity of the simulation results with respect to the choice of turbulent kinetic energy (TKE) parameterization scheme in RAMS is also examined. RAMS simulation with Deardorff (1980) TKE scheme is found to give the best result in comparison with actual EDR observations. It has the potential for real-time forecasting of low-level turbulence in short-term aviation applications (viz. for the next several hours).
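For illustration, the EDR diagnostic discussed above can be sketched from model TKE output via a standard closure; the constant `c_eps` and the mixing length below are hypothetical placeholders, not the RAMS/Deardorff parameterization used in the paper.

```python
def eddy_dissipation_rate(tke, length_scale, c_eps=0.93):
    """Cube root of the TKE dissipation rate, with epsilon ~ c_eps * k**1.5 / l.

    tke: turbulent kinetic energy per unit mass (m^2/s^2)
    length_scale: turbulence mixing length (m)
    c_eps: illustrative closure constant (assumed, not the RAMS value)
    """
    eps = c_eps * tke ** 1.5 / length_scale
    return eps ** (1.0 / 3.0)
```

EDR (units m^(2/3) s^(-1)) is the intensity measure reported to pilots; the thresholds separating light, moderate, and severe turbulence are set by ICAO guidance rather than by the model itself.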
A Hydrodynamic Theory for Spatially Inhomogeneous Semiconductor Lasers. 2; Numerical Results
NASA Technical Reports Server (NTRS)
Li, Jianzhong; Ning, C. Z.; Biegel, Bryan A. (Technical Monitor)
2001-01-01
We present numerical results of the diffusion coefficients (DCs) in the coupled diffusion model derived in the preceding paper for a semiconductor quantum well. These include self and mutual DCs in the general two-component case, as well as density- and temperature-related DCs under the single-component approximation. The results are analyzed from the viewpoint of free Fermi gas theory with many-body effects incorporated. We discuss in detail the dependence of these DCs on densities and temperatures in order to identify different roles played by the free carrier contributions including carrier statistics and carrier-LO phonon scattering, and many-body corrections including bandgap renormalization and electron-hole (e-h) scattering. In the general two-component case, it is found that the self- and mutual- diffusion coefficients are determined mainly by the free carrier contributions, but with significant many-body corrections near the critical density. Carrier-LO phonon scattering is dominant at low density, but e-h scattering becomes important in determining their density dependence above the critical electron density. In the single-component case, it is found that many-body effects suppress the density coefficients but enhance the temperature coefficients. The modification is of the order of 10% and reaches a maximum of over 20% for the density coefficients. Overall, temperature elevation enhances the diffusive capability or DCs of carriers linearly, and such an enhancement grows with density. Finally, the complete dataset of various DCs as functions of carrier densities and temperatures provides necessary ingredients for future applications of the model to various spatially inhomogeneous optoelectronic devices.
Numerical and experimental results on the spectral wave transfer in finite depth
NASA Astrophysics Data System (ADS)
Benassai, Guido
2016-04-01
Determination of the form of the one-dimensional surface gravity wave spectrum in water of finite depth is important for many scientific and engineering applications. Spectral parameters of deep-water and intermediate-depth waves serve as input data for the design of all coastal structures and for the description of many coastal processes. Moreover, the wave spectra are given as input for the response and seakeeping calculations of high-speed vessels in extreme sea conditions and for reliable calculations of the amount of energy to be extracted by wave energy converters (WEC). Available data on finite-depth spectral form are generally extrapolated from parametric forms applicable in deep water (e.g., JONSWAP) [Hasselmann et al., 1973; Mitsuyasu et al., 1980; Kahma, 1981; Donelan et al., 1992; Zakharov, 2005]. The present paper contributes to this field through the validation of the offshore energy spectrum transfer from given spectral forms against measured inshore wave heights and spectra. The deep-water wave spectra were recorded offshore Ponza by the Wave Measurement Network (Piscopia et al., 2002). Field regressions of the spectral parameters, fp and the nondimensional energy, against fetch length were evaluated for fetch-limited sea conditions. These regressions gave the values of the spectral parameters for the site of interest. The offshore wave spectra were transferred from the measurement station offshore Ponza to a site offshore the Gulf of Salerno. The local offshore wave spectra so obtained were transferred to the coastline with the TMA model (Bouws et al., 1985). Finally, the numerical results, in terms of significant wave heights, were compared with the wave data recorded by a meteo-oceanographic station owned by the Naples Hydrographic Office on the coastline of Salerno at 9 m depth. Some considerations about the wave energy potentially extractable by Wave Energy Converters were made and the results discussed.
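The deep-to-finite-depth transfer described above can be sketched as a JONSWAP spectrum attenuated by the Kitaigorodskii depth factor used in the TMA model (in the piecewise approximation of Thompson and Vincent). The parameter values below (alpha, gamma, the peak frequency, the 9 m depth) are illustrative assumptions, not the regressions fitted for the Salerno site.

```python
import numpy as np

def jonswap(f, fp, alpha=0.0081, gamma=3.3, g=9.81):
    """One-dimensional JONSWAP frequency spectrum S(f) in m^2/Hz."""
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    return (alpha * g ** 2 * (2.0 * np.pi) ** -4 * f ** -5
            * np.exp(-1.25 * (fp / f) ** 4) * gamma ** r)

def tma_factor(f, h, g=9.81):
    """Kitaigorodskii depth-attenuation factor (piecewise approximation)."""
    w = 2.0 * np.pi * f * np.sqrt(h / g)   # nondimensional frequency
    return np.where(w <= 1.0, 0.5 * w ** 2,
                    np.where(w < 2.0, 1.0 - 0.5 * (2.0 - w) ** 2, 1.0))

f = np.linspace(0.05, 0.5, 200)            # frequency grid in Hz
s_deep = jonswap(f, fp=0.1)                # deep-water spectrum
s_9m = s_deep * tma_factor(f, h=9.0)       # depth-limited spectrum at 9 m
```

Since the depth factor never exceeds one, the transferred spectrum is bounded above by the deep-water spectrum at every frequency, as expected physically.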
NASA Astrophysics Data System (ADS)
Barnes, T.
In this article we review numerical studies of the quantum Heisenberg antiferromagnet on a square lattice, which is a model of the magnetic properties of the undoped “precursor insulators” of the high temperature superconductors. We begin with a brief pedagogical introduction and then discuss zero and nonzero temperature properties and compare the numerical results to analytical calculations and to experiment where appropriate. We also review the various algorithms used to obtain these results, and discuss algorithm developments and improvements in computer technology which would be most useful for future numerical work in this area. Finally we list several outstanding problems which may merit further investigation.
NASA Astrophysics Data System (ADS)
Aguiar, P.; González-Castaño, D. M.; Gómez, F.; Pardo-Montero, J.
2014-10-01
Liquid-filled ionisation chambers (LICs) are used in radiotherapy for dosimetry and quality assurance. Volume recombination can be quite important in LICs at moderate dose rates, causing non-linearities in the dose-rate response of these detectors, and needs to be corrected for. This effect is usually described with the Greening and Boag models for continuous and pulsed radiation, respectively. Such models assume that the charge is carried by two different species, positive and negative ions, each with a given mobility. However, LICs operating in non-ultrapure mode can contain different types of electronegative impurities with different mobilities, thus increasing the number of distinct charge carriers. In that case, the Greening and Boag models may no longer be valid and need to be reformulated. In this work we present a theoretical and numerical study of volume recombination in parallel-plate LICs with multiple charge-carrier species, extending the Boag and Greening models. Results from a recent publication that reported three different mobilities in an isooctane-filled LIC have been used to study the effect of extra carrier species on recombination. We have found that in pulsed beams the inclusion of extra mobilities does not much affect volume recombination, a behaviour that was expected because the Boag formula for charge-collection efficiency does not depend on the mobilities of the charge carriers if the Debye relationship between mobilities and the recombination constant holds. This is not the case in continuous radiation, where the presence of extra charge-carrier species significantly affects the amount of volume recombination.
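For reference, the Boag pulsed-beam collection efficiency mentioned above has a closed form that indeed contains no explicit mobility dependence. This sketch treats the dimensionless parameter u as given, rather than deriving it from the chamber geometry, voltage, and dose per pulse as the full model does.

```python
import math

def boag_efficiency(u):
    """Two-species Boag charge-collection efficiency for pulsed beams:
    f = ln(1 + u) / u, where u is proportional to the released charge
    density times the squared electrode gap over the polarizing voltage.
    Only u appears; individual ion mobilities drop out under the Debye relation.
    """
    if u <= 0.0:
        return 1.0  # limit of full collection as u -> 0
    return math.log1p(u) / u
```

As expected, the efficiency decreases monotonically with u, approaching 1 (full collection) for small pulses and falling toward 0 for very large released charge densities.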
Preliminary results of numerical investigations at SECARB Cranfield, MS field test site
NASA Astrophysics Data System (ADS)
Choi, J.; Nicot, J.; Meckel, T. A.; Chang, K.; Hovorka, S. D.
2008-12-01
The Southeast Regional Carbon Sequestration Partnership sponsored by DOE has chosen the Cranfield, MS field as a test site for its Phase II experiment. It will provide information on CO2 storage in oil and gas fields, in particular on storage permanence, storage capacity, and pressure buildup, as well as on sweep efficiency. The 10,300 ft-deep reservoir produced 38 MMbbl of oil and 677 MMSCF of gas from the 1940's to the 1960's and is being retrofitted by Denbury Resources for tertiary recovery. CO2 injection started in July 2008 with a scheduled ramp-up during the following months. The Cranfield modeling team selected the northern section of the field for development of a numerical model using the multiphase-flow, compositional CMG-GEM software. Model structure was determined through interpretation of logs from old and recently drilled wells and geophysical data. PETREL was used to upscale and export permeability and porosity data to the GEM model. Preliminary sensitivity analyses determined that relative permeability parameters and oil composition had the largest impact on CO2 behavior. The first modeling step consisted of history-matching the total oil, gas, and water production out of the reservoir, starting from its natural state, to determine the approximate current conditions of the reservoir. The fact that pressure recovered in the 40-year interval since the end of initial production helps in constraining boundary conditions. In a second step, the modeling focused on understanding pressure evolution and CO2 transport in the reservoir. The presentation will introduce preliminary results of the simulations and confirm/explain discrepancies with field measurements.
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla
2015-11-01
Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, which typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, -0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ -0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit
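The mismatches quoted above are one minus the overlap: a normalized inner product between two waveforms, maximized over relative time and phase shifts. A minimal sketch assuming a flat (white) noise spectrum follows; the actual computation weights the inner product by the detector noise PSD, which is omitted here.

```python
import numpy as np

def overlap(h1, h2):
    """Overlap of two equal-length real waveforms, maximized over circular
    time shifts and overall phase, under a flat noise PSD assumption."""
    H1, H2 = np.fft.fft(h1), np.fft.fft(h2)
    corr = np.fft.ifft(H1 * np.conj(H2))   # inner product as a function of time shift
    norm = np.sqrt(np.sum(np.abs(H1) ** 2) * np.sum(np.abs(H2) ** 2))
    return len(h1) * np.max(np.abs(corr)) / norm

def mismatch(h1, h2):
    """Mismatch = 1 - overlap; 'a few percent' means values around 0.01-0.05."""
    return 1.0 - overlap(h1, h2)
```

By construction the overlap of a waveform with any time-shifted copy of itself is 1, so the mismatch isolates genuine phasing and amplitude disagreements between models.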
NASA Astrophysics Data System (ADS)
Gliko, A. O.; Molodenskii, S. M.
2015-01-01
) are not only capable of significantly changing the magnitude of the radial displacements of the geoid but also altering their sign. Moreover, even in the uniform Earth's model, the effects of sphericity of its external surface and self-gravitation can also provide a noticeable contribution, which determines the signs of the coefficients in the expansion of the geoid's shape in the lower-order spherical functions. In order to separate these effects, below we present the results of the numerical calculations of the total effects of thermoelastic deformations for the two simplest models of spherical Earth without and with self-gravitation with constant density and complex-valued shear moduli and for the real Earth PREM model (which describes the depth distributions of density and elastic moduli for the high-frequency oscillations disregarding the rheology of the medium) and the modern models of the mantle rheology. Based on the calculations, we suggest the simplest interpretation of the present-day data on the relationship between the coefficients of spherical expansion of temperature, velocities of seismic body waves, the topography of the Earth's surface and geoid, and the data on the correlation between the lower-order coefficients in the expansions of the geoid and the corresponding terms of the expansions of horizontal inhomogeneities in seismic velocities. The suggested interpretation includes the estimates of the sign and magnitude for the ratios between the first coefficients of spherical expansions of seismic velocities, topography, and geoid. The presence of this correlation and the relationship between the signs and absolute values of these coefficients suggests that both the long-period oscillations of the geoid and the long-period variations in the velocities of seismic body waves are largely caused by thermoelastic deformations.
Chaotic scattering in an open vase-shaped cavity: Topological, numerical, and experimental results
NASA Astrophysics Data System (ADS)
Novick, Jaison Allen
We present a study of trajectories in a two-dimensional, open, vase-shaped cavity in the absence of forces. The classical trajectories freely propagate between elastic collisions. Bound trajectories, regular scattering trajectories, and chaotic scattering trajectories are present in the vase. Most importantly, we find that classical trajectories passing through the vase's mouth escape without return. In our simulations, we propagate bursts of trajectories from point sources located along the vase walls. We record the time for escaping trajectories to pass through the vase's neck. Constructing a plot of escape time versus the initial launch angle for the chaotic trajectories reveals a vastly complicated recursive structure, or fractal. This fractal structure can be understood by a suitable coordinate transform. Reducing the dynamics to two dimensions reveals that the chaotic dynamics are organized by a homoclinic tangle, which is formed by the union of infinitely long, intersecting stable and unstable manifolds. This study is broken down into three major components. We first present a topological theory that extracts the essential topological information from a finite subset of the tangle and encodes this information in a set of symbolic dynamical equations. These equations can be used to predict a topologically forced minimal subset of the recursive structure seen in numerically computed escape-time plots. We present three applications of the theory and compare these predictions to our simulations. The second component is a presentation of an experiment in which the vase was constructed from Teflon walls using an ultrasound transducer as a point source. We compare the escaping signal to a classical simulation and find agreement between the two. Finally, we present an approximate solution to the time-independent Schrödinger equation for escaping waves. We choose a set of points at which to evaluate the wave function and interpolate trajectories connecting the source
NASA Astrophysics Data System (ADS)
Heinze, Thomas; Galvan, Boris; Miller, Stephen
2013-04-01
Fluid-rock interactions are mechanically fundamental to many earth processes, including fault zones and hydrothermal/volcanic systems, and to future green energy solutions such as enhanced geothermal systems and carbon capture and storage (CCS). Modeling these processes is challenging because of the strong coupling between rock fracture evolution and the consequent large changes in the hydraulic properties of the system. In this talk, we present results of a numerical model that includes poro-elastic plastic rheology (with hardening, softening, and damage), coupled to a non-linear diffusion model for fluid pressure propagation and two-phase fluid flow. Our plane-strain model is based on the poro-elastic plastic behavior of porous rock and is advanced with hardening, softening and damage using the Mohr-Coulomb failure criterion. The effective stress model of Biot (1944) is used for coupling the pore pressure and the rock behavior. Frictional hardening and cohesion softening are introduced following Vermeer and de Borst (1984), with the angle of internal friction and the cohesion as functions of the principal strain rates. The scalar damage coefficient is assumed to be a linear function of the hardening parameter. Fluid injection is modeled as a two-phase mixture of water and air using the Richards equation. The theoretical model is solved using finite differences on a staggered grid. The model is benchmarked with laboratory-scale experiments in which fluid is injected from below into a critically stressed, dry sandstone (Stanchits et al. 2011). We simulate three experiments: a) the failure of a dry specimen due to biaxial compressive loading, b) the propagation of a low-pressure fluid front induced from the bottom in a critically stressed specimen, and c) the failure of a critically stressed specimen due to a high-pressure fluid intrusion. Comparison of model results with the fluid injection experiments shows that the model captures most of the experimental
Sakai, Y.; Hawkins, R.J.; Friberg, S.R.
1990-02-15
Using analytic theory and numerical experiments, we show that a quantum nondemolition measurement of the photon number of optical solitons in a single-mode optical fiber can be made. We describe the soliton-collision interferometer with which we propose to make this measurement and discuss simulations of the performance of this interferometer.
NASA Astrophysics Data System (ADS)
Ohno, Munekazu; Takaki, Tomohiro; Shibuta, Yasushi
2016-01-01
We present the variational formulation of a quantitative phase-field model for isothermal low-speed solidification in a binary dilute alloy with diffusion in the solid. In the present formulation, cross-coupling terms between the phase field and composition field, including the so-called antitrapping current, naturally arise in the time evolution equations. One of the essential ingredients in the present formulation is the utilization of tensor diffusivity instead of scalar diffusivity. In an asymptotic analysis, it is shown that the correct mapping between the present variational model and a free-boundary problem for alloy solidification with an arbitrary value of solid diffusivity is successfully achieved in the thin-interface limit due to the cross-coupling terms and tensor diffusivity. Furthermore, we investigate the numerical performance of the variational model and also its nonvariational versions by carrying out two-dimensional simulations of free dendritic growth. The nonvariational model with tensor diffusivity shows excellent convergence of results with respect to the interface thickness.
Chaotic structures of nonlinear magnetic fields. I - Theory. II - Numerical results
NASA Technical Reports Server (NTRS)
Lee, Nam C.; Parks, George K.
1992-01-01
A study of the evolutionary properties of nonlinear magnetic fields in flowing MHD plasmas is presented to illustrate that nonlinear magnetic fields may involve chaotic dynamics. It is shown how a suitable transformation of the coupled equations leads to Duffing's form, suggesting that the behavior of the general solution can also be chaotic. Numerical solutions of the nonlinear magnetic field equations that have been cast in the form of Duffing's equation are presented.
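As an illustration of the reduction described above, Duffing's equation x'' + δx' + αx + βx³ = γ cos(ωt) can be integrated directly; the default parameter values below are a standard chaotic test case, not values derived from the MHD field equations in the paper.

```python
import numpy as np

def duffing_rk4(alpha=-1.0, beta=1.0, delta=0.2, gamma=0.3, omega=1.0,
                x0=0.1, v0=0.0, dt=0.01, steps=10000):
    """Integrate x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)
    with classical fourth-order Runge-Kutta; returns an array of (x, v) states."""
    def f(t, y):
        x, v = y
        return np.array([v, gamma * np.cos(omega * t)
                         - delta * v - alpha * x - beta * x ** 3])
    y = np.array([x0, v0])
    traj = [y]
    for n in range(steps):
        t = n * dt
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(y)
    return np.array(traj)
```

Sampling the returned trajectory once per forcing period gives the Poincaré section whose fractal structure is the usual signature of chaos in this system.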
Numerical model of the lowermost Mississippi River as an alluvial-bedrock reach: preliminary results
NASA Astrophysics Data System (ADS)
Viparelli, E.; Nittrouer, J. A.; Mohrig, D. C.; Parker, G.
2012-12-01
Recent field studies reveal that the river bed of the Lower Mississippi River is characterized by a transition from alluvium (upstream) to bedrock (downstream). In particular, in the downstream 250 km of the river, fields of actively migrating bedforms alternate with deep zones where a consolidated substratum is exposed. Here we present a first version of a one-dimensional numerical model able to capture the alluvial-bedrock transition in the lowermost Mississippi River, defined herein as the 500-km reach between the Old River Control Structure and the Gulf of Mexico. The flow is assumed to be steady, and the cross-section is divided in two regions, the river channel and the floodplain. The streamwise variation of channel and floodplain geometry is described with synthetic relations derived from field observations. Flow resistance in the river channel is computed with the formulation for low-slope, large sand bed rivers due to Wright and Parker, while a Chezy-type formulation is implemented on the floodplain. Sediment is modeled in terms of bed material and wash load. Suspended load is computed with the Wright-Parker formulation. This treatment allows either uniform sediment or a mixture of different grain sizes, and accounts for stratification effects. Bedload transport rates are estimated with the relation for sediment mixtures of Ashida and Michiue. Previous work documents reasonable agreement between these load relations and field measurements. Washload is routed through the system solving the equation of mass conservation of sediment in suspension in the water column. The gradual transition from the alluvial reach to the bedrock reach is modeled in terms of a "mushy" layer of specified thickness overlying the non-erodible substrate. In the case of a fully alluvial reach, the channel bed elevation is above this mushy layer, while in the case of partial alluvial cover of the substratum, the channel bed elevation is within the mushy layer. Variations in base
Ponderomotive stabilization of flute modes in mirrors: Feedback control and numerical results
NASA Technical Reports Server (NTRS)
Similon, P. L.
1987-01-01
Ponderomotive stabilization of rigid plasma flute modes is numerically investigated by use of a variational principle, for a simple geometry, without the eikonal approximation. While the near field of the studied antenna can be stabilizing, the far field makes only a small contribution because of large cancellation by quasi-mode-coupling terms. The field energy required for stabilization is evaluated and is a non-negligible fraction of the plasma thermal energy. A new antenna design is proposed, and feedback stabilization is investigated; together, these drastically reduce the power requirements.
Estimation of geopotential from satellite-to-satellite range rate data: Numerical results
NASA Technical Reports Server (NTRS)
Thobe, Glenn E.; Bose, Sam C.
1987-01-01
A technique for high-resolution geopotential field estimation by recovering the harmonic coefficients from satellite-to-satellite range rate data is presented and tested against both a controlled analytical simulation of a one-day satellite mission (maximum degree and order 8) and then against a Cowell method simulation of a 32-day mission (maximum degree and order 180). Innovations include: (1) a new frequency-domain observation equation based on kinetic energy perturbations which avoids much of the complication of the usual Keplerian element perturbation approaches; (2) a new method for computing the normalized inclination functions which unlike previous methods is both efficient and numerically stable even for large harmonic degrees and orders; (3) the application of a mass storage FFT to the entire mission range rate history; (4) the exploitation of newly discovered symmetries in the block diagonal observation matrix which reduce each block to the product of (a) a real diagonal matrix factor, (b) a real trapezoidal factor with half the number of rows as before, and (c) a complex diagonal factor; (5) a block-by-block least-squares solution of the observation equation by means of a custom-designed Givens orthogonal rotation method which is both numerically stable and tailored to the trapezoidal matrix structure for fast execution.
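For orientation, a Givens-rotation least-squares solve of the kind mentioned in item (5) can be sketched generically; the paper's custom routine additionally exploits the trapezoidal structure of each diagonal block, which this dense, unoptimized version does not attempt.

```python
import numpy as np

def givens_lstsq(A, b):
    """Least-squares solve of A x ~= b (m >= n, full column rank) by zeroing
    the subdiagonal with numerically stable Givens rotations, then
    back-substituting on the resulting upper-triangular factor."""
    R = A.astype(float).copy()
    y = b.astype(float).copy()
    m, n = R.shape
    for j in range(n):                       # column by column
        for i in range(j + 1, m):            # annihilate R[i, j]
            r = np.hypot(R[j, j], R[i, j])
            if r == 0.0:
                continue
            c, s = R[j, j] / r, R[i, j] / r
            G = np.array([[c, s], [-s, c]])  # 2x2 rotation acting on rows j and i
            R[[j, i], j:] = G @ R[[j, i], j:]
            y[[j, i]] = G @ y[[j, i]]
    return np.linalg.solve(R[:n, :n], y[:n])
```

Because each rotation is orthogonal, the residual norm is preserved at every step, which is what makes the Givens approach numerically stable for large, ill-conditioned observation matrices.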
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor
2013-04-01
Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values and evaluation of the dependence of the residual values on the input parameters. These tests have been repeated on the real data, supplemented with the categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, based on the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is
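The core operation that the segmentation repeats for every planar facet, a least-squares plane fit with per-point residuals, can be sketched as follows; the actual software's robust weighting and region-growing logic are not reproduced here.

```python
import numpy as np

def fit_plane(points):
    """Total least-squares plane through an (N, 3) array of 3D points.

    Returns (unit normal, centroid, signed point-to-plane residuals).
    The normal is the singular vector of the centered cloud with the
    smallest singular value, i.e. the direction of least variance.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    return normal, centroid, residuals
```

The residuals returned here are exactly the quantities whose distribution the statistical tests above (Chi-square, Kolmogorov-Smirnov) examine for normality.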
Interaction of a mantle plume and a segmented mid-ocean ridge: Results from numerical modeling
NASA Astrophysics Data System (ADS)
Georgen, Jennifer E.
2014-04-01
Previous investigations have proposed that changes in lithospheric thickness across a transform fault, due to the juxtaposition of seafloor of different ages, can impede lateral dispersion of an on-ridge mantle plume. The application of this “transform damming” mechanism has been considered for several plume-ridge systems, including the Reunion hotspot and the Central Indian Ridge, the Amsterdam-St. Paul hotspot and the Southeast Indian Ridge, the Cobb hotspot and the Juan de Fuca Ridge, the Iceland hotspot and the Kolbeinsey Ridge, the Afar plume and the ridges of the Gulf of Aden, and the Marion/Crozet hotspot and the Southwest Indian Ridge. This study explores the geodynamics of the transform damming mechanism using a three-dimensional finite element numerical model. The model solves the coupled steady-state equations for conservation of mass, momentum, and energy, including thermal buoyancy and viscosity that is dependent on pressure and temperature. The plume is introduced as a circular thermal anomaly on the bottom boundary of the numerical domain. The center of the plume conduit is located directly beneath a spreading segment, at a distance of 200 km (measured in the along-axis direction) from a transform offset with length 100 km. Half-spreading rate is 0.5 cm/yr. In a series of numerical experiments, the buoyancy flux of the modeled plume is progressively increased to investigate the effects on the temperature and velocity structure of the upper mantle in the vicinity of the transform. Unlike earlier studies, which suggest that a transform always acts to decrease the along-axis extent of plume signature, these models imply that the effect of a transform on plume dispersion may be complex. Under certain ranges of plume flux modeled in this study, the region of the upper mantle undergoing along-axis flow directed away from the plume could be enhanced by the three-dimensional velocity and temperature structure associated with ridge
Recent numerical results on double-layer simulation in high-intensity laser--plasma interaction
Szichman, H.
1988-06-01
Numerical studies on dynamic electric fields and double layers created inside plasmas irradiated at laser intensities of 10^17 and 10^18 W/cm^2 were carried out using a macroscopic two-fluid model including nonlinear forces and the complete intensity-dependent optical response for heating and dielectric force effects. This was possible only at the cost of longer computation times, since the temporal and spatial step sizes had to be reduced accordingly. Electrostatic fields as high as 10^9 and 10^10 V/cm, respectively, were obtained for the two laser intensities, and the coupling of irradiated electromagnetic waves to generate longitudinal Langmuir waves is shown to be possible for the first time. The development and production of the well-known density minima (cavitons) due to nonlinear forces is also confirmed, their prominent appearance being in direct relation to the stronger effect of the high irradiances applied.
Distribution of Steps with Finite-Range Interactions: Analytic Approximations and Numerical Results
NASA Astrophysics Data System (ADS)
González, Diego Luis; Jaramillo, Diego Felipe; Téllez, Gabriel; Einstein, T. L.
2013-03-01
While most Monte Carlo simulations assume that only nearest-neighbor steps interact elastically, most analytic frameworks (especially the generalized Wigner distribution) posit that each step elastically repels all others. In addition to the elastic repulsions, we allow for possible surface-state-mediated interactions. We investigate analytically and numerically how next-nearest neighbor (NNN) interactions and, more generally, interactions out to the q'th nearest neighbor alter the form of the terrace-width distribution and of pair correlation functions (i.e., the sum over n'th neighbor distribution functions), which we investigated recently [2]. For physically plausible interactions, we find modest changes when NNN interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
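The generalized Wigner distribution referred to above has the closed form P_rho(s) = a s^rho exp(-b s^2), with b and a fixed by the requirements of unit mean and normalization. A small sketch that verifies both constraints by quadrature (standard constants of the Wigner surmise, not the paper's extended-interaction fits):

```python
import numpy as np
from math import gamma

def wigner_twd(s, rho):
    """Generalized Wigner surmise P_rho(s) = a*s^rho*exp(-b*s^2),
    with b and a fixed so the distribution is normalized with unit mean."""
    b = (gamma((rho + 2) / 2) / gamma((rho + 1) / 2)) ** 2
    a = 2 * b ** ((rho + 1) / 2) / gamma((rho + 1) / 2)
    return a * s ** rho * np.exp(-b * s ** 2)

def quad(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

s = np.linspace(0.0, 8.0, 40001)
for rho in (1, 2, 4):  # the classic random-matrix repulsion exponents
    P = wigner_twd(s, rho)
    print(rho, quad(P, s), quad(s * P, s))  # both integrals -> 1
```

Here s is the terrace width scaled by its mean, and rho parametrizes the strength of the step-step repulsion.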
NASA Astrophysics Data System (ADS)
Blecka, Maria I.
2010-05-01
Passive remote spectrometric methods are important for examining the atmospheres of planets. Radiance spectra inform us about the values of thermodynamic parameters and the composition of atmospheres and surfaces. Spectral technology can be useful for detecting trace aerosols, such as biological substances (if present), in planetary environments. We discuss here some aspects of the spectroscopic search for aerosols and dust in planetary atmospheres. The possibility of detecting and identifying biological aerosols with a passive infrared spectrometer in an open-air environment is discussed. We present numerically simulated spectroscopic observations of the Earth's atmosphere, based on radiative transfer theory. Laboratory measurements of the transmittance of various kinds of aerosols, pollens, and bacteria were used in the modeling.
NASA Technical Reports Server (NTRS)
Aveiro, H. C.; Hysell, D. L.; Caton, R. G.; Groves, K. M.; Klenzing, J.; Pfaff, R. F.; Stoneback, R.; Heelis, R. A.
2012-01-01
A three-dimensional numerical simulation of plasma density irregularities in the postsunset equatorial F region ionosphere leading to equatorial spread F (ESF) is described. The simulation evolves under realistic background conditions including bottomside plasma shear flow and vertical current. It also incorporates C/NOFS satellite data which partially specify the forcing. A combination of generalized Rayleigh-Taylor instability (GRT) and collisional shear instability (CSI) produces growing waveforms with key features that agree with C/NOFS satellite and ALTAIR radar observations in the Pacific sector, including features such as gross morphology and rates of development. The transient response of CSI is consistent with the observation of bottomside waves with wavelengths close to 30 km, whereas the steady state behavior of the combined instability can account for the 100+ km wavelength waves that predominate in the F region.
Ding, Lei; Van Renterghem, Timothy; Botteldooren, Dick; Horoshenkov, Kirill; Khan, Amir
2013-12-01
The influence of loose plant leaves on the acoustic absorption of a porous substrate is experimentally and numerically studied. Such systems are typical in vegetative walls, where the substrate has strong acoustical absorbing properties. Both experiments in an impedance tube and theoretical predictions show that when a leaf is placed in front of such a porous substrate, its absorption characteristics markedly change (for normal incident sound). Typically, the low frequency absorption coefficient (below 250 Hz) is unaffected, the middle frequency absorption coefficient (500-2000 Hz) increases, and the absorption at higher frequencies decreases. The influence of leaves becomes most pronounced when the substrate has a low mass density. A combination of Biot's elastic frame porous model, viscous damping in the leaf boundary layers, and plate vibration theory is implemented via a finite-difference time-domain model, which is able to predict accurately the absorption spectrum of a leaf above a porous substrate system. The change in the absorption spectrum caused by the leaf vibration can be modeled reasonably well assuming the leaf and porous substrate properties are uniform.
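For normal incidence, the absorption coefficient measured in an impedance tube follows from the surface impedance via the pressure reflection coefficient. A minimal sketch, assuming standard air properties (not the paper's measured leaf/substrate data):

```python
import numpy as np

def absorption_coefficient(Z, rho0=1.21, c0=343.0):
    """Normal-incidence absorption from the (complex) surface impedance Z.
    rho0*c0 is the characteristic impedance of air."""
    Z0 = rho0 * c0
    R = (Z - Z0) / (Z + Z0)      # pressure reflection coefficient
    return 1.0 - np.abs(R) ** 2  # absorbed fraction of incident power

# a perfectly matched surface absorbs everything; a rigid wall absorbs nothing
print(absorption_coefficient(1.21 * 343.0))  # -> 1.0
print(absorption_coefficient(1e12))          # -> ~0 (rigid limit)
```

A leaf in front of the substrate changes Z frequency by frequency, which is how it reshapes the absorption spectrum.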
Mazza, Fabio; Vulcano, Alfonso
2008-07-08
For a widespread application of dissipative braces to protect framed buildings against seismic loads, practical and reliable design procedures are needed. In this paper a design procedure based on the Direct Displacement-Based Design approach is adopted, assuming the elastic lateral storey-stiffness of the damped braces proportional to that of the unbraced frame. To check the effectiveness of the design procedure, presented in an associate paper, a six-storey reinforced concrete plane frame, representative of a medium-rise symmetric framed building, is considered as primary test structure; this structure, designed in a medium-risk region, is supposed to be retrofitted as in a high-risk region, by insertion of diagonal braces equipped with hysteretic dampers. A numerical investigation is carried out to study the nonlinear static and dynamic responses of the primary and the damped braced test structures, using step-by-step procedures described in the associate paper mentioned above; the behaviour of frame members and hysteretic dampers is idealized by bilinear models. Real and artificial accelerograms, matching EC8 response spectrum for a medium soil class, are considered for dynamic analyses.
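Two elementary quantities in such a displacement-based procedure can be sketched as follows (illustrative formulas under standard DDBD assumptions, not the full procedure of the associate paper):

```python
import math

def equivalent_sdof_stiffness(m_eff, T_eff):
    """Effective (secant) stiffness of the substitute SDOF structure,
    k_eff = 4*pi^2*m_eff/T_eff^2, a standard DDBD step."""
    return 4.0 * math.pi ** 2 * m_eff / T_eff ** 2

def brace_stiffness(k_frame_storeys, ratio):
    """Damped-brace lateral stiffness taken proportional to the unbraced
    frame's storey stiffness, as assumed above."""
    return [ratio * k for k in k_frame_storeys]
```

The proportionality constant `ratio` is the design choice that distributes the damped braces over the height of the frame.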
NASA Astrophysics Data System (ADS)
Mazza, Fabio; Vulcano, Alfonso
2008-07-01
Preliminary Results from Numerical Experiments on the Summer 1980 Heat Wave and Drought
NASA Technical Reports Server (NTRS)
Wolfson, N.; Atlas, R.; Sud, Y. C.
1985-01-01
During the summer of 1980, a prolonged heat wave and drought affected the United States. A preliminary set of experiments has been conducted to study the effect of varying boundary conditions on the GLA model simulation of the heat wave. Five 10-day numerical integrations with three different specifications of boundary conditions were carried out: a control experiment, which utilized climatological boundary conditions; an SST experiment, which utilized summer 1980 sea-surface temperatures in the North Pacific but climatological values elsewhere; and a Soil Moisture experiment, which utilized the values of Mintz-Serafini for summer 1980. The starting dates for the five forecasts were 11 June, 7 July, 21 July, 22 August, and 6 September of 1980. These dates were specifically chosen as days when a heat wave was already established, in order to investigate the effect of soil moisture or North Pacific sea-surface temperatures on the model's ability to maintain the heat wave pattern. The experiments were evaluated in terms of the heat wave index for the South Plains, North Plains, Great Plains, and the entire U.S. In addition, a subjective comparison of map patterns was performed.
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples were utilized as trabecular bone-mimicking phantoms for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for use in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include the energy-dissipative mechanisms of ultrasonic attenuation; however, as expected, they simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.
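Attenuation and time-of-flight estimates for such comparisons are commonly obtained by the substitution (through-transmission) method: the spectrum and arrival time of the pulse through the sample are compared with a water-only reference. A hedged sketch of that processing, not the authors' exact pipeline:

```python
import numpy as np

def attenuation_spectrum(ref, sam, fs, d_cm):
    """Insertion-loss attenuation (dB/cm) by the substitution method:
    spectral ratio of the reference (water) and sample signals."""
    n = len(ref)
    f = np.fft.rfftfreq(n, 1 / fs)
    A_ref = np.abs(np.fft.rfft(ref))
    A_sam = np.abs(np.fft.rfft(sam))
    return f, 20 * np.log10(A_ref / A_sam) / d_cm

def tof_velocity(t_water, t_sample, d, c_water=1480.0):
    """Time-of-flight velocity from the arrival-time shift through a sample
    of thickness d: 1/v = 1/c_water - (t_water - t_sample)/d."""
    return 1.0 / (1.0 / c_water - (t_water - t_sample) / d)
```

A sample faster than water arrives earlier than the water reference, so the shift t_water - t_sample is positive and the inferred velocity exceeds c_water.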
NASA Astrophysics Data System (ADS)
Li, Baishou; Gao, Yujiu
2015-12-01
Information extracted from high spatial resolution remote sensing images has become one of the important data sources for updating large-scale GIS spatial databases. Because of the large volume of regional high spatial resolution satellite image data, monitoring building information with high-resolution remote sensing, extracting small-scale building information, and analyzing its quality have become important preconditions for applying high-resolution satellite image information. In this paper, a clustering segmentation classification evaluation method for high resolution satellite images of typical rural buildings is proposed, based on the traditional K-Means clustering algorithm. Separability and building density factors were used to describe the image classification characteristics of the clustering window. The sensitivity of the factors that influence the clustering result was studied from the perspective of the separability between target and background spectra in the image. This study showed that sample size is an important factor influencing clustering accuracy and performance; that the pixel ratio of the objects in images and the separation factor can be used to determine the specific impact of cluster-window subsets on clustering accuracy; and that the count of window target pixels (Nw) does not by itself affect clustering accuracy. The results can provide an effective reference for quality assessment of the segmentation and classification of high spatial resolution remote sensing images.
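The clustering step can be illustrated with a plain K-Means pass over pixel feature vectors. This is a generic sketch of the underlying algorithm, not the authors' evaluation pipeline:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-Means on an (n, d) array of pixel feature vectors,
    with farthest-point initialization to avoid duplicate seeds."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))].astype(float)]
    for _ in range(1, k):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        # assign every pixel to its nearest center, then recompute centers
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

For image segmentation, X would hold per-pixel spectral values (one row per pixel), and the resulting labels form the cluster map whose quality is being assessed.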
Hidden modes in open disordered media: analytical, numerical, and experimental results
NASA Astrophysics Data System (ADS)
Bliokh, Yury P.; Freilikher, Valentin; Shi, Z.; Genack, A. Z.; Nori, Franco
2015-11-01
We explore numerically, analytically, and experimentally the relationship between quasi-normal modes (QNMs) and transmission resonance (TR) peaks in the transmission spectrum of one-dimensional (1D) and quasi-1D open disordered systems. It is shown that for weak disorder there exist two types of eigenstates: ordinary QNMs, which are associated with a TR, and hidden QNMs, which do not exhibit peaks in transmission or within the sample. The distinctive feature of the hidden modes is that, unlike ordinary ones, their lifetimes remain constant over a wide range of disorder strength. In this range, the averaged ratio N_res/N_mod of the number of transmission peaks N_res to the number of QNMs N_mod is insensitive to the type and degree of disorder and is close to the value √(2/5), which we derive analytically in the weak-scattering approximation. The physical nature of the hidden modes is illustrated in simple examples with a few scatterers. The analogy between ordinary and hidden QNMs and the segregation of superradiant states and trapped modes is discussed. When the coupling to the environment is tuned by external edge reflectors, the superradiance transition is reproduced. Hidden modes have also been found in microwave measurements in quasi-1D open disordered samples. The microwave measurements and modal analysis of transmission in the crossover to localization in quasi-1D systems give a ratio N_res/N_mod close to √(2/5). In diffusive quasi-1D samples, however, N_res/N_mod falls as the effective number of transmission eigenchannels M increases. Once N_mod is divided by M, however, the ratio N_res/N_mod is close to the ratio found in 1D.
NASA Technical Reports Server (NTRS)
Gogos, George; Pope, Daniel N.
2003-01-01
The problem considered is that of a single-component liquid fuel (n-heptane) droplet undergoing evaporation and combustion in a hot, convective, low pressure, zero-gravity environment of infinite expanse. For a moving droplet, the relative velocity (U(sub infinity)) between the droplet and freestream is subject to change due to the influence of the drag force on the droplet. For a suspended droplet, the relative velocity is kept constant. The governing equations for the gas-phase and the liquid-phase consist of the unsteady, axisymmetric equations of mass, momentum, species (gas-phase only) and energy conservation. Interfacial conservation equations are employed to couple the two phases. Variable properties are used in the gas- and liquid-phase. Multicomponent diffusion in the gas-phase is accounted for by solving the Stefan-Maxwell equations for the species diffusion velocities. A one-step overall reaction is used to model the combustion. The governing equations are discretized using the finite volume and SIMPLEC methods. A colocated grid is adopted. Hyperbolic tangent stretching functions are used to concentrate grid points near the fore and aft lines of symmetry and at the droplet surface in both the gas- and liquid-phase. The discretization equations are solved using the ADI method with the TDMA used on each line of the two alternating directions. Iterations are performed within each time-step until convergence is achieved. The grid spacing, size of the computational domain and time-step were tested to ensure that all solutions are independent of these parameters. A detailed discussion of the numerical model is given.
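The TDMA used in the line-by-line solution above is the Thomas algorithm for tridiagonal systems. A standalone sketch of that solver:

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (len n, a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In an ADI sweep, each grid line yields one such tridiagonal system; the Thomas algorithm solves it in O(n) operations.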
222Rn transport in a fractured crystalline rock aquifer: Results from numerical simulations
Folger, P.F.; Poeter, E.; Wanty, R.B.; Day, W.; Frishman, D.
1997-01-01
Dissolved 222Rn concentrations in ground water from a small wellfield underlain by fractured Middle Proterozoic Pikes Peak Granite southwest of Denver, Colorado range from 124 to 840 kBq m^-3 (3360-22700 pCi L^-1). Numerical simulations of flow and transport between two wells show that differences in the equivalent hydraulic aperture of transmissive fractures, assuming a simplified two-fracture system and the parallel-plate model, can account for the different 222Rn concentrations in each well under steady-state conditions. Transient flow and transport simulations show that 222Rn concentrations along the fracture profile are influenced by 222Rn concentrations in the adjoining fracture and depend on boundary conditions, proximity of the pumping well to the fracture intersection, transmissivity of the conductive fractures, and pumping rate. Non-homogeneous distribution (point sources) of the 222Rn parent radionuclides, uranium and 226Ra, can strongly perturb the dissolved 222Rn concentrations in a fracture system. Without detailed information on the geometry and hydraulic properties of the connected fracture system, it may be impossible to distinguish the influence of factors controlling the 222Rn distribution or to determine the location of 222Rn point sources in the field in areas where ground water exhibits moderate 222Rn concentrations. Flow and transport simulations of a hypothetical multifracture system consisting of ten connected fractures, each 10 m in length with fracture apertures ranging from 0.1 to 1.0 mm, show that 222Rn concentrations at the pumping well can vary significantly over time. Assuming parallel-plate flow, transmissivities of the hypothetical system vary over four orders of magnitude because transmissivity varies with the cube of fracture aperture. The extreme hydraulic heterogeneity of this simple hypothetical system leads to widely ranging 222Rn values, even assuming a homogeneous distribution of uranium and 226Ra along fracture walls. Consequently, it is
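The parallel-plate ("cubic law") relation that drives this sensitivity is easy to state. A sketch using standard properties of water near 20 °C (assumed here, not taken from the paper):

```python
def transmissivity(b, rho=1000.0, mu=1.0e-3, g=9.81):
    """Parallel-plate (cubic-law) transmissivity T = rho*g*b^3/(12*mu).
    b is the fracture aperture in metres; T is in m^2/s."""
    return rho * g * b ** 3 / (12.0 * mu)

# the cubic term alone turns a factor of 10 in aperture into a factor of 1000
print(transmissivity(1.0e-3) / transmissivity(1.0e-4))
```

This is why modest aperture variability among the ten fractures translates into an extremely heterogeneous flow system.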
NASA Astrophysics Data System (ADS)
Leblanc, James
In this talk we present numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice. In order to provide an assessment of our ability to compute accurate results in the thermodynamic limit we employ numerous methods including auxiliary field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock. We illustrate cases where agreement between different methods is obtained in order to establish benchmark results that should be useful in the validation of future results.
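As a tiny illustration of the model being benchmarked, the half-filled two-site Hubbard Hamiltonian can be diagonalized exactly in the S_z = 0 sector; its ground-state energy (U - √(U² + 16t²))/2 is a standard sanity check for any of the methods listed:

```python
import numpy as np

def two_site_hubbard_ground(t, U):
    """Exact ground-state energy of the half-filled two-site Hubbard model,
    S_z = 0 sector, basis {|ud,0>, |0,ud>, |u,d>, |d,u>}: U on the doubly
    occupied states, hopping -t connecting them to the singly occupied ones."""
    H = np.array([[U,  0, -t, -t],
                  [0,  U, -t, -t],
                  [-t, -t, 0,  0],
                  [-t, -t, 0,  0]], float)
    return np.linalg.eigvalsh(H).min()
```

At U = 0 this reduces to the non-interacting value -2t, and the large-U limit approaches the Heisenberg exchange energy, which makes the function a quick cross-check for more elaborate solvers.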
Spiegal, R.J.
1984-08-01
For humans exposed to electromagnetic (EM) radiation, the resulting thermophysiologic response is not well understood. Because it is unlikely that this information will be determined from quantitative experimentation, it is necessary to develop theoretical models which predict the resultant thermal response after exposure to EM fields. These calculations are difficult and involved because the human thermoregulatory system is very complex. In this paper, the important numerical models are reviewed and possibilities for future development are discussed.
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
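A minimal 1D version of such a projection algorithm alternates between enforcing the known samples and enforcing a band limit, each being a projection onto a closed convex set. This is an illustrative sketch, not the simulation from the paper:

```python
import numpy as np

def pocs_restore(known, mask, band, n, iters=60):
    """Alternate two convex projections: (1) restore the known samples,
    (2) zero all spectral content at and above the band limit."""
    x = np.zeros(n)
    for _ in range(iters):
        x[mask] = known   # projection onto the data-consistent set
        X = np.fft.rfft(x)
        X[band:] = 0.0    # projection onto the band-limited set
        x = np.fft.irfft(X, n)
    return x

# demo: recover a band-limited signal from every other sample
n, band = 256, 8
t = np.arange(n) / n
truth = np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
mask = np.zeros(n, bool)
mask[::2] = True
rec = pocs_restore(truth[mask], mask, band, n)
print(np.max(np.abs(rec - truth)))  # error shrinks ~2x per iteration
```

With 2x oversampled known data, the aliased spectral copy falls entirely outside the band, so each band-limiting step halves the residual and the iteration converges geometrically.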
NASA Technical Reports Server (NTRS)
Jameson, Antony
1994-01-01
The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrary high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
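A scalar LED scheme is easy to demonstrate for linear advection: with a minmod-limited upwind flux and CFL ≤ 1, no new extrema can be created. This is a textbook MUSCL sketch, not the SLIP/CUSP formulation itself:

```python
import numpy as np

def minmod(a, b):
    """minmod limiter: zero at an extremum, otherwise the smaller slope."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_step(u, nu):
    """One step of limited upwind advection (unit speed, CFL nu <= 1) on a
    periodic grid: a scalar LED scheme, so maxima cannot grow and minima
    cannot fall."""
    du = np.roll(u, -1) - u             # forward differences u[i+1]-u[i]
    slope = minmod(du, np.roll(du, 1))  # limited slope in each cell
    flux = u + 0.5 * (1 - nu) * slope   # upwind edge value at face i+1/2
    return u - nu * (flux - np.roll(flux, 1))
```

Because the update can be written with positive coefficients bounded by one, each new cell value stays within the range of its old neighbors, which is exactly the LED property.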
Erratum: new numerical results and novel effective string predictions for Wilson loops
NASA Astrophysics Data System (ADS)
Billó, M.; Caselle, M.; Pellegrini, R.
2013-04-01
We correct a few misprints present in the published version, regarding eqs. (4.30), (4.35), (A.4), and (A.6). The plots and results of the paper are not affected, since they were derived from the correct formulae.
NASA Astrophysics Data System (ADS)
Khokhlov, A.; Domínguez, I.; Bacon, C.; Clifford, B.; Baron, E.; Hoeflich, P.; Krisciunas, K.; Suntzeff, N.; Wang, L.
2012-07-01
We describe a new astrophysical version of a cell-based adaptive mesh refinement code ALLA for reactive flow fluid dynamic simulations, including a new implementation of α-network nuclear kinetics, and present preliminary results of first three-dimensional simulations of incomplete carbon-oxygen detonation in Type Ia Supernovae.
Multi-Country Experience in Delivering a Joint Course on Software Engineering--Numerical Results
ERIC Educational Resources Information Center
Budimac, Zoran; Putnik, Zoran; Ivanovic, Mirjana; Bothe, Klaus; Zdravkova, Katerina; Jakimovski, Boro
2014-01-01
A joint course, created as a result of a project under the auspices of the "Stability Pact of South-Eastern Europe" and DAAD, has been conducted in several Balkan countries: in Novi Sad, Serbia, for the last six years in several different forms, in Skopje, FYR of Macedonia, for two years, for several types of students, and in Tirana,…
Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong
2014-01-01
The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the case where the accuracy of the whole data set is low while that of a useful part may be high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither measures nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which indicate the result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
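The precision/recall view of a query result's relative accuracy can be sketched directly on sets (a generic illustration, not the paper's statistical metric or framework):

```python
def precision_recall(returned, correct):
    """Precision and recall of a query result against the accurate answer set."""
    returned, correct = set(returned), set(correct)
    tp = len(returned & correct)  # true positives
    precision = tp / len(returned) if returned else 1.0
    recall = tp / len(correct) if correct else 1.0
    return precision, recall

# 2 of the 4 returned rows are correct; 2 of the 3 correct rows are returned
print(precision_recall({1, 2, 3, 4}, {2, 3, 5}))
```

High precision with low recall means the returned rows are trustworthy but incomplete; the reverse means the result is complete but polluted by inaccurate rows.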
NASA Technical Reports Server (NTRS)
Rigby, D. L.; Vanfossen, G. J.
1992-01-01
A study of the effect of spanwise variation in momentum on leading edge heat transfer is discussed. Numerical and experimental results are presented for both a circular leading edge and a 3:1 elliptical leading edge. Reynolds numbers in the range of 10,000 to 240,000 based on leading edge diameter are investigated. The surface of the body is held at a constant uniform temperature. Numerical and experimental results with and without spanwise variations are presented. Direct comparison of the two-dimensional results, that is, with no spanwise variations, to the analytical results of Frossling is very good. The numerical calculation, which uses the PARC3D code, solves the three-dimensional Navier-Stokes equations, assuming steady laminar flow on the leading edge region. Experimentally, increases in the spanwise-averaged heat transfer coefficient as high as 50 percent above the two-dimensional value were observed. Numerically, the heat transfer coefficient was seen to increase by as much as 25 percent. In general, under the same flow conditions, the circular leading edge produced a higher heat transfer rate than the elliptical leading edge. As a percentage of the respective two-dimensional values, the circular and elliptical leading edges showed similar sensitivity to spanwise variations in momentum. By equating the root mean square of the amplitude of the spanwise variation in momentum to the turbulence intensity, a qualitative comparison between the present work and turbulent results was possible. It is shown that increases in leading edge heat transfer due to spanwise variations in freestream momentum are comparable to those due to freestream turbulence.
Yield criteria for porous media in plane strain: second-order estimates versus numerical results
NASA Astrophysics Data System (ADS)
Pastor, Joseph; Ponte Castañeda, Pedro
2002-11-01
This Note presents a comparison of some recently developed "second-order" homogenization estimates for two-dimensional, ideally plastic porous media subjected to plane strain conditions with corresponding yield analysis results using a new linearization technique and systematically optimized finite elements meshes. Good qualitative agreement is found between the second-order theory and the yield analysis results for the shape of the yield surfaces, which exhibit a corner on the hydrostatic axis, as well as for the dependence of the effective flow stress in shear on the porosity, which is found to be non-analytic in the dilute limit. Both of these features are inconsistent with the predictions of the standard Gurson model. To cite this article: J. Pastor, P. Ponte Castañeda, C. R. Mecanique 330 (2002) 741-747.
Preliminary numerical modeling results - cone penetrometer (CPT) tip used as an electrode
Ramirez, A L
2006-12-19
Figure 1 shows the resistivity models considered in this study; log10 of the resistivity is shown. The graph on the upper left hand side shows a hypothetical resistivity well log measured along a well in the upper layered model; 10% Gaussian noise has been added to the well log data. The lower model is identical to the upper one except for one square area located within the second deepest layer. Figure 2 shows the electrode configurations considered. The ''reference'' case (upper frame) considers point electrodes located along the surface and along a vertical borehole. The ''CPT electrode'' case (middle frame) assumes that the CPT tip serves as an electrode that is electrically connected to the push rod; the surface electrodes are used in conjunction with the moving CPT electrode. The ''isolated CPT electrode'' case assumes that the electrode at the CPT tip is electrically isolated from the push rod. Note that the separate CPT push rods in the middle and lower frames are shown separated to clarify the figure; in reality, there is only one push rod, which changes length as the probe advances. Figure 3 shows the three pole-pole measurement schemes considered; in all cases, the ''get lost'' electrodes were the leftmost and rightmost surface electrodes. The top frame shows the reference scheme, where all surface and borehole electrodes can be used. The middle frame shows two possible configurations available when a CPT-mounted electrode is used. Note that only one of the four poles can be located along the borehole at any given time; electrode combinations such as the one depicted in blue (upper frame) are not possible in this case. The bottom frame shows a sample configuration where only the surface electrodes are used. Figure 4 shows the results obtained for the various measurement schemes. The white lines show the outline of the true model (shown in Figure 1, upper frame). The starting initial model for these inversions is based on the electrical resistivity log.
Guo, Hanming; Zhuang, Songlin; Guo, Shuwen; Chen, Jiabi; Liang, Zhongcheng
2008-07-01
In terms of the electromagnetic theory described in Part I of our current investigations [J. Opt. Soc. Am. A24, 1776 (2007)], the numerical method for and results of numerical computations corresponding to the electromagnetic theory of a waveguide multilayered optical memory are presented. Here the characteristics of the cross talk and the modulation contrast, the power of readout signals, the variation of the power of the readout signals with the scanning position along the track, and the distribution of the light intensity at the detector are investigated in detail. Results show that the polarization of the reading light, the feature sizes of bits, and the distances between the two adjacent tracks and the two adjacent bits on the same track have significant effects on the distribution of the light intensity at the detector, the power of the readout signals, the cross talk, and the modulation contrast. In addition, the optimal polarization of the reading light is also suggested.
Wang, Zhan-Shan; Pan, Li-Bo
2014-03-01
An emission inventory of air pollutants from thermal power plants in 2010 was compiled. Based on the inventory, air quality under prediction scenarios implementing both the 2003-version emission standard and the new emission standard was simulated using Models-3/CMAQ. The concentrations of NO2, SO2, and PM2.5 and the deposition of nitrogen and sulfur in 2015 and 2020 were predicted to investigate the regional air quality improvement under the new emission standard. The results showed that the new emission standard could effectively improve air quality in China. Compared with the implementation of the 2003-version emission standard, by 2015 and 2020 the area with NO2 concentration above the standard would be reduced by 53.9% and 55.2%, the area with SO2 concentration above the standard would be reduced by 40.0%, the area with nitrogen deposition higher than 1.0 t km^-2 would be reduced by 75.4% and 77.9%, and the area with sulfur deposition higher than 1.6 t km^-2 would be reduced by 37.1% and 34.3%, respectively.
Numerical predictions and experimental results of a dry bay fire environment.
Suo-Anttila, Jill Marie; Gill, Walter; Black, Amalia Rebecca
2003-11-01
The primary objective of the Safety and Survivability of Aircraft Initiative is to improve the safety and survivability of systems by using validated computational models to predict the hazard posed by a fire. To meet this need, computational model predictions and experimental data have been obtained to provide insight into the thermal environment inside an aircraft dry bay. The calculations were performed using the Vulcan fire code, and the experiments were completed using a specially designed full-scale fixture. The focus of this report is to present comparisons of the Vulcan results with experimental data for a selected test scenario and to assess the capability of the Vulcan fire field model to accurately predict dry bay fire scenarios. Also included is an assessment of the sensitivity of the fire model predictions to boundary condition distribution and grid resolution. To facilitate the comparison with experimental results, a brief description of the dry bay fire test fixture and a detailed specification of the geometry and boundary conditions are included. Overall, the Vulcan fire field model has shown the capability to predict the thermal hazard posed by a sustained pool fire within a dry bay compartment of an aircraft, although more extensive experimental data and more rigorous comparisons are required for model validation.
Analytical and Numerical Results for an Adhesively Bonded Joint Subjected to Pure Bending
NASA Technical Reports Server (NTRS)
Smeltzer, Stanley S., III; Lundgren, Eric
2006-01-01
A one-dimensional, semi-analytical methodology that was previously developed for evaluating adhesively bonded joints composed of anisotropic adherends and adhesives that exhibit inelastic material behavior is further verified in the present paper. A summary of the first-order differential equations and applied joint loading used to determine the adhesive response from the methodology is also presented. The method was previously verified against a variety of single-lap joint configurations from the literature that subjected the joints to cases of axial tension and pure bending. Using the same joint configuration and applied bending load presented in a study by Yang, the finite element analysis software ABAQUS was used to further verify the semi-analytical method. Linear static ABAQUS results are presented for two models, one with a coarse and one with a fine element mesh, that were used to verify convergence of the finite element analyses. Close agreement between the finite element results and the semi-analytical methodology was found for both the shear and normal stress responses of the adhesive bondline. Thus, the semi-analytical methodology was successfully verified using the ABAQUS finite element software and a single-lap joint configuration subjected to pure bending.
Urban Surface Network In Marseille: Network Optimization Using Numerical Simulations and Results
NASA Astrophysics Data System (ADS)
Pigeon, G.; Lemonsu, A.; Durand, P.; Masson, V.
During the ESCOMPTE program (field experiment to constrain models of atmospheric pollution and emissions transport), conducted in Marseille in June and July 2001, an extensive set of instruments was deployed to describe the urban boundary layer over the built-up area of Marseille, notably a network of 20 temperature and humidity sensors that measured the spatial and temporal variability of these parameters. Before the experiment, the arrangement of the network was optimized to capture the maximum information about these two variabilities. We worked with results of high-resolution simulations using the TEB scheme, which represents the energy budgets associated with the overall street geometry of each grid cell. First, a qualitative analysis enabled the identification of the characteristic phenomena over the city of Marseille; there are close links between urban effects and local effects such as marine advection and orography. Then a quantitative analysis of the field was developed: EOFs (empirical orthogonal functions) were used to characterize the spatial and temporal structures of the field's evolution. Instrumented axes were determined from these results. Finally, we chose the locations of the instruments at the street scale very carefully, so that micro-climatic effects would not interfere with the meso-scale effect of the city. Recording of the measurements, every 10 minutes, began on 12 June and ended on 16 July. We encountered no instrument problems, so the whole period was recorded at the 10-minute interval. The data will be analyzed in several ways. First, a temporal study will determine whether the times at which phenomena occur are linked to location within the city; we will focus in particular on the warming during the morning and the cooling during the evening. Then, we will look for correlation between the temperature and mixing ratio with the wind
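The EOF analysis mentioned above can be illustrated with a small numerical sketch. This is not the ESCOMPTE data or code; the station count, the two synthetic spatial patterns, and all amplitudes are invented for illustration. The EOFs are obtained as the left singular vectors of the station-by-time anomaly matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic space-time field: 20 stations x 500 ten-minute records,
# built from two known spatial patterns (stand-ins for urban/local modes).
t = np.linspace(0.0, 10.0, 500)
pattern1 = rng.standard_normal(20)
pattern2 = rng.standard_normal(20)
field = (np.outer(pattern1, np.sin(2 * np.pi * t))
         + 0.3 * np.outer(pattern2, np.cos(4 * np.pi * t))
         + 0.01 * rng.standard_normal((20, 500)))

anom = field - field.mean(axis=1, keepdims=True)   # remove station means
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)                    # variance fraction per EOF
# Columns of U are the spatial EOFs; rows of Vt are the principal components.
```

With only two coherent modes plus weak noise, the first two EOFs capture essentially all the variance, which is how a sensor network can be thinned to a few "instrumented axes".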
Numerical results for near surface time domain electromagnetic exploration: a full waveform approach
NASA Astrophysics Data System (ADS)
Sun, H.; Li, K.; Li, X., Sr.; Liu, Y., Sr.; Wen, J., Sr.
2015-12-01
Time domain or transient electromagnetic (TEM) surveys, including airborne, semi-airborne, and ground types, play important roles in applications such as geological surveys, groundwater/aquifer assessment [Meju et al., 2000; Cox et al., 2010], metal ore exploration [Yang and Oldenburg, 2012], prediction of water-bearing structures in tunnels [Xue et al., 2007; Sun et al., 2012], and UXO exploration [Pasion et al., 2007; Gasperikova et al., 2009]. The common practice is to introduce a current into a transmitting (Tx) loop and acquire the induced electromagnetic field after the current is cut off [Zhdanov and Keller, 1994]. The current waveforms differ between instruments: a rectangle is the most widely used excitation current source, especially in ground TEM, while triangle and half-sine waveforms are common in airborne and semi-airborne TEM investigations. In most instruments, only the off-time responses are acquired and used in later analysis and data inversion. Very few airborne instruments acquire the on-time and off-time responses together, and even those that do acquire on-time data usually do not use it in the interpretation. This abstract presents a novel full-waveform time domain electromagnetic method and our recent modeling results. The benefit comes from our new algorithm for modeling full-waveform time domain electromagnetic problems: we introduce the current density into Maxwell's equations as the transmitting source. This approach allows arbitrary waveforms, such as triangle, half-sine, and trapezoidal waves or recorded waveforms from equipment, to be used in modeling. Here, we simulate the establishment and induced diffusion of the electromagnetic field in the earth. The traditional time domain electromagnetic response with pure secondary fields can also be extracted from our modeling results, and the real-time responses excited by a loop source can be calculated with the algorithm. We analyze the full time-gate responses of a homogeneous half space and two
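For a linear earth, the response to an arbitrary transmitter waveform follows from convolving the time derivative of the current with the earth's impulse response, which is the essential reason arbitrary waveforms can be supported. The sketch below illustrates only this convolution machinery; the single-exponential impulse response is a placeholder, not a real half-space response, and all time constants are invented:

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 0.2, dt)          # 2000 samples
tau = 0.01
# Placeholder earth impulse response (one decay mode). A real half-space
# response has a different (power-law) late-time form.
h = np.exp(-t / tau)

def tem_response(current, h, dt):
    """Receiver voltage ~ -(dI/dt) convolved with the impulse response."""
    dIdt = np.gradient(current, dt)
    return -np.convolve(dIdt, h)[: len(current)] * dt   # keep causal part

# Rectangular transmitter current switched off at t_off:
t_off = 0.05
I_rect = np.where(t < t_off, 1.0, 0.0)
v = tem_response(I_rect, h, dt)
# After cutoff, -dI/dt is (approximately) a unit impulse at t_off, so the
# voltage reproduces the impulse response delayed by t_off.
```

Replacing `I_rect` with a triangle, half-sine, or a recorded waveform gives the corresponding full-waveform response with no change to the algorithm.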
NASA Astrophysics Data System (ADS)
Hughes, Scott; Flanagan, Eanna; Hinderer, Tanja; Ruangsri, Uchupol
2015-04-01
We describe how we have modified a frequency-domain Teukolsky-equation solver, previously used for computing orbit-averaged dissipation, in order to compute the dissipative piece of the gravitational self force on orbits of Kerr black holes. This calculation involves summing over a large number of harmonics. Each harmonic is independent of all others, so it is well suited to parallel computation. We show preliminary results for equatorial eccentric orbits and circular inclined orbits, demonstrating convergence of the harmonic expansion, as well as interesting phenomenology of the self force's behavior in the strong field. We conclude by discussing plans for using this force to study generic orbits, with a focus on the behavior of orbital resonances.
Restricted diffusion in a model acinar labyrinth by NMR: Theoretical and numerical results
NASA Astrophysics Data System (ADS)
Grebenkov, D. S.; Guillot, G.; Sapoval, B.
2007-01-01
The branched geometrical structure of mammalian lungs is known to be crucial for rapid access of oxygen to the blood. However, an important pulmonary disease, emphysema, results in partial destruction of the alveolar tissue and enlargement of the distal airspaces, which may reduce the total oxygen transfer. This effect has been intensively studied during the last decade by MRI of hyperpolarized gases like helium-3. The relation between geometry and signal attenuation has remained obscure due to the lack of a realistic geometrical model of the acinar morphology. In this paper, we use Monte Carlo simulations of restricted diffusion in a realistic model acinus to compute the signal attenuation in a diffusion-weighted NMR experiment. We demonstrate that this technique should be sensitive to destruction of the branched structure: partial removal of the interalveolar tissue creates loops in the tree-like acinar architecture that enhance diffusive motion and the consequent signal attenuation. The role of the local geometry and related practical applications are discussed.
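A minimal Monte Carlo sketch of diffusion-weighted signal attenuation, in the spirit of the simulations described above but in a 1D reflecting pore rather than an acinar labyrinth; all parameters are dimensionless and invented. Confinement reduces the accumulated phase spread, so the restricted signal decays less than the free-diffusion Gaussian-phase result exp(-bD):

```python
import numpy as np

rng = np.random.default_rng(1)
D, L = 1.0, 1.0                    # dimensionless diffusivity and pore size
n_walk, n_step, dt = 20000, 1000, 1e-3
G, delta = 5.25, 0.5               # effective gradient (gamma*g), lobe duration

x = rng.uniform(0.0, L, n_walk)    # walkers start uniformly in the pore
phase = np.zeros(n_walk)
for i in range(n_step):
    x += rng.normal(0.0, np.sqrt(2.0 * D * dt), n_walk)
    x = np.abs(x)                  # reflect at the wall x = 0
    x = L - np.abs(L - x)          # reflect at the wall x = L
    sign = 1.0 if i * dt < delta else -1.0   # bipolar gradient pair
    phase += sign * G * x * dt

E_restricted = np.abs(np.mean(np.exp(1j * phase)))
b = G**2 * delta**2 * (delta - delta / 3)    # PGSE b-value, back-to-back lobes
E_free = np.exp(-b * D)                      # unrestricted Gaussian-phase result
```

Comparing `E_restricted` with `E_free` is the basic sensitivity of diffusion NMR to geometry: more open (loopier) structures push the signal toward the free-diffusion limit.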
Active behavior of abdominal wall muscles: Experimental results and numerical model formulation.
Grasa, J; Sierra, M; Lauzeral, N; Muñoz, M J; Miana-Mena, F J; Calvo, B
2016-08-01
In the present study a computational finite element technique is proposed to simulate the mechanical response of muscles in the abdominal wall. This technique considers the active behavior of the tissue, taking into account both collagen and muscle fiber directions. In an attempt to obtain a computational response as close as possible to that of real muscles, the parameters needed to adjust the mathematical formulation were determined from in vitro experimental tests. Experiments were conducted on male New Zealand White rabbits (2047±34g), and the active properties of three different muscles were characterized: Rectus Abdominis, External Oblique, and multi-layered samples formed by three muscles (External Oblique, Internal Oblique, and Transversus Abdominis). The parameters obtained for each muscle were incorporated into a finite strain formulation to simulate active muscle behavior incorporating the anisotropy of the tissue. The results show the potential of the model to predict the anisotropic behavior of the tissue associated with the fibers and how this influences the strain, stress, and force generated during an isometric contraction. PMID:27111629
NASA Technical Reports Server (NTRS)
Roussel-Dupre, Robert; Miller, Ronald H.
1993-01-01
The early-time evolution of plasmas moving across a background magnetic field is addressed with a 2D model in which a plasma cloud is assumed to have formed instantaneously with a velocity across a uniform background magnetic field and with a Gaussian density profile in the two dimensions perpendicular to the direction of motion. This model treats both the dynamics associated with the formation of a polarization field and the generation and propagation of electromagnetic waves. In general, the results indicate that, to zeroth order, the plasma cloud behaves like a large dipole antenna oriented in the direction of the polarization field which oscillates at frequencies defined by the normal mode of the system. Radiation damping is shown to play an important role in defining the plasma cloud evolution, causing a rapid decay of the polarization field and a loss of plasma kinetic energy and momentum on time scales comparable to several ion gyroperiods. Scaling laws are derived for the plasma momentum and energy loss rates, and predictions for the braking time, the amplitude and spectrum of the radiation field, and the total radiated power are presented for conditions relevant to the recent Combined Release and Radiation Effects Satellite experiments.
Insight into collision zone dynamics from topography: numerical modelling results and observations
NASA Astrophysics Data System (ADS)
Bottrill, A. D.; van Hunen, J.; Allen, M. B.
2012-11-01
Dynamic models of subduction and continental collision are used to predict dynamic topography changes on the overriding plate. The modelling results show a distinct evolution of topography on the overriding plate, during subduction, continental collision and slab break-off. A prominent topographic feature is a temporary (few Myrs) basin on the overriding plate after initial collision. This "collisional mantle dynamic basin" (CMDB) is caused by slab steepening drawing material away from the base of the overriding plate. Also, during this initial collision phase, surface uplift is predicted on the overriding plate between the suture zone and the CMDB, due to the subduction of buoyant continental material and its isostatic compensation. After slab detachment, redistribution of stresses and underplating of the overriding plate cause the uplift to spread further into the overriding plate. This topographic evolution fits the stratigraphy found on the overriding plate of the Arabia-Eurasia collision zone in Iran and south east Turkey. The sedimentary record from the overriding plate contains Upper Oligocene-Lower Miocene marine carbonates deposited between terrestrial clastic sedimentary rocks, in units such as the Qom Formation and its lateral equivalents. This stratigraphy shows that during the Late Oligocene-Early Miocene the surface of the overriding plate sank below sea level before rising back above sea level, without major compressional deformation recorded in the same area. Our modelled topography changes fit well with this observed uplift and subsidence.
The Formation of Asteroid Satellites in Catastrophic Impacts: Results from Numerical Simulations
NASA Technical Reports Server (NTRS)
Durda, D. D.; Bottke, W. F., Jr.; Enke, B. L.; Asphaug, E.; Richardson, D. C.; Leinhardt, Z. M.
2003-01-01
We have performed new simulations of the formation of asteroid satellites by collisions, using a combination of hydrodynamical and gravitational dynamical codes. This initial work shows that both small satellites and ejected, co-orbiting pairs are produced most favorably by moderate-energy collisions at more direct, rather than oblique, impact angles. Simulations so far seem to be able to produce systems qualitatively similar to known binaries. Asteroid satellites provide vital clues that can help us understand the physics of hypervelocity impacts, the dominant geologic process affecting large main belt asteroids. Moreover, models of satellite formation may provide constraints on the internal structures of asteroids beyond those possible from observations of satellite orbital properties alone. It is probable that most observed main-belt asteroid satellites are by-products of cratering and/or catastrophic disruption events. Several possible formation mechanisms related to collisions have been identified: (i) mutual capture following catastrophic disruption, (ii) rotational fission due to glancing impact and spin-up, and (iii) re-accretion in orbit of ejecta from large, non-catastrophic impacts. Here we present results from a systematic investigation directed toward mapping out the parameter space of the first and third of these three collisional mechanisms.
Kam, Seung I.; Gauglitz, Phillip A.; Rossen, William R.
2000-12-01
The goal of this study is to fit model parameters to changes in waste level in response to barometric pressure changes in underground storage tanks at the Hanford Site. This waste compressibility is a measure of the quantity of gas, typically hydrogen and other flammable gases that can pose a safety hazard, retained in the waste. A one-dimensional biconical-pore-network model for compressibility of a bubbly slurry is presented in a companion paper. Fitting these results to actual waste level changes in the tanks implies that bubbles are long in the slurry layer and the ratio of pore-body radius to pore-throat radius is close to one; unfortunately, capillary effects cannot be quantified unambiguously from the data without additional information on pore geometry. Therefore determining the quantity of gas in the tanks requires more than just slurry volume data. Similar ambiguity also exists with two other simple models: a capillary-tube model with contact angle hysteresis and a spherical-pore model.
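The level-versus-pressure fitting idea can be sketched with an ideal-gas toy model (not the authors' pore-network model): for a retained free-gas volume V at mean pressure P under isothermal compression, the waste level responds as dL = -(V/(A·P))·dP, so V can be estimated from the slope of a linear fit. The tank cross-section, pressures, and noise levels below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
A = 411.0           # tank cross-sectional area, m^2 (hypothetical)
P0 = 101325.0       # mean barometric pressure, Pa
V_gas = 30.0        # "true" free gas volume, m^3, used to synthesize data

P = P0 + rng.normal(0.0, 800.0, 200)        # barometric pressure record, Pa
level = -(V_gas / (A * P0)) * (P - P0)      # ideal-gas level response, m
level += rng.normal(0.0, 1e-5, 200)         # level-gauge noise

slope = np.polyfit(P - P0, level, 1)[0]     # dL/dP from least squares
V_est = -slope * A * P0                     # inferred retained gas volume, m^3
```

The paper's point is precisely that this inversion is ambiguous in practice: capillary trapping and pore geometry alter the effective compressibility, so the simple slope underdetermines the true gas inventory.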
Chaotic escape from an open vase-shaped cavity. I. Numerical and experimental results
NASA Astrophysics Data System (ADS)
Novick, Jaison; Keeler, Matthew L.; Giefer, Joshua; Delos, John B.
2012-01-01
We present part I in a two-part study of an open chaotic cavity shaped as a vase. The vase possesses an unstable periodic orbit in its neck. Trajectories passing through this orbit escape without return. For our analysis, we consider a family of trajectories launched from a point on the vase boundary. We imagine a vertical array of detectors past the unstable periodic orbit and, for each escaping trajectory, record the propagation time and the vertical detector position. We find that the escape time exhibits a complicated recursive structure. This recursive structure is explored in part I of our study. We present an approximation to the Helmholtz equation for waves escaping the vase. By choosing a set of detector points, we interpolate trajectories connecting the source to the different detector points. We use these interpolated classical trajectories to construct the solution to the wave equation at a detector point. Finally, we construct a plot of the detector position versus the escape time and compare this graph to the results of an experiment using classical ultrasound waves. We find that generally the classical trajectories organize the escaping ultrasound waves.
Numerical modeling of anti-icing systems and comparison to test results on a NACA 0012 airfoil
NASA Technical Reports Server (NTRS)
Al-Khalil, Kamel M.; Potapczuk, Mark G.
1993-01-01
A series of experimental tests were conducted in the NASA Lewis IRT on an electro-thermally heated NACA 0012 airfoil. Quantitative comparisons between the experimental results and those predicted by a computer simulation code were made to assess the validity of a recently developed anti-icing model. An infrared camera was utilized to scan the instantaneous temperature contours of the skin surface. Despite some experimental difficulties, good agreement between the numerical predictions and the experimental results were generally obtained for the surface temperature and the possibility for the runback to freeze. Some recommendations were given for an efficient operation of a thermal anti-icing system.
Numerical Modeling of Anti-icing Systems and Comparison to Test Results on a NACA 0012 Airfoil
NASA Technical Reports Server (NTRS)
Al-Khalil, Kamel M.; Potapczuk, Mark G.
1993-01-01
A series of experimental tests were conducted in the NASA Lewis IRT on an electro-thermally heated NACA 0012 airfoil. Quantitative comparisons between the experimental results and those predicted by a computer simulation code were made to assess the validity of a recently developed anti-icing model. An infrared camera was utilized to scan the instantaneous temperature contours of the skin surface. Despite some experimental difficulties, good agreement between the numerical predictions and the experiment results were generally obtained for the surface temperature and the possibility for each runback to freeze. Some recommendations were given for an efficient operation of a thermal anti-icing system.
NASA Astrophysics Data System (ADS)
Florens, Serge; Snyman, Izak
2015-11-01
We analyze the spatial correlation structure of the spin density of an electron gas in the vicinity of an antiferromagnetically coupled Kondo impurity. Our analysis extends to the regime of spin-anisotropic couplings, where there are no quantitative results for spatial correlations in the literature. We use an original and numerically exact method, based on a systematic coherent-state expansion of the ground state of the underlying spin-boson Hamiltonian. It has not yet been applied to the computation of observables that are specific to the fermionic Kondo model. We also present an important technical improvement to the method that obviates the need to discretize modes of the Fermi sea, and allows one to tackle the problem in the thermodynamic limit. As a result, one can obtain excellent spatial resolution over arbitrary length scales, for a relatively low computational cost, a feature that gives the method an advantage over popular techniques such as the numerical and density-matrix renormalization groups. We find that the anisotropic Kondo model shows rich universal scaling behavior in the spatial structure of the entanglement cloud. First, SU(2) spin-symmetry is dynamically restored in a finite domain in the parameter space in the vicinity of the isotropic line, as expected from poor man's scaling. More surprisingly, we are able to obtain in closed analytical form a set of different, yet universal, scaling curves for strong exchange asymmetry, which are parametrized by the longitudinal exchange coupling. Deep inside the cloud, i.e., for distances smaller than the Kondo length, the correlation between the electron spin density and the impurity spin oscillates between ferromagnetic and antiferromagnetic values at the scale of the Fermi wavelength, an effect that is drastically enhanced at strongly anisotropic couplings. Our results also provide further numerical checks and alternative analytical approximations for the Kondo overlaps that were recently computed by
NASA Astrophysics Data System (ADS)
Gnutzmann, Sven; Seif, Burkhard
2004-05-01
In a series of two papers we investigate the universal spectral statistics of chaotic quantum systems in the ten known symmetry classes of quantum mechanics. In this first paper we focus on the construction of appropriate ensembles of star graphs in the ten symmetry classes. A generalization of the Bohigas-Giannoni-Schmit conjecture is given that covers all these symmetry classes. The conjecture is supported by numerical results that demonstrate the fidelity of the spectral statistics of star graphs to the corresponding Gaussian random-matrix theories.
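The kind of fidelity check described above, comparing spectral statistics against the corresponding Gaussian random-matrix ensemble, can be illustrated by sampling GOE matrices and computing the adjacent-gap ratio, whose ensemble average is known to be about 0.5307 for the GOE (versus about 0.386 for uncorrelated Poisson levels). Matrix sizes and sample counts below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_gap_ratio(n_matrices=200, dim=100):
    """Mean adjacent-gap ratio <min(r, 1/r)> over sampled GOE spectra."""
    ratios = []
    for _ in range(n_matrices):
        A = rng.standard_normal((dim, dim))
        H = (A + A.T) / np.sqrt(2.0)                 # GOE ensemble
        ev = np.linalg.eigvalsh(H)
        gaps = np.diff(ev[dim // 4 : 3 * dim // 4])  # bulk of the spectrum
        r = gaps[1:] / gaps[:-1]
        ratios.append(np.minimum(r, 1.0 / r))
    return np.concatenate(ratios).mean()

r_mean = mean_gap_ratio()   # GOE prediction ~0.5307; Poisson gives ~0.386
```

The gap ratio is used here instead of the spacing distribution because it needs no unfolding of the mean level density, which makes such comparisons convenient for graph spectra as well.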
NASA Astrophysics Data System (ADS)
LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem
2015-10-01
Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
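As a toy counterpart to the benchmarks above, the smallest nontrivial case, the half-filled two-site Hubbard model, can be solved by exact diagonalization and checked against the closed-form ground-state energy E0 = (U − √(U² + 16t²))/2. This is a sketch of the exact-diagonalization idea only, not any of the many-body methods compared in the paper:

```python
import numpy as np

t, U = 1.0, 4.0
# Half-filled two-site Hubbard model, Sz = 0 sector.
# Basis: |up-down, 0>, |0, up-down>, |up, down>, |down, up>
H = np.array([[U,   0.0, -t,  -t],
              [0.0, U,   -t,  -t],
              [-t,  -t,  0.0, 0.0],
              [-t,  -t,  0.0, 0.0]])
E, V = np.linalg.eigh(H)
E0 = E[0]                                   # ground-state energy
gs = V[:, 0]
docc = (gs[0]**2 + gs[1]**2) / 2.0          # average double occupancy per site
E0_exact = (U - np.sqrt(U**2 + 16.0 * t**2)) / 2.0
```

Even this two-site case shows the competition the benchmarks probe: increasing U suppresses the doublon weight `docc` while raising the ground-state energy toward zero.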
Shazlee, Muhammad Kashif; Ali, Muhammad; SaadAhmed, Muhammad; Hussain, Ammad; Hameed, Kamran; Lutfi, Irfan Amjad; Khan, Muhammad Tahir
2016-01-01
Objective: To study the diagnostic accuracy of Ultrasound B scan using a 10 MHz linear probe in ocular trauma. Methods: A total of 61 patients with 63 ocular injuries were assessed from July 2013 to January 2014. All patients were referred to the department of Radiology from the Emergency Room, since adequate clinical assessment of the fundus was impossible because of the presence of opaque ocular media. Based on the radiological diagnosis, the patients were provided treatment (surgical or medical). The clinical diagnosis was confirmed during surgical procedures or clinical follow-up. Results: A total of 63 ocular injuries were examined in 61 patients. The overall sensitivity was 91.5%, specificity was 98.87%, positive predictive value was 87.62%, and negative predictive value was 99%. Conclusion: Ultrasound B-scan is a sensitive, non-invasive and rapid way of assessing intraocular damage caused by blunt or penetrating eye injuries. PMID:27182245
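The reported accuracy figures follow from the standard 2×2 confusion-matrix definitions. The sketch below uses hypothetical counts, not the study's actual case tallies:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts, not taken from the study:
sens, spec, ppv, npv = diagnostic_metrics(tp=54, fp=1, fn=5, tn=88)
```

Note that sensitivity and specificity are properties of the test, while PPV and NPV also depend on how common true pathology is in the referred population.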
NASA Astrophysics Data System (ADS)
Wu, Yang; Kelly, Damien P.
2014-12-01
The distribution of the complex field in the focal region of a lens is a classical optical diffraction problem. Today, it remains of significant theoretical importance for understanding the properties of imaging systems. In the paraxial regime, it is possible to find analytical solutions in the neighborhood of the focus, when a plane wave is incident on a focusing lens whose finite extent is limited by a circular aperture. For example, in Born and Wolf's treatment of this problem, two different, but mathematically equivalent analytical solutions, are presented that describe the 3D field distribution using infinite sums of Un and Vn type Lommel functions. An alternative solution expresses the distribution in terms of Zernike polynomials, and was presented by Nijboer in 1947. More recently, Cao derived an alternative analytical solution by expanding the Fresnel kernel using a Taylor series expansion. In practical calculations, however, only a finite number of terms from these infinite series expansions is actually used to calculate the distribution in the focal region. In this manuscript, we compare and contrast each of these different solutions to a numerically calculated result, paying particular attention to how quickly each solution converges for a range of different spatial locations behind the focusing lens. We also examine the time taken to calculate each of the analytical solutions. The numerical solution is calculated in a polar coordinate system and is semi-analytic. The integration over the angle is solved analytically, while the radial coordinate is sampled with a sampling interval of Δρ and then numerically integrated. This produces an infinite set of replicas in the diffraction plane, that are located in circular rings centered at the optical axis and each with radii given by 2πm/Δρ, where m is the replica order. These circular replicas are shown to be fundamentally different from the replicas that arise in a Cartesian coordinate system.
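The series-versus-numerical comparison described above can be sketched by evaluating truncated Lommel-function series for the Born and Wolf intensity I(u, v) = (2/u)²(U1² + U2²) and checking it against direct numerical integration of the paraxial focal-region diffraction integral. The evaluation point (u, v) and truncation length below are arbitrary, and SciPy is assumed for the Bessel functions:

```python
import numpy as np
from scipy.special import jv

def lommel_U(n, u, v, terms=30):
    """Truncated series for the Lommel function U_n(u, v)."""
    s = np.arange(terms)
    return np.sum((-1.0) ** s * (u / v) ** (n + 2 * s) * jv(n + 2 * s, v))

u, v = 3.0, 2.0
U1 = lommel_U(1, u, v)
U2 = lommel_U(2, u, v)
I_series = (2.0 / u) ** 2 * (U1**2 + U2**2)   # Born & Wolf intensity

# Direct numerical (trapezoidal) evaluation of the diffraction integral
# field = 2 * integral_0^1 J0(v*rho) exp(-i*u*rho^2/2) rho d(rho):
rho = np.linspace(0.0, 1.0, 20001)
f = jv(0, v * rho) * np.exp(-1j * u * rho**2 / 2.0) * rho
drho = rho[1] - rho[0]
field = 2.0 * (f.sum() - 0.5 * (f[0] + f[-1])) * drho
I_direct = np.abs(field) ** 2
```

For these modest (u, v) values the Bessel terms decay rapidly, so a few dozen series terms already reproduce the brute-force quadrature; convergence degrades as u/v grows, which is the trade-off the manuscript maps out.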
NASA Astrophysics Data System (ADS)
Musa, A. B.
2015-05-01
This study concerns the impact of a short elastic rod (or slug) on a stationary semi-infinite viscoelastic rod. The viscoelastic material is modeled as a standard linear solid, which involves three material parameters, and the motion is treated as one-dimensional. We first establish the governing equations, subject to the appropriate boundary conditions, for the case when an elastic slug moving at a speed V impacts a semi-infinite stationary viscoelastic rod. The objective is to validate the numerical results for the stresses and velocities at the interface, following the wave transmissions and reflections in the slug after the impact, using a viscoelastic discontinuity analysis. If the stress at the interface becomes tensile and the velocity changes its sign, the slug and the rod part company; if the stress at the interface remains compressive after the impact, the slug and the rod stay in contact. After modelling the impact and solving the governing system of partial differential equations in the Laplace transform domain, we invert the transformed solution numerically to obtain the stresses and velocities at the interface for several viscosity time constants and ratios of acoustic impedances. In inverting the Laplace-transformed equations, we use the complex inversion formula because there is a branch cut and infinitely many poles within the Bromwich contour. In the viscoelastic discontinuity analysis, we examine the moving discontinuities in stress and velocity using the impulse-momentum relation and the kinematical condition of compatibility. Finally, we discuss how the numerically computed stresses and velocities compare with those validated by the viscoelastic discontinuity analysis.
Wu, Yang; Kelly, Damien P.
2014-01-01
The distribution of the complex field in the focal region of a lens is a classical optical diffraction problem. Today, it remains of significant theoretical importance for understanding the properties of imaging systems. In the paraxial regime, it is possible to find analytical solutions in the neighborhood of the focus, when a plane wave is incident on a focusing lens whose finite extent is limited by a circular aperture. For example, in Born and Wolf's treatment of this problem, two different but mathematically equivalent analytical solutions are presented that describe the 3D field distribution using infinite sums of Un and Vn type Lommel functions. An alternative solution, presented by Nijboer in 1947, expresses the distribution in terms of Zernike polynomials. More recently, Cao derived another analytical solution by expanding the Fresnel kernel using a Taylor series expansion. In practical calculations, however, only a finite number of terms from these infinite series expansions is actually used to calculate the distribution in the focal region. In this manuscript, we compare and contrast each of these different solutions to a numerically calculated result, paying particular attention to how quickly each solution converges for a range of different spatial locations behind the focusing lens. We also examine the time taken to calculate each of the analytical solutions. The numerical solution is calculated in a polar coordinate system and is semi-analytic. The integration over the angle is solved analytically, while the radial coordinate is sampled with a sampling interval of Δρ and then numerically integrated. This produces an infinite set of replicas in the diffraction plane that are located in circular rings centered at the optical axis, each with radius given by 2πm/Δρ, where m is the replica order. These circular replicas are shown to be fundamentally different from the replicas that arise in a Cartesian coordinate system.
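The semi-analytic scheme described, analytic in angle and sampled in radius, can be illustrated in its simplest setting: the on-axis intensity versus the defocus parameter u, where the radial integral has the closed form [sin(u/4)/(u/4)]² to compare against. A hedged sketch; the normalization and variable names are assumptions, not taken from the paper:

```python
import cmath, math

def on_axis_intensity(u, samples=2000):
    """On-axis intensity behind a circularly apertured lens vs. defocus
    parameter u: sample the radial coordinate and integrate by trapezoid
    (the angular integral is trivial on axis)."""
    drho = 1.0 / samples
    total = 0.0 + 0.0j
    for i in range(samples + 1):
        rho = i * drho
        w = 0.5 if i in (0, samples) else 1.0   # trapezoid end weights
        total += w * cmath.exp(1j * u * rho * rho / 2.0) * rho * drho
    amp = 2.0 * total                           # normalized so I = 1 at focus
    return abs(amp) ** 2

u = 6.0
numeric = on_axis_intensity(u)
analytic = (math.sin(u / 4.0) / (u / 4.0)) ** 2  # Born & Wolf on-axis result
```

With a few thousand radial samples the trapezoid result matches the closed form to many digits, which is the kind of convergence comparison the paper carries out across the whole focal region.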
ERIC Educational Resources Information Center
Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.
2009-01-01
The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…
Meyer, H. O.
The PINTEX group studied proton-proton and proton-deuteron scattering and reactions between 100 and 500 MeV at the Indiana University Cyclotron Facility (IUCF). More than a dozen experiments made use of electron-cooled polarized proton or deuteron beams, orbiting in the 'Indiana Cooler' storage ring, and of a polarized atomic-beam target of hydrogen or deuterium in the path of the stored beam. The collaboration involved researchers from several midwestern universities, as well as a number of European institutions. The PINTEX program ended when the Indiana Cooler was shut down in August 2002. The website contains links to some of the numerical results, descriptions of experiments, and a complete list of publications resulting from PINTEX.
NASA Astrophysics Data System (ADS)
Fontana, A.; Marzari, F.
2016-05-01
Context. Planetesimals and planets embedded in a circumstellar disk are dynamically perturbed by the disk gravity, which causes apsidal line precession at a rate that depends on the disk density profile and on the distance of the massive body from the star. Aims: Different analytical models are exploited to compute the precession rate of the perihelion ϖ˙. We compare them to verify their equivalence, in particular after analytical manipulations performed to derive handy formulas, and test their predictions against numerical models in some selected cases. Methods: The theoretical precession rates were computed with analytical algorithms found in the literature using the Mathematica symbolic code, while the numerical simulations were performed with the hydrodynamical code FARGO. Results: For low-mass bodies (planetesimals) the analytical approaches described in Binney & Tremaine (2008, Galactic Dynamics, p. 96), Ward (1981, Icarus, 47, 234), and Silsbee & Rafikov (2015a, ApJ, 798, 71) are equivalent under the same initial conditions for the disk in terms of mass, density profile, and inner and outer borders. They also match the numerical values computed with FARGO away from the outer border of the disk reasonably well. On the other hand, the predictions of the classical Mestel disk (Mestel 1963, MNRAS, 126, 553) for disks with p = 1 depart significantly from the numerical solution for radial distances beyond one-third of the disk extension, because the Mestel disk assumes an outer disk border at infinity. For massive bodies such as terrestrial and giant planets, the agreement of the analytical approaches is progressively poorer because of the changes in the disk structure that are induced by the planet gravity. For giant planets the precession rate changes sign and is higher than the modulus of the theoretical value by a factor ranging from 1.5 to 1.8. In this case, the correction of the formula proposed by Ward (1981) to
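For nearly circular orbits, the cited analytical approaches reduce to the epicyclic estimate ϖ˙ = Ω − κ, with both frequencies obtained from the total axisymmetric potential Φ(r). A minimal sketch with numerical derivatives; the disk potential itself, the hard part of the problem, is left as a user-supplied function:

```python
import math

def precession_rate(Phi, r, h=1e-5):
    """Apsidal precession pomega_dot = Omega - kappa for a nearly circular
    orbit of radius r in an axisymmetric potential Phi(r); derivatives of
    Phi are taken by central finite differences."""
    dPhi  = (Phi(r + h) - Phi(r - h)) / (2 * h)
    d2Phi = (Phi(r + h) - 2 * Phi(r) + Phi(r - h)) / h ** 2
    Omega = math.sqrt(dPhi / r)                  # circular angular frequency
    kappa = math.sqrt(d2Phi + 3 * dPhi / r)      # radial epicyclic frequency
    return Omega - kappa

GM = 1.0
kepler = lambda r: -GM / r         # point mass: Omega = kappa, no precession
harmonic = lambda r: 0.5 * r * r   # uniform-density interior: kappa = 2*Omega
```

Two limiting cases make quick checks: the Kepler potential gives zero precession, while the harmonic potential gives ϖ˙ = −Ω (retrograde precession at the orbital frequency).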
Siddique, Waseem; El-Gabry, Lamyaa; Shevchuk, Igor V; Fransson, Torsten H
2013-01-01
High inlet temperatures in a gas turbine lead to an increase in the thermal efficiency of the gas turbine. This results in the requirement of cooling of gas turbine blades/vanes. Internal cooling of the gas turbine blade/vanes with the help of two-pass channels is one of the effective methods to reduce the metal temperatures. In particular, the trailing edge of a turbine vane is a critical area, where effective cooling is required. The trailing edge can be modeled as a trapezoidal channel. This paper describes the numerical validation of the heat transfer and pressure drop in a trapezoidal channel with and without orthogonal ribs at the bottom surface. A new concept of ribbed trailing edge has been introduced in this paper which presents a numerical study of several trailing edge cooling configurations based on the placement of ribs at different walls. The baseline geometries are two-pass trapezoidal channels with and without orthogonal ribs at the bottom surface of the channel. Ribs induce secondary flow which results in enhancement of heat transfer; therefore, for enhancement of heat transfer at the trailing edge, ribs are placed at the trailing edge surface in three different configurations: first without ribs at the bottom surface, then ribs at the trailing edge surface in-line with the ribs at the bottom surface, and finally staggered ribs. Heat transfer and pressure drop is calculated at Reynolds number equal to 9400 for all configurations. Different turbulent models are used for the validation of the numerical results. For the smooth channel low-Re k-ɛ model, realizable k-ɛ model, the RNG k-ω model, low-Re k-ω model, and SST k-ω models are compared, whereas for ribbed channel, low-Re k-ɛ model and SST k-ω models are compared. The results show that the low-Re k-ɛ model, which predicts the heat transfer in outlet pass of the smooth channels with difference of +7%, underpredicts the heat transfer by -17% in case of ribbed channel compared to
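As a rough smooth-channel reference for numbers like these, a standard pipe-flow correlation can be evaluated at the study's Reynolds number. This is only a baseline sketch: Dittus-Boelter is nominally for circular ducts at Re of about 10^4 and above, and Pr = 0.7 for air is an assumption, not a value from the paper:

```python
def dittus_boelter(re, pr, heating=True):
    """Smooth-duct Nusselt number, Nu = 0.023 Re^0.8 Pr^n,
    with n = 0.4 for heating and 0.3 for cooling of the fluid."""
    n = 0.4 if heating else 0.3
    return 0.023 * re ** 0.8 * pr ** n

nu = dittus_boelter(9400, 0.7)   # Reynolds number used in the study, air assumed
```

Rib-induced enhancement is then commonly reported as the ratio of the measured or computed Nusselt number to a smooth-channel baseline of this kind.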
Monsanglant, C.; Audi, G.; Conreur, G.; Cousin, R.; Doubre, H.; Jacotin, M.; Henry, S.; Kepinski, J.-F.; Lunney, D.; Saint Simon, M. de; Thibault, C.; Toader, C.; Bollen, G.; Lebee, G.; Scheidenberger, C.; Borcea, C.; Duma, M.; Kluge, H.-J.; Le Scornet, G.
1999-11-16
MISTRAL is an experimental program to measure masses of very short-lived nuclides (T_1/2 down to a few ms) with very high accuracy (a few parts in 10^7). There were three data-taking periods with radioactive beams, and 22 masses of isotopes of Ne, Na, Mg, Al, K, Ca, and Ti were measured. The systematic errors are now under control at the level of 8x10^-7, allowing us to come close to the expected accuracy. Even for the very weakly produced 30Na (1 ion at the detector per proton burst), the final accuracy is 7x10^-7.
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Todesco, M.; Neri, A.; Esposti Ongaro, T.; Tola, E.; Rocco, G.
2011-12-01
We present a new DVD in the INGV outreach series, aimed at illustrating our research work on pyroclastic flow modeling. Pyroclastic flows (or pyroclastic density currents) are hot, devastating clouds of gas and ash generated during explosive eruptions. Understanding their dynamics and impact is crucial for a proper hazard assessment. We employ a 3D numerical model which describes the main features of the multi-phase and multi-component process, from the generation of the flows to their propagation along complex terrains. Our numerical results can be translated into color animations, which describe the temporal evolution of flow variables such as temperature or ash concentration. The animations provide a detailed and effective description of the natural phenomenon which can be used to present this geological process to a general public and to improve hazard perception in volcanic areas. In our DVD, the computer animations are introduced and commented on by professionals and researchers who deal at various levels with the study of pyroclastic flows and their impact. Their comments are presented as short interviews, assembled into a short video (about 10 minutes) which describes the natural process, as well as the model and its applications to explosive volcanoes such as Vesuvio, Campi Flegrei, Mt. St. Helens and Soufriere Hills (Montserrat). The ensemble of different voices and faces provides a direct sense of the multi-disciplinary effort involved in the assessment of pyroclastic flow hazard. The video also introduces the people who address this complex problem, and the personal involvement beyond the scientific results. The full, uncommented animations of the pyroclastic flow propagation in the different volcanic settings are also provided on the DVD, which is meant to be a general, flexible outreach tool.
LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel
2015-12-14
Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
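None of the listed many-body methods fits in a few lines, but the model itself does, and the smallest nontrivial case makes a useful sanity check for any new code: the half-filled two-site Hubbard model has the closed-form ground-state energy (U − √(U² + 16t²))/2. A self-contained sketch using a pure-Python Jacobi diagonalization; the basis ordering and fermionic sign conventions are simplified:

```python
import math

def jacobi_eigenvalues(a, max_rot=200):
    """Eigenvalues of a small real symmetric matrix by classical Jacobi
    rotations (adequate for tiny matrices like this 4x4)."""
    n = len(a)
    a = [row[:] for row in a]
    for _ in range(max_rot):
        p, q, big = 0, 1, 0.0            # largest off-diagonal element
        for i in range(n):
            for j in range(i + 1, n):
                if abs(a[i][j]) > big:
                    big, p, q = abs(a[i][j]), i, j
        if big < 1e-12:
            break
        theta = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):               # a <- a R
            akp, akq = a[k][p], a[k][q]
            a[k][p] = c * akp - s * akq
            a[k][q] = s * akp + c * akq
        for k in range(n):               # a <- R^T a
            apk, aqk = a[p][k], a[q][k]
            a[p][k] = c * apk - s * aqk
            a[q][k] = s * apk + c * aqk
    return sorted(a[i][i] for i in range(n))

# Half-filled two-site Hubbard model in the Sz = 0 sector,
# basis {|ud,0>, |0,ud>, |u,d>, |d,u>}, hopping t, on-site repulsion U
t, U = 1.0, 4.0
H = [[U, 0, -t, -t],
     [0, U, -t, -t],
     [-t, -t, 0, 0],
     [-t, -t, 0, 0]]
e0 = jacobi_eigenvalues(H)[0]
exact = 0.5 * (U - math.sqrt(U * U + 16 * t * t))  # analytic ground state
```

Benchmarks of the kind reported above do exactly this comparison, only on clusters large enough that extrapolation to the thermodynamic limit becomes the dominant concern.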
G. L. Hawkes; J. E. O'Brien; B. A. Haberman; A. J. Marquis; C. M. Baca; D. Tripepi; P. Costamagna
2008-06-01
A numerical study of the thermal and electrochemical performance of a single-tube Integrated Planar Solid Oxide Fuel Cell (IP-SOFC) has been performed. Results obtained from two finite-volume computational fluid dynamics (CFD) codes, FLUENT and SOHAB, and from a two-dimensional in-house finite-volume GENOA model are presented and compared. Each tool uses physical and geometric models of differing complexity, and comparisons are made to assess their relative merits. Several single-tube simulations were run using each code over a range of operating conditions. The results include polarization curves and distributions of local current density, composition and temperature. Comparisons of these results are discussed, along with their relationship to the respective embedded phenomenological models for activation losses, fluid flow and mass transport in porous media. In general, agreement between the codes was within 15% for overall parameters such as operating voltage and maximum temperature. The CFD results clearly show the effects of internal structure on the distributions of gas flows and related quantities within the electrochemical cells.
NASA Astrophysics Data System (ADS)
Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.
2015-12-01
The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.
NASA Technical Reports Server (NTRS)
Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.
2003-01-01
A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment), in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.
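At the core of any sonde-versus-TOMS column comparison is the integration of the measured ozone partial-pressure profile into a total column. A simplified sketch with a uniform toy layer; real SHADOZ processing uses the measured temperature profile and adds an above-burst residual:

```python
K_BOLTZMANN = 1.380649e-23   # J/K
DU = 2.687e20                # molecules per m^2 in one Dobson unit

def column_dobson(z, p_o3, temp):
    """Trapezoidal integration of ozone number density n = p/(kT) over
    altitude; z in m, p_o3 in Pa, temp in K, lists aligned point by point."""
    col = 0.0
    for i in range(len(z) - 1):
        n_lo = p_o3[i] / (K_BOLTZMANN * temp[i])
        n_hi = p_o3[i + 1] / (K_BOLTZMANN * temp[i + 1])
        col += 0.5 * (n_lo + n_hi) * (z[i + 1] - z[i])
    return col / DU

# Toy profile: a uniform 10 mPa ozone layer at 250 K from 0 to 30 km
z = [i * 1000.0 for i in range(31)]
col = column_dobson(z, [0.010] * 31, [250.0] * 31)
```

For this constant layer the trapezoid rule is exact, and the result (roughly 320 DU) sits in the range typical of real total-ozone columns.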
Classification accuracy improvement
NASA Technical Reports Server (NTRS)
Kistler, R.; Kriegler, F. J.
1977-01-01
Improvements made in the processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in the classification of pixels and significantly reduce processing time. The improved system realizes a cost reduction factor of 20 or more.
NASA Astrophysics Data System (ADS)
Magyar, Rudolph
2013-06-01
We report a computational and validation study of equation of state (EOS) properties of liquid / dense plasma mixtures of xenon and ethane to explore and to illustrate the physics of the molecular scale mixing of light elements with heavy elements. Accurate EOS models are crucial to achieve high-fidelity hydrodynamics simulations of many high-energy-density phenomena such as inertial confinement fusion and strong shock waves. While the EOS is often tabulated for separate species, the equation of state for arbitrary mixtures is generally not available, requiring properties of the mixture to be approximated by combining physical properties of the pure systems. The main goal of this study is to assess how accurate this approximation is under shock conditions. Density functional theory molecular dynamics (DFT-MD) at elevated-temperature and pressure is used to assess the thermodynamics of the xenon-ethane mixture. The simulations are unbiased as to elemental species and therefore provide comparable accuracy when describing total energies, pressures, and other physical properties of mixtures as they do for pure systems. In addition, we have performed shock compression experiments using the Sandia Z-accelerator on pure xenon, ethane, and various mixture ratios thereof. The Hugoniot results are compared to the DFT-MD results and the predictions of different rules for combining EOS tables. The DFT-based simulation results compare well with the experimental points, and it is found that a mixing rule based on pressure equilibration performs reliably well for the mixtures considered. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
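A mixing rule based on pressure equilibration amounts to adding specific volumes of the pure components evaluated at the common pressure and temperature. A toy sketch, illustrative only: real EOS tables replace the ideal-gas closures used here, for which additive volumes happen to be exact:

```python
def mix_density_pressure_equilibrium(pure_density_fns, mass_fractions, P, T):
    """Additive-volume (pressure-equilibration) mixing: bring each pure
    component to the common (P, T) and add specific volumes by mass."""
    v = sum(w / rho(P, T) for rho, w in zip(pure_density_fns, mass_fractions))
    return 1.0 / v

R = 8.314462618  # J/(mol K)

def ideal_gas(molar_mass):
    """Pure-species density closure rho(P, T) for an ideal gas."""
    return lambda P, T: P * molar_mass / (R * T)

rho_xe = ideal_gas(0.131293)    # xenon molar mass, kg/mol
rho_c2h6 = ideal_gas(0.03007)   # ethane molar mass, kg/mol
rho_mix = mix_density_pressure_equilibrium(
    [rho_xe, rho_c2h6], [0.5, 0.5], 1.0e5, 300.0)
```

For ideal gases the rule reproduces the exact mixture density; the interesting question, which the DFT-MD and Z-machine data address, is how far it can be trusted for strongly interacting shocked fluids.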
NASA Astrophysics Data System (ADS)
Hand, J. W.; Li, Y.; Hajnal, J. V.
2010-02-01
Numerical simulations of specific absorption rate (SAR) and temperature changes in a 26-week pregnant woman model within typical birdcage body coils as used in 1.5 T and 3 T MRI scanners are described. Spatial distributions of SAR and the resulting spatial and temporal changes in temperature are determined using a finite difference time domain method and a finite difference bio-heat transfer solver that accounts for discrete vessels. Heat transfer from foetus to placenta via the umbilical vein and arteries as well as that across the foetal skin/amniotic fluid/uterine wall boundaries is modelled. Results suggest that for procedures compliant with IEC normal mode conditions (maternal whole-body averaged SAR_MWB ≤ 2 W kg^-1, continuous or time-averaged over 6 min), whole foetal SAR, local foetal SAR_10g and average foetal temperature are within international safety limits. For continuous RF exposure at SAR_MWB = 2 W kg^-1 over periods of 7.5 min or longer, a maximum local foetal temperature >38 °C may occur. However, assessment of the risk posed by such maximum temperatures predicted in a static model is difficult because of frequent foetal movement. Results also confirm that when SAR_MWB = 2 W kg^-1, some local SAR_10g values in the mother's trunk and extremities exceed recommended limits.
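The bio-heat side of such simulations is typically a Pennes-type equation. A one-dimensional explicit sketch shows the structure; the generic soft-tissue properties below are assumptions for illustration, not the paper's foetal model, which uses discrete vessels and full 3D geometry:

```python
def pennes_1d(sar, n=50, dx=2e-3, t_end=360.0):
    """Explicit finite differences for the 1D Pennes bioheat equation,
    rho*c*dT/dt = k*d2T/dx2 + w_b*c_b*(T_art - T) + rho*SAR,
    with fixed-temperature boundaries. Returns the peak temperature (C)."""
    k, rho, c = 0.5, 1050.0, 3600.0   # W/(m K), kg/m^3, J/(kg K): generic tissue
    w_b, c_b = 2.0, 3600.0            # perfusion kg/(m^3 s), blood heat capacity
    t_art = 37.0                      # arterial blood temperature, C
    dt = 0.25 * dx * dx * rho * c / k # comfortably below the stability limit
    T = [t_art] * n
    for _ in range(int(t_end / dt)):
        Tn = T[:]
        for i in range(1, n - 1):
            cond = k * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx ** 2
            perf = w_b * c_b * (t_art - T[i])
            Tn[i] = T[i] + dt * (cond + perf + rho * sar) / (rho * c)
        T = Tn
    return max(T)
```

With zero SAR the tissue stays at the arterial temperature; switching on a uniform SAR produces a bounded rise set by the perfusion term, the same balance that caps foetal heating in the full model.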
Ermolaev, B.S.; Novozhilov, B.V.; Posvyanskii, V.S.; Sulimov, A.A.
1986-03-01
The authors analyze the results of a numerical simulation of the convective burning of explosive powders in the presence of increasing pressure. The formulation of the problem reproduces a typical experimental technique: a strong closed vessel with a channel uniformly filled with the explosive investigated is fitted with devices for initiating and recording the process of explosion. It is shown that the relation between the propagation velocities of the flame and the compression waves in the powder and the rate of pressure increase in the combustion zone is such that a narrow compaction zone is formed ahead of the ignition front. Another important result is obtained by analyzing the difference between the flame velocity and the gas flow velocity in the ignition front. A model of the process is given. The results of the investigation throw light on such aspects of the convective combustion mechanism and the transition from combustion to detonation as the role of compaction of the explosive in the process of flame propagation and the role of the rate of pressure increase and dissipative heating of the gas phase in the pores ahead of the ignition front.
NASA Astrophysics Data System (ADS)
Chen, R.; Pagonis, V.; Lawless, J. L.
2006-02-01
Nonmonotonic dose dependence of optically stimulated luminescence (OSL) has been reported in a number of materials including Al2O3:C which is one of the main dosimetric materials. In a recent work, the nonmonotonic effect has been shown to result, under certain circumstances, from the competition either during excitation or during readout between trapping states or recombination centers. In the present work, we report on a study of the effect in a more concrete framework of two trapping states and two kinds of recombination centers involved in the luminescence processes in Al2O3:C. Using sets of trapping parameters, based on available experimental data, previously utilized to explain the nonmonotonic dose dependence of thermoluminescence including nonzero initial occupancies of recombination centers (F+ centers), the OSL along with the occupancies of the relevant traps and centers are simulated numerically. The connection between these different resulting quantities is discussed, giving a better insight as to the ranges of the increase and decrease of the integral OSL as a function of dose, as well as the constant equilibrium value occurring at high doses.
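The full two-trap, two-center model is a system of coupled rate equations; its first-order limit, a single trap emptying under constant stimulation, already shows the basic structure and has the analytic solution I(t) = f·n0·e^(−ft) to verify against. A sketch with classical RK4 and arbitrary parameter values, not those fitted for Al2O3:C:

```python
def osl_decay(n0, f, t_end, steps=1000):
    """First-order OSL: trapped charge n obeys dn/dt = -f*n under constant
    optical stimulation rate f; the OSL signal is I(t) = f*n(t).
    Integrated with classical fourth-order Runge-Kutta."""
    dt = t_end / steps
    n = n0
    for _ in range(steps):
        k1 = -f * n
        k2 = -f * (n + 0.5 * dt * k1)
        k3 = -f * (n + 0.5 * dt * k2)
        k4 = -f * (n + dt * k3)
        n += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return n

# 1e6 trapped charges, stimulation rate 0.5 s^-1, read out for 4 s
n_final = osl_decay(1.0e6, 0.5, 4.0)
```

The competition effects discussed above enter when additional trap and center populations are coupled to n, turning the single exponential into the nonmonotonic dose response the paper simulates.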
Prexl, A.; Hoffmann, H.; Golle, M.; Kudrass, S.; Wahl, M.
2011-01-17
Springback prediction and compensation is nowadays a widely recommended discipline in finite element modeling. Many studies have shown an improvement in the accuracy of springback prediction using advanced modeling techniques, e.g. by including the Bauschinger effect. In this work, different models were investigated in the commercial simulation program AutoForm for a large series-production part manufactured from the dual-phase steel HC340XD. The work shows the differences between numerical drawbead models and geometrically modeled drawbeads. Furthermore, a sensitivity analysis was made for a reduced kinematic hardening model implemented in the finite element program AutoForm.
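The kinematic-hardening ingredient behind Bauschinger-aware springback prediction can be shown in its simplest one-dimensional return-mapping form. This is the textbook algorithm with made-up moduli, not AutoForm's material model:

```python
def uniaxial_kinematic(strain_path, E=200e3, H=20e3, sigma_y=200.0):
    """1D return mapping with linear kinematic hardening: the yield surface
    center (backstress alpha) translates with plastic flow, so reverse
    yielding starts early (the Bauschinger effect). Units: MPa."""
    sigma, alpha, eps_prev = 0.0, 0.0, 0.0
    history = []
    for eps in strain_path:
        sigma_tr = sigma + E * (eps - eps_prev)   # elastic trial stress
        f = abs(sigma_tr - alpha) - sigma_y       # yield function
        if f > 0.0:
            dgamma = f / (E + H)                  # plastic multiplier
            sign = 1.0 if sigma_tr - alpha > 0 else -1.0
            sigma = sigma_tr - E * dgamma * sign  # return to yield surface
            alpha += H * dgamma * sign            # backstress translation
        else:
            sigma = sigma_tr
        eps_prev = eps
        history.append(sigma)
    return history

# Monotonic tension to 1% strain in small increments
path = [i * 1e-4 for i in range(101)]
stresses = uniaxial_kinematic(path)
```

Under monotonic loading the response follows the elastoplastic tangent EH/(E+H) beyond yield; the Bauschinger effect appears only on load reversal, which is exactly where springback predictions gain from this model class.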
NASA Technical Reports Server (NTRS)
Peltier, L. J.; Biringen, S.
1993-01-01
The present numerical simulation explores a thermal-convective mechanism for oscillatory thermocapillary convection in a shallow Cartesian cavity for a Prandtl number 6.78 fluid. The computer program developed for this simulation integrates the two-dimensional, time-dependent Navier-Stokes equations and the energy equation by a time-accurate method on a stretched, staggered mesh. Flat free surfaces are assumed. The instability is shown to depend upon temporal coupling between large scale thermal structures within the flow field and the temperature sensitive free surface. A primary result of this study is the development of a stability diagram presenting the critical Marangoni number separating steady from the time-dependent flow states as a function of aspect ratio for the range of values between 2.3 and 3.8. Within this range, a minimum critical aspect ratio near 2.3 and a minimum critical Marangoni number near 20,000 are predicted below which steady convection is found.
NASA Astrophysics Data System (ADS)
Chirkov, V. A.; Komarov, D. K.; Stishkov, Y. K.; Vasilkov, S. A.
2015-10-01
The paper studies a particular electrode system: two flat parallel electrodes with a dielectric plate having a small circular hole between them. Its main feature is that the region of strong electric field is located far from the metal electrode surfaces, which makes it possible to preclude injection charge formation and to observe field-enhanced dissociation (the Wien effect) leading to the emergence of electrohydrodynamic (EHD) flow. The electrode system was studied by both computer simulation and experiment, the latter conducted with the help of the particle image velocimetry (PIV) technique. The numerical study used the software package COMSOL Multiphysics, which allows solving the complete set of EHD equations and obtaining the EHD flow structure. Based on the computer simulation and comparison with the experimental results, it was concluded that the Wien effect is capable of causing intense (several centimeters per second) EHD flows in low-conducting liquids and has to be taken into account when dealing with EHD devices.
Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos
2011-07-15
For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa m^3/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the pumping speeds measured. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields which holds for higher throughputs is calculated using this generic approach. This means that the test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also for the supply of the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.
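The essence of a test-particle Monte Carlo calculation in the free molecular regime can be sketched for the textbook geometry of a straight cylindrical tube: particles enter with a cosine-law flux distribution and are re-emitted diffusely at every wall collision. This is illustrative only; nothing here is taken from ProVac3D:

```python
import math, random

def transmission_probability(length, radius=1.0, n_particles=20000, seed=1):
    """Free-molecular transmission probability of a cylindrical tube by
    test-particle Monte Carlo with diffuse (cosine-law) wall re-emission."""
    rng = random.Random(seed)

    def cosine_dir(nx, ny, nz):
        # sample a direction with cosine law about the unit normal (nx, ny, nz)
        ct = math.sqrt(rng.random())
        st = math.sqrt(1.0 - ct * ct)
        phi = 2.0 * math.pi * rng.random()
        if abs(nz) < 0.9:                       # build a frame around the normal
            ux, uy, uz = -ny, nx, 0.0
        else:
            ux, uy, uz = 0.0, -nz, ny
        norm = math.sqrt(ux * ux + uy * uy + uz * uz)
        ux, uy, uz = ux / norm, uy / norm, uz / norm
        vx, vy, vz = ny * uz - nz * uy, nz * ux - nx * uz, nx * uy - ny * ux
        c, s = math.cos(phi), math.sin(phi)
        return (st * (c * ux + s * vx) + ct * nx,
                st * (c * uy + s * vy) + ct * ny,
                st * (c * uz + s * vz) + ct * nz)

    through = 0
    for _ in range(n_particles):
        r = radius * math.sqrt(rng.random())    # uniform over the entrance disk
        a = 2.0 * math.pi * rng.random()
        x, y, z = r * math.cos(a), r * math.sin(a), 0.0
        dx, dy, dz = cosine_dir(0.0, 0.0, 1.0)  # enter through the z = 0 plane
        while True:
            aa = dx * dx + dy * dy              # distance to the cylinder wall
            t_wall = math.inf
            if aa > 1e-12:
                bb = x * dx + y * dy
                cc = x * x + y * y - radius * radius
                disc = bb * bb - aa * cc
                if disc > 0.0:
                    t = (-bb + math.sqrt(disc)) / aa
                    if t > 1e-9:
                        t_wall = t
            t_exit = (length - z) / dz if dz > 1e-12 else math.inf
            t_back = -z / dz if dz < -1e-12 else math.inf
            t_min = min(t_wall, t_exit, t_back)
            if t_min == t_exit:                 # transmitted
                through += 1
                break
            if t_min == t_back:                 # backscattered to the source
                break
            x, y, z = x + t_min * dx, y + t_min * dy, z + t_min * dz
            dx, dy, dz = cosine_dir(-x / radius, -y / radius, 0.0)
    return through / n_particles
```

For a tube with length equal to its radius, the estimate should land near the classical Clausing value of about 0.67, and it falls as the tube gets longer.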
2010-01-01
Background The mitosporic fungus Trichoderma harzianum (Hypocrea, Ascomycota, Hypocreales, Hypocreaceae) is a ubiquitous species in the environment, with some strains commercially exploited for the biological control of plant pathogenic fungi. Although T. harzianum is asexual (or anamorphic), its sexual stage (or teleomorph) has been described as Hypocrea lixii. Since recombination would be an important issue for the efficacy of a biological control agent in the field, we investigated the phylogenetic structure of the species. Results Using DNA sequence data from three unlinked loci for each of 93 strains collected worldwide, we detected a complex speciation process revealing overlapping reproductively isolated biological species, recent agamospecies and numerous relict lineages with unresolved phylogenetic positions. Genealogical concordance and recombination analyses confirm the existence of two genetically isolated agamospecies including T. harzianum sensu stricto and two hypothetical holomorphic species related to but different from H. lixii. The exact phylogenetic position of the majority of strains was not resolved and therefore attributed to a diverse network of recombining strains conventionally called 'pseudoharzianum matrix'. Since H. lixii and T. harzianum are evidently genetically isolated, the anamorph - teleomorph combination comprising H. lixii/T. harzianum in one holomorph must be rejected in favor of two separate species. Conclusions Our data illustrate a complex speciation within the H. lixii - T. harzianum species group, which is based on coexistence and interaction of organisms with different evolutionary histories and on the absence of strict genetic borders between them. PMID:20359347
The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...
NASA Astrophysics Data System (ADS)
Trzaska, S.; Moron, V.; Fontaine, B.
1996-10-01
This article investigates through numerical experiments the controversial question of the impact of El Niño-Southern Oscillation (ENSO) phenomena on climate according to large-scale and regional-scale interhemispheric thermal contrast. Eight experiments (two considering only inversed Atlantic thermal anomalies and six combining ENSO warm phase with large-scale interhemispheric contrast and Atlantic anomaly patterns) were performed with the Météo-France atmospheric general circulation model. The definition of boundary conditions from observed composites and principal components is presented and preliminary results concerning the month of August, especially over West Africa and the equatorial Atlantic are discussed. Results are coherent with observations and show that interhemispheric and regional scale sea-surface-temperature anomaly (SST) patterns could significantly modulate the impact of ENSO phenomena: the impact of warm-phase ENSO, relative to the atmospheric model intercomparison project (AMIP) climatology, seems stronger when embedded in global and regional SSTA patterns representative of the post-1970 conditions [i.e. with temperatures warmer (colder) than the long-term mean in the southern hemisphere (northern hemisphere)]. Atlantic SSTAs may also play a significant role.
NASA Astrophysics Data System (ADS)
Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina
2012-03-01
Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may significantly enhance the effect of overlay mark asymmetry and lead to metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.
NASA Astrophysics Data System (ADS)
Crow, W.; Gasda, S. E.; Williams, D. B.; Celia, M. A.; Carey, J. W.
2008-12-01
An important aspect of the risk associated with geological CO2 sequestration is the integrity of existing wellbores that penetrate geological layers targeted for CO2 injection. CO2 leakage may occur through multiple pathways along a wellbore, including through micro-fractures and micro-annuli within the "disturbed zone" surrounding the well casing. The effective permeability of this zone is a key wellbore-integrity parameter required for validation of numerical models. This parameter depends on a number of complex factors, including long-term attack by aggressive fluids, poor well completion and actions related to production of fluids through the wellbore. Recent studies have sought to replicate downhole conditions in the laboratory to identify the mechanisms and rates at which cement deterioration occurs. However, field tests are essential to understanding the in situ leakage properties of the millions of wells that exist in the mature sedimentary basins of North America. In this study, we present results from a field study of a 30-year-old production well from a natural CO2 reservoir. The wellbore was potentially exposed to a 96% CO2 fluid from the time of cement placement, and therefore cement degradation may be a significant factor leading to leakage pathways along this wellbore. A series of downhole tests was performed, including bond logs and extraction of sidewall cores. The cores were analyzed in the laboratory for mineralogical and hydrologic properties. A pressure test was conducted over an 11-ft section of the well to determine the extent of hydraulic communication along the exterior of the well casing. Through analysis of the pressure test data, we are able to estimate the effective permeability of the disturbed zone along the exterior of the wellbore over this 11-ft section. We find that the estimated range of effective permeability from the field test is consistent with laboratory analysis and bond log data. The cement interfaces with casing and/or formation are
Numerical Boundary Condition Procedures
NASA Technical Reports Server (NTRS)
1981-01-01
Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.
NASA Astrophysics Data System (ADS)
Lo Russo, S.; Taddia, G.; Gnavi, L.
2012-04-01
KEY WORDS: Open-loop ground water heat pump; Feflow; Low-enthalpy; Thermal Affected Zone; Turin; Italy The increasing diffusion of low-enthalpy geothermal open-loop Groundwater Heat Pumps (GWHP) providing air conditioning to buildings requires a careful assessment of the overall effects on the groundwater system, especially in urban areas where several plants can be close together and interfere. One of the fundamental aspects in the realization of an open-loop low-enthalpy geothermal system is therefore the capacity to forecast the thermal alteration produced in the ground by the geothermal system itself. The impact on the groundwater temperature in the area surrounding the re-injection well (Thermal Affected Zone - TAZ) is directly linked to the aquifer properties. The transient dynamics of groundwater discharge and temperature variations should also be considered to assess the subsurface environmental effects of the plant. The experimental groundwater heat pump system used in this study is installed at the "Politecnico di Torino" (NW Italy, Piedmont Region). This plant provides summer cooling for the university buildings. The system is composed of a pumping well, a downgradient injection well and a control piezometer. The system is constantly monitored by multiparameter probes measuring the dynamics of groundwater temperature. A finite-element subsurface flow and transport simulator (FEFLOW) was used to investigate the thermal alteration of the aquifer. Simulations were performed continuously over May-October 2010 (the cooling period). The numerical simulation of heat transport in the aquifer was solved under transient conditions. The simulation considered only heat transfer within the saturated aquifer, without any heat dispersion above or below the saturated zone, owing to the lack of detailed information regarding the unsaturated zone. Model results were compared with experimental temperature data derived from groundwater
NASA Astrophysics Data System (ADS)
Declair, Stefan; Stephan, Klaus; Potthast, Roland
2015-04-01
Determining the amount of weather-dependent renewable energy is a demanding task for transmission system operators (TSOs). In the project EWeLiNE, funded by the German government, the German Weather Service and the Fraunhofer Institute for Wind Energy and Energy System Technology strongly support the TSOs by developing innovative weather and power forecasting models and tools for grid integration of weather-dependent renewable energy. The key element in the energy prediction process chain is the numerical weather prediction (NWP) system. With a focus on wind energy, we address model errors in the planetary boundary layer, which is characterized by strong spatial and temporal fluctuations in wind speed, in order to improve the basis of weather-dependent renewable energy prediction. Model data can be corrected by postprocessing techniques such as model output statistics and calibration using historical observational data. On the other hand, the latest observations can be used in a preprocessing technique called data assimilation (DA). In DA, the model output from a previous time step is combined with observational data such that the new model state used to initialize the model integration (the analysis) best fits both the latest model data and the observational data. Model errors can therefore be reduced even before the model integration. In this contribution, the results of an impact study are presented. A so-called OSSE (Observing System Simulation Experiment) is performed using the convection-resolving COSMO-DE model of the German Weather Service and a 4D-DA technique, a Newtonian relaxation method also called nudging. Starting from a nature run (treated as the truth), conventional observations and artificial wind observations at hub height are generated. In a control run, the basic model setup of the nature run is slightly perturbed to drag the model away from the previously generated truth, and a free forecast is computed based on the analysis using only conventional
NASA Astrophysics Data System (ADS)
Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.
2015-06-01
Successful development of approaches to quantify the impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by a lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year, because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using same-year ground truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat7 scan-line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum-likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent-year ground-truth data to classify prior-year landuse, an
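The majority-rule reclassification step within land-unit polygons can be sketched as below. This is an illustrative reimplementation of the general technique, not the authors' processing chain: flat label lists stand in for raster data, and the function name is hypothetical.

```python
from collections import Counter

def majority_reclassify(pixel_labels, polygon_ids):
    """Majority-rule post-classification: reassign every pixel the most
    common class within its polygon. Pixels outside any polygon
    (polygon id None) keep their original per-pixel label."""
    votes = {}
    for label, poly in zip(pixel_labels, polygon_ids):
        if poly is not None:
            votes.setdefault(poly, Counter())[label] += 1
    majority = {poly: counts.most_common(1)[0][0] for poly, counts in votes.items()}
    return [majority[poly] if poly is not None else label
            for label, poly in zip(pixel_labels, polygon_ids)]

# a lone "grass" pixel inside a mostly-"corn" polygon is flipped to "corn"
labels = ["corn", "corn", "grass", "grass", "urban", "grass"]
polys = [1, 1, 1, 2, None, 2]
cleaned = majority_reclassify(labels, polys)
```

The same voting idea extends to the temporal case described in the abstract (majority over years for stable classes such as forest and urban development), with years playing the role of polygons.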
Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis
NASA Technical Reports Server (NTRS)
Slojkowski, Steven E.
2014-01-01
Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.
Sprenger, Lisa; Lange, Adrian; Odenbach, Stefan
2013-12-15
Ferrofluids are colloidal suspensions consisting of magnetic nanoparticles dispersed in a carrier liquid. Their thermodiffusive behaviour is rather strong compared to molecular binary mixtures, leading to a Soret coefficient (S_T) of 0.16 K⁻¹. Former experiments with dilute magnetic fluids have been done with thermogravitational columns or horizontal thermodiffusion cells by different research groups. Considering the horizontal thermodiffusion cell, a former analytical approach has been used to solve the phenomenological diffusion equation in one dimension, assuming a constant concentration gradient over the cell's height. The current experimental work is based on the horizontal separation cell and emphasises the comparison of the concentration development in differently concentrated magnetic fluids and at different temperature gradients. The ferrofluid investigated is the kerosene-based EMG905 (Ferrotec), to be compared with the APG513A (Ferrotec), both containing magnetite nanoparticles. The experiments prove that the separation process depends linearly on the temperature gradient and that a constant concentration gradient develops in the setup due to the separation. Analytical one-dimensional and numerical three-dimensional approaches to solving the diffusion equation are derived and compared with the solution used so far for dilute fluids, to see if the assumptions made formerly also hold for more highly concentrated fluids. Both the analytical and the numerical solutions, in either a phenomenological or a thermodynamic description, are able to reproduce the separation signal obtained from the experiments. The Soret coefficient can then be determined to be 0.184 K⁻¹ in the analytical case and 0.29 K⁻¹ in the numerical case. Former theoretical approaches for dilute magnetic fluids underestimate the strength of the separation in the case of a concentrated ferrofluid.
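The phenomenological one-dimensional diffusion equation with a Soret term can be illustrated with a small explicit finite-difference solve. All parameters below are nondimensional and illustrative (cell height and diffusion coefficient set to 1; S_T ΔT chosen arbitrarily), not the EMG905/APG513A values; at steady state the scheme reproduces the constant concentration gradient dc/dx = -S_T c(1-c) dT/dx assumed in the analytical approach.

```python
def soret_profile(c0=0.05, st_dT=0.2, n=41, t_end=2.0):
    """1-D thermodiffusion (Soret) separation in a closed cell, solved with a
    conservative explicit finite-difference scheme. Nondimensional units:
    cell height h = 1, diffusion coefficient D = 1; st_dT is S_T * dT."""
    dx = 1.0 / (n - 1)
    dt = 0.2 * dx * dx          # explicit stability margin for D = 1
    grad_T = st_dT              # S_T * dT/dx with dT applied over unit height
    c = [c0] * n
    for _ in range(int(t_end / dt)):
        # mass flux J = -(dc/dx + S_T c (1 - c) dT/dx) at cell interfaces
        J = []
        for i in range(n - 1):
            cm = 0.5 * (c[i] + c[i + 1])
            J.append(-((c[i + 1] - c[i]) / dx + grad_T * cm * (1.0 - cm)))
        cnew = c[:]
        for i in range(n):
            jl = J[i - 1] if i > 0 else 0.0      # zero-flux (closed) walls
            jr = J[i] if i < n - 1 else 0.0
            cnew[i] = c[i] - dt * (jr - jl) / dx
        c = cnew
    return c

profile = soret_profile()
```

Because the flux form telescopes, total mass is conserved to round-off, and the relaxed profile carries the expected constant gradient near the midpoint where c is close to its mean value.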
NASA Technical Reports Server (NTRS)
Scalapino, D. J.; Sugar, R. L.; White, S. R.; Bickers, N. E.; Scalettar, R. T.
1989-01-01
Numerical simulations on the half-filled three-dimensional Hubbard model clearly show the onset of Néel order. Simulations of the two-dimensional electron-phonon Holstein model show the competition between the formation of a Peierls-CDW state and a superconducting state. However, the behavior of the partly filled two-dimensional Hubbard model is more difficult to determine. At half-filling, the antiferromagnetic correlations grow as T is reduced. Doping away from half-filling suppresses these correlations, and it is found that there is a weak attractive pairing interaction in the d-wave channel. However, the strength of the pair field susceptibility is weak at the temperatures and lattice sizes that have been simulated, and the nature of the low-temperature state of the nearly half-filled Hubbard model remains open.
Numerical simulation of small perturbation transonic flows
NASA Technical Reports Server (NTRS)
Seebass, A. R.; Yu, N. J.
1976-01-01
The results of a systematic study of small perturbation transonic flows are presented. Both the flow over thin airfoils and the flow over wedges were investigated. Various numerical schemes were employed in the study. The prime goal of the research was to determine the efficiency of various numerical procedures by accurately evaluating the wave drag, both by computing the pressure integral around the body and by integrating the momentum loss across the shock. Numerical errors involved in the computations that affect the accuracy of drag evaluations were analyzed. The factors that affect numerical stability and the rate of convergence of the iterative schemes were also systematically studied.
Developing a Weighted Measure of Speech Sound Accuracy
Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.
2010-01-01
Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as with listeners’ judgments of the severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers’ speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
NASA Astrophysics Data System (ADS)
Khan, Sheema; Morton, Thomas L.; Ronis, David
1987-05-01
The static correlations in highly charged colloidal and micellar suspensions, with and without added electrolyte, are examined using the hypernetted-chain approximation (HNC) for the macro-ion-macro-ion correlations and the mean-spherical approximation for the other correlations. By taking the point-ion limit for the counter-ions, an analytic solution for the counter-ion part of the problem can be obtained; this maps the macro-ion part of the problem onto a one-component problem where the macro-ions interact via a screened Coulomb potential with the Gouy-Chapman form for the screening length and an effective charge that depends on the macro-ion-macro-ion pair correlations. Numerical solutions of the effective one-component equation in the HNC approximation are presented, and in particular, the effects of macro-ion charge, nonadditive core diameters, and added electrolyte are examined. As we show, there can be a strong renormalization of the effective macro-ion charge and reentrant melting in colloidal crystals.
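The screened Coulomb (Yukawa) form of the effective macro-ion interaction can be made concrete by computing the Debye screening length it decays over. The sketch below uses the standard Debye-Hückel expression with CODATA constants; the relative permittivity of water at 25 °C and the effective charge are assumed illustrative inputs, not the renormalized charge produced by the HNC solution described in the abstract.

```python
import math

# CODATA constants
E = 1.602176634e-19        # elementary charge, C
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
KB = 1.380649e-23          # Boltzmann constant, J/K
NA = 6.02214076e23         # Avogadro constant, 1/mol

def debye_length(molar_conc, eps_r=78.4, temp=298.15, z=1):
    """Debye screening length (m) for a z:z electrolyte at molar_conc (mol/L):
    kappa^-2 = eps0*eps_r*kB*T / (e^2 * sum_i n_i z_i^2)."""
    n = molar_conc * 1000.0 * NA          # number density per ion species, 1/m^3
    ionic = 2.0 * n * z * z               # sum over both ion species
    kappa2 = E * E * ionic / (EPS0 * eps_r * KB * temp)
    return 1.0 / math.sqrt(kappa2)

def yukawa(r, q_eff, molar_conc, eps_r=78.4):
    """Screened Coulomb pair potential (J) between effective charges q_eff (C)."""
    lam = debye_length(molar_conc, eps_r)
    return q_eff * q_eff * math.exp(-r / lam) / (4.0 * math.pi * EPS0 * eps_r * r)
```

For a 1 mM 1:1 electrolyte at room temperature this gives a screening length near 9.6 nm, shrinking tenfold at 100 mM, which is why added electrolyte so strongly affects the macro-ion correlations.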
Numerical orbit generators of artificial earth satellites
NASA Astrophysics Data System (ADS)
Kugar, H. K.; Dasilva, W. C. C.
1984-04-01
A numerical orbit integrator is presented that contains updates and improvements relative to the previous ones used by the Departmento de Mecanica Espacial e Controle (DMC) of INPE, and incorporates newer models reflecting the experience acquired over time. Flexibility and modularity were taken into account in order to allow future extensions and modifications. Numerical accuracy, processing speed and memory economy, as well as usability aspects, were also considered. A user's handbook, the complete program listing and a qualitative analysis of accuracy, processing time and orbit perturbation effects are included.
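The core of any such orbit generator is a numerical integrator applied to the equations of motion. As a minimal sketch (not the DMC/INPE code, which includes perturbation models), the following fourth-order Runge-Kutta propagator advances a two-body Keplerian orbit with point-mass gravity only; closing a circular orbit after one period is the usual accuracy check.

```python
import math

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def deriv(state):
    """Time derivative of [x, y, z, vx, vy, vz] under point-mass gravity."""
    x, y, z = state[0], state[1], state[2]
    r3 = (x * x + y * y + z * z) ** 1.5
    return [state[3], state[4], state[5], -MU * x / r3, -MU * y / r3, -MU * z / r3]

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def propagate(state, t_total, n_steps):
    dt = t_total / n_steps
    for _ in range(n_steps):
        state = rk4_step(state, dt)
    return state

# circular 7000-km orbit propagated for one Keplerian period
r0 = 7000.0
v0 = math.sqrt(MU / r0)
period = 2.0 * math.pi * math.sqrt(r0 ** 3 / MU)
final = propagate([r0, 0.0, 0.0, 0.0, v0, 0.0], period, 2000)
```

A production integrator would add perturbations (non-spherical gravity, drag, third bodies) to `deriv` and typically use a higher-order or variable-step scheme, but the structure is the same.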
NASA Astrophysics Data System (ADS)
Deplano, V.; Pelissier, R.; Rieu, R.; Bontoux, P.
1994-01-01
Bifurcations are vascular singularities of interest because they are the privileged sites of atherosclerosis deposits, particularly at the sites corresponding to wall shear stress extrema. The purpose of this paper is to compare the two- and three-dimensional characteristics of the velocity fields, the shear stress distributions and the secondary flows in a symmetrical aortic bifurcation. The branching angle is equal to 60° and the branch-to-trunk area ratio to 0.8. The numerical simulations are performed using the FIDAP programme. Although restricted by the hypotheses of steady flow and a rigid channel with rectangular cross-sections, this study shows the importance of three-dimensional effects, in particular as far as wall shear stress behaviour is concerned.
NASA Astrophysics Data System (ADS)
Losiak, Anna; Czechowski, Leszek; Velbel, Michael A.
2015-12-01
Gypsum, a mineral that requires water to form, is common on the surface of Mars. Most of it originated more than 3.5 Gyr ago, when the Red Planet was more humid than now. However, occurrences of gypsum dune deposits around the North Polar Residual Cap (NPRC) seem to be surprisingly young: late Amazonian in age. This shows that liquid water was present on Mars even at times when surface conditions were as cold and dry as at present. A recently proposed mechanism for gypsum formation involves weathering of dust within ice (e.g., Niles, P.B., Michalski, J. [2009]. Nat. Geosci. 2, 215-220). However, none of the previous studies have determined whether this process is possible under current martian conditions. Here, we use numerical modelling of heat transfer to show that during the warmest days of the summer, solar irradiation may be sufficient to melt pure water ice located below a layer of dark dust particles (albedo ⩽ 0.13) lying on the steepest sections of the equator-facing slopes of the spiral troughs within the martian NPRC. During times of high irradiance at the north pole (every 51 ka, caused by variations of the orbital and rotational parameters of Mars; e.g., Laskar, J. et al. [2002]. Nature 419, 375-377), this process could have taken place over larger parts of the spiral troughs. The existence of small amounts of liquid water close to the surface, even under current martian conditions, fulfils one of the main requirements necessary to explain the formation of the extensive gypsum deposits around the NPRC. It also changes our understanding of the degree of current geological activity on Mars and has important implications for estimating the astrobiological potential of Mars.
On the Spatial and Temporal Accuracy of Overset Grid Methods for Moving Body Problems
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1996-01-01
A study of numerical attributes peculiar to an overset grid approach to unsteady aerodynamics prediction is presented. Attention is focused on the effect of spatial error associated with interpolation of intergrid boundary conditions, and of temporal error associated with explicit update of intergrid boundary points, on overall solution accuracy. A set of numerical experiments is used to verify whether or not the use of simple interpolation for intergrid boundary conditions degrades the formal accuracy of a conventional second-order flow solver, and to quantify the error associated with explicit updating of intergrid boundary points. Test conditions correspond to the transonic regime. The validity of the numerical results presented here is established by comparison with existing numerical results of documented accuracy, and by direct comparison with experimental results.
NASA Astrophysics Data System (ADS)
Rawat, A.; Aucan, J.; Ardhuin, F.
2012-12-01
All sea level variations of the order of 1 cm at scales under 30 km are of great interest for the future Surface Water Ocean Topography (SWOT) satellite mission. That satellite should provide high-resolution maps of the sea surface height for analysis of meso- to sub-mesoscale currents, but that will require filtering of all gravity wave motions from the data. Free infragravity waves (FIGWs) are generated and radiate offshore when swells and/or wind seas and their associated bound infragravity waves impact exposed coastlines. Free infragravity waves have dominant periods between 1 and 10 minutes and horizontal wavelengths of up to tens of kilometers. Given these wavelengths and amplitudes, the infragravity wave field can constitute a significant fraction of the signal measured by the future SWOT mission. In this study, we analyze the data from recovered bottom pressure recorders of the Deep-ocean Assessment and Reporting of Tsunami (DART) program. This analysis includes data spanning several years between 2006 and 2010, from stations at different latitudes in the North and South Pacific, the North Atlantic, the Gulf of Mexico and the Caribbean Sea. We present and discuss the following conclusions: (1) The amplitude of free infragravity waves can reach several centimeters, higher than the precision sought for the SWOT mission. (2) The free infragravity signal is higher in the Eastern North Pacific than in the Western North Pacific, possibly due to smaller incident swell and seas impacting the nearby coastlines. (3) Free infragravity waves are higher in the North Pacific than in the North Atlantic, possibly owing to different average continental shelf configurations in the two basins. (4) There is a clear seasonal cycle at the high-latitude North Atlantic and Pacific stations that is much less pronounced or absent at the tropical stations, consistent with the generation mechanism of free infragravity waves. Our numerical model
NASA Astrophysics Data System (ADS)
Wildman, R. D.; Jenkins, J. T.; Krouskop, P. E.; Talbot, J.
2006-07-01
A comparison of the predictions of a simple kinetic theory with experimental and numerical results for a vibrated granular bed consisting of nearly elastic particles of two sizes has been performed. The results show good agreement between the data sets for a range of numbers of each size of particle, and are particularly good for particle beds containing similar proportions of each species. The agreement suggests that such a model may be a good starting point for describing polydisperse systems of granular flows.
NASA Astrophysics Data System (ADS)
Bushenkova, N.; Chervov, V.; Koulakov, I.
2010-12-01
In this study we investigate the recent structure of the lithosphere and the dynamics of the sub-lithosphere mantle beneath a large part of Eurasia based on results of seismic tomography and numerical modeling. The study area includes rigid old lithospheric blocks, such as the Siberian Craton, the Tarim plate and remnant parts of the Tuva-Mongolia continent, as well as more recent structures such as the West Siberian plate and the orogenic belts in southern Siberia. The thickness of the lithosphere was estimated based on the regional tomographic model by Koulakov and Bushenkova (2010) using a method described by Bushenkova et al. (2008). These estimates were used to define the boundary conditions in the numerical modeling. To reduce marginal effects, the modeling area was considerably enlarged to include the Russian, North China and South China plates. However, the Indian plate and its movement were not taken into account in this model. The numerical modeling was performed in a spherical segment limited by longitude 0 E-150 E, latitude 0-80 N and depth 0-700 km, using a regular grid of 151x81x36 and a time step of 10 Ma. Here we numerically solve the Navier-Stokes equations using the Oberbeck-Boussinesq approximation in spherical coordinates. In our model viscosity depends on pressure and temperature. The modeling shows that ascending flows and higher temperatures (up to 100 degrees) are usually associated with the thick lithosphere of cratons. These flows determine the shapes of convective cells far outside the craton and generate another ascending flow in non-cratonic areas. Areas with thin lithosphere are usually associated with descending flows and colder mantle. One example is the area between the Siberian craton to the north and the Tarim and North China plates to the south, where the estimated thickness of the lithosphere is between 40 and 75 km. There we observe descending flows in the numerical model and lower temperatures according to the tomography result. Besides the tomography results, the numerical model
Lane, J.W.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.
2000-01-01
The suitability of common-offset ground-penetrating radar (GPR) for detecting free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in phase with the transmitted pulse. Water-filled fractures create reflections of greater amplitude and opposite polarity compared with those created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate that it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections; nevertheless, subsurface heterogeneity, changes in antenna coupling, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties, as demonstrated by the numerical and experimental results, suggests the potential of using GPR methods as a monitoring tool. GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
NASA Astrophysics Data System (ADS)
van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis
2014-11-01
In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al. (JCP 227 (2008)) and employ the volume-of-fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second-order accurate, un-split, conservative, three-dimensional VOF scheme that provides second-order density fluxes and is capable of robust and accurate high-density-ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.
The construction of high-accuracy schemes for acoustic equations
NASA Technical Reports Server (NTRS)
Tang, Lei; Baeder, James D.
1995-01-01
An accuracy analysis of various high-order schemes is performed from an interpolation point of view. The analysis indicates that classical high-order finite difference schemes, which use polynomial interpolation, hold high accuracy only at the nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
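The CIP idea, carrying both the value and its spatial derivative at each node and advecting them with an in-cell cubic Hermite interpolant, can be sketched for 1-D constant-speed advection as follows (a minimal illustration, not the paper's code; variable names are ours):

```python
import math

def cip_advect(f, g, c, dt, dx, steps):
    """Advance u_t + c*u_x = 0 (c > 0, periodic grid) with the CIP scheme.
    CIP stores the value f and the spatial derivative g at each node and
    shifts an in-cell cubic Hermite interpolant along the characteristic,
    so accuracy is uniform inside the cell, not just at the nodes."""
    n = len(f)
    D = -dx        # signed distance to the upwind node (c > 0)
    xi = -c * dt   # departure-point offset
    for _ in range(steps):
        fn, gn = f[:], g[:]
        for i in range(n):
            iu = (i - 1) % n   # upwind neighbour
            a = (gn[iu] + gn[i]) / D**2 - 2.0 * (fn[iu] - fn[i]) / D**3
            b = 3.0 * (fn[iu] - fn[i]) / D**2 - (gn[iu] + 2.0 * gn[i]) / D
            f[i] = a * xi**3 + b * xi**2 + gn[i] * xi + fn[i]
            g[i] = 3.0 * a * xi**2 + 2.0 * b * xi + gn[i]
    return f, g

# advect a sine wave exactly once around a periodic domain (CFL = 0.5)
N, c = 100, 1.0
dx = 1.0 / N
dt = 0.5 * dx / c
x = [i * dx for i in range(N)]
f = [math.sin(2.0 * math.pi * xv) for xv in x]
g = [2.0 * math.pi * math.cos(2.0 * math.pi * xv) for xv in x]
f, g = cip_advect(f, g, c, dt, dx, steps=2 * N)
err = max(abs(fv - math.sin(2.0 * math.pi * xv)) for fv, xv in zip(f, x))
```

For a linear profile the update reduces to an exact shift, and for the smooth sine above the error after a full period remains very small, illustrating the in-cell accuracy the abstract describes.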
NASA Astrophysics Data System (ADS)
Ji, B.; Peng, X. X.; Long, X. P.; Luo, X. W.; Wu, Y. L.
2015-12-01
Results of cavitating turbulent flow simulations around a twisted hydrofoil are presented, obtained with the Partially-Averaged Navier-Stokes (PANS) method (Ji et al. 2013a), Large-Eddy Simulation (LES) (Ji et al. 2013b), and Reynolds-Averaged Navier-Stokes (RANS) modeling. The results are compared with available experimental data (Foeth 2008). PANS and LES reasonably reproduce the cavitation shedding patterns around the twisted hydrofoil, with primary and secondary shedding, while the RANS model fails to simulate the unsteady cavitation shedding phenomenon and yields an almost steady flow with a constant cavity shape and vapor volume. In addition, the vapor cavity shedding predicted by PANS is more turbulent, and its shedding vortex stronger, than that predicted by LES, in closer agreement with the experimental photographs.
Influence of the quantum well models on the numerical simulation of planar InGaN/GaN LED results
NASA Astrophysics Data System (ADS)
Podgórski, J.; Woźny, J.; Lisik, Z.
2016-04-01
Within this paper, we present an electrical model of a light-emitting diode (LED) made of gallium nitride (GaN), followed by examples of simulation results obtained with the Sentaurus software, part of the TCAD package. The aim of this work is to answer the question of whether the physical models of quantum wells used in commercial software are suitable for a correct analysis of lateral GaN LEDs.
NASA Astrophysics Data System (ADS)
Perez-Poch, Antoni
Computer simulations are becoming a promising line of research as physiological models become more sophisticated and reliable. Advances in state-of-the-art hardware and software nowadays allow better and more accurate simulations of complex phenomena, such as the response of the human cardiovascular system to long-term exposure to microgravity. Experimental data for long-term missions are difficult to obtain and reproduce, so the predictions of computer simulations are of major importance in this field. Our approach is based on a model previously developed and implemented in our laboratory (NELME: Numerical Evaluation of Long-term Microgravity Effects). The software simulates the behaviour of the cardiovascular system and different human organs, has a modular architecture, and allows the introduction of perturbations such as physical exercise or countermeasures. The implementation is based on a complex electrical-analogue model of this control system, built with inexpensive development frameworks, and has been tested and validated against the available experimental data. The objective of this work is to analyse and simulate long-term effects and gender differences when individuals are exposed to long-term microgravity. The probability of a health impairment that might jeopardize a long-term mission is also evaluated. Gender differences have been implemented for this specific work as adjustments of a number of parameters included in the model. Physiological differences between women and men have therefore been taken into account, based on estimates from the physiology literature. A number of simulations have been carried out for long-term exposure to microgravity. Gravity, varied continuously from Earth level to zero, and exposure time are the two main variables involved in the construction of results, including responses to patterns of aerobic physical exercise and thermal stress simulating an extra
Lockwood, M.; Owens, M.
2009-08-20
We survey observations of the radial magnetic field in the heliosphere as a function of position, sunspot number, and sunspot cycle phase. We show that most of the differences between pairs of simultaneous observations, normalized using the square of the heliocentric distance and averaged over solar rotations, are consistent with the kinematic 'flux excess' effect, whereby the radial component of the frozen-in heliospheric field is increased by longitudinal solar wind speed structure. In particular, the survey shows that, as expected, the flux excess effect at high latitudes is almost completely absent during sunspot minimum but is almost the same as within the streamer belt at sunspot maximum. We study the uncertainty inherent in the use of the Ulysses result that the radial field is independent of heliographic latitude in the computation of the total open solar flux: we show that, after the kinematic correction for the flux excess effect has been made, this assumption causes errors smaller than 4.5%, with a most likely value of 2.5%. The importance of this result for understanding the temporal evolution of the open solar flux is reviewed.
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Dumas, Catherine
1993-01-01
A computational fluid dynamics (CFD) model has been applied to study the transient flow phenomena of the nozzle and exhaust plume of the Space Shuttle Main Engine (SSME), fired at sea level. The CFD model is a time-accurate, pressure-based, reactive flow solver. A six-species hydrogen/oxygen equilibrium chemistry is used to describe the chemical thermodynamics. An adaptive upwinding scheme is employed for the spatial discretization, and a predictor/multiple-corrector method is used for the temporal solution. Both the engine start-up and shut-down processes were simulated; the elapsed time is approximately five seconds in both cases. The computed results were animated and compared with test data. The images for the animation were created with PLOT3D and FAST and then animated with ABEKAS. Hysteresis effects and the issues of free-shock separation, restricted-shock separation, and end-effects were addressed.
NASA Technical Reports Server (NTRS)
Boville, Byron A.; Baumhefner, David P.
1990-01-01
Using the NCAR Community Climate Model, Version 1, the forecast error growth and the climate drift resulting from the omission of the upper stratosphere are investigated. In the experiment, the control simulation is a seasonal integration of a medium-horizontal-resolution general circulation model with 30 levels extending from the surface to the upper mesosphere, while the main experiment uses an identical model except that only the bottom 15 levels (below 10 mb) are retained. It is shown that both random and systematic errors develop rapidly in the lower stratosphere, with some local propagation into the troposphere in the 10-30-day time range. The random error growth rate in the troposphere with the altered upper boundary was found to be slightly faster than that for initial-condition uncertainty alone. However, this is not likely to make a significant impact in operational forecast models, because the initial-condition uncertainty is very large.
Ndiaye, L G; Caillat, S; Chinnayya, A; Gambier, D; Baudoin, B
2010-07-01
In order to simulate the behavior of granular materials in a rotary kiln under the steady-state regime, a mathematical model was developed by Saeman (1951). This model enables calculation of the bed profile, the axial velocity, and the solids flow rate along the kiln, and it can be coupled with a thermochemical model in the case of a reacting moving bed. This dynamic model was used to calculate the bed profile for an industrial-size kiln, and the model projections were validated by measurements in a 4 m diameter by 16 m long industrial rotary kiln. The effects of rotation speed on the solids bed profile and of feed rate on the filling degree were established. On the basis of the calculations and the experimental results, a phenomenological relation for estimating the residence time in the rotary kiln was proposed.
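A commonly quoted form of Saeman's bed-depth equation (notation and the illustrative parameter values below are ours, not the paper's; sign and constant conventions vary between papers) can be integrated along the kiln axis in a few lines:

```python
import math

def saeman_profile(Q, n, R, theta, alpha, h0, length, dx=0.01):
    """Integrate a commonly quoted form of Saeman's bed-depth equation,
        dh/dx = 3*Q*tan(theta) / (4*pi*n*R^3 * [2h/R - (h/R)^2]^(3/2))
                - tan(alpha)/cos(theta),
    upstream from the discharge dam (h = h0 at x = 0) by forward Euler.
    Q: volumetric feed rate [m^3/s], n: rotation speed [rev/s],
    R: kiln radius [m], theta: repose angle [rad], alpha: kiln slope [rad]."""
    c1 = 3.0 * Q * math.tan(theta) / (4.0 * math.pi * n * R ** 3)
    c2 = math.tan(alpha) / math.cos(theta)
    h, profile = h0, [h0]
    for _ in range(int(length / dx)):
        fill = 2.0 * h / R - (h / R) ** 2   # geometric bed-fill factor
        h += dx * (c1 / fill ** 1.5 - c2)
        profile.append(h)
    return profile

# a 4 m diameter, 16 m long kiln at 3 rpm (illustrative numbers only)
prof = saeman_profile(Q=0.01, n=0.05, R=2.0, theta=math.radians(35),
                      alpha=math.radians(1.0), h0=0.1, length=16.0)
```

With these parameters the bed depth grows monotonically from the dam toward the equilibrium depth at which the feed and slope terms balance, the qualitative shape the abstract's bed-profile calculations describe.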
NASA Technical Reports Server (NTRS)
Durisen, R. H.
1975-01-01
Improved viscous evolutionary sequences of differentially rotating, axisymmetric, nonmagnetic, zero-temperature white-dwarf models are constructed using the relativistically corrected degenerate electron viscosity. The results support the earlier conclusion that angular momentum transport due to viscosity does not lead to overall uniform rotation in many interesting cases. Qualitatively different behaviors are obtained, depending on how the total mass M and angular momentum J compare with the M and J values for which uniformly rotating models exist. Evolutions roughly determine the region in M and J for which models with a particular initial angular momentum distribution can reach carbon-ignition densities in 10 b.y. Such models may represent Type I supernova precursors.
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
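The Lax-Friedrichs flux splitting used above writes f(u) = f⁺(u) + f⁻(u) with f± = (f(u) ± αu)/2 and α = max|f′(u)|, so each part has a definite wave-speed sign and can be differenced one-sidedly. A minimal first-order sketch on 1-D Burgers' equation (an illustration of the splitting only, not the paper's second- and fourth-order Euler solver):

```python
import math

def lf_split_burgers_step(u, dt, dx):
    """One first-order upwind step for Burgers' equation u_t + (u^2/2)_x = 0
    using Lax-Friedrichs flux splitting on a periodic grid:
        f(u) = f+ + f-,  f± = (f(u) ± alpha*u)/2,  alpha = max|f'(u)| = max|u|,
    so f+ carries only right-going and f- only left-going waves."""
    n = len(u)
    alpha = max(abs(v) for v in u)
    fp = [(0.5 * v * v + alpha * v) / 2.0 for v in u]  # right-going part
    fm = [(0.5 * v * v - alpha * v) / 2.0 for v in u]  # left-going part
    return [u[i]
            - dt / dx * (fp[i] - fp[(i - 1) % n])      # backward difference for f+
            - dt / dx * (fm[(i + 1) % n] - fm[i])      # forward difference for f-
            for i in range(n)]

# a sine wave steepening into a captured shock; the split scheme is conservative
N = 200
dx = 1.0 / N
u = [math.sin(2.0 * math.pi * i * dx) for i in range(N)]
mass0 = sum(u) * dx
for _ in range(100):
    u = lf_split_burgers_step(u, dt=0.4 * dx, dx=dx)
mass1 = sum(u) * dx
```

Because the flux differences telescope over a periodic grid, the total "mass" is conserved to rounding error even as the shock forms, which is the property that makes such splittings attractive for shock capturing.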
NASA Astrophysics Data System (ADS)
Randol, Brent M.; Christian, Eric R.
2016-03-01
A parametric study is performed using the electrostatic simulations of Randol and Christian in which the number density, n, and initial thermal speed, θ, are varied. The range of parameters covers an extremely broad plasma regime, all the way from the very weak coupling of space plasmas to the very strong coupling of solid plasmas. The first result is that simulations at the same ΓD, where ΓD (∝ n^(1/3) θ^(-2)) is the plasma coupling parameter, but at different combinations of n and θ, behave exactly the same. As a function of ΓD, the form of p(v), the distribution function of v, the magnitude of the velocity vector, is studied. For intermediate to high ΓD, heating is observed in p(v) that obeys conservation of energy, and a suprathermal tail is formed with a spectral index that depends on ΓD. For strong coupling (ΓD ≫ 1), the form of the tail is v^(-5), consistent with the findings of Randol and Christian. For weak coupling (ΓD ≪ 1), no acceleration or heating occurs, as there is no free energy. The dependence on N, the number of particles in the simulation, is also explored. There is a subtle dependence in the index of the tail, such that v^(-5) appears to be the N → ∞ limit.
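The first result quoted above, that runs with the same ΓD but different (n, θ) behave identically, rests on the scaling ΓD ∝ n^(1/3) θ^(-2); a trivial check (proportionality constant set to 1 and values arbitrary, for illustration only):

```python
def gamma_d(n, theta):
    """Coupling parameter up to a constant factor: Gamma_D ∝ n^(1/3) * theta^(-2)
    (n: number density, theta: initial thermal speed; units arbitrary here)."""
    return n ** (1.0 / 3.0) / theta ** 2

# 64x the density at twice the thermal speed gives the same Gamma_D,
# so per the study these two runs should behave identically:
g1 = gamma_d(1.0e6, 1.0)
g2 = gamma_d(64.0e6, 2.0)
```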
NASA Technical Reports Server (NTRS)
Uslenghi, Piergiorgio L. E.; Laxpati, Sharad R.; Kawalko, Stephen F.
1993-01-01
The third phase of the development of computer codes for scattering by coated bodies, part of an ongoing effort in the Electromagnetics Laboratory of the Electrical Engineering and Computer Science Department at the University of Illinois at Chicago, is described. The work reported discusses analytical and numerical results for the scattering of an obliquely incident plane wave by impedance bodies of revolution with phi variation of the surface impedance. An integral equation formulation of the problem is considered; all three types of integral equations, electric field, magnetic field, and combined field, are treated. These equations are solved numerically via the method of moments with parametric elements. Both TE and TM polarizations of the incident plane wave are considered. The surface impedance is allowed to vary both along the profile of the scatterer and in the phi direction. The computer code developed for this purpose determines the electric surface current as well as the bistatic radar cross section. The results obtained with this code were validated by comparison with available results for specific scatterers such as the perfectly conducting sphere. Results for the cone-sphere and cone-cylinder-sphere for the case of an axially incident plane wave were validated by comparison with those obtained in the first phase of this project. Results for body-of-revolution scatterers with an abrupt change in the surface impedance along both the profile of the scatterer and the phi direction are presented.
SLAC E155 and E155x Numeric Data Results and Data Plots: Nucleon Spin Structure Functions
The nucleon spin structure functions g1 and g2 are important tools for testing models of nucleon structure and QCD. Experiments at CERN, DESY, and SLAC have measured g1 and g2 using deep inelastic scattering of polarized leptons on polarized nucleon targets. The results of these experiments have established that the quark component of the nucleon helicity is much smaller than naive quark-parton model predictions. The Bjorken sum rule has been confirmed within the uncertainties of experiment and theory. Experiment E155 at SLAC collected data in March and April of 1997. Approximately 170 million scattered-electron events were recorded to tape, along with several billion inclusive hadron events. The data were collected using three independent fixed-angle magnetic spectrometers, at approximately 2.75, 5.5, and 10.5 degrees. The momentum acceptance of the 2.75 and 5.5 degree spectrometers ranged from 10 to 40 GeV, with momentum resolution of 2-4%. The 10.5 degree spectrometer, new for E155, accepted events from 7 GeV to 20 GeV. Each spectrometer used threshold gas Cerenkov counters (for particle ID), a segmented lead-glass calorimeter (for energy measurement and particle ID), and plastic scintillator hodoscopes (for tracking and momentum measurement). The polarized targets used for E155 were 15NH3 and 6LiD, for measuring the proton and deuteron spin structure functions respectively. Experiment E155x recently concluded a successful two-month run at SLAC. The experiment was designed to measure the transverse spin structure functions of the proton and deuteron. The E155 target was also recently in use at TJNAF's Hall C (E93-026) and was returned to SLAC for E155x. E155x hopes to reduce the world data set errors on g2 by a factor of three. [Copied from http://www.slac.stanford.edu/exp/e155/e155_nickeltour.html, an information summary linked off the E155 home page at http://www.slac.stanford.edu/exp/e155/e155_home.html. The extension run, E155x, also makes
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed of magnetic field computations for an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of the boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. The relative advantages of the various methods when computation speed is an important consideration are also described.
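The claimed advantage of a fourth-order scheme for high-accuracy work is easy to see in a toy convergence check on a uniform grid (an illustration only, not the electromagnet solver): halving h cuts the error of a second-order difference by about 4x but that of a fourth-order difference by about 16x.

```python
import math

def d1_2nd(f, x, h):
    """Second-order central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_4th(f, x, h):
    """Fourth-order central difference for f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

x0, exact = 1.0, math.cos(1.0)          # d/dx sin(x) = cos(x)
e2 = [abs(d1_2nd(math.sin, x0, h) - exact) for h in (0.1, 0.05)]
e4 = [abs(d1_4th(math.sin, x0, h) - exact) for h in (0.1, 0.05)]
r2 = e2[0] / e2[1]   # error ratio for halved h: ~4 for O(h^2)
r4 = e4[0] / e4[1]   # error ratio for halved h: ~16 for O(h^4)
```

At a fixed target accuracy this translates into far coarser admissible grids for the fourth-order stencil, which is the "dramatic advantage" the abstract refers to.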
ERIC Educational Resources Information Center
Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.
2001-01-01
Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)
NASA Technical Reports Server (NTRS)
Westphalen, H.; Spjeldvik, W. N.
1982-01-01
A theoretical method by which the energy dependence of the radial diffusion coefficient may be deduced from spectral observations of the particle population at the inner edge of the earth's radiation belts is presented. This region has previously been analyzed with numerical techniques; this report gives an analytical treatment that illustrates characteristic limiting cases in the L-shell range where the time scale of Coulomb losses is substantially shorter than that of radial diffusion (L ≈ 1-2). It is demonstrated both analytically and numerically that the particle spectra there are shaped by the energy dependence of the radial diffusion coefficient regardless of the spectral shapes of the particle populations diffusing inward from the outer radiation zone, so that the energy dependence of the diffusion coefficient can be determined from observed spectra. To ensure realistic simulations, inner-zone data obtained from experiments on the DIAL, AZUR, and ESRO 2 spacecraft have been used as boundary conditions. Excellent agreement between the analytic and numerical results is reported.
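The balance analyzed above is of the standard radial-diffusion form with losses; a common textbook statement (symbols ours, not necessarily the paper's exact notation) is:

```latex
\frac{\partial f}{\partial t}
  = L^{2}\,\frac{\partial}{\partial L}\!\left(\frac{D_{LL}(L,E)}{L^{2}}\,
      \frac{\partial f}{\partial L}\right)
  - \frac{f}{\tau_{c}(L,E)},
```

where f is the phase-space density, D_LL the radial diffusion coefficient, and τ_c the Coulomb loss time. In the inner-zone limit discussed here, τ_c is much shorter than the diffusion time, so the steady-state spectrum is controlled by the energy dependence of D_LL rather than by the outer-zone source spectrum.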
NASA Technical Reports Server (NTRS)
Baker, John G.
2009-01-01
Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.
NASA Technical Reports Server (NTRS)
Back, L. H.
1972-01-01
The laminar flow equations in differential form are solved numerically on a digital computer for flow of a very high temperature gas through the entrance region of an externally cooled tube. The solution method is described and calculations are carried out in conjunction with experimental measurements. The agreement with experiment is good, with the result indicating relatively large energy and momentum losses in the highly cooled flows considered where the pressure is nearly uniform along the flow and the core flow becomes non-adiabatic a few diameters downstream of the inlet. The effects of a large range of Reynolds number and Mach number (viscous dissipation) are also investigated.
NASA Astrophysics Data System (ADS)
Rijkhorst, Erik-Jan
2005-12-01
The late stages of evolution of stars like our Sun are dominated by several episodes of violent mass loss. Space based observations of the resulting objects, known as Planetary Nebulae, show a bewildering array of highly symmetric shapes. The interplay between gasdynamics and radiative processes determines the morphological outcome of these objects, and numerical models for astrophysical gasdynamics have to incorporate these effects. This thesis presents new numerical techniques for carrying out high-resolution three-dimensional radiation hydrodynamical simulations. Such calculations require parallelization of computer codes, and the use of state-of-the-art supercomputer technology. Numerical models in the context of the shaping of Planetary Nebulae are presented, providing insight into their origin and fate.
A comparison of implicit numerical methods for solving the transient spherical diffusion equation
NASA Technical Reports Server (NTRS)
Curry, D. M.
1977-01-01
Comparative numerical temperature results obtained by using two implicit finite difference procedures for the solution of the transient diffusion equation in spherical coordinates are presented. The validity and accuracy of these solutions are demonstrated by comparison with exact analytical solutions.
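As a minimal sketch of the implicit approach (an illustration in our notation, not either of the paper's two procedures): with the substitution v = r·T, the spherical diffusion equation T_t = α(1/r²)(r²T_r)_r becomes the planar equation v_t = αv_rr, and each backward-Euler step reduces to one tridiagonal (Thomas) solve; the fundamental-mode decay can then be checked against the exact analytical rate exp(-απ²t/R²):

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# T_t = alpha*(1/r^2) d/dr (r^2 T_r) on 0 < r < 1 with T(1) = 0.
# With v = r*T this is the planar heat equation v_t = alpha*v_rr, v(0) = v(1) = 0.
alpha, N, dt, steps = 1.0, 100, 1e-3, 100
dr = 1.0 / N
r = [i * dr for i in range(1, N)]           # interior nodes
v = [math.sin(math.pi * ri) for ri in r]    # fundamental mode: T = sin(pi*r)/r
lam = alpha * dt / dr ** 2
a = [-lam] * (N - 1); b = [1 + 2 * lam] * (N - 1); c = [-lam] * (N - 1)
for _ in range(steps):                       # backward Euler: (I - dt*A) v_new = v
    v = thomas(a, b, c, v)
decay_numeric = v[(N - 1) // 2] / math.sin(math.pi * r[(N - 1) // 2])
decay_exact = math.exp(-alpha * math.pi ** 2 * dt * steps)
```

The unconditionally stable implicit step recovers the analytical mode decay to within a fraction of a percent here, the kind of exact-solution comparison the abstract uses to validate its two procedures.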
NASA Astrophysics Data System (ADS)
van Gent, H. W.; Abe, S.; Urai, J. L.; Holland, M.
2009-04-01
The formation of open cavities as a result of (normal) faulting of a brittle material under low effective stress has profound effects on the hydraulic properties of rocks both near the surface and at depth. It is, however, often difficult to access the fault zone directly. Here we present the results of a series of analogue models of normal faults in brittle rocks. Fine-grained, dry hemihydrate powder (CaSO4 * 1/2 H2O) was used as the truly cohesive analogue material. An extensive characterization of material properties, including the porosity dependence of both tensile strength and cohesion, showed the increase in strength of the powder with burial in the experimental box. In side-view observations of the analogue models, three structural zones were distinguished: a pure tensile failure zone at the surface, a pure shear failure zone near the bottom of the box, and, at mid-depths, a transitional zone with mixed-mode failure and the formation of fault cavities. These cavities initiate at local dip changes of the fault and can collapse with progressive deformation. The transitions between these zones can be directly related to the increase of material strength due to burial compaction. The intercalation of relatively softer sand layers and relatively stronger layers of a hemihydrate and graphite mixture resulted in a marked increase in the complexity of the fault zone, but the three structural zones remain clearly visible. The sand layers can form decollement surfaces and "sand-smears". The observed structures compare well with fault outcrops and fault-related cave systems in carbonates, basalts, and consolidated sandstone. We used Particle Image Velocimetry (PIV) to quantify deformation and strain, and observed plastic deformation prior to brittle failure at increments too small for visual inspection. However, the forces involved remain largely unknown. Therefore we have used the Discrete Element Method (DEM) to numerically model the formation of open fractures
NASA Technical Reports Server (NTRS)
Cabra, R.; Chen, J. Y.; Dibble, R. W.; Myhrvold, T.; Karpetis, A. N.; Barlow, R. S.
2002-01-01
An experimental and numerical investigation is presented of a lifted turbulent H2/N2 jet flame in a coflow of hot, vitiated gases. The vitiated coflow burner emulates the coupling of turbulent mixing and chemical kinetics exemplary of the reacting flow in the recirculation region of advanced combustors, and it simplifies numerical investigation of this coupled problem by removing the complexity of recirculating flow. Scalar measurements are reported for a lifted turbulent jet flame of H2/N2 (Re = 23,600, H/d = 10) in a coflow of hot combustion products from a lean H2/air flame (φ = 0.25, T = 1,045 K). The combination of Rayleigh scattering, Raman scattering, and laser-induced fluorescence is used to obtain simultaneous measurements of temperature and concentrations of the major species, OH, and NO. The data attest to the success of the experimental design in providing a uniform vitiated coflow throughout the entire test region. Two combustion models (PDF: joint scalar Probability Density Function, and EDC: Eddy Dissipation Concept) are used in conjunction with various turbulence models to predict the lift-off height (H_PDF/d = 7, H_EDC/d = 8.5). Kalghatgi's classic phenomenological theory, which is based on scaling arguments, yields a reasonably accurate prediction (H_K/d = 11.4) of the lift-off height for the present flame. The vitiated coflow admits the possibility of auto-ignition of mixed fluid, and the success of the present parabolic implementation of the PDF model in predicting a stable lifted flame is attributable to such ignition. The measurements indicate a thickened turbulent reaction zone at the flame base. Experimental results and numerical investigations support the plausibility of turbulent premixed flame propagation by small-scale (on the order of the flame thickness) recirculation and mixing of hot products into reactants and subsequent rapid ignition of the mixture.
NASA Astrophysics Data System (ADS)
Dellacherie, Stéphane
2003-05-01
To describe the uranium gas expansion in the field of Atomic Vapor Laser Isotope Separation (AVLIS; SILVA in French) with a reasonable CPU time, we have to couple the resolution of the Boltzmann equation with the resolution of the Euler system. The resolution of the Euler system uses a kinetic scheme, and the boundary condition at the kinetic-fluid interface (which defines the boundary between the Boltzmann region and the Euler region) is defined with the positive and negative half fluxes of the kinetic scheme. Moreover, in order to take into account the effect of the Knudsen layer in the resolution of the Euler system, we propose to use a Marshak condition to asymptotically match the Euler region with the uranium source. Numerical results show excellent agreement between the results obtained with and without kinetic-fluid coupling.
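The positive and negative half fluxes mentioned above are moments of a Maxwellian restricted to v > 0 or v < 0; for the 1-D mass flux they have the standard closed forms used in kinetic flux splitting (a sketch in our notation, not the AVLIS code):

```python
import math

def half_fluxes(rho, u, RT):
    """Positive/negative mass half-fluxes of a 1-D Maxwellian
        f(v) = rho / sqrt(2*pi*RT) * exp(-(v - u)^2 / (2*RT)):
        F+ = integral of v*f over v > 0,  F- over v < 0,  with F+ + F- = rho*u."""
    s = u / math.sqrt(2.0 * RT)
    fp = rho * (0.5 * u * (1.0 + math.erf(s))
                + math.sqrt(RT / (2.0 * math.pi)) * math.exp(-s * s))
    fm = rho * u - fp          # the two halves sum to the full flux
    return fp, fm

fp, fm = half_fluxes(rho=1.0, u=0.3, RT=1.0)
```

At an interface, F+ is evaluated from the upstream (Boltzmann or Euler) state and F- from the downstream state, which is how such half fluxes supply a kinetic-fluid boundary condition.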
NASA Astrophysics Data System (ADS)
Agus, M.; Mascia, M. L.; Fastame, M. C.; Melis, V.; Pilloni, M. C.; Penna, M. P.
2015-02-01
A body of literature shows the significant role that visuo-spatial skills play in the improvement of mathematical skills in primary school. The main goal of the current study was to investigate the impact of a combined visuo-spatial and mathematical training on the improvement of mathematical skills in 146 second graders from several schools located in Italy. Participants received either single pencil-and-paper visuo-spatial or mathematical trainings, computerised versions of the above-mentioned treatments, or a combined version of computer-assisted and pencil-and-paper visuo-spatial and mathematical trainings. Experimental groups received training once a week for 3 months. All children were treated collectively, in both the computer-assisted and pencil-and-paper modalities. At pre- and post-test, all participants completed a battery of objective tests assessing numerical and visuo-spatial abilities. Our results suggest a positive effect of the different types of training on the empowerment of visuo-spatial and numerical abilities. Specifically, the combination of the computerised and pencil-and-paper versions of the visuo-spatial and mathematical trainings is more effective than execution of the software or the pencil-and-paper treatment alone.
A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1976-01-01
The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary-approach orbit determination study. The case study results highlight the numerical instability of the conventional and stabilized Kalman algorithms: numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. The numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
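The U-D factorization at the heart of the Bierman-Thornton filter writes the covariance as P = U D Uᵀ with U unit upper-triangular and D diagonal; the factored measurement and time updates then operate on U and D directly, which is what preserves accuracy in single precision. A minimal sketch of the factorization itself (not JPL's implementation; the example matrix is ours):

```python
def udu_factor(P):
    """Factor a symmetric positive-definite matrix P as P = U * diag(d) * U^T
    with U unit upper-triangular, working from the last column backwards."""
    n = len(P)
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        d[j] = P[j][j] - sum(d[k] * U[j][k] ** 2 for k in range(j + 1, n))
        for i in range(j):
            U[i][j] = (P[i][j] - sum(d[k] * U[i][k] * U[j][k]
                                     for k in range(j + 1, n))) / d[j]
    return U, d

P = [[4.0, 2.0, 0.6],
     [2.0, 2.0, 0.5],
     [0.6, 0.5, 1.0]]
U, d = udu_factor(P)
```

Because only U and the scalars d are propagated, the filter never forms the difference (I - KH)P that destroys symmetry and positive-definiteness in the conventional covariance update.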
NASA Astrophysics Data System (ADS)
Macario Galang, Jan Albert; Narod Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo
2015-04-01
The M 7.2 October 15, 2013 Bohol earthquake is the most destructive earthquake to hit the Philippines since 2012. The epicenter was located in Sagbayan municipality, central Bohol, and the earthquake was generated by a previously unmapped reverse fault called the "Inabanga Fault", named after the barangay (village) where the fault is best exposed and was first seen. The earthquake resulted in 209 fatalities and over 57 billion USD worth of damages. It generated co-seismic landslides, most of which were related to fault structures. Unlike rainfall-induced landslides, co-seismic landslides are triggered without warning. Preparedness against this type of landslide therefore relies heavily on the identification of fracture-related unstable slopes. To mitigate the impacts of co-seismic landslide hazards, morpho-structural orientations or discontinuity sets were mapped in the field with the aid of a 2012 IFSAR Digital Terrain Model (DTM) with 5-meter pixel resolution and <0.5 meter vertical accuracy. Coltop 3D software was then used to identify similar structures, including measurement of their dip and dip directions. The chosen discontinuity sets were then keyed into Matterocking software to identify potential rock-slide zones due to planar or wedged discontinuities. After identifying the structurally-controlled unstable slopes, the rock mass propagation extent of the possible rock slides was simulated using Conefall. The results were compared to a post-earthquake landslide inventory of 456 landslides. Of the total number of landslides identified from post-earthquake high-resolution imagery, 366 or 80% intersect the structurally-controlled hazard areas of Bohol. The results show the potential of this method to identify co-seismic landslide hazard areas for disaster mitigation. Along with computer methods to simulate shallow landslides and debris-flow paths, the located structurally-controlled unstable zones can be used to mark areas unsafe for settlement.
High-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1975-01-01
The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to the Burgers equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary-layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions to more practical flow problems can be obtained with relatively coarse nonuniform meshes.
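The improved second-derivative treatment can be sketched with the classic fourth-order compact relation, which is in the spirit of (but not identical to) the paper's spline reformulation. Exact end values of the second derivative are supplied for clarity, and a mesh-halving error ratio of about 16 confirms fourth-order accuracy on a uniform mesh.

```python
# Sketch (not the paper's solver): the fourth-order compact relation
#   (1/12) * (M[i-1] + 10*M[i] + M[i+1]) = (f[i-1] - 2*f[i] + f[i+1]) / h^2
# yields fourth-order-accurate second derivatives M on a uniform mesh,
# versus second order for the plain central difference.
import numpy as np

def compact_second_derivative(f, h, m_left, m_right):
    n = len(f)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] = [1.0, 10.0, 1.0]
        rhs[i] = 12.0 * (f[i - 1] - 2.0 * f[i] + f[i + 1]) / h**2
    A[0, 0] = A[-1, -1] = 1.0
    rhs[0], rhs[-1] = m_left, m_right   # exact end values, for clarity
    return np.linalg.solve(A, rhs)

# Test on f = sin(x), whose second derivative is -sin(x) (0 at both ends).
errs = []
for n in (17, 33):
    x = np.linspace(0.0, np.pi, n)
    h = x[1] - x[0]
    M = compact_second_derivative(np.sin(x), h, 0.0, 0.0)
    errs.append(np.abs(M + np.sin(x)).max())
print("error ratio on mesh halving:", errs[0] / errs[1])  # ~16 for 4th order
```

The same tridiagonal structure carries over to nonuniform meshes, where (as the abstract notes) the attainable order drops to three.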
Paraskevopoulos, Dimitrios; Unterberg, Andreas; Metzner, Roland; Dreyhaupt, Jens; Eggers, Georg; Wirtz, Christian Rainer
2010-04-01
This study aimed at comparing the accuracy of two commercial neuronavigation systems. It also sought to assess errors and to quantify the influence of clinical factors and of surface registration, which often decrease accuracy. Active (Stryker Navigation) and passive (VectorVision Sky, BrainLAB) neuronavigation systems were tested with an anthropomorphic phantom with a deformable layer simulating skin and soft tissue. True coordinates measured by computer numerical control were compared with coordinates on image data and during navigation, to calculate software and system accuracy respectively. Comparison of image and navigation coordinates was used to evaluate navigation accuracy. Both systems achieved an overall accuracy of <1.5 mm. Stryker achieved better software accuracy, whereas BrainLAB achieved better system and navigation accuracy. Factors with conspicuous influence (P<0.01) were imaging, instrument replacement, sterile cover drape and geometry of instruments. Precision data indicated by the systems did not reflect measured accuracy in general. Surface matching resulted in no improvement of accuracy, confirming earlier studies. Laser registration showed no differences compared to conventional pointers. Differences between the two systems were limited. Surface registration may improve inaccurate point-based registrations but does not in general affect overall accuracy. Accuracy feedback from the systems does not always match true target accuracy and requires critical evaluation by the surgeon.
NASA Astrophysics Data System (ADS)
Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi
2016-03-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first such instrument in space with a two-DM wavefront control system. Regardless of the control algorithm for these multiple DMs, it will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations used to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like aperture. Our numerical simulation shows that we can obtain contrast better than 2×10⁻⁹ in
When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?
Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B; Hartz, Sarah M; Johnson, Eric O; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L
2015-01-01
Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen's kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation were conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
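A minimal sketch of why concordance inflates accuracy for rare variants, while Cohen's kappa (the statistic underlying IQS) corrects for chance agreement. The genotype counts are invented for illustration, not taken from the study, and kappa is computed here on best-guess genotypes; IQS proper extends this idea to imputed genotype probabilities.

```python
# Toy sketch (not the paper's pipeline): for a rare variant, raw
# concordance looks excellent even when imputation adds no information,
# while Cohen's kappa corrects for chance agreement.
from collections import Counter

def concordance(true_g, imp_g):
    # Fraction of subjects whose imputed genotype matches the true one.
    return sum(t == i for t, i in zip(true_g, imp_g)) / len(true_g)

def cohens_kappa(true_g, imp_g):
    # (observed agreement - chance agreement) / (1 - chance agreement)
    n = len(true_g)
    po = concordance(true_g, imp_g)
    ct, ci = Counter(true_g), Counter(imp_g)
    pe = sum(ct[g] * ci[g] for g in set(ct) | set(ci)) / n**2
    return (po - pe) / (1 - pe)

# 1000 subjects, minor-allele frequency ~1%:
# 990 homozygous-reference genotypes (0) and 10 heterozygotes (1).
true_g = [0] * 990 + [1] * 10
naive = [0] * 1000  # "imputation" that always calls the major genotype
print(concordance(true_g, naive))   # 0.99 -- looks excellent
print(cohens_kappa(true_g, naive))  # 0.0  -- no information beyond chance
```

For a common variant the two statistics move much closer together, matching the abstract's finding that the divergence is concentrated in rare and low-frequency variants.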
Improving Speaking Accuracy through Awareness
ERIC Educational Resources Information Center
Dormer, Jan Edwards
2013-01-01
Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…
Accuracy of the domain method for the material derivative approach to shape design sensitivities
NASA Technical Reports Server (NTRS)
Yang, R. J.; Botkin, M. E.
1987-01-01
Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.
NASA Technical Reports Server (NTRS)
Newman, P. A.; Allison, D. O.
1974-01-01
Numerical results obtained from two computer programs recently developed with NASA support and now available for use by others are compared with some sample experimental data taken on a rectangular-wing configuration in the AEDC 16-Foot Transonic Tunnel at transonic and subsonic flow conditions. This data was used in an AEDC investigation as reference data to deduce the tunnel-wall interference effects for corresponding data taken in a smaller tunnel. The comparisons were originally intended to see how well a current state-of-the-art transonic flow calculation for a simple 3-D wing agreed with data which was felt by experimentalists to be relatively interference-free. As a result of the discrepancies between the experimental data and computational results at the quoted angle of attack, it was then deduced from an approximate stress analysis that the sting had deflected appreciably. Thus, the comparisons themselves are not so meaningful, since the calculations must be repeated at the proper angle of attack. Of more importance, however, is a demonstration of the utility of currently available computational tools in the analysis and correlation of transonic experimental data.
NASA Astrophysics Data System (ADS)
Baneshi, Mehdi; Gonome, Hiroki; Komiya, Atsuki; Maruyama, Shigenao
2012-05-01
A new approach to designing pigmented coatings that considers both visual and thermal concerns was introduced by the authors in previous work. The objective was to design a pigmented coating with a dark appearance that stays cool when exposed to sunlight. This behaviour can be achieved by coating a typical black substrate with a pigmented coating with controlled particle size and concentration and controlled coating thickness. In the present work, the spectral behaviour of polydisperse TiO2 pigmented coatings was studied. The radiative properties of polydisperse TiO2 powders were evaluated and the radiative transfer in the pigmented coating was modelled using the radiation element method by ray emission model (REM2). The effects of the particle size distribution on spectral reflectivity, the optimization parameter, and color coordinates were discussed. The results of the numerical calculation were validated by experimental reflectivity measurements of several TiO2 pigmented coating samples made from two TiO2 powders with different particle size distributions. The results show that our model can reasonably predict the spectral reflectivity of TiO2 pigmented coating samples. Moreover, the results for optimized monodisperse TiO2 pigmented coatings were again validated.
NASA Astrophysics Data System (ADS)
Hegedűs, Árpád; Konczer, József
2016-08-01
In this paper, we numerically solved the Quantum Spectral Curve (QSC) equations corresponding to some twist-2 single-trace operators with even spin from the sl(2) sector of the AdS5/CFT4 correspondence. We describe all technical details of the numerical method which are necessary to implement it in the C++ language.
How a GNSS Receiver Is Held May Affect Static Horizontal Position Accuracy.
Weaver, Steven A; Ucar, Zennure; Bettinger, Pete; Merry, Krista
2015-01-01
The static horizontal position accuracy of a mapping-grade GNSS receiver was tested in two forest types over two seasons, and subsequently was tested in one forest type against open sky conditions in the winter season. The main objective was to determine whether the holding position during data collection would result in significantly different static horizontal position accuracy. Additionally, we wanted to determine whether the time of year (season), forest type, or environmental variables had an influence on accuracy. In general, the F4Devices Flint GNSS receiver was found to have mean static horizontal position accuracy levels within the ranges typically expected for this general type of receiver (3 to 5 m) when differential correction was not employed. When used under forest cover, in some cases the GNSS receiver provided a higher level of static horizontal position accuracy when held vertically, as opposed to held at an angle or horizontally (the more natural positions), perhaps due to the orientation of the antenna within the receiver, or in part due to multipath or the inability to use certain satellite signals. Therefore, due to the fact that numerous variables may affect static horizontal position accuracy, we only conclude that there is weak to moderate evidence that the results of holding position are significant. Statistical test results also suggest that the season of data collection had no significant effect on static horizontal position accuracy, and results suggest that atmospheric variables had weak correlation with horizontal position accuracy. Forest type was found to have a significant effect on static horizontal position accuracy in one aspect of one test, yet otherwise there was little evidence that forest type affected horizontal position accuracy. Since the holding position was found in some cases to be significant with regard to the static horizontal position accuracy of positions collected in forests, it may be beneficial to have an
Meteor orbit determination with improved accuracy
NASA Astrophysics Data System (ADS)
Dmitriev, Vasily; Lupovla, Valery; Gritsevich, Maria
2015-08-01
Modern observational techniques make it possible to retrieve a meteor's trajectory and velocity with high accuracy, and high-quality observational data are accumulating rapidly every year. This creates new challenges for the problem of meteor orbit determination. Currently, the traditional technique, based on corrections to the zenith distance and apparent velocity using the well-known Schiaparelli formula, is widely used. An alternative approach corrects the meteoroid trajectory through numerical integration of the equation of motion (Clark & Wiegert, 2011; Zuluaga et al., 2013). In our work we suggest a technique of meteor orbit determination based on strict coordinate transformation and integration of the differential equation of motion. We demonstrate the advantage of this method over the traditional technique. We provide results calculated by the different methods for real, recently observed fireballs, as well as for simulated cases with a priori known parameters. The simulated data were used to demonstrate the conditions under which application of the more complex technique is necessary. It was found that for several low-velocity meteoroids, application of the traditional technique may lead to a dramatic degradation of orbit precision (first of all due to errors in Ω, because this parameter has the highest potential accuracy). Our results are complemented by an analysis of the sources of perturbations, quantitatively indicating which factors have to be considered in orbit determination. In addition, the developed method includes analysis of observational error propagation based on strict covariance transformation, which is also presented.
Acknowledgements. This work was carried out at MIIGAiK and supported by the Russian Science Foundation, project No. 14-22-00197.
References: Clark, D. L., & Wiegert, P. A. (2011). A numerical comparison with the Ceplecha analytical meteoroid orbit determination method. Meteoritics & Planetary Science, 46(8), pp. 1217
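The numerical-integration alternative to the Schiaparelli correction can be sketched as follows. This is a bare-bones backward propagation under a point-mass Earth only (the work above also weighs additional perturbations), with an invented initial state; energy conservation serves as a sanity check on the integrator.

```python
# Sketch: propagate a meteoroid state backward through a point-mass
# Earth gravity field with a fixed-step RK4 integrator. Initial state
# is illustrative, not a real fireball.
import math

MU = 3.986004418e14  # Earth's GM, m^3/s^2

def derivs(state):
    x, y, z, vx, vy, vz = state
    r3 = (x * x + y * y + z * z) ** 1.5
    return [vx, vy, vz, -MU * x / r3, -MU * y / r3, -MU * z / r3]

def rk4_step(state, dt):
    def shift(s, k, f):
        return [si + f * ki for si, ki in zip(s, k)]
    k1 = derivs(state)
    k2 = derivs(shift(state, k1, dt / 2))
    k3 = derivs(shift(state, k2, dt / 2))
    k4 = derivs(shift(state, k3, dt))
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Meteoroid at ~100 km altitude moving radially inward at 20 km/s:
# integrate backward in time (negative dt) to recover the inbound path.
state = [6471e3, 0.0, 0.0, -20e3, 0.0, 0.0]
for _ in range(5000):
    state = rk4_step(state, -1.0)
r = math.sqrt(state[0]**2 + state[1]**2 + state[2]**2)
print("geocentric distance after 5000 s of backward integration: %.0f km" % (r / 1e3))
```

A production implementation would add the perturbations the abstract discusses (Earth oblateness, atmosphere, third bodies) and propagate the observational covariance alongside the state.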
Numerical processing efficiency improved in experienced mental abacus children.
Wang, Yunqi; Geng, Fengji; Hu, Yuzheng; Du, Fenglei; Chen, Feiyan
2013-05-01
Experienced mental abacus (MA) users are able to perform mental arithmetic calculations with unusual speed and accuracy. However, it remains unclear whether their extraordinary gains in mental arithmetic ability are accompanied by an improvement in numerical processing efficiency. To address this question, the present study, using a numerical Stroop paradigm, examined the numerical processing efficiency of experienced MA children, MA beginners and their respective peers. The results showed that experienced MA children were less influenced than their peers by physical size information when intentionally processing numerical magnitude information, but they were more influenced than their peers by numerical magnitude information when intentionally processing physical size information. By contrast, MA beginners and peers showed no differences in the reciprocal influences between the two conflicting dimensions. These findings indicate that substantial gains in numerical processing efficiency could be achieved through long-term intensive MA training. Implications for numerical magnitude representations and for training students with mathematical learning disabilities are discussed.
Gregoire, C.; Joesten, P.K.; Lane, J.W.
2006-01-01
Ground penetrating radar is an efficient geophysical method for the detection and location of fractures and fracture zones in electrically resistive rocks. In this study, the use of down-hole (borehole) radar reflection logs to monitor the injection of steam in fractured rocks was tested as part of a field-scale, steam-enhanced remediation pilot study conducted at a fractured limestone quarry contaminated with chlorinated hydrocarbons at the former Loring Air Force Base, Limestone, Maine, USA. In support of the pilot study, borehole radar reflection logs were collected three times (before, during, and near the end of steam injection) using broadband 100 MHz electric dipole antennas. Numerical modelling was performed to predict the effect of heating on radar-frequency electromagnetic (EM) wave velocity, attenuation, and fracture reflectivity. The modelling results indicate that EM wave velocity and attenuation change substantially if heating increases the electrical conductivity of the limestone matrix. Furthermore, the net effect of heat-induced variations in fracture-fluid dielectric properties on average medium velocity is insignificant because the expected total fracture porosity is low. In contrast, changes in fracture fluid electrical conductivity can have a significant effect on EM wave attenuation and fracture reflectivity. Total replacement of water by steam in a fracture decreases fracture reflectivity by a factor of 10 and induces a change in reflected wave polarity. Based on the numerical modelling results, a reflection amplitude analysis method was developed to delineate fractures where steam has displaced water. Radar reflection logs collected during the three acquisition periods were analysed in the frequency domain to determine if steam had replaced water in the fractures (after normalizing the logs to compensate for differences in antenna performance between logging runs). Analysis of the radar reflection logs from a borehole where the temperature
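The factor-of-10 reflectivity drop and the polarity flip can be reproduced with a back-of-envelope thin-layer estimate: a fracture much thinner than the radar wavelength reflects with amplitude roughly proportional to the interface Fresnel coefficient times the phase thickness of the fill. The permittivities and the fracture aperture below are illustrative assumptions, not site measurements.

```python
# Back-of-envelope sketch of the factor-of-10 claim: in the thin-layer
# (small k*d) limit the reflection amplitude is ~ r * 2*k*d, where r is
# the Fresnel coefficient at the rock/fill interface, k the wavenumber
# in the fill, and d the fracture aperture. Assumed values, not data.
import math

EPS_ROCK, EPS_WATER, EPS_STEAM = 7.0, 81.0, 1.0  # relative permittivities
C = 3e8      # m/s, speed of light in vacuum
F = 100e6    # Hz, antenna centre frequency from the study
D = 0.005    # m, assumed fracture aperture

def thin_layer_amplitude(eps_fill):
    # Normal-incidence Fresnel coefficient at the rock/fill interface.
    r = (math.sqrt(EPS_ROCK) - math.sqrt(eps_fill)) / \
        (math.sqrt(EPS_ROCK) + math.sqrt(eps_fill))
    k = 2 * math.pi * F * math.sqrt(eps_fill) / C  # wavenumber in the fill
    return r * 2 * k * D  # small-k*d limit; the sign carries the polarity

amp_water = thin_layer_amplitude(EPS_WATER)
amp_steam = thin_layer_amplitude(EPS_STEAM)
print("water-filled amplitude:", amp_water)   # negative: one polarity
print("steam-filled amplitude:", amp_steam)   # positive: flipped polarity
print("reflectivity drop:", abs(amp_water / amp_steam))  # roughly 10
```

The drop comes from two factors working together: the rock/steam permittivity contrast is weaker than rock/water, and the wavenumber in steam is far smaller, shrinking the phase thickness of the layer.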
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2011-01-01
Flow matching has been successfully achieved for an MHD energy bypass system on a supersonic turbojet engine. The Numerical Propulsion System Simulation (NPSS) environment helped perform a thermodynamic cycle analysis to properly match the flows from an inlet employing a MHD energy bypass system (consisting of an MHD generator and MHD accelerator) on a supersonic turbojet engine. Working with various operating conditions (such as the applied magnetic field, MHD generator length and flow conductivity), interfacing studies were conducted between the MHD generator, the turbojet engine, and the MHD accelerator. This paper briefly describes the NPSS environment used in this analysis. This paper further describes the analysis of a supersonic turbojet engine with an MHD generator/accelerator energy bypass system. Results from this study have shown that using MHD energy bypass in the flow path of a supersonic turbojet engine increases the useful Mach number operating range from 0 to 3.0 Mach (not using MHD) to a range of 0 to 7.0 Mach with specific net thrust range of 740 N-s/kg (at ambient Mach = 3.25) to 70 N-s/kg (at ambient Mach = 7). These results were achieved with an applied magnetic field of 2.5 Tesla and conductivity levels in a range from 2 mhos/m (ambient Mach = 7) to 5.5 mhos/m (ambient Mach = 3.5) for an MHD generator length of 3 m.
Numerical comparison of Kalman filter algorithms - Orbit determination case study
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Thornton, C. L.
1977-01-01
Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
GEOSPATIAL DATA ACCURACY ASSESSMENT
The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...
Accuracy in optical overlay metrology
NASA Astrophysics Data System (ADS)
Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark
2016-03-01
In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and showing that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well-defined spectral regimes in which overlay accuracy and process robustness degrade (`resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these `flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the `landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.
Highly Spinning Initial Data: Gauges and Accuracy
NASA Astrophysics Data System (ADS)
Zlochower, Yosef; Ruchlin, Ian; Healy, James; Lousto, Carlos
2016-03-01
We recently developed a code for solving the 3+1 system of constraints for highly-spinning black-hole binary initial data in the puncture formalism. Here we explore how different choices of gauge for the background metric improve both the efficiency and accuracy of the initial data solver and the subsequent fully nonlinear numerical evolutions of these data.
Orbit accuracy assessment for Seasat
NASA Technical Reports Server (NTRS)
Schutz, B. E.; Tapley, B. D.
1980-01-01
Laser range measurements are used to determine the orbit of Seasat during the period from July 28, 1978, to Aug. 14, 1978, and the influence of the gravity field, atmospheric drag, and solar radiation pressure on the orbit accuracy is investigated. It is noted that for the orbits of three-day duration, little distinction can be made between the influence of different atmospheric models. It is found that the special Seasat gravity field PGS-S3 is most consistent with the data for three-day orbits, but an unmodeled systematic effect in radiation pressure is noted. For orbits of 18-day duration, little distinction can be made between the results derived from the PGS gravity fields. It is also found that the geomagnetic field is an influential factor in the atmospheric modeling during this time period. Seasat altimeter measurements are used to determine the accuracy of the altimeter measurement time tag and to evaluate the orbital accuracy.
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2010-01-01
Preliminary flow matching has been demonstrated for a MHD energy bypass system on a supersonic turbojet engine. The Numerical Propulsion System Simulation (NPSS) environment was used to perform a thermodynamic cycle analysis to properly match the flows from an inlet to a MHD generator and from the exit of a supersonic turbojet to a MHD accelerator. Working with various operating conditions such as the enthalpy extraction ratio and isentropic efficiency of the MHD generator and MHD accelerator, interfacing studies were conducted between the pre-ionizers, the MHD generator, the turbojet engine, and the MHD accelerator. This paper briefly describes the NPSS environment used in this analysis and describes the NPSS analysis of a supersonic turbojet engine with a MHD generator/accelerator energy bypass system. Results from this study have shown that using MHD energy bypass in the flow path of a supersonic turbojet engine increases the useful Mach number operating range from 0 to 3.0 Mach (not using MHD) to an explored and desired range of 0 to 7.0 Mach.
Accuracy evaluation of 3D lidar data from small UAV
NASA Astrophysics Data System (ADS)
Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav
2015-10-01
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
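One of the accuracy metrics above, local surface smoothness on planar surfaces, can be sketched as the RMS point-to-plane residual of a least-squares plane fit. The point cloud below is synthetic, not Velodyne data, and the 3 cm noise level is an assumption chosen only to make the metric visible.

```python
# Sketch of the "local surface smoothness" metric: fit a plane
# z = a*x + b*y + c to lidar returns from a planar patch by least
# squares, and report the RMS of the point-to-plane residuals.
import numpy as np

def plane_rms(points):
    x, y, z = points.T
    A = np.column_stack([x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)  # least-squares plane fit
    return float(np.sqrt(np.mean((A @ coef - z) ** 2)))

# Synthetic planar patch: a gently tilted plane plus 3 cm Gaussian noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 5.0, size=(500, 2))
z_true = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 10.0
noisy = np.column_stack([xy, z_true + rng.normal(0.0, 0.03, 500)])

smoothness = plane_rms(noisy)
print("smoothness (m):", smoothness)  # close to the injected 0.03 m noise
```

On real data this metric folds together sensor range noise and any residual navigation error, which is why the abstract uses it to compare processing and platform parameter choices.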
Reticence, Accuracy and Efficacy
NASA Astrophysics Data System (ADS)
Oreskes, N.; Lewandowsky, S.
2015-12-01
James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions miss an important point: reticence is a matter not only of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual outcomes, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but accurate, and to communicate that accurate information effectively.
NASA Astrophysics Data System (ADS)
Sarkadi, N.; Geresdi, I.; Thompson, G.
2016-11-01
In this study, results of bulk and bin microphysical schemes are compared in the case of idealized simulations of pre-frontal orographic clouds with enhanced embedded convection. The description of graupel formation by intensive riming of snowflakes was improved compared to prior versions of each scheme. Two treatments of graupel melting coincident with collisions with water drops were considered: (1) all melted and collected water increases the amount of meltwater on the surface of the graupel particles, with no shedding permitted; (2) no shedding occurs due to melting alone, but collisions with water drops can induce shedding from the surface of the graupel particles. The results of the numerical experiments show: (i) The bin schemes generate graupel particles more efficiently by riming than the bulk scheme does; intense riming of snowflakes was the dominant process for graupel formation. (ii) Collision-induced shedding significantly affects the evolution of the size distribution of graupel particles and water drops below the melting level. (iii) The three microphysical schemes gave similar values for the domain-integrated surface precipitation, but the patterns reveal meaningful differences. (iv) Sensitivity tests using the bulk scheme show that the depth of the melting layer is sensitive to the description of the terminal velocity of melting snow. (v) Comparisons against Convair-580 flight measurements suggest that the bin schemes simulate the evolution of pristine ice particles and liquid drops well, while some inaccuracy can occur in the description of snowflake riming. (vi) The bin scheme with collision-induced shedding reproduced the quantitative characteristics of the observed bright band well.
NASA Technical Reports Server (NTRS)
Gomberg, Joan; Ellis, Michael
1994-01-01
We present results of a series of numerical experiments designed to test hypothetical mechanisms that drive deformation in the New Madrid seismic zone. Experiments are constrained by subtle topography and the distribution of seismicity in the region. We use a new boundary element algorithm that permits calculation of the three-dimensional deformation field. Surface displacement fields are calculated for the New Madrid zone under both far-field (plate-tectonic scale) and locally derived driving strains. Results demonstrate that surface displacement fields cannot distinguish between either a far-field simple or pure shear strain field or one that involves a deep shear zone beneath the upper crustal faults. Thus, neither geomorphic nor geodetic studies alone are expected to reveal the ultimate driving mechanism behind the present-day deformation. We have also tested hypotheses about strain accommodation within the New Madrid contractional step-over by including linking faults, two southwest-dipping and one vertical, recently inferred from microearthquake data. Only those models with step-over faults are able to predict the observed topography. Surface displacement fields for long-term, relaxed deformation predict the distribution of uplift and subsidence in the contractional step-over remarkably well. Generation of these displacement fields appears to require slip on both the two northeast-trending vertical faults and the two dipping faults in the step-over region, with very minor displacements occurring during the interseismic period when the northeast-trending vertical faults are locked. These models suggest that the gently dipping central step-over fault is a reverse fault and that the steeper fault, extending to the southeast of the step-over, acts as a normal fault over the long term.
Accuracy of non-Newtonian Lattice Boltzmann simulations
NASA Astrophysics Data System (ADS)
Conrad, Daniel; Schneider, Andreas; Böhle, Martin
2015-11-01
This work deals with the accuracy of non-Newtonian Lattice Boltzmann simulations. Previous work for Newtonian fluids indicates that, depending on the numerical value of the dimensionless collision frequency Ω, additional artificial viscosity is introduced, which negatively influences the accuracy. Since non-Newtonian fluid behavior is incorporated through appropriate modeling of the dimensionless collision frequency, an Ω-dependent error EΩ is introduced and its influence on the overall error is investigated. Here, simulations with the SRT and the MRT model are carried out for power-law fluids in order to numerically investigate the accuracy of non-Newtonian Lattice Boltzmann simulations. One goal of this accuracy analysis is to derive a recommendation for an optimal choice of the time step size and the simulation Mach number, respectively. For the non-Newtonian case, an error estimate for EΩ in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. With the help of the error functional, the prediction of the global error minimum of the velocity field is excellent in regions where the EΩ error is the dominant source of error. With an optimal simulation Mach number, the simulation is about one order of magnitude more accurate. Additionally, for both collision models a detailed study of the convergence behavior of the method in the non-Newtonian case is conducted. The results show that the simulation Mach number has a major impact on the convergence rate and that second-order accuracy is not preserved for every choice of the simulation Mach number.
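The modeling step the abstract refers to can be sketched concretely: in a BGK-type scheme, the local apparent viscosity of a power-law fluid sets the relaxation time and hence the collision frequency Ω. The function below is a minimal sketch in lattice units; the consistency index K, exponent n, and sound speed are illustrative assumptions, not values from the paper.

```python
def collision_frequency(shear_rate, K=1.0, n=0.5, cs2=1.0 / 3.0):
    """Local dimensionless collision frequency Omega for a power-law fluid.

    nu  = K * shear_rate**(n - 1)   apparent kinematic viscosity (lattice units)
    tau = nu / cs2 + 0.5            BGK relaxation time
    Omega = 1 / tau                 collision frequency fed into the LBM update
    """
    nu = K * shear_rate ** (n - 1.0)
    tau = nu / cs2 + 0.5
    return 1.0 / tau
```

For a shear-thinning fluid (n < 1), Ω rises with the local shear rate, which is exactly why the EΩ error varies across the flow field and must be analyzed separately from the Newtonian case.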
Spacecraft attitude determination accuracy from mission experience
NASA Technical Reports Server (NTRS)
Brasoveanu, D.; Hashmall, J.; Baker, D.
1994-01-01
This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.
Collocation Method for Numerical Solution of Coupled Nonlinear Schroedinger Equation
Ismail, M. S.
2010-09-30
The coupled nonlinear Schroedinger equation models several interesting physical phenomena and serves as a model equation for optical fibers with linear birefringence. In this paper we use a collocation method to solve this equation and test the method for stability and accuracy. Numerical tests using a single soliton and the interaction of three solitons are used to validate the resulting scheme.
NASA Astrophysics Data System (ADS)
Dordevic, Mladen; Georgen, Jennifer
2016-03-01
Mantle plumes rising in the vicinity of mid-ocean ridges often generate anomalies in melt production and seafloor depth. This study investigates the dynamical interactions between a mantle plume and a ridge-ridge-ridge triple junction, using a parameter space approach and a suite of steady state, three-dimensional finite element numerical models. The top domain boundary is composed of three diverging plates, each assigned a half-spreading rate with respect to a fixed triple junction point. The bottom boundary is kept at a constant temperature of 1350°C except where a two-dimensional, Gaussian-shaped thermal anomaly simulating a plume is imposed. Models vary plume diameter, plume location, the viscosity contrast between plume and ambient mantle material, and the use of dehydration rheology in calculating viscosity. Importantly, the model results quantify how plume-related anomalies in mantle temperature pattern, seafloor depth, and crustal thickness depend on the specific set of parameters. To provide an example, one way of assessing the effect of conduit position is to calculate normalized area, defined to be the spatial dispersion of a given plume at a specific depth (here selected to be 50 km) divided by the area occupied by the same plume when it is located under the triple junction. For one particular case modeled where the plume is centered in an intraplate position 100 km from the triple junction, normalized area is just 55%. Overall, these models provide a framework for better understanding plateau formation at triple junctions in the natural setting and a tool for constraining subsurface geodynamical processes and plume properties.
Stiffness of Carpentry Connections - Numerical Modelling vs. Experimental Test
NASA Astrophysics Data System (ADS)
Kekeliak, Miloš; Gocál, Jozef; Vičan, Josef
2015-12-01
In this paper, numerical modelling of the traditional carpentry connection with mortise and tenon is presented. The numerical modelling is focused on connection stiffness, and the results are compared to the experimental tests carried out by (Feio, 2005) [6]. To capture the soft behaviour of wood in carpentry connections, which is related to surface roughness and the geometrical accuracy of the contact surfaces, experimentally determined characteristics of the normal contact stiffness are introduced into the numerical model. A parametric study of the sensitivity of connection stiffness to contact stiffness, performed by means of numerical modelling, is presented. In conclusion, relevant differences between the results of the numerical modelling and the experimental tests (Feio, 2005) [6] are discussed.
Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L
2016-04-14
Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school breakfast and school lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short': prior-24-hour recall obtained in the afternoon; 'long': previous-day recall obtained in the morning) with four prompts ('forward': distant to recent; 'meal name': breakfast, etc.; 'open': no instructions; 'reverse': recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school breakfast and school lunch, respectively. For breakfast and lunch separately, a conventional measure (report rate) and reporting-error-sensitive measures (correspondence rate and inflation ratio) were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio, but not report rate, showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for the cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing the RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selection are less clear. Because report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended.
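The contrast the abstract draws between report rate and the reporting-error-sensitive measures can be illustrated with energy totals. The formulas below are a plausible reconstruction for illustration only, not the study's exact definitions, and the numbers are invented:

```python
def report_rate(reported, observed):
    """Total reported energy / observed energy (error-insensitive)."""
    return reported / observed

def correspondence_rate(matched, observed):
    """Correctly reported (matched) energy / observed energy."""
    return matched / observed

def inflation_ratio(intruded, observed):
    """Over-reported (intruded) energy / observed energy."""
    return intruded / observed

# Why report rate can mislead: omissions can be offset by intrusions.
observed = 500.0   # kcal actually eaten (observed)
matched = 300.0    # reported kcal that matches observation
intruded = 200.0   # reported kcal never actually eaten
reported = matched + intruded

rr = report_rate(reported, observed)         # looks perfect
cr = correspondence_rate(matched, observed)  # reveals omissions
ir = inflation_ratio(intruded, observed)     # reveals intrusions
```

Here a child who omitted 200 kcal and intruded 200 kcal would score a perfect report rate of 1.0, while the correspondence rate (0.6) and inflation ratio (0.4) expose both errors, which is the abstract's point about distorted accuracy conclusions.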
Numerical simulations in combustion
NASA Technical Reports Server (NTRS)
Chung, T. J.
1989-01-01
This paper reviews numerical simulations in reacting flows in general and combustion phenomena in particular. It is shown that use of implicit schemes and/or adaptive mesh strategies can improve convergence, stability, and accuracy of the solution. Difficulties increase as turbulence and multidimensions are considered, particularly when finite-rate chemistry governs the given combustion problem. Particular attention is given to the areas of solid-propellant combustion dynamics, turbulent diffusion flames, and spray droplet vaporization.
New analytical algorithm for overlay accuracy
NASA Astrophysics Data System (ADS)
Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo
2012-03-01
The extension of optical lithography to 2X nm and beyond is often challenged by overlay control. With the overlay measurement error budget reduced to the sub-nm range, conventional Total Measurement Uncertainty (TMU) data is no longer sufficient, and there is no sufficient criterion for overlay accuracy. In recent years, numerous authors have reported new methods for assessing the accuracy of overlay metrology: through focus and through color. Still, quantifying uncertainty in overlay measurement is the most difficult work in overlay metrology. According to the ITRS roadmap, the total overlay budget grows tighter with each device node as design rules shrink. Conventionally, the total overlay budget is defined as the square root of the sum of squares of the following contributions: scanner overlay performance, wafer process, metrology, and mask registration. All components have been supplied by sufficiently performant tools at each device node: new scanners, new metrology tools, and new mask e-beam writers. In particular, scanner overlay performance decreased drastically from 9 nm at the 8x node to 2.5 nm at the 3x node, but appears to be reaching its limit beyond the 3x node. The wafer process overlay contribution to total wafer overlay has therefore become more important; in fact, wafer process overlay decreased by 3 nm between the DRAM 8x node and the DRAM 3x node. In this paper, we develop an analytical algorithm for overlay accuracy and propose a concept for a non-destructive method. For an on-product layer, we discovered overlay inaccuracy and used the new technique to find the source of the overlay error.
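The root-sum-square budget described above is simple to compute. The component values in this sketch are illustrative placeholders, not measured numbers from the paper:

```python
import math

def total_overlay(*components_nm):
    """Total overlay budget: root-sum-square of independent contributors
    (scanner, wafer process, metrology, mask registration)."""
    return math.sqrt(sum(c * c for c in components_nm))

# Illustrative component values in nm (hypothetical):
budget = total_overlay(2.5, 3.0, 1.0, 1.5)
```

Because the terms add in quadrature, the largest contributor dominates: once the scanner term stops shrinking, the wafer-process term controls the total, which is the shift in importance the abstract describes.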
Performance and accuracy benchmarks for a next generation geodynamo simulation
NASA Astrophysics Data System (ADS)
Matsui, H.
2015-12-01
A number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field in the last twenty years. However, parameters in current dynamo models are far from realistic for the Earth's core. To approach realistic parameters for the Earth's core in geodynamo simulations, extremely large spatial resolutions are required to resolve convective turbulence and small-scale magnetic fields. To assess next-generation dynamo models on a massively parallel computer, we performed performance and accuracy benchmarks on 15 dynamo codes which employ a diverse range of discretization (spectral, finite difference, finite element, and hybrid methods) and parallelization methods. In the performance benchmark, we compare elapsed time and parallelization capability on the TACC Stampede platform, using up to 16384 processor cores. In the accuracy benchmark, we compare the resolutions required to obtain less than 1% error relative to the suggested solutions. The results of the performance benchmark show that codes using 2-D or 3-D parallelization models are capable of running with 16384 processor cores. The elapsed time for Calypso and Rayleigh, two parallelized codes that use the spectral method, scales with a smaller exponent than the ideal scaling. The elapsed time of SFEMaNS, which uses finite elements and Fourier transforms, grows most slowly with resolution and parallelization. However, the accuracy benchmark results show that SFEMaNS requires three times more degrees of freedom in each direction compared with a spherical harmonic expansion. Consequently, SFEMaNS needs more than 200 times the elapsed time of Calypso and Rayleigh with 10000 cores to obtain the same accuracy. These benchmark results indicate that the spectral method with 2-D or 3-D domain decomposition is the most promising methodology for advancing numerical dynamo simulations in the immediate future.
Jakusz, J.W.; Dieck, J.J.; Langrehr, H.A.; Ruhser, J.J.; Lubinski, S.J.
2016-01-11
Accuracy assessment is an extensive effort that requires seasonal field personnel and equipment, data entry, analyses, and post-processing, tasks that are costly and time-consuming. The geospatial team at the UMESC has suggested a validation process for understanding the accuracy of the spatial datasets, which will be tested on at least some areas of the UMRS. Validation is not a true verification of map-class type in the field; however, it can provide the user of the map with useful information that is similar to a field AA.
Amin, Amr; Moustafa, Hosna; Ahmed, Ebaa; El-Toukhy, Mohamed
2012-02-01
We compared pentavalent technetium-99m dimercaptosuccinic acid (Tc-99m (V) DMSA) brain single photon emission computed tomography (SPECT) and proton magnetic resonance spectroscopy ((1)H-MRS) for the detection of residual or recurrent gliomas after surgery and radiotherapy. A total of 24 glioma patients, previously operated upon and treated with radiotherapy, were studied. SPECT was acquired 2-3 h post-administration of 555-740 MBq of Tc-99m (V) DMSA. Lesion to normal (L/N) delayed uptake ratio was calculated as: mean counts of tumor ROI (L)/mean counts of normal mirror symmetric ROI (N). (1)H-MRS was performed using a 1.5-T scanner equipped with a spectroscopy package. SPECT and (1)H-MRS results were compared with pathology or follow-up neuroimaging studies. SPECT and (1)H-MRS showed concordant residue or recurrence in 9/24 (37.5%) patients. Both were true negative in 6/24 (25%) patients. SPECT and (1)H-MRS disagreed in 9 recurrences [7/9 (77.8%) and 2/9 (22.2%) were true positive by SPECT and (1)H-MRS, respectively]. Sensitivity of SPECT and (1)H-MRS in detecting recurrence was 88.8 and 61.1% with accuracies of 91.6 and 70.8%, respectively. A positive association between the delayed L/N ratio and tumor grade was found; the higher the grade, the higher is the L/N ratio (r = 0.62, P = 0.001). Tc-99m (V) DMSA brain SPECT is more accurate compared to (1)H-MRS for the detection of tumor residual tissues or recurrence in glioma patients with previous radiotherapy. It allows early and non-invasive differentiation of residual tumor or recurrence from irradiation necrosis.
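The reported figures follow from ordinary confusion-matrix arithmetic. The counts below are reconstructed from the abstract (9 concordant positives, 6 concordant negatives, and 9 discordant recurrences among 24 patients, giving 18 true recurrences) and should be read as an illustration:

```python
def sensitivity(tp, fn):
    """True positives / all actual positives."""
    return tp / (tp + fn)

def accuracy(tp, tn, total):
    """All correct calls / all patients."""
    return (tp + tn) / total

# Counts reconstructed from the abstract: 18 true recurrences, 24 patients.
spect_sens = sensitivity(16, 2)   # 9 concordant + 7 discordant TP
spect_acc = accuracy(16, 6, 24)
mrs_sens = sensitivity(11, 7)     # 9 concordant + 2 discordant TP
mrs_acc = accuracy(11, 6, 24)
```

With these counts, SPECT sensitivity is 16/18 ≈ 88.9% and accuracy 22/24 ≈ 91.7%, matching the abstract's 88.8% and 91.6% up to rounding; the (1)H-MRS figures (11/18 and 17/24) match likewise.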
Highly Parallel, High-Precision Numerical Integration
Bailey, David H.; Borwein, Jonathan M.
2005-04-22
This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
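The class of scheme described, robust even for endpoint singularities, is exemplified by tanh-sinh (double-exponential) quadrature. The sketch below works only at machine precision; the paper's implementation uses arbitrary-precision arithmetic and parallel evaluation, which are omitted here, and the step size and truncation level are illustrative choices:

```python
import math

def tanh_sinh(f, a, b, h=0.1, N=31):
    """Double-exponential (tanh-sinh) quadrature of f on [a, b].

    The substitution x(t) = tanh((pi/2) sinh t) maps the real line to
    (-1, 1); the weights decay doubly exponentially, so integrable
    endpoint singularities are tamed without special handling.
    """
    c, d = 0.5 * (b - a), 0.5 * (b + a)
    total = 0.0
    for k in range(-N, N + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(c * x + d)
    return c * h * total
```

Even with this crude truncation, the integral of x^(-1/2) over [0, 1], singular at the left endpoint, comes out near machine precision; pushing to thousands of digits is then a matter of swapping floats for arbitrary-precision numbers and shrinking h.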
NASA Astrophysics Data System (ADS)
Sweeney, Matthew R.; Valentine, Greg A.
2015-09-01
Most volcanoes experience some degree of phreatomagmatism during their lifetime. However, the current understanding of such processes remains limited relative to their magmatic counterparts. Maar-diatremes are a common volcano type that form primarily from phreatomagmatic explosions; thanks to their abundance and their variable levels of field exposure, which allow for detailed mapping and componentry, they are ideal candidates for furthering our knowledge of the deposits and processes resulting from explosive magma-water interaction. Two conceptual models of maar-diatreme volcanoes explain the growth and evolution of the crater (maar) and subsurface vent (diatreme) through repeated explosions caused by the interaction of magma and groundwater. One model predicts progressively deepening explosions as water is used up by phreatomagmatic explosions, while the other allows for explosions at any level in the diatreme, provided adequate hydrologic conditions are present. In the former, deep-seated lithics in the diatreme are directly ejected and their presence in tephra rings is often taken as a proxy for the depth at which a particular explosion occurred. In the latter, deep-seated lithics are incrementally transported toward the surface via upward-directed debris jets. Here we present a novel application of multiphase numerical modeling to assess the controls on the length scales of debris jets and their role in upward transport of intra-diatreme material, in order to determine the validity of the two models. The volume of gas generated during a phreatomagmatic explosion is a first-order control on the vertical distance a debris jet travels. Unless extremely large amounts of magma and water are involved, it is unlikely that most explosions deeper than ∼ 250 m breach the surface. Other factors such as pressure and temperature have lesser effects on the length scales, assuming they are within realistic ranges. Redistribution of material within a diatreme is primarily driven by
CHARMS: The Cryogenic, High-Accuracy Refraction Measuring System
NASA Technical Reports Server (NTRS)
Frey, Bradley; Leviton, Douglas
2004-01-01
The success of numerous upcoming NASA infrared (IR) missions will rely critically on accurate knowledge of the IR refractive indices of their constituent optical components at design operating temperatures. To satisfy the demand for such data, we have built a Cryogenic, High-Accuracy Refraction Measuring System (CHARMS), which, for typical IR materials, can measure the index of refraction to an accuracy of ±5 x 10^-3. This versatile, one-of-a-kind facility can also measure refractive index over a wide range of wavelengths, from 0.105 um in the far-ultraviolet to 6 um in the IR, and over a wide range of temperatures, from 10 K to 100 degrees C, all with comparable accuracies. We first summarize the technical challenges we faced and the engineering solutions we developed during the construction of CHARMS. Next we present our "first light" index of refraction data for fused silica and compare our data to previously published results.
Drawing accuracy measured using polygons
NASA Astrophysics Data System (ADS)
Carson, Linda; Millard, Matthew; Quehl, Nadine; Danckert, James
2013-03-01
The study of drawing, for its own sake and as a probe into human visual perception, generally depends on ratings by human critics and self-reported expertise of the drawers. To complement those approaches, we have developed a geometric approach to analyzing drawing accuracy, one whose measures are objective, continuous and performance-based. Drawing geometry is represented by polygons formed by landmark points found in the drawing. Drawing accuracy is assessed by comparing the geometric properties of polygons in the drawn image to the equivalent polygon in a ground truth photo. There are four distinct properties of a polygon: its size, its position, its orientation and the proportionality of its shape. We can decompose error into four components and investigate how each contributes to drawing performance. We applied a polygon-based accuracy analysis to a pilot data set of representational drawings and found that an expert drawer outperformed a novice on every dimension of polygon error. The results of the pilot data analysis correspond well with the apparent quality of the drawings, suggesting that the landmark and polygon analysis is a method worthy of further study. Applying this geometric analysis to a within-subjects comparison of accuracy in the positive and negative space suggests there is a trade-off on dimensions of error. The performance-based analysis of geometric deformations will allow the study of drawing accuracy at different levels of organization, in a systematic and quantitative manner. We briefly describe the method and its potential applications to research in drawing education and visual perception.
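The four-way decomposition described above can be sketched with a 2-D Procrustes-style analysis of corresponding landmark points. The function name and the exact error definitions below are illustrative assumptions, not the authors' published measures:

```python
import numpy as np

def polygon_error_components(drawn, truth):
    """Decompose drawing error into position, size, orientation, shape.

    drawn, truth: (N, 2) arrays of corresponding landmark points.
    Returns centroid displacement, log size ratio, optimal rotation
    angle (radians), and the residual shape error after alignment.
    """
    d, t = np.asarray(drawn, float), np.asarray(truth, float)
    cd, ct = d.mean(0), t.mean(0)
    position = np.linalg.norm(cd - ct)        # centroid displacement
    d0, t0 = d - cd, t - ct
    sd, st = np.linalg.norm(d0), np.linalg.norm(t0)
    size = np.log(sd / st)                    # 0 means same size
    # Optimal rotation aligning the drawn polygon to truth (2-D Procrustes)
    num = np.sum(d0[:, 0] * t0[:, 1] - d0[:, 1] * t0[:, 0])
    den = np.sum(d0[:, 0] * t0[:, 0] + d0[:, 1] * t0[:, 1])
    theta = np.arctan2(num, den)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta), np.cos(theta)]])
    # Residual misfit after removing translation, scale, and rotation
    shape = np.linalg.norm(d0 / sd @ R.T - t0 / st)
    return {"position": position, "size": size,
            "orientation": theta, "shape": shape}
```

A drawing that is merely shifted, scaled, or rotated relative to the photo scores zero on the shape term, so each component isolates one kind of geometric deformation, which is the point of the decomposition.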
NASA Astrophysics Data System (ADS)
Voronov, Nikolai; Dikinis, Alexandr
2015-04-01
Modern technologies of remote sensing (RS) open wide opportunities for monitoring and for increasing the accuracy and lead time of forecasts of hazardous hydrometeorological phenomena. RS data do not supersede ground-based observations, but they allow new problems in hydrological and meteorological monitoring and forecasting to be solved. In particular, data from satellite, aviation or radar observations may be used to increase the spatio-temporal resolution of hydrometeorological observations. Moreover, the conjunctive use of remote sensing data, ground-based observations and the output of hydrodynamical weather models seems very promising, as it allows a significant increase in the accuracy and lead time of forecasts of hazardous hydrometeorological phenomena. Modern technologies for monitoring and forecasting hazardous hydrometeorological phenomena on the basis of conjunctive use of satellite, aviation and ground-based observations, as well as the output data of hydrodynamical weather models, are considered. It is noted that an important and promising method of monitoring is bioindication: surveillance of the response of biota to external influences and of the behaviour of animals that can sense impending natural disasters. Implementation of the described approaches makes it possible to significantly reduce both the damage caused by particular hazardous hydrological and meteorological phenomena and the general level of hydrometeorological vulnerability of various objects and of the RF economy as a whole.
Accuracy in prescriptions compounded by pharmacy students.
Shrewsbury, R P; Deloatch, K H
1998-01-01
Most compounded prescriptions are not analyzed to determine the accuracy of the instruments and procedures employed. The assumption is that the compounded prescription will be within ±5% of the labeled claim. Two classes of School of Pharmacy students who received repeated instruction and supervision on proper compounding techniques and procedures were assessed to determine their accuracy in compounding a diphenhydramine hydrochloride prescription. After two attempts, only 62% to 68% of the students could compound the prescription to within ±5% of the labeled claim, but 84% to 96% could attain an accuracy of ±10%. The results suggest that an accuracy of ±10% of labeled claim is the least variation a pharmacist can expect when extemporaneously compounding prescriptions.
Developing a Weighted Measure of Speech Sound Accuracy
ERIC Educational Resources Information Center
Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.
2011-01-01
Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…
ERIC Educational Resources Information Center
Siegler, Robert S.; Braithwaite, David W.
2016-01-01
In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…
Numerical experiments in homogeneous turbulence
NASA Technical Reports Server (NTRS)
Rogallo, R. S.
1981-01-01
The direct simulation methods developed by Orszag and Patterson (1972) for isotropic turbulence were extended to homogeneous turbulence in an incompressible fluid subjected to uniform deformation or rotation. The results of simulations for irrotational strain (plane and axisymmetric), shear, rotation, and relaxation toward isotropy following axisymmetric strain are compared with linear theory and experimental data. Emphasis is placed on the shear flow because of its importance and because of the availability of accurate and detailed experimental data. The computed results are used to assess the accuracy of two popular models used in the closure of the Reynolds-stress equations. Data from a variety of the computed fields and the details of the numerical methods used in the simulation are also presented.
Accuracy assessment of GPS satellite orbits
NASA Technical Reports Server (NTRS)
Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.
1991-01-01
GPS orbit accuracy is examined using several evaluation procedures. Unmodeled effects that correlate with the eclipsing of the sun are shown to exist. The ability to obtain geodetic results with an accuracy of 1-2 parts in 10^8 or better has not diminished.
Numerical simulation of conservation laws
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; To, Wai-Ming
1992-01-01
A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.
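The stability claims above can be checked by von Neumann analysis. As a hedged illustration (not the authors' code), the sketch below computes the amplification factors of the DuFort-Frankel scheme, to which the abstract compares its scheme, for pure diffusion and confirms unconditional stability:

```python
import numpy as np

# Von Neumann stability check for the DuFort-Frankel scheme applied to
# pure diffusion u_t = nu * u_xx.  With diffusion number r = nu*dt/dx**2
# and phase angle theta = k*dx, the amplification factor g satisfies
#   (1 + 2r) g**2 - 4r*cos(theta) g - (1 - 2r) = 0.
def max_amplification(r, thetas):
    a = 1.0 + 2.0 * r
    b = -4.0 * r * np.cos(thetas)
    c = -(1.0 - 2.0 * r)
    disc = np.sqrt(b**2 - 4.0 * a * c + 0j)   # complex sqrt handles disc < 0
    g1 = (-b + disc) / (2.0 * a)
    g2 = (-b - disc) / (2.0 * a)
    return float(np.max(np.abs(np.stack([g1, g2]))))

thetas = np.linspace(0.0, np.pi, 721)
# |g| <= 1 for every r > 0, i.e. unconditional stability for pure diffusion.
worst = max(max_amplification(r, thetas) for r in [0.1, 0.5, 1.0, 5.0, 50.0])
print(worst)  # stays <= 1 (up to roundoff)
```

The same quadratic-in-g approach extends to the convection-diffusion case, where the scheme's principal and spurious roots appear.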
High accuracy flexural hinge development
NASA Astrophysics Data System (ADS)
Santos, I.; Ortiz de Zárate, I.; Migliorero, G.
2005-07-01
This document provides a synthesis of the technical results obtained in the frame of the HAFHA (High Accuracy Flexural Hinge Assembly) development performed by SENER (in charge of design, development, manufacturing and testing at component and mechanism levels) with EADS Astrium as subcontractor (in charge of taking an inventory of candidate applications among existing and emerging projects, establishing the requirements and performing system-level testing) under ESA contract. The purpose of this project has been to develop a competitive technology for a flexural pivot, usable in highly accurate and dynamic pointing/scanning mechanisms. Compared with other solutions (e.g. magnetic or ball bearing technologies), flexural hinges are the appropriate technology for accurately guiding a mobile payload over a limited angular range around one rotation axis.
Spacecraft attitude determination accuracy from mission experience
NASA Technical Reports Server (NTRS)
Brasoveanu, D.; Hashmall, J.
1994-01-01
This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.
Measures of Diagnostic Accuracy: Basic Definitions
Šimundić, Ana-Maria
2009-01-01
Diagnostic accuracy relates to the ability of a test to discriminate between the target condition and health. This discriminative potential can be quantified by measures of diagnostic accuracy such as sensitivity and specificity, predictive values, likelihood ratios, the area under the ROC curve, Youden's index and the diagnostic odds ratio. Different measures of diagnostic accuracy relate to different aspects of the diagnostic procedure: some measures are used to assess the discriminative property of the test, others to assess its predictive ability. Measures of diagnostic accuracy are not fixed indicators of test performance; some are very sensitive to disease prevalence, while others are sensitive to the spectrum and definition of the disease. Furthermore, measures of diagnostic accuracy are extremely sensitive to the design of the study. Studies that do not meet strict methodological standards usually over- or under-estimate the indicators of test performance and limit the applicability of their results. The STARD initiative was a very important step toward improving the quality of reporting of studies of diagnostic accuracy. The STARD statement should be included in the instructions to authors of scientific journals, and authors should be encouraged to use the checklist whenever reporting studies on diagnostic accuracy. Such efforts could make a substantial difference in the quality of reporting and help provide the best possible evidence for patient care. This brief review outlines some basic definitions and characteristics of the measures of diagnostic accuracy.
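The measures named above follow directly from a 2x2 confusion matrix. A minimal sketch (the counts are hypothetical, not data from the review; note that the predictive values, unlike sensitivity and specificity, shift with disease prevalence):

```python
# Basic measures of diagnostic accuracy from a 2x2 confusion matrix:
# tp/fn = diseased subjects testing positive/negative,
# fp/tn = healthy subjects testing positive/negative.
def diagnostic_measures(tp, fp, fn, tn):
    sens = tp / (tp + fn)              # sensitivity (true positive rate)
    spec = tn / (tn + fp)              # specificity (true negative rate)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),         # positive predictive value
        "npv": tn / (tn + fn),         # negative predictive value
        "lr_plus": sens / (1.0 - spec),    # positive likelihood ratio
        "lr_minus": (1.0 - sens) / spec,   # negative likelihood ratio
        "youden": sens + spec - 1.0,       # Youden's index
        "dor": (tp * tn) / (fp * fn),      # diagnostic odds ratio
    }

m = diagnostic_measures(tp=80, fp=10, fn=20, tn=90)
print(m["sensitivity"], m["specificity"], m["dor"])  # 0.8 0.9 36.0
```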
Numerical Asymptotic Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1992-01-01
Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.
NASA Astrophysics Data System (ADS)
Monk, Peter; Parrott, Kevin
2001-07-01
Edge-element methods have proved very effective for 3-D electromagnetic computations and are widely used on unstructured meshes. However, the accuracy of standard edge elements can be criticised because of their low order. This paper analyses discrete dispersion relations together with numerical propagation accuracy to determine the effect of tetrahedral shape on the phase accuracy of standard 3-D edge-element approximations in comparison to other methods. Scattering computations for the sphere obtained with edge elements are compared with results obtained with vertex elements, and a new formulation of the far-field integral approximations for use with edge elements is shown to give improved cross sections over conventional formulations.
NASA Astrophysics Data System (ADS)
Vlasenko, Vasiliy; Stashchuk, Nataliya; Inall, Mark; Hopkins, Jo
2015-04-01
The three-dimensional dynamics of baroclinic tides in the shelf-slope area of the Celtic Sea were investigated numerically and using observational data collected on the 376th cruise of the R/V "Discovery" in June 2012. The time series recorded at a shelf-break mooring showed that semi-diurnal internal waves were accompanied by packets of internal solitary waves with maximum amplitudes up to 105 m, the largest internal waves ever recorded in the Celtic Sea. The observed baroclinic wave fields were replicated numerically using the Massachusetts Institute of Technology general circulation model. A fine-resolution grid with 115 m horizontal and 10 m vertical steps allowed the identification of two classes of short-scale internal waves. The first class was generated over headlands and resembles the spiral-type internal waves that are typical of isolated underwater banks. The second class, generated within an area of isolated canyons, revealed properties of quasi-plane internal wave packets. The observed in-situ intensification of tidal bottom currents at the shelf-break mooring is explained in terms of a tidal beam that formed over supercritical bottom topography.
High accuracy OMEGA timekeeping
NASA Technical Reports Server (NTRS)
Imbier, E. A.
1982-01-01
The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.
Municipal water consumption forecast accuracy
NASA Astrophysics Data System (ADS)
Fullerton, Thomas M.; Molina, Angel L.
2010-06-01
Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
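The descriptive metrics mentioned, root-mean-square error and a Theil inequality statistic against a random-walk benchmark, can be sketched as follows (illustrative numbers only, not the study's data):

```python
import numpy as np

# RMSE and Theil's U2 statistic for a forecast series against a no-change
# (random walk) benchmark.
def rmse(actual, forecast):
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return float(np.sqrt(np.mean((a - f) ** 2)))

def theil_u2(actual, forecast):
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    # Benchmark: the naive forecast a[t-1] for a[t].  U2 < 1 means the
    # model outperforms the random walk; U2 > 1 means it does worse.
    num = np.sqrt(np.mean((f[1:] - a[1:]) ** 2))
    den = np.sqrt(np.mean((a[:-1] - a[1:]) ** 2))
    return float(num / den)

consumption = [100.0, 104.0, 103.0, 108.0, 112.0]   # hypothetical demand series
model_fc    = [101.0, 103.0, 105.0, 107.0, 113.0]   # hypothetical model forecast
print(rmse(consumption, model_fc), theil_u2(consumption, model_fc))
```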
Analysis of deformable image registration accuracy using computational modeling
Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.
2010-03-15
Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter
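The displacement errors quantified above reduce to the mean Euclidean distance between the benchmark (finite-element) displacement field and the field recovered by a registration algorithm. A toy sketch with hypothetical vectors:

```python
import numpy as np

# Mean registration error: mean Euclidean distance between a benchmark
# displacement field and a registration algorithm's recovered field.
def mean_registration_error(benchmark, recovered):
    diff = np.asarray(benchmark, dtype=float) - np.asarray(recovered, dtype=float)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))

# Displacement vectors in mm, one row per voxel (x, y, z components);
# values are invented for illustration.
truth = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]])
est   = np.array([[1.0, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 1.0]])
print(mean_registration_error(truth, est))  # 0.5 (mm)
```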
Yao, Yuan; Du, Fenglei; Wang, Chunjie; Liu, Yuqiu; Weng, Jian; Chen, Feiyan
2015-01-01
This study examined whether long-term abacus-based mental calculation (AMC) training improved numerical processing efficiency and at what stage of information processing the effect appeared. Thirty-three children participated in the study and were randomly assigned to two groups at primary school entry, matched for age, gender and IQ. All children went through the same curriculum except that the abacus group received 2 h per week of AMC training, while the control group did traditional numerical practice for a similar amount of time. After 2 years of training, they were tested with a numerical Stroop task. Electroencephalographic (EEG) and event-related potential (ERP) recording techniques were used to monitor the temporal dynamics during the task. Children were required to determine the numerical magnitude (NC task) or the physical size (PC task) of two numbers presented simultaneously. In the NC task, the AMC group showed faster response times but similar accuracy compared to the control group. In the PC task, the two groups exhibited the same speed and accuracy. The saliency of numerical information relative to physical information was greater in the AMC group. With regard to ERP results, the AMC group displayed congruity effects in both the earlier (N1) and later (N2 and LPC, late positive component) time domains, while the control group only displayed congruity effects for the LPC. In the left parietal region, LPC amplitudes were larger for the AMC group than for the control group. Individual differences in LPC amplitudes over the left parietal area showed a positive correlation with RTs in the NC task in both congruent and neutral conditions. After controlling for the N2 amplitude, this correlation also became significant in the incongruent condition. Our results suggest that AMC training can strengthen the relationship between symbolic representation and numerical magnitude so that numerical information processing becomes quicker and more automatic in AMC children. PMID:26042012
NASA Astrophysics Data System (ADS)
Bourlier, C.; Berginc, G.
2004-07-01
This second part presents illustrative examples of the model developed in the companion paper, which is based on the first- and second-order optics approximation. The surface is assumed to be Gaussian and the correlation height is chosen as anisotropic Gaussian. The incoherent scattering coefficient is computed for a height rms range from 0.5λ to 1λ (where λ is the electromagnetic wavelength), for a slope rms range from 0.5 to 1 and for an incidence angle range from 0 to 70°. In addition, simulations are presented for an anisotropic Gaussian surface and when the receiver is not located in the plane of incidence. For metallic and dielectric isotropic Gaussian surfaces, the cross- and co-polarizations are also compared with a numerical approach obtained from the forward-backward method with a novel spectral acceleration algorithm developed by Torrungrueng and Johnson (2001, JOSA A 18).
Numerical discrimination is mediated by neural coding variation.
Prather, Richard W
2014-12-01
One foundation of numerical cognition is that discrimination accuracy depends on the proportional difference between compared values, closely following the Weber-Fechner discrimination law. Performance in non-symbolic numerical discrimination is used to calculate an individual Weber fraction, a measure of the relative acuity of the approximate number system (ANS). The individual Weber fraction is linked to symbolic arithmetic skills and long-term educational and economic outcomes. The present findings suggest that numerical discrimination performance depends on both the proportional difference and the absolute value, deviating from the Weber-Fechner law. The effect of absolute value is predicted via a computational model based on the neural correlates of numerical perception: specifically, the neural coding "noise" varies across the corresponding numerosities. A computational model using firing-rate variation based on neural data demonstrates a significant interaction between ratio difference and absolute value in predicting numerical discriminability. We find that both behavioral and computational data show an interaction between ratio difference and absolute value on numerical discrimination accuracy. These results suggest a reexamination of the mechanisms involved in non-symbolic numerical discrimination, of how researchers may measure individual performance, and of what outcomes performance may predict.
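For context, the standard ratio-only ANS model underlying the Weber-fraction measure, the model from which the paper reports deviations, can be sketched as follows (the erf-based functional form is the common one in this literature, not code from the paper):

```python
import math

# Standard ANS discrimination model: numerosities are represented with
# Gaussian noise that scales with magnitude, so accuracy depends only on
# the ratio of the two values and the individual Weber fraction w.
def p_correct(n1, n2, w):
    """Probability of a correct larger/smaller judgment."""
    d = abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 + 0.5 * math.erf(d / math.sqrt(2.0))

# Under this model, pairs with the same ratio are equally discriminable;
# the paper's point is that real performance also varies with absolute value.
print(p_correct(8, 10, w=0.2), p_correct(16, 20, w=0.2))
```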
High accuracy broadband infrared spectropolarimetry
NASA Astrophysics Data System (ADS)
Krishnaswamy, Venkataramanan
Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies of the infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap through the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to previous-generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter, and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research toward global optimization of Mueller matrix polarimeters. A high-performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on penicillin and pine pollen are also presented.
von Holst, Hans; Li, Xiaogai
2013-07-01
Although the consequences of traumatic brain injury (TBI) and its treatment have improved, there is still a substantial lack of understanding of the mechanisms. Numerical simulation of the impact can shed further light on the site and mechanism of action. A finite element model of the human head and brain tissue was used to simulate TBI. The consequences of gradually increased kinetic energy transfer were analyzed by evaluating the impact intracranial pressure (ICP), strain level, and their potential influences on binding forces in folded protein structures. The gradually increased kinetic energy was found to have the potential to break apart Van der Waals bonds in all impacts and hydrogen bonds at simulated impacts of 6 m/s and higher, thereby superseding the energy in folded protein structures. Further, impacts below 6 m/s showed no or only a very slight increase in impact ICP and strain levels, whereas impacts of 6 m/s or higher showed a gradual increase of the impact ICP and strain levels, reaching over 1000 kPa and over 30%, respectively. The present simulation study shows that the free kinetic energy transfer, impact ICP, and strain levels all have the potential to initiate cytotoxic brain tissue edema by unfolding protein structures. The definition of mild, moderate, and severe TBI should thus be looked upon as the same condition, separated only by a gradual severity of impact.
Jacobsen, S; Birkelund, Y
2010-01-01
Microwave breast cancer detection is based on the dielectric contrast between healthy and malignant tissue. This radar-based imaging method involves illumination of the breast with an ultra-wideband pulse. Detection of tumors within the breast is achieved by some selected focusing technique. Image formation algorithms are tailored to enhance tumor responses and reduce early-time and late-time clutter associated with skin reflections and heterogeneity of breast tissue. In this contribution, we evaluate the performance of the so-called cross-correlated back projection imaging scheme by using a scanning system in phantom experiments. Supplementary numerical modeling based on commercial software is also presented. The phantom is synthetically scanned with a broadband elliptical antenna in a mono-static configuration. The respective signals are pre-processed by a data-adaptive RLS algorithm in order to remove artifacts caused by antenna reverberations and signal clutter. Successful detection of a 7 mm diameter cylindrical tumor immersed in a low permittivity medium was achieved in all cases. Selecting the widely used delay-and-sum (DAS) beamforming algorithm as a benchmark, we show that correlation based imaging methods improve the signal-to-clutter ratio by at least 10 dB and improves spatial resolution through a reduction of the imaged peak full-width half maximum (FWHM) of about 40-50%.
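The delay-and-sum (DAS) benchmark mentioned above can be sketched in a monostatic 2-D toy setting; the geometry, wave speed, and pulse shape below are invented for illustration and are not the phantom experiment's parameters:

```python
import numpy as np

# Minimal delay-and-sum (DAS) beamforming sketch: each antenna records an
# echo from a point scatterer; the image value at a pixel is the sum of all
# signals sampled at that pixel's round-trip delay.
v = 1.0e8                                   # wave speed in the medium, m/s
antennas = np.linspace(-0.04, 0.04, 9)      # antenna x-positions (y = 0), m
target = np.array([0.0, 0.05])              # scatterer position, m

t = np.arange(0.0, 4e-9, 5e-12)             # time axis, s

def round_trip(ax, point):
    """Monostatic round-trip delay from antenna at (ax, 0) to a point."""
    return 2.0 * np.hypot(point[0] - ax, point[1]) / v

# Synthetic received signals: one Gaussian echo per antenna.
signals = [np.exp(-((t - round_trip(ax, target)) / 0.1e-9) ** 2)
           for ax in antennas]

def das(point):
    """DAS image value: coherent sum of delayed signals at this pixel."""
    return sum(np.interp(round_trip(ax, point), t, s)
               for ax, s in zip(antennas, signals))

off = target + np.array([0.0, 0.01])
print(das(target), das(off))   # focused pixel dominates the off-focus pixel
```

Correlation-based variants like the cross-correlated back projection scheme evaluated in the paper replace this plain sum with pairwise correlation terms to suppress clutter.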
NASA Technical Reports Server (NTRS)
Cabra, R.; Chen, J. Y.; Dibble, R. W.; Hamano, Y.; Karpetis, A. N.; Barlow, R. S.
2002-01-01
An experimental and numerical investigation is presented of a H2/N2 turbulent jet flame burner that has a novel vitiated coflow. The vitiated coflow emulates the recirculation region of most combustors, such as gas turbines or furnaces. Additionally, since the vitiated gases are coflowing, the burner allows for exploration of recirculation chemistry without the corresponding fluid mechanics of recirculation. Thus the vitiated coflow burner design facilitates the development of chemical kinetic combustion models without the added complexity of recirculation fluid mechanics. Scalar measurements are reported for a turbulent jet flame of H2/N2 in a coflow of combustion products from a lean (φ = 0.25) H2/air flame. The combination of laser-induced fluorescence, Rayleigh scattering, and Raman scattering is used to obtain simultaneous measurements of the temperature, major species, as well as OH and NO. Laminar flame calculations with equal diffusivities do agree when the premixing and preheating that occurs prior to flame stabilization is accounted for in the boundary conditions. Also presented is an exploratory PDF model that predicts the flame's axial profiles fairly well, but does not accurately predict the lift-off height.
Parametric Characterization of SGP4 Theory and TLE Positional Accuracy
NASA Astrophysics Data System (ADS)
Oltrogge, D.; Ramrath, J.
2014-09-01
Two-Line Elements, or TLEs, contain mean element state vectors compatible with General Perturbations (GP) singly-averaged semi-analytic orbit theory. This theory, embodied in the SGP4 orbit propagator, provides sufficient accuracy for some (but perhaps not all) orbit operations and SSA tasks. For more demanding tasks, higher accuracy orbit and force model approaches (i.e., Special Perturbations numerical integration, or SP) may be required. In recent times, the suitability of TLEs or GP theory for any SSA analysis has been increasingly questioned. Meanwhile, SP is touted as being of high quality and well-suited for most, if not all, SSA applications. Yet the lack of truth or well-known reference orbits that haven't already been adopted for radar and optical sensor network calibration has typically prevented a truly unbiased assessment of such assertions. To gain better insight into the practical limits of applicability for TLEs, SGP4 and the underlying GP theory, the native SGP4 accuracy is parametrically examined for the statistically-significant range of RSO orbit inclinations experienced as a function of all orbit altitudes from LEO through GEO disposal altitude. For each orbit altitude, reference or truth orbits were generated using full force modeling, time-varying space weather, and AGI's HPOP numerical integration orbit propagator. Then, TLEs were optimally fit to these truth orbits. The resulting TLEs were then propagated and positionally differenced with the truth orbits to determine how well the GP theory was able to fit the truth orbits. Resultant statistics characterizing these empirically-derived accuracies are provided. This TLE fit process against truth orbits was intentionally designed to be similar to the JSpOC process operationally used to generate Enhanced GP TLEs for debris objects. This allows us to draw additional conclusions about the expected accuracies of EGP TLEs. In the real world, Orbit Determination (OD) programs aren't provided with dense optical
NASA Technical Reports Server (NTRS)
Iguchi, Takamichi; Matsui, Toshihisa; Shi, Jainn J.; Tao, Wei-Kuo; Khain, Alexander P.; Hao, Arthur; Cifelli, Robert; Heymsfield, Andrew; Tokay, Ali
2012-01-01
Two distinct snowfall events are observed over the region near the Great Lakes during 19-23 January 2007 under the intensive measurement campaign of the Canadian CloudSat/CALIPSO validation project (C3VP). These events are numerically investigated using the Weather Research and Forecasting model coupled with a spectral bin microphysics (WRF-SBM) scheme that allows a smooth calculation of the riming process by predicting the rimed mass fraction on snow aggregates. The fundamental structures of the two observed snowfall systems are distinctly characterized by a localized intense lake-effect snowstorm in one case and a widely distributed moderate snowfall from the synoptic-scale system in the other. Furthermore, the observed microphysical structures are distinguished by differences in the bulk density of solid-phase particles, which are probably linked to the presence or absence of supercooled droplets. The WRF-SBM coupled with the Goddard Satellite Data Simulator Unit (G-SDSU) has successfully simulated these distinctive structures in the three-dimensional weather prediction run with a horizontal resolution of 1 km. In particular, riming on snow aggregates by supercooled droplets is considered to be of importance in reproducing the specialized microphysical structures in the case studies. Additional sensitivity tests for the lake-effect snowstorm case are conducted utilizing different planetary boundary layer (PBL) models or the same SBM but without the riming process. The PBL process has a large impact on determining the cloud microphysical structure of the lake-effect snowstorm as well as the surface precipitation pattern, whereas the riming process has little influence on the surface precipitation because of the small height of the system.
Borot, Sophie; Franc, Sylvia; Cristante, Justine; Penfornis, Alfred; Benhamou, Pierre-Yves; Guerci, Bruno; Hanaire, Hélène; Renard, Eric; Reznik, Yves; Simon, Chantal; Charpentier, Guillaume
2014-11-01
The JewelPUMP™ (JP) is a new patch pump based on a microelectromechanical system that operates without any plunger. The study aimed to evaluate the infusion accuracy of the JP in vitro and in vivo. For the in vitro studies, commercially available pumps meeting the ISO standard were compared to the JP: the MiniMed® Paradigm® 712 (MP), Accu-Chek® Combo (AC), OmniPod® (OP), Animas® Vibe™ (AN). Pump accuracy was measured over 24 hours using a continuous microweighing method, at 0.1 and 1 IU/h basal rates. The occlusion alarm threshold was measured after a catheter occlusion. The JP, filled with physiological serum, was then tested in 13 patients with type 1 diabetes simultaneously with their own pump for 2 days. The weight difference was used to calculate the infused insulin volume. The JP showed reduced absolute median error rate in vitro over a 15-minute observation window compared to other pumps (1 IU/h): ±1.02% (JP) vs ±1.60% (AN), ±1.66% (AC), ±2.22% (MP), and ±4.63% (OP), P < .0001. But there was no difference over 24 hours. At 0.5 IU/h, the JP was able to detect an occlusion earlier than other pumps: 21 (19; 25) minutes vs 90 (85; 95), 58 (42; 74), and 143 (132; 218) minutes (AN, AC, MP), P < .05 vs AN and MP. In patients, the 24-hour flow error was not significantly different between the JP and usual pumps (-2.2 ± 5.6% vs -0.37 ± 4.0%, P = .25). The JP was found to be easier to wear than conventional pumps. The JP is more precise over a short time period, more sensitive to catheter occlusion, well accepted by patients, and consequently, of potential interest for a closed-loop insulin delivery system.
Accuracy study of the IDO scheme by Fourier analysis
NASA Astrophysics Data System (ADS)
Imai, Yohsuke; Aoki, Takayuki
2006-09-01
The numerical accuracy of the Interpolated Differential Operator (IDO) scheme is studied with Fourier analysis for the solutions of partial differential equations (PDEs): the advection, diffusion, and Poisson equations. The IDO scheme solves the governing equations not only for the physical variable but also for its first-order spatial derivative; spatial discretizations are based on Hermite interpolation functions constructed from both quantities. In the Fourier analysis of the IDO scheme, the Fourier coefficients of the physical variable and the first-order derivative are coupled through equations derived from the governing equations. The analysis shows that the IDO scheme resolves all wavenumbers with higher accuracy than the fourth-order Finite Difference (FD) and Compact Difference (CD) schemes for the advection equation. In particular, for high wavenumbers, the accuracy is superior to that of the sixth-order Combined Compact Difference (CCD) scheme. The diffusion and Poisson equations are also solved more accurately than with the FD and CD schemes. These results show that the IDO scheme guarantees highly resolved solutions for all the terms of the fluid flow equations.
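The kind of Fourier (von Neumann) accuracy analysis described above can be sketched for a simpler scheme; the snippet below computes the modified wavenumber of the standard fourth-order central difference for the advection term (an illustrative stand-in, not the IDO scheme itself).

```python
import numpy as np

# Fourier (von Neumann) analysis of the 4th-order central difference
#   du/dx ≈ (8(u[i+1] - u[i-1]) - (u[i+2] - u[i-2])) / (12 h)
# Substituting u = exp(i k x) shows the scheme differentiates each Fourier
# mode with an effective ("modified") wavenumber k_mod instead of k:
#   k_mod * h = (8 sin(kh) - sin(2 kh)) / 6
def modified_wavenumber_fd4(kh):
    return (8.0 * np.sin(kh) - np.sin(2.0 * kh)) / 6.0

kh = np.linspace(0.01, np.pi, 200)   # resolved range up to the grid Nyquist limit
error = np.abs(modified_wavenumber_fd4(kh) - kh)
# Low wavenumbers are differentiated almost exactly; the error grows sharply
# as kh approaches pi, which is exactly what scheme comparisons of this
# kind quantify for each method.
```

Plotting `error` against `kh` for several schemes reproduces the usual resolving-power comparison.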
Measuring the accuracy of agro-environmental indicators.
Makowski, David; Tichit, Muriel; Guichard, Laurence; Van Keulen, Herman; Beaudoin, Nicolas
2009-05-01
Numerous agro-environmental indicators have been developed by agronomists and ecologists during the last 20 years to assess the environmental impact of farmers' practices, and to monitor effects of agro-environmental policies. The objectives of this paper were (i) to measure the accuracy of a wide range of agro-environmental indicators from experimental data and (ii) to discuss the value of different information typically used by these indicators, i.e., information on farmers' practices and on plant and soil characteristics. Four series of indicators were considered in this paper: indicators of habitat quality for grassland bird species, indicators of risk of disease in oilseed rape crops, indicators of risk of pollution by nitrogen fertilizer, and indicators of weed infestation. Several datasets were used to measure their accuracy in cultivated plots and in grasslands. The sensitivity, specificity, and probability of correctly ranking plots were estimated for each indicator. Our results showed that the indicators had widely varying levels of accuracy. Some showed very poor performance and no discriminatory ability. Other indicators were informative and performed better than random decisions. Among the tested indicators, the best ones were those using information on plant characteristics such as grass height, fraction of diseased flowers, or crop yield. The statistical method applied in this paper could support researchers, farm advisers, and decision makers in comparing various indicators. PMID:19128870
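The accuracy measures used above (sensitivity, specificity, and the probability of correctly ranking plots) can be computed directly from outcomes and indicator scores; the function names and toy data below are illustrative assumptions, not the study's datasets.

```python
def sensitivity_specificity(outcomes, flags):
    """outcomes: true plot status (1 = impact present); flags: indicator alarm."""
    tp = sum(1 for o, f in zip(outcomes, flags) if o == 1 and f == 1)
    fn = sum(1 for o, f in zip(outcomes, flags) if o == 1 and f == 0)
    tn = sum(1 for o, f in zip(outcomes, flags) if o == 0 and f == 0)
    fp = sum(1 for o, f in zip(outcomes, flags) if o == 0 and f == 1)
    return tp / (tp + fn), tn / (tn + fp)

def prob_correct_ranking(scores_pos, scores_neg):
    """Probability that a randomly chosen impacted plot outranks an
    unimpacted one (ties count half) -- equivalent to the ROC area."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An indicator no better than random decisions yields a ranking probability of 0.5.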
Evaluation of registration accuracy between Sentinel-2 and Landsat 8
NASA Astrophysics Data System (ADS)
Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia
2016-08-01
Starting in June 2015, Sentinel-2A has been delivering high resolution optical images (ground resolution up to 10 m), providing global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high resolution multi-temporal information is required; these include agriculture, water bodies, and natural hazards, to name a few. However, the combined use of multi-temporal images requires accurate geometric registration, i.e., pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed around the world. Images were compared with digital correlation techniques for image matching, yielding an evaluation of registration accuracy with an affine transformation as the geometrical model. Results demonstrate that sub-pixel accuracy was achieved between the 10 m resolution Sentinel-2 images (band 3) and the 15 m resolution panchromatic Landsat images (band 8).
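The registration model used above (an affine transformation fitted to matched points) can be sketched as a least-squares estimate; the matched points below are synthetic assumptions, not the Sentinel-2/Landsat measurements.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) onto dst (N,2):
    dst ≈ src @ A.T + t. Needs at least 3 non-collinear point pairs."""
    n = src.shape[0]
    M = np.hstack([src, np.ones((n, 1))])        # [x, y, 1] design matrix
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)
    A, t = params[:2].T, params[2]
    return A, t

# matched points, e.g. from digital correlation between the two image sets
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(20, 2))
A_true = np.array([[1.001, 0.002], [-0.002, 0.999]])  # near-identity affine
t_true = np.array([0.4, -0.3])                         # sub-pixel shift
dst = src @ A_true.T + t_true
A, t = fit_affine(src, dst)
residual = np.abs(src @ A.T + t - dst).max()           # registration residual
```

In practice the residuals of this fit, expressed in pixels, are what "sub-pixel accuracy" refers to.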
Numerical solutions of telegraph equations with the Dirichlet boundary condition
NASA Astrophysics Data System (ADS)
Ashyralyev, Allaberen; Turkcan, Kadriye Tuba; Koksal, Mehmet Emir
2016-08-01
In this study, the Cauchy problem for telegraph equations in a Hilbert space is considered. Stability estimates for the solution of this problem are presented. A third-order accuracy difference scheme is constructed for approximate solutions of the problem, and stability estimates for the solution of this difference scheme are established. As a test problem to support the theoretical results, a one-dimensional telegraph equation with the Dirichlet boundary condition is considered, and numerical solutions are obtained by the first-, second-, and third-order accuracy difference schemes.
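For context, a minimal explicit scheme for the one-dimensional telegraph equation with Dirichlet boundaries can be written as follows; this is a standard second-order discretization, not the paper's third-order scheme, and the test problem is an assumption.

```python
import numpy as np

def telegraph_fd(a=0.0, c=1.0, nx=101, t_final=0.5, cfl=0.5):
    """Central differences in space and time for u_tt + 2a u_t = c^2 u_xx
    on [0,1], with u(0,t) = u(1,t) = 0, u(x,0) = sin(pi x), u_t(x,0) = 0."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = cfl * dx / c
    r2 = (c * dt / dx) ** 2
    u_prev = np.sin(np.pi * x)
    # first step from a Taylor expansion (u_t(x,0) = 0)
    u = u_prev.copy()
    u[1:-1] += 0.5 * r2 * (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2])
    t = dt
    while t < t_final - 1e-12:
        lap = u[2:] - 2 * u[1:-1] + u[:-2]
        u_next = np.empty_like(u)
        u_next[0] = u_next[-1] = 0.0           # Dirichlet boundaries
        u_next[1:-1] = (2 * u[1:-1] - (1 - a * dt) * u_prev[1:-1]
                        + r2 * lap) / (1 + a * dt)
        u_prev, u = u, u_next
        t += dt
    return x, u, t

x, u, t_end = telegraph_fd(a=0.0)   # a = 0 reduces to the plain wave equation
```

With a = 0 the exact solution is sin(pi x) cos(pi c t), which vanishes at t = 0.5, giving a convenient accuracy check.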
Numerical modelling errors in electrical impedance tomography.
Dehghani, Hamid; Soleimani, Manuchehr
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique that aims to reconstruct images of internal impedance values of a volume of interest, based on measurements taken on the external boundary. Since most reconstruction algorithms rely on model-based approximations, it is important to ensure numerical accuracy for the model being used. This work demonstrates and highlights the importance of accurate modelling in terms of model discretization (meshing) and shows that although the predicted boundary data from a forward model may be within an accepted error, the calculated internal field, which is often used for image reconstruction, may contain errors, based on the mesh quality that will result in image artefacts.
ERIC Educational Resources Information Center
Sozio, Gerry
2009-01-01
Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
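The three rules named above have compact composite forms; the implementations below are the standard textbook versions rather than anything derived in the article.

```python
def midpoint(f, a, b, n):
    """Composite midpoint rule with n panels."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n panels."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes
    return s * h / 3
```

Simpson's rule is exact for cubics, while the midpoint and trapezoidal errors shrink as O(h²) with opposite signs, which is why the weighted combination (2·midpoint + trapezoidal)/3 recovers Simpson's rule.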
Numerical simulations in the development of propellant management devices
NASA Astrophysics Data System (ADS)
Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael
Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles. Gas bubbles can inflict cavitation and, in the worst case, may lead to system failures. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process. Furthermore, in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In return, the simulations are validated against the test results. Furthermore, if the accuracy of the numerical prediction is verified, then numerical simulations can be used for validating the scaling of the experiments. This presentation demonstrates selected numerical simulations for the development of PMDs at ZARM.
Knowledge discovery by accuracy maximization.
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-04-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning. PMID:24706821
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
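The variance-reducing weighted average described above can be illustrated for two unbiased, independent predictors, where the optimal weight on one predictor is the other's error variance divided by the total; the synthetic errors below are assumptions, not the runup data.

```python
import numpy as np

def optimal_weight(var1, var2):
    """Weight on predictor 1 that minimizes the variance of the combined
    error for two independent, unbiased predictors."""
    return var2 / (var1 + var2)

rng = np.random.default_rng(0)
truth = np.zeros(10000)
pred_param = truth + rng.normal(0.0, 1.0, truth.size)  # parameterized model
pred_num = truth + rng.normal(0.0, 2.0, truth.size)    # numerical simulation

w = optimal_weight(1.0**2, 2.0**2)                     # = 0.8
combined = w * pred_param + (1 - w) * pred_num

def mse(p):
    return np.mean((p - truth) ** 2)
```

With error variances 1 and 4, the assimilated prediction has theoretical error variance 0.8, below either model alone, mirroring the reduction in prediction error variance reported above.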
Accuracy and precision of manual baseline determination.
Jirasek, A; Schulze, G; Yu, M M L; Blades, M W; Turner, R F B
2004-12-01
Vibrational spectra often require baseline removal before further data analysis can be performed. Manual (i.e., user) baseline determination and removal is a common technique used to perform this operation. Currently, little data exists that details the accuracy and precision that can be expected with manual baseline removal techniques. This study addresses this current lack of data. One hundred spectra of varying signal-to-noise ratio (SNR), signal-to-baseline ratio (SBR), baseline slope, and spectral congestion were constructed and baselines were subtracted by 16 volunteers who were categorized as being either experienced or inexperienced in baseline determination. In total, 285 baseline determinations were performed. The general level of accuracy and precision that can be expected for manually determined baselines from spectra of varying SNR, SBR, baseline slope, and spectral congestion is established. Furthermore, the effects of user experience on the accuracy and precision of baseline determination is estimated. The interactions between the above factors in affecting the accuracy and precision of baseline determination is highlighted. Where possible, the functional relationships between accuracy, precision, and the given spectral characteristic are detailed. The results provide users of manual baseline determination useful guidelines in establishing limits of accuracy and precision when performing manual baseline determination, as well as highlighting conditions that confound the accuracy and precision of manual baseline determination.
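For comparison with manual practice, one common automated baseline estimator (iterative polynomial min-clipping, an assumption rather than the procedure studied) can be sketched as:

```python
import numpy as np

def iterative_baseline(x, y, degree=1, iterations=30):
    """Estimate a slowly varying baseline under positive peaks by repeatedly
    fitting a polynomial and clipping the signal down to the fit."""
    b = y.copy()
    for _ in range(iterations):
        fit = np.polyval(np.polyfit(x, b, degree), x)
        b = np.minimum(b, fit)      # peaks get clipped; the baseline survives
    return np.polyval(np.polyfit(x, b, degree), x)

# noiseless synthetic spectrum: sloped baseline plus one band
x = np.linspace(0.0, 100.0, 500)
true_baseline = 0.02 * x + 1.0
peak = 5.0 * np.exp(-0.5 * ((x - 50.0) / 3.0) ** 2)
estimate = iterative_baseline(x, true_baseline + peak)
```

Automated estimates like this one give a reproducible reference against which the accuracy and precision of manual determinations can be judged.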
High accuracy time transfer synchronization
NASA Technical Reports Server (NTRS)
Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.
1995-01-01
In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory(NRL) designed spread spectrum modem, built by Allen Osborne Associates(AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.
High-accuracy EUV reflectometer
NASA Astrophysics Data System (ADS)
Hinze, U.; Fokoua, M.; Chichkov, B.
2007-03-01
Developers and users of EUV optics need precise tools for the characterization of their products. Often a measurement accuracy of 0.1% or better is desired to detect and study slow-acting aging effects or degradation by organic contaminants. To achieve a measurement accuracy of 0.1%, an EUV source is required that provides excellent long-term stability, namely power stability, spatial stability, and spectral stability. Naturally, it should be free of debris. An EUV source particularly suitable for this task is an advanced electron-based EUV tube, which provides an output of up to 300 μW at 13.5 nm. Reflectometers benefit from the excellent long-term stability of this tool. We design and set up different reflectometers using EUV tubes for the precise characterisation of EUV optics, such as debris samples, filters, multilayer mirrors, grazing incidence optics, collectors, and masks. Reflectivity measurements from grazing incidence to near-normal incidence, as well as transmission studies, were realised at a precision down to 0.1%. The reflectometers are computer-controlled and allow varying and scanning all important parameters online. The concept of a sample reflectometer is discussed and results are presented. The devices can be purchased from the Laser Zentrum Hannover e.V.
Accuracy analysis of high-order lattice Boltzmann models for rarefied gas flows
NASA Astrophysics Data System (ADS)
Meng, Jianping; Zhang, Yonghao
2011-02-01
In this work, we have theoretically analyzed and numerically evaluated the accuracy of high-order lattice Boltzmann (LB) models for capturing non-equilibrium effects in rarefied gas flows. In the incompressible limit, the LB equation is shown to reduce to the linearized Bhatnagar-Gross-Krook (BGK) equation. Therefore, when the same Gauss-Hermite quadrature is used, the LB method closely resembles the discrete velocity method (DVM). In addition, the order of the Hermite expansion of the equilibrium distribution function is found not to be directly correlated with the approximation order, in terms of the Knudsen number, to the BGK equation for incompressible flows. Meanwhile, we have numerically evaluated the LB models for a standing-shear-wave problem, which is designed specifically for assessing model accuracy by excluding the influence of gas molecule/surface interactions at wall boundaries. The numerical simulation results confirm that the high-order terms in the discrete equilibrium distribution function play a negligible role in capturing non-equilibrium effects for low-speed flows. By contrast, an appropriate Gauss-Hermite quadrature has the most significant effect on whether LB models can describe the essential flow physics of rarefied gas accurately. Our simulation results, where the effect of wall/gas interactions is excluded, support the conclusion that LB models with higher-order quadratures provide more accurate results. For the same order of Gauss-Hermite quadrature, the exact abscissae also modestly influence numerical accuracy. Using the same Gauss-Hermite quadrature, the numerical results of the LB and DVM methods are in excellent agreement for flows across a broad range of Knudsen numbers, which confirms that the LB simulation is similar to the DVM process. Therefore, the LB method can offer flexible models suitable for simulating continuum flows at the Navier-Stokes level and rarefied gas flows at the linearized
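The role of the Gauss-Hermite quadrature can be made concrete: the three-point rule underlying the common D1Q3 lattice has abscissae 0 and ±√3 with normalized weights 2/3 and 1/6, and it reproduces Gaussian (Maxwellian) moments exactly up to fifth order. The snippet below only verifies this standard result numerically.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Gauss-Hermite quadrature with respect to the weight exp(-x^2/2):
# the 3-point rule gives the D1Q3 lattice velocities and weights.
x, w = hermegauss(3)
w_norm = w / np.sqrt(2.0 * np.pi)   # normalize so the weights sum to 1

# A 3-point Gauss rule integrates polynomials up to degree 2*3 - 1 = 5
# exactly, so it recovers the low-order moments of a standard Gaussian:
moments = [np.sum(w_norm * x**n) for n in range(6)]
# expected: 1, 0, 1, 0, 3, 0
```

Higher-order lattices follow the same pattern with more quadrature nodes, which is why the choice of quadrature, rather than extra Hermite expansion terms, controls how much non-equilibrium physics the model can capture.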
Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Leimkuhler, Benedict; Shang, Xiaocheng
2016-11-01
We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for determining the view factors. Basic definitions and the standard methods that form the basis for the various digital computer methods and numerical methods are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. The situations where accuracy is important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers, together with future choices for the efficient use of digital computers, is included in the recommendations.
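As a small illustration of numerical view-factor evaluation of the kind discussed, a Monte Carlo estimate of the double-area integral for two coaxial parallel unit squares (an assumed geometry, chosen because its limits are easy to check) is:

```python
import numpy as np

def view_factor_parallel_squares(d, n=200_000, seed=0):
    """Monte Carlo estimate of F_12 between two coaxial parallel unit
    squares separated by distance d, from the double-area integral
    F_12 = (1/A1) * integral of cos(th1) cos(th2) / (pi r^2) over dA1 dA2."""
    rng = np.random.default_rng(seed)
    p1 = rng.uniform(0.0, 1.0, (n, 2))           # sample points on square 1 (z = 0)
    p2 = rng.uniform(0.0, 1.0, (n, 2))           # sample points on square 2 (z = d)
    r2 = np.sum((p1 - p2) ** 2, axis=1) + d * d  # squared point-to-point distance
    # both surface normals are along z, so cos(th1) = cos(th2) = d / r
    integrand = d * d / (np.pi * r2 ** 2)
    return integrand.mean()                      # A1 = A2 = 1

F_far = view_factor_parallel_squares(10.0)   # approaches A2 / (pi d^2) when far apart
F_near = view_factor_parallel_squares(1.0)   # tabulated analytic value is about 0.20
```

Such sampling schemes trade accuracy for computation time exactly as the evaluation above describes: more samples (or deterministic quadrature) reduce the error at a predictable cost.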
Empathic Embarrassment Accuracy in Autism Spectrum Disorder.
Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G
2015-06-01
Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. PMID:25732043
Optimal design of robot accuracy compensators
Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)
1993-12-01
The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that the actual kinematic parameters of a robot be identified beforehand. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing the end-effector's positioning accuracy over its orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and DLS algorithms in terms of computational complexity, storage requirements, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
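The damped least-squares correction named above has a one-line core, dq = Jᵀ(JJᵀ + λ²I)⁻¹ dx; the sketch below, with assumed Jacobians, shows it reducing to the plain pseudo-inverse at λ = 0 while staying bounded near a singularity.

```python
import numpy as np

def dls_correction(J, dx, damping):
    """Damped least-squares joint correction dq = J^T (J J^T + l^2 I)^{-1} dx.
    With damping > 0 the correction stays bounded even at singular
    configurations, at the cost of a small bias in the solution."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), dx)

# well-conditioned case: damping = 0 reduces to the plain pseudo-inverse
J = np.array([[2.0, 0.0],
              [0.0, 1.0]])
dq = dls_correction(J, np.array([1.0, 1.0]), 0.0)   # -> [0.5, 1.0]
```

The damping factor bounds the correction norm by ||dx||/(2λ), which is what makes the scheme usable at singular configurations.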
NASA Technical Reports Server (NTRS)
Li, Yi-Wei; Elishakoff, Isaac; Starnes, James H., Jr.; Bushnell, David
1998-01-01
This study is an extension of a previous investigation of the combined effect of axisymmetric thickness variation and axisymmetric initial geometric imperfection on buckling of isotropic shells under uniform axial compression. Here the anisotropic cylindrical shells are investigated by means of Koiter's energy criterion. An asymptotic formula is derived which can be used to determine the critical buckling load for composite shells with combined initial geometric imperfection and thickness variation. Results are compared with those obtained by the software packages BOSOR4 and PANDA2.
Numerical comparison of discrete Kalman filter algorithms - Orbit determination case study
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Thornton, C. L.
1976-01-01
Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
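The U-D covariance factorization referred to above writes P = U·diag(d)·Uᵀ with U unit upper triangular, so that propagating the factors instead of P itself preserves symmetry and positive definiteness in single precision. A minimal factorization routine (not Bierman's full filter mechanization) with an assumed test covariance:

```python
import numpy as np

def udu_factor(P):
    """Factor a symmetric positive-definite P as P = U diag(d) U^T,
    with U unit upper triangular (Bierman-style U-D factors)."""
    P = P.astype(float).copy()
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / d[j]
        for i in range(j):            # downdate the leading submatrix
            for k in range(i + 1):
                P[k, i] -= U[k, j] * d[j] * U[i, j]
    return U, d

P = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])       # assumed covariance for illustration
U, d = udu_factor(P)
```

The filter then updates U and d directly; since d stays positive by construction, the reconstructed covariance can never lose positive definiteness to roundoff the way the conventional form can.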
NASA Astrophysics Data System (ADS)
Beniaiche, Ahmed; Ghenaiet, Adel; Facchini, Bruno
2016-05-01
The aero-thermal behavior of the flow field inside a 30:1 scaled model reproducing an innovative smooth trailing edge with a shaped wedge discharge duct and one row of enlarged pedestals has been investigated in order to determine the effects of rotation, inlet velocity, and blowing conditions, for Re = 20,000 and 40,000 and Ro = 0-0.23. Two configurations are presented: with and without an open tip. The thermochromic liquid crystal technique is used to obtain local measurements of the heat transfer coefficient on the blade suction side under stationary and rotating conditions. Results are reported in terms of detailed 2D HTC maps on the suction side surface as well as the averaged Nusselt number inside the pedestal ducts. Two correlations are proposed, for both the closed and open tip configurations, based on Re, Pr, Ro, and a new non-dimensional parameter based on the position along the radial distance, to provide a reliable estimate of the averaged Nusselt number in the inter-pedestal region. Good agreement is found between prediction and experimental data, with about ±10 to ±12 % uncertainty for the simple form of the correlation and about ±16 % for the complex form. The obtained results support flow-field visualization and the evaluation of the aero-thermal performance of the studied blade cooling system during the design stage.
NASA Astrophysics Data System (ADS)
Ito, Masakazu; Mito, Masaki; Deguchi, Hiroyuki; Takeda, Kazuyoshi
1994-03-01
The measurements of the magnetic heat capacity and susceptibility of the one-dimensional S=1 antiferromagnet (CH3)4NNi(NO2)3 (TMNIN) have been carried out in order to make comparison with the theoretical results of a quantum Monte Carlo method for the Haldane system. The results for the heat capacity, which show a broad maximum around 10 K, are well reproduced by the theory with the interaction J/k_B = -12.0±1.0 K in the temperature range T > 0.2|J|S(S+1)/k_B. The low-temperature heat capacity exhibits an exponential decay with gap energy Δ/k_B = 5.3±0.2 K, which gives Δ = 0.44|J|, in contrast to the linear dependence on temperature found for half-integer spin. The residual magnetic entropy below 0.7 K is estimated to be 0.07% of Nk_B ln 3, which rules out three-dimensional ordering of the spin system at lower temperatures. The observed susceptibility also agrees with the theory, with J/k_B = -10.9 K and g = 2.02, over the whole temperature region when the effect of the finite length of the chains is taken into consideration.
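Extracting the gap from the exponential low-temperature decay amounts to a straight-line fit: if C is proportional to exp(-Δ/k_BT) at low T, then ln C is linear in 1/T with slope -Δ/k_B. The snippet below uses synthetic data with an assumed gap (and ignores the power-law prefactor present in real Haldane-gap heat capacity), not the TMNIN measurements.

```python
import numpy as np

# synthetic low-temperature heat capacity with an assumed gap of 5.3 K
delta_over_kB = 5.3                            # gap in kelvin
T = np.linspace(1.0, 3.0, 30)                  # temperatures, K
C = 2.0 * np.exp(-delta_over_kB / T)           # arbitrary prefactor

# ln C = ln C0 - (Delta/kB) * (1/T): straight-line fit in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(C), 1)
gap_estimate = -slope                          # recovered gap, in kelvin
```

On real data, restricting the fit to temperatures well below the gap keeps the neglected prefactor from biasing the slope.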
Efficient algorithms for numerical simulation of the motion of earth satellites
NASA Astrophysics Data System (ADS)
Bordovitsyna, T. V.; Bykova, L. E.; Kardash, A. V.; Fedyaev, Yu. A.; Sharkovskii, N. A.
1992-08-01
We briefly present results obtained during the development, and an investigation of the efficacy, of algorithms for numerical prediction of the motion of Earth satellites (ESs) using computers of different power. High accuracy and efficiency in predicting ES motion are achieved by using higher-order numerical methods, transformations that regularize and stabilize the equations of motion, and a high-precision model of the forces acting on an ES. This approach enables us to construct efficient algorithms of the required accuracy, both for general-purpose computers with large memory and for personal computers with very limited capacity.
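A standard building block for such prediction algorithms is fixed-step Runge-Kutta integration of the two-body equations; the sketch below (classical RK4 with assumed circular-orbit test values) is illustrative, not the higher-order regularized methods the abstract refers to.

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def two_body(state):
    """Point-mass gravity; state = [x, y, z, vx, vy, vz] in km and km/s."""
    r = state[:3]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate([state[3:], a])

def rk4_step(f, y, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(state, dt, n_steps):
    for _ in range(n_steps):
        state = rk4_step(two_body, state, dt)
    return state
```

A circular orbit propagated for one period should return to its initial position with conserved specific energy, which makes a simple accuracy check of the kind used to compare such algorithms.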
Numerical simulation of wall-bounded turbulent shear flows
NASA Technical Reports Server (NTRS)
Moin, P.
1982-01-01
Developments in three-dimensional, time-dependent numerical simulation of turbulent flows bounded by a wall are reviewed. Both direct and large eddy simulation techniques are considered within the same computational framework. The spatial grid requirements dictated by the known structure of turbulent boundary layers are presented. The numerical methods currently in use are reviewed, and some features of these algorithms, including spatial differencing and accuracy, time advancement, and data management, are discussed. A selection of results from recent calculations of turbulent channel flow, including the effects of system rotation and transpiration on the flow, is included.
Sheets, Rodney A.; Dumouchelle, Denise H.; Feinstein, Daniel T.
2005-01-01
Agreements between United States governors and Canadian territorial premiers establish water-management principles and a framework for protecting Great Lakes waters, including ground water, from diversion and consumptive uses. The issue of ground-water diversions out of the Great Lakes Basin by large-scale pumping near the divides has been raised. Two scenario models, in which regional ground-water flow models represent major aquifers in the Great Lakes region, were used to assess the effect of pumping near ground-water divides. The regional carbonate aquifer model was a generalized model representing northwestern Ohio and northeastern Indiana; the regional sandstone aquifer model used an existing calibrated ground-water flow model for southeastern Wisconsin. Various well locations and pumping rates were examined. Although the two models have different frameworks and boundary conditions, results of the models were similar. There was significant diversion of ground water across ground-water divides due to pumping within 10 miles of the divides. In the regional carbonate aquifer model, the percentage of pumped water crossing the divide ranges from about 20 percent for a well 10 miles from the divide to about 50 percent for a well adjacent to the divide. In the regional sandstone aquifer model, the percentages range from about 30 percent for a well 10 miles from the divide to about 50 percent for a well adjacent to the divide; pumping on the west side of the divide, within 5 miles of the predevelopment divide, results in at least 10 percent of the water being diverted from the east side of the divide. Two additional scenario models were run to examine the effects of pumping near rivers. Transient models were used to simulate a rapid stage rise in a river during pumping at a well in carbonate and glacial aquifers near the river. Results of water-budget analyses indicate that induced infiltration, captured streamflow, and underflow were important for both glacial and carbonate aquifers.
NASA Technical Reports Server (NTRS)
Nathenson, M.; Baganoff, D.; Yen, S. M.
1974-01-01
Data obtained from a numerical solution of the Boltzmann equation for shock-wave structure are used to test the accuracy of accepted approximate expressions for the two moments of the collision integral Delta(Q) for general intermolecular potentials in systems with a large translational nonequilibrium. The accuracy of the numerical scheme is established by comparing the numerical results with exact expressions in the case of Maxwell molecules. The data are then used in the case of hard-sphere molecules, the inverse-power potential furthest removed from the Maxwell molecule, and the accuracy of the approximate expressions in this domain is gauged. A number of approximate solutions are judged in this manner, and the general advantages of the numerical approach in itself are considered.
Entropy Splitting and Numerical Dissipation
NASA Technical Reports Server (NTRS)
Yee, H. C.; Vinokur, M.; Djomehri, M. J.
1999-01-01
A rigorous stability estimate for arbitrary order of accuracy of spatial central difference schemes for initial-boundary value problems of nonlinear symmetrizable systems of hyperbolic conservation laws was established recently by Olsson and Oliger (1994) and Olsson (1995) and was applied to the two-dimensional compressible Euler equations for a perfect gas by Gerritsen and Olsson (1996) and Gerritsen (1996). The basic building block in developing the stability estimate is a generalized energy approach based on a special splitting of the flux derivative via a convex entropy function and certain homogeneous properties. Due to some of the unique properties of the compressible Euler equations for a perfect gas, the splitting resulted in the sum of a conservative portion and a non-conservative portion of the flux derivative, hereafter referred to as the "Entropy Splitting." There are several potential desirable attributes and side benefits of the entropy splitting for the compressible Euler equations that were not fully explored in Gerritsen and Olsson. The paper has several objectives. The first is to investigate the choice of the arbitrary parameter that determines the amount of splitting and its dependence on the type of physics of current interest in computational fluid dynamics. The second is to investigate in what manner the splitting affects the nonlinear stability of the central schemes for long-time integrations of unsteady flows such as in nonlinear aeroacoustics and turbulence dynamics. If numerical dissipation is indeed needed to stabilize the central scheme, can the splitting help minimize the numerical dissipation compared with its un-split cousin? An extensive numerical study of the vortex preservation capability of the splitting in conjunction with central schemes for long-time integrations will be presented. The third is to study the effect of the non-conservative portion of the splitting in obtaining the correct shock location for high-speed complex shock
NASA Astrophysics Data System (ADS)
Matang, Rex A. S.; Owens, Kay
2014-09-01
The Government of Papua New Guinea undertook a significant step in developing curriculum reform policy that promoted the use of Indigenous knowledge systems in teaching formal school subjects in any of the country's 800-plus Indigenous languages. The implementation of the Elementary Cultural Mathematics Syllabus is in line with this curriculum emphasis. Given the aims of the reform, the research reported here investigated the influence of children's own mother tongue (Tok Ples) and traditional counting systems on their development of the early number knowledge formally taught in schools. The study involved 272 school children from 22 elementary schools in four provinces. Each child participated in a task-based assessment interview focusing on eight task groups relating to early number knowledge. The results indicate that, on average, children learning their traditional counting systems in their own language spent less time and made fewer mistakes in solving each task compared to those taught without Tok Ples (using English and/or the lingua franca, Tok Pisin). Possible reasons accounting for these differences are also discussed.
NASA Astrophysics Data System (ADS)
Bordogna, Clelia María; Albano, Ezequiel V.
2007-02-01
The aim of this paper is twofold. On the one hand, we present a brief overview of the application of statistical physics methods to the modelling of social phenomena, focusing on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latané. The model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, each supporting a different state of opinion. Extensive simulations of the model are presented, revealing a rich variety of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work.
Increasing Accuracy in Environmental Measurements
NASA Astrophysics Data System (ADS)
Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst
2016-04-01
Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements used to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.
Landsat classification accuracy assessment procedures
Mead, R. R.; Szajgin, John
1982-01-01
A working conference was held in Sioux Falls, South Dakota, 12-14 November 1980, dealing with Landsat Classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.
Numerical Computation of the Tau Approximation for the Delayed Burgers Equation
NASA Astrophysics Data System (ADS)
Khaksar Haghani, F.; Karimi Vanani, S.; Sedighi Hafshejani, J.
2013-02-01
We investigate an efficient extension of the operational Tau method for solving the delayed Burgers equation (DBE) arising in physical problems. This extension gives a useful numerical algorithm for the DBE including linear and nonlinear terms. The orthogonality of the Laguerre polynomials used as basis functions is the main characteristic of the method, decreasing the volume of computation and the runtime. Numerical results are also presented for some experiments to demonstrate the usefulness and accuracy of the proposed algorithm.
Numerical Integration: One Step at a Time
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article looks at the effects that adding a single extra subdivision has on the level of accuracy of some common numerical integration routines. Instead of automatically doubling the number of subdivisions for a numerical integration rule, we investigate what happens with a systematic method of judiciously selecting one extra subdivision for…
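The effect the article studies can be reproduced in a few lines. The sketch below is illustrative only: the composite trapezoidal rule, integrand, and interval are chosen here for demonstration and are not taken from the article. It shows that adding a single extra subdivision already reduces the error, without doubling the number of subdivisions.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subdivisions."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# Error of integrating sin(x) on [0, pi] (exact value 2) as one
# subdivision at a time is added, instead of doubling n.
exact = 2.0
errors = [abs(trapezoid(math.sin, 0.0, math.pi, n) - exact)
          for n in range(4, 9)]
# Every single extra subdivision already shrinks the error.
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
```

For this smooth integrand the trapezoidal error behaves like O(1/n^2), so each unit increment of n gives a predictable, if modest, gain.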
Design and analysis of a high-accuracy flexure hinge.
Liu, Min; Zhang, Xianmin; Fatikow, Sergej
2016-05-01
This paper designs and analyzes a new kind of flexure hinge obtained by using a topology optimization approach, namely, a quasi-V-shaped flexure hinge (QVFH). The flexure hinges are formed by three segments: left and right segments with convex profiles and a straight middle segment. According to the results of the topology optimization, the curve equations of the profiles of the flexure hinges are developed by numerical fitting. The in-plane dimensionless compliance equations of the flexure hinges are derived based on Castigliano's second theorem. The accuracy of rotation, characterized by the compliance of the center of rotation as it deviates from the midpoint, is derived. Equations for evaluating the maximum stresses are also provided. These dimensionless equations are verified by finite element analysis and experimentation. The analytical results are within 8% of the finite element analysis results and within 9% of the experimental measurement data. Compared with the filleted V-shaped flexure hinge, the QVFH has a higher accuracy of rotation and better preserves the position of the center of rotation, but has a smaller compliance. PMID:27250469
The Accuracy of Shock Capturing in Two Spatial Dimensions
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Casper, Jay H.
1997-01-01
An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.
Towards Experimental Accuracy from the First Principles
NASA Astrophysics Data System (ADS)
Polyansky, O. L.; Lodi, L.; Tennyson, J.; Zobov, N. F.
2013-06-01
Producing ab initio ro-vibrational energy levels of small, gas-phase molecules with an accuracy of 0.10 cm^{-1} would constitute a significant step forward in theoretical spectroscopy and would place calculated line positions considerably closer to typical experimental accuracy. Such an accuracy has been recently achieved for the H_3^+ molecular ion for line positions up to 17 000 cm^{-1}. However, since H_3^+ is a two-electron system, the electronic structure methods used in this study are not applicable to larger molecules. A major breakthrough was reported in ref., where an accuracy of 0.10 cm^{-1} was achieved ab initio for seven water isotopologues. Calculated vibrational and rotational energy levels up to 15 000 cm^{-1} and J=25 resulted in a standard deviation of 0.08 cm^{-1} with respect to accurate reference data. As far as line intensities are concerned, we have already achieved for water a typical accuracy of 1%, which surpasses average experimental accuracy. Our results are being actively extended along two major directions. First, there are clear indications that our results for water can be improved to an accuracy of the order of 0.01 cm^{-1} by further, detailed ab initio studies. Such a level of accuracy would already be competitive with experimental results in some situations. A second, major, direction of study is the extension of such a 0.1 cm^{-1} accuracy to molecules containing more electrons or more than one non-hydrogen atom, or both. As examples of such developments we will present new results for CO, HCN and H_2S, as well as preliminary results for NH_3 and CH_4. O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky and A.G. Csaszar, Phil. Trans. Royal Soc. London A, 370, 5014-5027 (2012). O.L. Polyansky, R.I. Ovsyannikov, A.A. Kyuberis, L. Lodi, J. Tennyson and N.F. Zobov, J. Phys. Chem. A, (in press). L. Lodi, J. Tennyson and O.L. Polyansky, J. Chem. Phys. 135, 034113 (2011).
Test Expectancy Affects Metacomprehension Accuracy
ERIC Educational Resources Information Center
Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.
2011-01-01
Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…
Accuracy of an estuarine hydrodynamic model using smooth elements
Walters, Roy A.; Cheng, Ralph T.
1980-01-01
A finite element model which uses triangular, isoparametric elements with quadratic basis functions for the two velocity components and linear basis functions for water surface elevation is used in the computation of shallow water wave motions. Specifically addressed are two common uncertainties in this class of two-dimensional hydrodynamic models: the treatment of the boundary conditions at open boundaries and the treatment of lateral boundary conditions. The accuracy of the models is tested with a set of numerical experiments in rectangular and curvilinear channels with constant and variable depth. The results indicate that errors in velocity at the open boundary can be significant when boundary conditions for water surface elevation are specified. Methods are suggested for minimizing these errors. The results also show that continuity is better maintained within the spatial domain of interest when ‘smooth’ curve-sided elements are used at shoreline boundaries than when piecewise linear boundaries are used. Finally, a method for network development is described which is based upon a continuity criterion to gauge accuracy. A finite element network for San Francisco Bay, California, is used as an example.
A numerical study of nonstationary plasma and projectile motion in a rail gun
NASA Astrophysics Data System (ADS)
Zvezdin, A. M.; Kovalev, V. L.
1992-10-01
Changes in plasma parameters and projectile velocity and acceleration in a rail gun during the launch are investigated numerically. The method involves determining the velocity and magnetic induction using a difference scheme and an explicit nonlinear method with flow correction for calculating plasma density. The accuracy of the method proposed here is demonstrated by comparing the results with data in the literature.
Jakusz, J.W.; Dieck, J.J.; Langrehr, H.A.; Ruhser, J.J.; Lubinski, S.J.
2016-01-11
Similar to an AA, validation involves generating random points based on the total area for each map class. However, instead of collecting field data, two or three individuals not involved with the photo-interpretative mapping separately review each of the points onscreen and record a best-fit vegetation type(s) for each site. Once the individual analyses are complete, results are joined together and a comparative analysis is performed. The objective of this initial analysis is to identify areas where the validation results were in agreement (matches) and areas where validation results were in disagreement (mismatches). The two or three individuals then perform an analysis, looking at each mismatched site, and agree upon a final validation class. (If two vegetation types at a specific site appear to be equally prevalent, the validation team is permitted to assign the site two best-fit vegetation types.) Following the validation team’s comparative analysis of vegetation assignments, the data are entered into a database and compared to the mappers’ vegetation assignments. Agreements and disagreements between the map and validation classes are identified, and a contingency table is produced. This document presents the AA processes/results for Pools 13 and La Grange, as well as the validation process/results for Pools 13 and 26 and Open River South.
Accuracy investigation of phthalate metabolite standards.
Langlois, Éric; Leblanc, Alain; Simard, Yves; Thellen, Claude
2012-05-01
Phthalates are ubiquitous compounds whose metabolites are usually determined in urine for biomonitoring studies. Following suspect and unexplained results from our laboratory in an external quality-assessment scheme, we investigated the accuracy of all phthalate metabolite standards in our possession by comparing them with those of several suppliers. Our findings suggest that commercial phthalate metabolite certified solutions are not always accurate and that lot-to-lot discrepancies significantly affect the accuracy of the results obtained with several of these standards. These observations indicate that the reliability of the results obtained from different lots of standards is not equal, which reduces the possibility of intra-laboratory and inter-laboratory comparisons of results. However, agreements of accuracy have been observed for a majority of neat standards obtained from different suppliers, which indicates that a solution to this issue is available. Data accuracy of phthalate metabolites should be of concern for laboratories performing phthalate metabolite analysis because of the standards used. The results of our investigation are presented from the perspective that laboratories performing phthalate metabolite analysis can obtain accurate and comparable results in the future. Our findings will contribute to improving the quality of future phthalate metabolite analyses and will affect the interpretation of past results.
NASA Technical Reports Server (NTRS)
Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.
1973-01-01
Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.
Decreased interoceptive accuracy following social exclusion.
Durlik, Caroline; Tsakiris, Manos
2015-04-01
The need for social affiliation is one of the most important and fundamental human needs. Unsurprisingly, humans display strong negative reactions to social exclusion. In the present study, we investigated the effect of social exclusion on interoceptive accuracy - accuracy in detecting signals arising inside the body - measured with a heartbeat perception task. We manipulated exclusion using Cyberball, a widely used paradigm of a virtual ball-tossing game, with half of the participants being included during the game and the other half of participants being ostracized during the game. Our results indicated that heartbeat perception accuracy decreased in the excluded, but not in the included, participants. We discuss these results in the context of social and physical pain overlap, as well as in relation to internally versus externally oriented attention. PMID:25701592
Assessing the Accuracy of the Precise Point Positioning Technique
NASA Astrophysics Data System (ADS)
Bisnath, S. B.; Collins, P.; Seepersad, G.
2012-12-01
The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single-receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms position errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float-solution results. These results are categorised into quality classes in order to analyse the root error causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment, as, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
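Step 2 of the error-budget outline above, transforming ranging error to position error via DOP, can be sketched as follows. The satellite geometry and the 0.5 m ranging error below are invented for illustration and are not values from the study; the sketch only shows the standard DOP computation from the linearized design matrix.

```python
import numpy as np

# Hypothetical unit line-of-sight vectors (receiver -> satellite);
# the five-satellite geometry is invented for illustration.
los = np.array([
    [ 0.0,  0.60, 0.80],
    [ 0.6,  0.00, 0.80],
    [-0.6,  0.00, 0.80],
    [ 0.0, -0.60, 0.80],
    [ 0.6,  0.64, 0.48],
])
G = np.hstack([los, np.ones((len(los), 1))])  # design matrix (x, y, z, clock)
Q = np.linalg.inv(G.T @ G)                    # cofactor matrix

pdop = np.sqrt(np.trace(Q[:3, :3]))           # position DOP
gdop = np.sqrt(np.trace(Q))                   # geometric DOP (adds clock term)

range_error = 0.5                             # assumed 1-sigma ranging error (m)
position_error = pdop * range_error           # ranging error scaled by geometry
print(f"PDOP {pdop:.2f} -> position error {position_error:.2f} m")
```

The filtering step (3) in the abstract then attenuates this epoch-wise figure over time, which is why PPP errors shrink well below the single-epoch DOP-scaled value.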
Randomly dividing homologous samples leads to overinflated accuracies for emotion recognition.
Liu, Shuang; Zhang, Di; Xu, Minpeng; Qi, Hongzhi; He, Feng; Zhao, Xin; Zhou, Peng; Zhang, Lixin; Ming, Dong
2015-04-01
Numerous studies measure brain emotional status by analyzing EEGs recorded under emotional stimuli. However, they often randomly divide the homologous samples into training and testing groups, known as randomly dividing homologous samples (RDHS), without considering the impact of the non-emotional information shared among those samples, which inflates the recognition accuracy. This work proposed a modified method, integrating homologous samples (IHS), in which the homologous samples are either all used to build the classifier or all held out for testing. The results showed that classification accuracy was much lower for IHS than for RDHS. Furthermore, a positive correlation was found between accuracy and the overlapping rate of the homologous samples. These findings imply that overinflated accuracies do exist in previous studies that employed the RDHS method for emotion recognition. Moreover, this study performed feature selection for the IHS condition based on support vector machine-recursive feature elimination, after which the average accuracies improved to 85.71% and 77.18% in the picture-induced and video-induced tasks, respectively.
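The RDHS-versus-IHS contrast can be illustrated with synthetic data. Everything below (feature dimensions, the group-offset model, the 1-nearest-neighbour classifier) is an assumption for demonstration, not the authors' pipeline; it merely shows how non-emotional information shared by homologous samples inflates accuracy under a random split but not under a group-wise split.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "EEG" features: 40 stimuli (groups) with 6 homologous
# samples each. Each stimulus carries a strong non-emotional offset
# shared by its samples; the emotion label adds only a weak signal.
n_groups, per_group, dim = 40, 6, 8
labels = rng.integers(0, 2, n_groups)            # one label per stimulus
offsets = rng.normal(0.0, 3.0, (n_groups, dim))  # non-emotional info
X, y, g = [], [], []
for i in range(n_groups):
    for _ in range(per_group):
        X.append(offsets[i] + 0.3 * labels[i] + rng.normal(0.0, 1.0, dim))
        y.append(labels[i])
        g.append(i)
X, y, g = np.array(X), np.array(y), np.array(g)

def nn_accuracy(train, test):
    """1-nearest-neighbour accuracy over the test indices."""
    hits = 0
    for j in test:
        d = np.linalg.norm(X[train] - X[j], axis=1)
        hits += y[train[np.argmin(d)]] == y[j]
    return hits / len(test)

idx = rng.permutation(len(y))
rdhs = nn_accuracy(idx[len(y) // 2:], idx[:len(y) // 2])   # random split
test_mask = g < n_groups // 2                               # group-wise (IHS-style)
ihs = nn_accuracy(np.flatnonzero(~test_mask), np.flatnonzero(test_mask))
print(f"RDHS accuracy {rdhs:.2f} vs IHS accuracy {ihs:.2f}")
```

Under the random split, a test sample's nearest neighbour is almost always one of its own group-mates, so accuracy is inflated; the group-wise split removes that leak and accuracy collapses toward chance.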
Asymptotic accuracy of two-class discrimination
Ho, T.K.; Baird, H.S.
1994-12-31
Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of the disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of the training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.
On the accuracy of close stellar approaches determination
NASA Astrophysics Data System (ADS)
Dybczyński, Piotr A.; Berski, Filip
2015-05-01
The aim of this paper is to demonstrate the accuracy of our knowledge of close stellar passage distances in the pre-Gaia era. We used the most precise astrometric and kinematic data available at the moment and prepared a list of 40 stars nominally passing (in the past or future) closer than 2 pc from the Sun. We used a full gravitational potential of the Galaxy to calculate the motion of the Sun and a star from their current positions to the proximity epoch. For these calculations, we used a numerical integration in rectangular, Galactocentric coordinates. We showed that in many cases the numerical integration of the star motion gives significantly different results than the popular rectilinear approximation. We found several new stellar candidates for close visitors in the past or future. We used covariance matrices of the astrometric data for each star to estimate the accuracy of the obtained proximity distance and epoch. To this aim, we used a Monte Carlo method, replacing each star with 10 000 of its clones and studying the distribution of their individual close passages near the Sun. We showed that for contemporary close neighbours the precision is quite good, but for more distant stars it strongly depends on the quality of astrometric and kinematic data. Several examples are discussed in detail, among them the case of HIP 14473. For this star, we obtained a nominal proximity distance as small as 0.22 pc, 3.78 Myr ago. However, there is a strong need for more precise astrometry of this star, since the proximity-point uncertainty is unacceptably large.
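The clone-based uncertainty estimate can be sketched in miniature. The stellar state and diagonal covariance below are invented, and the motion model is the simple rectilinear approximation rather than the full Galactic potential the paper integrates; the sketch only illustrates the Monte Carlo cloning idea of drawing samples from the astrometric covariance and examining the spread of closest approaches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heliocentric state of a star: position (pc) and
# velocity (pc/Myr), with an assumed covariance on all six components.
mean = np.array([10.0, 5.0, 2.0, -2.5, -1.2, -0.5])
cov = np.diag([0.5, 0.5, 0.5, 0.05, 0.05, 0.05]) ** 2

clones = rng.multivariate_normal(mean, cov, size=10_000)

def closest_approach(state):
    """Rectilinear closest approach to the Sun: minimize |r + v t|."""
    r, v = state[:3], state[3:]
    t = -np.dot(r, v) / np.dot(v, v)   # epoch of proximity (Myr)
    return np.linalg.norm(r + v * t), t

dist_t = np.array([closest_approach(c) for c in clones])
d_nom, t_nom = closest_approach(mean)
# The spread of the clone distances estimates the proximity uncertainty.
print(f"nominal {d_nom:.2f} pc at {t_nom:.2f} Myr, "
      f"1-sigma spread {dist_t[:, 0].std():.2f} pc")
```

Replacing `closest_approach` with a numerical orbit integration in the Galactic potential, as the paper does, changes the propagation but not the cloning logic.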
Accuracy of TCP performance models
NASA Astrophysics Data System (ADS)
Schwefel, Hans Peter; Jobmann, Manfred; Hoellisch, Daniel; Heyman, Daniel P.
2001-07-01
Despite the fact that most of today's Internet traffic is transmitted via the TCP protocol, the performance behavior of networks with TCP traffic is still not well understood. Recent research activities have led to a number of performance models for TCP traffic, but the degree of accuracy of these models in realistic scenarios is still questionable. This paper provides a comparison of the results (in terms of average throughput per connection) of three different `analytic' TCP models: I. the throughput formula in [Padhye et al. 98], II. the modified Engset model of [Heyman et al. 97], and III. the analytic TCP queueing model of [Schwefel 01], which is a packet-based extension of (II). Results for all three models are computed for a scenario of N identical TCP sources that transmit data in individual TCP connections of stochastically varying size. The results for the average throughput per connection in the analytic models are compared with simulations of detailed TCP behavior. All of the analytic models are expected to show deficiencies in certain scenarios, since they neglect highly influential parameters of the detailed simulation model: the approach of Models (I) and (II) only indirectly considers queueing in bottleneck routers, and in certain scenarios those models are not able to adequately describe the impact of buffer space, either qualitatively or quantitatively. Furthermore, (II) is insensitive to the actual distribution of the connection sizes. As a consequence, its prediction is also insensitive to so-called long-range dependent (LRD) properties in the traffic that are caused by heavy-tailed connection-size distributions. The simulation results show that such properties cannot be neglected for certain network topologies: LRD properties can even have a counter-intuitive impact on the average goodput, namely that the goodput can be higher for small buffer sizes.
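For concreteness, Model I is often quoted in its simplified "square-root" form, which relates steady-state throughput to the loss probability p; the sketch below implements only that simplified form, omitting the timeout term of the full formula, with illustrative parameter values.

```python
import math

def tcp_throughput(mss, rtt, p, b=1):
    """Simplified square-root TCP throughput model (bytes/s):
    mss = segment size (bytes), rtt = round-trip time (s),
    p = loss event probability, b = segments acknowledged per ACK."""
    return mss / (rtt * math.sqrt(2 * b * p / 3))

# Throughput scales as 1/sqrt(p): quadrupling the loss rate halves it.
t1 = tcp_throughput(1460, 0.1, 0.01)
t2 = tcp_throughput(1460, 0.1, 0.04)
print(f"{t1:.0f} B/s at p=1%, {t2:.0f} B/s at p=4%")
```

This 1/sqrt(p) scaling is exactly the kind of prediction the paper tests against detailed simulation, where buffer size and connection-size distributions break the simple formula's assumptions.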
Size-Dependent Accuracy of Nanoscale Thermometers.
Alicki, Robert; Leitner, David M
2015-07-23
The accuracy of two classes of nanoscale thermometers is estimated in terms of size and system-dependent properties using the spin-boson model. We consider solid state thermometers, where the energy splitting is tuned by thermal properties of the material, and fluorescent organic thermometers, in which the fluorescence intensity depends on the thermal population of conformational states of the thermometer. The results of the theoretical model compare well with the accuracy reported for several nanothermometers that have been used to measure local temperature inside living cells.
Social class, contextualism, and empathic accuracy.
Kraus, Michael W; Côté, Stéphane; Keltner, Dacher
2010-11-01
Recent research suggests that lower-class individuals favor explanations of personal and political outcomes that are oriented to features of the external environment. We extended this work by testing the hypothesis that, as a result, individuals of a lower social class are more empathically accurate in judging the emotions of other people. In three studies, lower-class individuals (compared with upper-class individuals) received higher scores on a test of empathic accuracy (Study 1), judged the emotions of an interaction partner more accurately (Study 2), and made more accurate inferences about emotion from static images of muscle movements in the eyes (Study 3). Moreover, the association between social class and empathic accuracy was explained by the tendency for lower-class individuals to explain social events in terms of features of the external environment. The implications of class-based patterns in empathic accuracy for well-being and relationship outcomes are discussed. PMID:20974714
New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting, October 28, 2011.
Walsh, John; Roberts, Ruth; Vigersky, Robert A; Schwartz, Frank
2012-03-01
Glucose meters (GMs) are routinely used for self-monitoring of blood glucose by patients and for point-of-care glucose monitoring by health care providers in outpatient and inpatient settings. Although widely assumed to be accurate, numerous reports of inaccuracies with resulting morbidity and mortality have been noted. Insulin dosing errors based on inaccurate GMs are most critical. On October 28, 2011, the Diabetes Technology Society invited 45 diabetes technology clinicians who were attending the 2011 Diabetes Technology Meeting to participate in a closed-door meeting entitled New Criteria for Assessing the Accuracy of Blood Glucose Monitors. This report reflects the opinions of most of the attendees of that meeting. The Food and Drug Administration (FDA), the public, and several medical societies are currently in dialogue to establish a new standard for GM accuracy. This update to the FDA standard is driven by improved meter accuracy, technological advances (pumps, bolus calculators, continuous glucose monitors, and insulin pens), reports of hospital and outpatient deaths, consumer complaints about inaccuracy, and research studies showing that several approved GMs failed to meet FDA or International Organization for Standardization standards in postapproval testing. These circumstances mandate a set of new GM standards that appropriately match the GMs' analytical accuracy to the clinical accuracy required for their intended use, as well as ensuring their ongoing accuracy following approval. The attendees of the New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting proposed a graduated standard and other methods to improve GM performance, which are discussed in this meeting report.
Metrical Patterns of Words and Production Accuracy.
ERIC Educational Resources Information Center
Schwartz, Richard G.; Goffman, Lisa
1995-01-01
This study examined the influence of metrical patterns (syllable stress and serial position) of words on the production accuracy of 20 children (ages 22 months to 28 months). Among results were that one-fourth of the initial unstressed syllables were omitted and that consonant omissions, though few, tended to occur in the initial position.…
The Accuracy of Academic Gender Stereotypes.
ERIC Educational Resources Information Center
Beyer, Sylvia
1999-01-01
Assessed the accuracy of academic gender stereotypes by asking 265 college students to estimate the percentage of male and female students and their grade point averages (GPAs) and comparing these to the actual percentage of male and female students and GPAs. Results show the inaccuracies of academic gender stereotypes. (SLD)
Accuracy of Information Processing under Focused Attention.
ERIC Educational Resources Information Center
Bastick, Tony
This paper reports the results of an experiment on the accuracy of information processing during attention focused arousal under two conditions: single estimation and double estimation. The attention of 187 college students was focused by a task requiring high level competition for a monetary prize ($10) under severely limited time conditions. The…
Accuracy of polyp localization at colonoscopy
O’Connor, Sam A.; Hewett, David G.; Watson, Marcus O.; Kendall, Bradley J.; Hourigan, Luke F.; Holtmann, Gerald
2016-01-01
Background and study aims: Accurate documentation of lesion localization at the time of colonoscopic polypectomy is important for future surveillance, management of complications such as delayed bleeding, and for guiding surgical resection. We aimed to assess the accuracy of endoscopic localization of polyps during colonoscopy and examine variables that may influence this accuracy. Patients and methods: We conducted a prospective observational study in consecutive patients presenting for elective, outpatient colonoscopy. All procedures were performed by Australian certified colonoscopists. The endoscopic location of each polyp was reported by the colonoscopist at the time of resection and prospectively recorded. Magnetic endoscope imaging was used to determine polyp location, and colonoscopists were blinded to this image. Three experienced colonoscopists, blinded to the endoscopist's assessment of polyp location, independently scored the magnetic endoscope images to obtain a reference standard for polyp location (Cronbach alpha 0.98). The accuracy of colonoscopist polyp localization using this reference standard was assessed, and colonoscopist, procedural, and patient variables affecting accuracy were evaluated. Results: A total of 155 patients were enrolled and 282 polyps were resected in 95 patients by 14 colonoscopists. The overall accuracy of polyp localization was 85 % (95 % confidence interval [CI]: 60-96 %). Accuracy varied significantly (P < 0.001) by colonic segment: caecum 100 %, ascending 77 % (CI: 65-90), transverse 84 % (CI: 75-92), descending 56 % (CI: 32-81), sigmoid 88 % (CI: 79-97), rectum 96 % (CI: 90-101). There were significant differences in accuracy between colonoscopists (P < 0.001), and colonoscopist experience was a significant independent predictor of accuracy (OR 3.5, P = 0.028) after adjustment for patient and procedural variables. Conclusions: Accuracy of
Two Different Methods for Numerical Solution of the Modified Burgers' Equation
Karakoç, Seydi Battal Gazi; Başhan, Ali; Geyikli, Turabi
2014-01-01
A numerical solution of the modified Burgers' equation (MBE) is obtained by using the quartic B-spline subdomain finite element method (SFEM), in which the nonlinear term is locally linearized, and by using the quartic B-spline differential quadrature method (QBDQM). The accuracy and efficiency of the methods are discussed by computing L2 and L∞ error norms. Comparisons are made with those of some earlier papers. The obtained numerical results show that the methods are effective numerical schemes to solve the MBE. A linear stability analysis, based on the von Neumann scheme, shows the SFEM is unconditionally stable. A rate of convergence analysis is also given for the QBDQM. PMID:25162064
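The discrete L2 and L∞ error norms used to assess accuracy in studies like this can be computed for a solution sampled on a uniform grid as in the following minimal sketch (function and variable names are illustrative, not from the paper):

```python
import math

def error_norms(numerical, exact, h):
    """Discrete L2 and L-infinity error norms between a numerical
    and an exact solution sampled on a uniform grid with spacing h."""
    diffs = [abs(u - v) for u, v in zip(numerical, exact)]
    l2 = math.sqrt(h * sum(d * d for d in diffs))   # L2 = sqrt(h * sum |e_i|^2)
    linf = max(diffs)                               # L-inf = max |e_i|
    return l2, linf

# Example with a deliberately perturbed "numerical" solution
exact = [math.sin(math.pi * i / 10) for i in range(11)]
numerical = [u + 1e-3 for u in exact]
l2, linf = error_norms(numerical, exact, h=0.1)
```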
NASA Technical Reports Server (NTRS)
Karki, K. C.; Mongia, H. C.; Patankar, Suhas V.; Runchal, A. K.
1987-01-01
The objective of this effort is to develop improved numerical schemes for predicting combustor flow fields. Various candidate numerical schemes were evaluated, and promising schemes were selected for detailed assessment. The criteria for evaluation included accuracy, computational efficiency, stability, and ease of extension to multidimensions. The candidate schemes were assessed against a variety of simple one- and two-dimensional problems. These results led to the selection of the following schemes for further evaluation: flux spline schemes (linear and cubic) and controlled numerical diffusion with internal feedback (CONDIF). The incorporation of the flux spline scheme and direct solution strategy in a computer program for three-dimensional flows is in progress.
Numerical Speed of Sound and its Application to Schemes for all Speeds
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Edwards, Jack R.
1999-01-01
The concept of "numerical speed of sound" is proposed in the construction of numerical flux. It is shown that this variable is responsible for the accurate resolution of discontinuities, such as contacts and shocks. Moreover, this concept can be readily extended to deal with low speed and multiphase flows. As a result, the numerical dissipation for low speed flows is scaled with the local fluid speed, rather than the sound speed. Hence, the accuracy is enhanced, the correct solution recovered, and the convergence rate improved. We also emphasize the role of mass flux and analyze the behavior of this flux. Study of mass flux is important because the numerical diffusivity introduced in it can be identified. In addition, it is the term common to all conservation equations. We show calculated results for a wide variety of flows to validate the effectiveness of using the numerical speed of sound concept in constructing the numerical flux. We especially aim at achieving these two goals: (1) improving accuracy and (2) gaining convergence rates for all speed ranges. We find that while the performance at the high speed range is maintained, the flux now has the capability of performing well even with low speed flows. Thanks to the new numerical speed of sound, the convergence is even enhanced for the flows outside of the low speed range. To demonstrate the usefulness of the proposed method in engineering problems, we have also performed calculations for complex 3D turbulent flows, and the results are in excellent agreement with data.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.
A method for generating numerical pilot opinion ratings using the optimal pilot model
NASA Technical Reports Server (NTRS)
Hess, R. A.
1976-01-01
A method for generating numerical pilot opinion ratings using the optimal pilot model is introduced. The method is contained in a rating hypothesis which states that the numerical rating which a human pilot assigns to a specific vehicle and task can be directly related to the numerical value of the index of performance resulting from the optimal pilot modeling procedure as applied to that vehicle and task. The hypothesis is tested using the data from four piloted simulations. The results indicate that the hypothesis is reasonable, but that the predictive capability of the method is a strong function of the accuracy of the pilot model itself. This accuracy is, in turn, dependent upon the parameters which define the optimal modeling problem. A procedure for specifying the parameters for the optimal pilot model in the absence of experimental data is suggested.
Simulation of a numerical filter for enhancing earth radiation budget measurements
NASA Technical Reports Server (NTRS)
Green, R. N.
1981-01-01
The Earth Radiation Budget Experiment has the objective of collecting the radiation budget data which are needed to determine the radiation budget at the top of the atmosphere (TOA) on a regional scale. A second objective is to determine the accuracy of the results. Three satellites will carry wide and medium field of view radiometers which measure the longwave and shortwave components of radiation. Scanning radiometers will be included to detect small spatial features. A proposal has been made to employ for the nonscanning radiometers a one-dimensional numerical filter which reduces satellite measurements to TOA radiant exitances. The numerical filter was initially formulated by House (1980). It enhances the resolution of the radiation budget along the satellite groundtrack. The accuracy of the numerical filter estimate is studied by simulating the data gathering and measurement inversion process. The results of the study are discussed, taking into account two error sources.
Accuracy of Binary Black Hole Waveform Models for Advanced LIGO
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team
2016-03-01
Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on two questions: (i) how well do models capture the binary's late inspiral, where they lack accurate a priori information from PN or NR, and (ii) how accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches and motivate future modeling efforts.
High-accuracy deterministic solution of the Boltzmann equation for the shock wave structure
NASA Astrophysics Data System (ADS)
Malkov, E. A.; Bondar, Ye. A.; Kokhanchik, A. A.; Poleshkin, S. O.; Ivanov, M. S.
2015-07-01
A new deterministic method of solving the Boltzmann equation has been proposed. The method has been employed in numerical studies of the plane shock wave structure in a hard sphere gas. Results for several Mach numbers have been compared with predictions of the direct simulation Monte Carlo (DSMC) method, which has been used to obtain the reference solution. Particular attention in estimating the solution accuracy has been paid to a fine structural effect: the presence of a total temperature peak exceeding the temperature value further downstream. The results of solving the Boltzmann equation for the shock wave structure are in excellent agreement with the DSMC predictions.
High accuracy fine-pointing system - Breadboard performances and results
NASA Astrophysics Data System (ADS)
Fazilleau, Y.; Moreau, B.; Betermier, J. M.; Boutemy, J. C.
A fine pointing system designed according to the requirements of the Semiconductor Laser Intersatellite Link Experiment 1989 (SILEX 1989) is described, with particular attention given to the synthesis of the final breadboarding. The study includes all the pointing functions, with the pointing, acquisition, and tracking (PAT) functions associated with different FOVs. The laboratory model consists of a complete pointing system with two CCD sensors for detection, two general-scanning single-axis actuators, and the overall control electronics. Each major PAT function of the laboratory model was separately tested, identifying the major implications for future PAT applications concerning mechanical margins, optical aberrations, sensor linearity, and servoloop communications.
Frontiers in Numerical Relativity
NASA Astrophysics Data System (ADS)
Evans, Charles R.; Finn, Lee S.; Hobill, David W.
2011-06-01
Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics
Numerical solution of boundary-integral equations for molecular electrostatics.
Bardhan, J.; Mathematics and Computer Science; Rush Univ.
2009-03-07
Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.
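To make the notion of point collocation concrete: one discretizes the unknown in a finite basis and forces the residual of the integral equation to vanish exactly at a set of chosen points. The toy sketch below applies collocation to a one-dimensional second-kind integral equation rather than the paper's three-dimensional molecular boundary-element problem; all names and the test problem are illustrative:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def collocation_solve(kernel, f, lam, n):
    """Point collocation for the second-kind integral equation
    u(x) - lam * int_0^1 K(x, y) u(y) dy = f(x) on [0, 1],
    with piecewise-constant basis functions and midpoint collocation."""
    h = 1.0 / n
    xs = [(i + 0.5) * h for i in range(n)]
    A = [[(1.0 if i == j else 0.0) - lam * h * kernel(xs[i], xs[j])
          for j in range(n)] for i in range(n)]
    return xs, solve(A, [f(x) for x in xs])

# Constant kernel K = 1, f = 1, lam = 0.5: the exact solution is u(x) = 2
xs, u = collocation_solve(lambda x, y: 1.0, lambda x: 1.0, 0.5, n=20)
```

Qualocation replaces the pointwise residual condition with weighted quadrature conditions; the paper's result is that this choice, at the same cost, is markedly more accurate for the molecular electrostatics problem.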
NASA Astrophysics Data System (ADS)
Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman
2016-05-01
Time-dependent Density Functional Theory (TDDFT) has become successful for its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains prohibitively expensive computationally. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods to solve the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lanczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales.
Bullet trajectory reconstruction - Methods, accuracy and precision.
Mattijssen, Erwin J A T; Kerkhoff, Wim
2016-05-01
Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value of the precision by means of a confidence interval of the specific measurement. PMID:27044032
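As background to the ellipse method: for a roughly elliptical bullet defect, the textbook estimate of the angle of incidence follows from the ratio of the defect's minor to major axis, sin(theta) = width/length. A minimal sketch of that relation (the paper evaluates several refined variants of this and other methods; the function name is illustrative):

```python
import math

def ellipse_angle_of_incidence(width, length):
    """Estimate a bullet's angle of incidence (degrees, measured from
    the target surface) from the minor and major axes of an elliptical
    bullet defect, using sin(theta) = width / length."""
    if not 0 < width <= length:
        raise ValueError("require 0 < width <= length")
    return math.degrees(math.asin(width / length))

# A defect twice as long as it is wide implies a shallow impact angle
angle = ellipse_angle_of_incidence(9.0, 18.0)
```

A circular defect (width equal to length) corresponds to a perpendicular, 90-degree impact; the more elongated the ellipse, the shallower the estimated angle.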
NASA Astrophysics Data System (ADS)
Molodenskii, S. M.; Molodenskii, M. S.; Begitova, T. A.
2016-09-01
In the first part of the paper, a new method was developed for solving the inverse problem of coseismic and postseismic deformations in the real (imperfectly elastic, radially and horizontally heterogeneous, self-gravitating) Earth with hydrostatic initial stresses from highly accurate modern satellite data. The method is based on the decomposition of the sought parameters in the orthogonalized basis. The method was suggested for estimating the ambiguity of the solution of the inverse problem for coseismic and postseismic deformations. For obtaining this estimate, the orthogonal complement is constructed to the n-dimensional space spanned by the system of functional derivatives of the residuals in the system of n observed and model data on the coseismic and postseismic displacements at a variety of sites on the ground surface with small variations in the models. Below, we present the results of the numerical modeling of the elastic displacements of the ground surface, which were based on calculating Green's functions of the real Earth for the plane dislocation surface and different orientations of the displacement vector as described in part I of the paper. The calculations were conducted for the model of a horizontally homogeneous but radially heterogeneous self-gravitating Earth with hydrostatic initial stresses and the mantle rheology described by the Lomnitz logarithmic creep function according to (M. Molodenskii, 2014). We compare our results with the previous numerical calculations (Okada, 1985; 1992) for the simplest model of a perfectly elastic nongravitating homogeneous Earth. It is shown that with source depths starting from the first hundreds of kilometers and with magnitudes of about 8.0 and higher, the discrepancies significantly exceed the errors of the observations and should therefore be taken into account. We present examples of the numerical calculations of the creep function of the crust and upper mantle for the coseismic deformations. We
Guiding Center Equations of High Accuracy
R.B. White, G. Spizzo and M. Gobbin
2013-03-29
Guiding center simulations are an important means of predicting the effect of resistive and ideal magnetohydrodynamic instabilities on particle distributions in toroidal magnetically confined thermonuclear fusion research devices. Because saturated instabilities typically have amplitudes of δB/B of a few times 10^-4, numerical accuracy is of concern in discovering the effect of mode-particle resonances. We develop a means of following guiding center orbits which is greatly superior to the methods currently in use. In the presence of ripple or time-dependent magnetic perturbations, both energy and canonical momentum are conserved to better than one part in 10^14, and the relation between changes in canonical momentum and energy is also conserved to very high order.
Protostellar Jets: Numerical Simulations
NASA Astrophysics Data System (ADS)
Vitorino, B. F.; Jatenco-Pereira, V.; Opher, R.
1998-11-01
Numerical simulations of astrophysical jets have been made in order to study their collimation and internal structure. Recently, Ouyed & Pudritz (1997) performed numerical simulations of axisymmetric magnetocentrifugal jets from a Keplerian accretion disk employing the Eulerian finite-difference code ZEUS-2D. Their simulation produced a steady-state jet, confirming many results of steady-state MHD wind theory. Following this scenario, we performed three-dimensional numerical simulations of this model, allowing the jet, after a perturbation, to evolve into a non-steady state, producing the helical features observed in some protostellar jets.
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems arise when attempting to test the accuracy of thematic maps and mapping: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both of these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
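The quantities described above, overall accuracy plus per-class commission and omission errors read off a classification error matrix, can be computed as in this sketch (rows are interpretations and columns verifications, as in the text; names are illustrative):

```python
def accuracy_report(matrix):
    """Overall accuracy and per-class commission/omission error rates
    from a classification error (confusion) matrix whose rows are the
    interpretation and whose columns are the verification."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))
    # Commission: off-diagonal share of each row (interpretation)
    commission = [1 - matrix[i][i] / sum(matrix[i]) for i in range(n)]
    # Omission: off-diagonal share of each column (verification)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    omission = [1 - matrix[j][j] / col_sums[j] for j in range(n)]
    return correct / total, commission, omission

# Two-class example: 45 + 40 correct out of 100 samples
m = [[45, 5],
     [10, 40]]
overall, commission, omission = accuracy_report(m)
```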
Valverde-Albacete, Francisco J.; Peláez-Moreno, Carmen
2014-01-01
The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer (NIT) factor, a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282
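The EMA and NIT factor are built from the entropies of the input and output class distributions and the mutual information of the contingency matrix. The sketch below computes only these standard information-theoretic ingredients, not the paper's exact definitions (names are illustrative):

```python
import math

def entropies(matrix):
    """Input entropy H(K), output entropy H(K'), and mutual information
    I(K;K') in bits from a joint-count contingency matrix.  These are
    the ingredients from which entropy-triangle measures such as the
    EMA and NIT factor are constructed."""
    total = sum(sum(row) for row in matrix)
    p = [[c / total for c in row] for row in matrix]          # joint distribution
    px = [sum(row) for row in p]                              # input marginal
    py = [sum(p[i][j] for i in range(len(p))) for j in range(len(p[0]))]
    def h(dist):
        return -sum(q * math.log2(q) for q in dist if q > 0)
    mi = sum(p[i][j] * math.log2(p[i][j] / (px[i] * py[j]))
             for i in range(len(p)) for j in range(len(p[0])) if p[i][j] > 0)
    return h(px), h(py), mi

# A perfect 2-class classifier transmits the full 1 bit of input entropy
hx, hy, mi = entropies([[50, 0], [0, 50]])
```

Two classifiers with equal accuracy can have very different mutual information, which is exactly the paradox the abstract describes.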
ERIC Educational Resources Information Center
Soltesz, Fruzsina; Goswami, Usha; White, Sonia; Szucs, Denes
2011-01-01
Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the…
Seasonal Effects on GPS PPP Accuracy
NASA Astrophysics Data System (ADS)
Saracoglu, Aziz; Ugur Sanli, D.
2016-04-01
GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h of data are required for high-precision results; however, real-life situations do not always allow the collection of 24 h of data. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of the NASA/JPL GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, spanning two years in succession, are used in the analysis. Our major conclusion is that a reformulation of the GPS positioning accuracy is necessary when taking into account the seasonal effects, and the typical one-term accuracy formulation is expanded to a two-term one.
Proper installation ensures turbine meter accuracy
Peace, D.W.
1995-07-01
Turbine meters are widely used for natural gas measurement and provide high accuracy over large ranges of operation. However, as with many other types of flowmeters, consideration must be given to the design of the turbine meter and the installation piping practice to ensure high-accuracy measurement. National and international standards include guidelines for proper turbine meter installation piping and methods for evaluating the effects of flow disturbances on the design of those meters. Swirl or non-uniform velocity profiles, such as jetting, at the turbine meter inlet can cause undesirable accuracy performance changes. Sources of these types of flow disturbances can be from the installation piping configuration, an upstream regulator, a throttled valve, or a partial blockage upstream of the meter. Test results on the effects of swirl and jetting on different types of meter designs and sizes emphasize the need to consider good engineering design for turbine meters, including integral flow conditioning vanes and adequate installation piping practices for high accuracy measurement.
Data Accuracy in Citation Studies.
ERIC Educational Resources Information Center
Boyce, Bert R.; Banning, Carolyn Sue
1979-01-01
Four hundred eighty-seven citations of the 1976 issues of the Journal of the American Society for Information Science and the Personnel and Guidance Journal were checked for accuracy: total error was 13.6 percent and 10.7 percent, respectively. Error categories included incorrect author name, article/book title, journal title; wrong entry; and…
Nationwide forestry applications program. Analysis of forest classification accuracy
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)
1981-01-01
The development of LANDSAT classification accuracy assessment techniques, and of a computerized system for assessing wildlife habitat from land cover maps are considered. A literature review on accuracy assessment techniques and an explanation for the techniques development under both projects are included along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers which were published on the results of this project are appended.
Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment
NASA Technical Reports Server (NTRS)
Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.
2012-01-01
Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.
Gusso, Michele
2008-01-28
A detailed study on the accuracy attainable with numerical atomic orbitals in the context of pseudopotential first-principles density functional theory is presented. Dimers of first- and second-row elements are analyzed: bond lengths, atomization energies, and Kohn-Sham eigenvalue spectra obtained with localized orbitals and with plane-wave basis sets are compared. For each dimer, the cutoff radius, the shape, and the number of the atomic basis orbitals are varied in order to maximize the accuracy of the calculations. Optimized atomic orbitals are obtained following two routes: (i) maximization of the projection of plane wave results into atomic orbital basis sets and (ii) minimization of the total energy with respect to a set of primitive atomic orbitals as implemented in the OPENMX software package. It is found that by optimizing the numerical basis, chemical accuracy can be obtained even with a small set of orbitals.
Spatial and numerical processing in children with high and low visuospatial abilities.
Crollen, Virginie; Noël, Marie-Pascale
2015-04-01
In the literature on numerical cognition, a strong association between numbers and space has been repeatedly demonstrated. However, only a few recent studies have been devoted to examine the consequences of low visuospatial abilities on calculation processing. In this study, we wanted to investigate whether visuospatial weakness may affect pure spatial processing as well as basic numerical reasoning. To do so, the performances of children with high and low visuospatial abilities were directly compared on different spatial tasks (the line bisection and Simon tasks) and numerical tasks (the number bisection, number-to-position, and numerical comparison tasks). Children from the low visuospatial group presented the classic Simon and SNARC (spatial numerical association of response codes) effects but showed larger deviation errors as compared with the high visuospatial group. Our results, therefore, demonstrated that low visuospatial abilities did not change the nature of the mental number line but rather led to a decrease in its accuracy. PMID:25618380
MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS
Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...
Chang, Hung-Tzu; Cheng, Yuan-Chung; Zhang, Pan-Pan
2013-12-14
The small polaron quantum master equation (SPQME) proposed by Jang et al. [J. Chem. Phys. 129, 101104 (2008)] is a promising approach to describe coherent excitation energy transfer dynamics in complex molecular systems. To determine the applicable regime of the SPQME approach, we perform a comprehensive investigation of its accuracy by comparing its simulated population dynamics with numerically exact quasi-adiabatic path integral calculations. We demonstrate that the SPQME method yields accurate dynamics in a wide parameter range. Furthermore, our results show that the accuracy of polaron theory depends strongly upon the degree of exciton delocalization and timescale of polaron formation. Finally, we propose a simple criterion to assess the applicability of the SPQME theory that ensures the reliability of practical simulations of energy transfer dynamics with SPQME in light-harvesting systems.
On the accuracy of the Padé-resummed master equation approach to dissipative quantum dynamics.
Chen, Hsing-Ta; Berkelbach, Timothy C; Reichman, David R
2016-04-21
Well-defined criteria are proposed for assessing the accuracy of quantum master equations whose memory functions are approximated by Padé resummation of the first two moments in the electronic coupling. These criteria partition the parameter space into distinct levels of expected accuracy, ranging from quantitatively accurate regimes to regions of parameter space where the approach is not expected to be applicable. Extensive comparison of Padé-resummed master equations with numerically exact results in the context of the spin-boson model demonstrates that the proposed criteria correctly demarcate the regions of parameter space where the Padé approximation is reliable. The applicability analysis we present is not confined to the specifics of the Hamiltonian under consideration and should provide guidelines for other classes of resummation techniques. PMID:27389208
Accuracy of a bistatic scattering substitution technique for calibration of focused receivers
Rich, Kyle T.; Mast, T. Douglas
2015-01-01
A recent method for calibrating single-element, focused passive cavitation detectors (PCD) compares bistatic scattering measurements by the PCD and a reference hydrophone. Here, effects of scatterer properties and PCD size on frequency-dependent receive calibration accuracy are investigated. Simulated scattering from silica and polystyrene spheres was compared for small hydrophone and spherically focused PCD receivers to assess the achievable calibration accuracy as a function of frequency, scatterer size, and PCD size. Good agreement between measurements was found when the scatterer diameter was sufficiently smaller than the focal beamwidth of the PCD; this relationship was dependent on the scatterer material. For conditions that result in significant disagreement between measurements, the numerical methods described here can be used to correct experimental calibrations. PMID:26627816
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
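The abstract above discusses third-order Runge-Kutta methods and their linear accuracy analysis. As a minimal illustration (Kutta's classic three-stage scheme on a simple linear test problem, not one of the five methods derived in the report), the sketch below confirms the expected third-order convergence: halving the step size reduces the global error by a factor of about 2^3 = 8.

```python
import math

def rk3_step(f, t, y, h):
    # Kutta's classic third-order Runge-Kutta scheme
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

def integrate(f, t0, y0, t1, n):
    # fixed-step integration from t0 to t1 in n steps
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y                  # linear test problem y' = -y, y(0) = 1
exact = math.exp(-1.0)
err_coarse = abs(integrate(f, 0.0, 1.0, 1.0, 40) - exact)
err_fine = abs(integrate(f, 0.0, 1.0, 1.0, 80) - exact)
ratio = err_coarse / err_fine        # ~8 for a third-order method
```

The observed error ratio near 8 is the empirical counterpart of the linear accuracy analysis described in the abstract.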
NASA Technical Reports Server (NTRS)
Chakravarthy, S. R.; Osher, S.
1985-01-01
A new family of high accuracy Total Variation Diminishing (TVD) schemes has been developed. Members of the family include the conventional second-order TVD upwind scheme, various other second-order accurate TVD schemes with lower truncation error, and even a third-order accurate TVD approximation. All the schemes are defined with a five-point grid bandwidth. In this paper, the new algorithms are described for scalar equations, systems, and arbitrary coordinates. Selected numerical results are provided to illustrate the new algorithms and their properties.
Do saccharide doped PAGAT dosimeters increase accuracy?
NASA Astrophysics Data System (ADS)
Berndt, B.; Skyt, P. S.; Holloway, L.; Hill, R.; Sankar, A.; De Deene, Y.
2015-01-01
To improve the dosimetric accuracy of normoxic polyacrylamide gelatin (PAGAT) gel dosimeters, the addition of saccharides (glucose and sucrose) has been suggested. An increase in R2-response sensitivity upon irradiation will result in smaller uncertainties in the derived dose if all other uncertainties are conserved. However, temperature variations during the magnetic resonance scanning of polymer gels result in one of the highest contributions to dosimetric uncertainties. The purpose of this project was to study the dose sensitivity against the temperature sensitivity. The overall dose uncertainty of PAGAT gel dosimeters with different concentrations of saccharides (0, 10 and 20%) was investigated. For high concentrations of glucose or sucrose, a clear improvement of the dose sensitivity was observed. For doses up to 6 Gy, the overall dose uncertainty was reduced up to 0.3 Gy for all saccharide loaded gels compared to PAGAT gel. Higher concentrations of glucose and sucrose deteriorate the accuracy of PAGAT dosimeters for doses above 9 Gy.
Effects of CT image segmentation methods on the accuracy of long bone 3D reconstructions.
Rathnayaka, Kanchana; Sahama, Tony; Schuetz, Michael A; Schmutz, Beat
2011-03-01
An accurate and accessible image segmentation method is in high demand for generating 3D bone models from CT scan data, as such models are required in many areas of medical research. Even though numerous sophisticated segmentation methods have been published over the years, most of them are not readily available to the general research community. Therefore, this study aimed to quantify the accuracy of three popular image segmentation methods, two implementations of intensity thresholding and Canny edge detection, for generating 3D models of long bones. In order to reduce user dependent errors associated with visually selecting a threshold value, we present a new approach of selecting an appropriate threshold value based on the Canny filter. A mechanical contact scanner in conjunction with a microCT scanner was utilised to generate the reference models for validating the 3D bone models generated from CT data of five intact ovine hind limbs. When the overall accuracy of the bone model is considered, the three investigated segmentation methods generated comparable results with mean errors in the range of 0.18-0.24 mm. However, for the bone diaphysis, Canny edge detection and Canny filter based thresholding generated 3D models with a significantly higher accuracy compared to those generated through visually selected thresholds. This study demonstrates that 3D models with sub-voxel accuracy can be generated utilising relatively simple segmentation methods that are available to the general research community.
Empirical Accuracies of U.S. Space Surveillance Network Reentry Predictions
NASA Technical Reports Server (NTRS)
Johnson, Nicholas L.
2008-01-01
The U.S. Space Surveillance Network (SSN) issues formal satellite reentry predictions for objects which have the potential for generating debris which could pose a hazard to people or property on Earth. These prognostications, known as Tracking and Impact Prediction (TIP) messages, are nominally distributed at daily intervals beginning four days prior to the anticipated reentry and several times during the final 24 hours in orbit. The accuracy of these messages depends on the nature of the satellite's orbit, the characteristics of the space vehicle, solar activity, and many other factors. Despite the many influences on the time and the location of reentry, a useful assessment of the accuracies of TIP messages can be derived and compared with the official accuracies included with each TIP message. This paper summarizes the results of a study of numerous uncontrolled reentries of spacecraft and rocket bodies from nearly circular orbits over a span of several years. Insights are provided into the empirical accuracies and utility of SSN TIP messages.
Thermocouple Calibration and Accuracy in a Materials Testing Laboratory
NASA Technical Reports Server (NTRS)
Lerch, B. A.; Nathal, M. V.; Keller, D. J.
2002-01-01
A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturer's tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the benefits in accuracy can be as great as 6 °C, roughly five times better than relying on manufacturer's tolerances. The results emphasize strict reliance on the defined testing protocol and the need to establish recalibration frequencies in order to maintain these levels of accuracy.
COMPARING NUMERICAL METHODS FOR ISOTHERMAL MAGNETIZED SUPERSONIC TURBULENCE
Kritsuk, Alexei G.; Collins, David; Norman, Michael L.; Xu, Hao
2011-08-10
Many astrophysical applications involve magnetized turbulent flows with shock waves. Ab initio star formation simulations require a robust representation of supersonic turbulence in molecular clouds on a wide range of scales imposing stringent demands on the quality of numerical algorithms. We employ simulations of supersonic super-Alfvenic turbulence decay as a benchmark test problem to assess and compare the performance of nine popular astrophysical MHD methods actively used to model star formation. The set of nine codes includes: ENZO, FLASH, KT-MHD, LL-MHD, PLUTO, PPML, RAMSES, STAGGER, and ZEUS. These applications employ a variety of numerical approaches, including both split and unsplit, finite difference and finite volume, divergence preserving and divergence cleaning, a variety of Riemann solvers, and a range of spatial reconstruction and time integration techniques. We present a comprehensive set of statistical measures designed to quantify the effects of numerical dissipation in these MHD solvers. We compare power spectra for basic fields to determine the effective spectral bandwidth of the methods and rank them based on their relative effective Reynolds numbers. We also compare numerical dissipation for solenoidal and dilatational velocity components to check for possible impacts of the numerics on small-scale density statistics. Finally, we discuss the convergence of various characteristics for the turbulence decay test and the impact of various components of numerical schemes on the accuracy of solutions. The nine codes gave qualitatively the same results, implying that they are all performing reasonably well and are useful for scientific applications. We show that the best performing codes employ a consistently high order of accuracy for spatial reconstruction of the evolved fields, transverse gradient interpolation, conservation law update step, and Lorentz force computation. The best results are achieved with divergence-free evolution of the
On the Accuracy of Genomic Selection
Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte
2016-01-01
Genomic selection is focused on prediction of breeding values of selection candidates by means of high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configurations of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
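The accuracy measure discussed above, the correlation between predicted and true breeding values under ridge regression, can be reproduced on simulated data. The sketch below is a toy simulation with made-up sample sizes, marker counts, and a hypothetical penalty value, not the paper's ryegrass data or its theoretical expression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_markers = 200, 100, 500

# 0/1/2 genotype matrix; a small subset of markers act as QTLs
X = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
beta = np.zeros(n_markers)
qtl = rng.choice(n_markers, 20, replace=False)
beta[qtl] = rng.normal(0.0, 1.0, 20)

g = X @ beta                                       # true breeding values
y = g + rng.normal(0.0, g.std(), g.size)           # phenotypes (~50% heritability)

# Fit ridge regression on the training individuals
Xtr, ytr = X[:n_train], y[:n_train]
lam = 10.0                                         # ridge penalty (hypothetical choice)
beta_hat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_markers), Xtr.T @ ytr)

# Accuracy = correlation of predicted and true breeding values on test individuals
g_pred = X[n_train:] @ beta_hat
accuracy = np.corrcoef(g_pred, g[n_train:])[0, 1]
```

In simulations like this one the theoretical formula and the proxies described in the abstract can be checked against the empirically observed correlation.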
On the numerical computation of nonlinear force-free magnetic fields. [from solar photosphere
NASA Technical Reports Server (NTRS)
Wu, S. T.; Sun, M. T.; Chang, H. M.; Hagyard, M. J.; Gary, G. A.
1990-01-01
An algorithm has been developed to extrapolate nonlinear force-free magnetic fields from the photosphere, given the proper boundary conditions. This paper presents the results of this work, describing the mathematical formalism that was developed, the numerical techniques employed, and comments on the stability criteria and accuracy developed for these numerical schemes. An analytical solution is used for a benchmark test; the results show that the computational accuracy for the case of a nonlinear force-free magnetic field was on the order of a few percent (less than 5 percent). This newly developed scheme was applied to analyze a solar vector magnetogram, and the results were compared with the results deduced from the classical potential field method. The comparison shows that additional physical features of the vector magnetogram were revealed in the nonlinear force-free case.
Numerical integration of orbits of planetary satellites.
NASA Astrophysics Data System (ADS)
Hadjifotinou, K. G.; Harper, D.
1995-11-01
The 10th-order Gauss-Jackson backward difference numerical integration method and the Runge-Kutta-Nystroem RKN12(10)17M method were applied to the equations of motion and variational equations of the Saturnian satellite system. We investigated the effect of step-size on the stability of the Gauss-Jackson method in the two distinct cases arising from the inclusion or exclusion of the corrector cycle in the integration of the variational equations. In the predictor-only case, we found that instability occurred when the step-size was greater than approximately 1/76 of the orbital period of the innermost satellite. In the predictor-corrector case, no such instability was observed, but larger step-sizes yield significant loss in accuracy. By contrast, the investigation of the Runge-Kutta-Nystroem method showed that it allows the use of much larger step-sizes and can still obtain high-accuracy results, thus making evident the superiority of the method for the integration of planetary satellite systems.
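The step-size dependence of orbit-integration accuracy discussed above can be illustrated with a generic fixed-step integrator. The sketch below uses a plain fourth-order Runge-Kutta method on a circular two-body orbit, not the Gauss-Jackson or RKN12(10)17M integrators themselves, and checks that halving the step size reduces the position error by roughly 2^4 = 16.

```python
import math

def deriv(state):
    # two-body equations of motion with gravitational parameter mu = 1
    x, y, vx, vy = state
    d = math.hypot(x, y)
    return (vx, vy, -x / d**3, -y / d**3)

def rk4_orbit(steps, period=2 * math.pi):
    # integrate one full circular orbit of radius 1 and return position error
    h = period / steps
    s = (1.0, 0.0, 0.0, 1.0)
    for _ in range(steps):
        k1 = deriv(s)
        k2 = deriv(tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
        k3 = deriv(tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
        k4 = deriv(tuple(si + h * ki for si, ki in zip(s, k3)))
        s = tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
    return math.hypot(s[0] - 1.0, s[1])

err_large = rk4_orbit(100)
err_small = rk4_orbit(200)          # halving the step size
ratio = err_large / err_small       # ~16 for a fourth-order method
```

The same kind of step-size experiment, applied to the full satellite system with variational equations, underlies the stability and accuracy comparison reported in the abstract.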
Accuracy of NHANES periodontal examination protocols.
Eke, P I; Thornton-Evans, G O; Wei, L; Borgnakke, W S; Dye, B A
2010-11-01
This study evaluates the accuracy of periodontitis prevalence determined by the National Health and Nutrition Examination Survey (NHANES) partial-mouth periodontal examination protocols. True periodontitis prevalence was determined in a new convenience sample of 454 adults ≥ 35 years old, by a full-mouth "gold standard" periodontal examination. This actual prevalence was compared with prevalence resulting from analysis of the data according to the protocols of NHANES III and NHANES 2001-2004, respectively. Both NHANES protocols substantially underestimated the prevalence of periodontitis by 50% or more, depending on the periodontitis case definition used, and thus performed below threshold levels for moderate-to-high levels of validity for surveillance. Adding measurements from lingual or interproximal sites to the NHANES 2001-2004 protocol did not improve the accuracy sufficiently to reach acceptable sensitivity thresholds. These findings suggest that NHANES protocols produce high levels of misclassification of periodontitis cases and thus have low validity for surveillance and research.
Accuracy of forecasts in strategic intelligence
Mandel, David R.; Barnes, Alan
2014-01-01
The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of forecasts was very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4–0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement. PMID:25024176
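Calibration and discrimination of probability forecasts of the kind scored above are commonly quantified through the Murphy decomposition of the Brier score (reliability measures miscalibration; resolution measures discrimination). The sketch below uses made-up forecasts, not the study's intelligence data, in which the forecaster is underconfident: events happen more (or less) often than the stated probabilities say.

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    # Murphy decomposition: Brier = reliability - resolution + uncertainty.
    # Forecasts are grouped by the distinct probability values issued.
    groups = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        groups[p].append(o)
    n = len(forecasts)
    base = sum(outcomes) / n
    reliability = sum(len(os) * (p - sum(os) / len(os)) ** 2
                      for p, os in groups.items()) / n
    resolution = sum(len(os) * (sum(os) / len(os) - base) ** 2
                     for p, os in groups.items()) / n
    uncertainty = base * (1 - base)
    return reliability, resolution, uncertainty

# Hypothetical underconfident forecaster: the 0.8 forecasts always verify,
# the 0.2 forecasts never do.
probs =    [0.2, 0.2, 0.8, 0.8, 0.8, 0.8, 0.2, 0.8]
happened = [0,   0,   1,   1,   1,   1,   0,   1]
rel, res, unc = brier_decomposition(probs, happened)
```

Here discrimination is perfect (resolution equals uncertainty) while reliability is nonzero, the underconfidence pattern described in the abstract; recalibrating the probabilities toward 0 and 1 would drive reliability to zero.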
Positional Accuracy Assessment of Googleearth in Riyadh
NASA Astrophysics Data System (ADS)
Farah, Ashraf; Algarni, Dafer
2014-06-01
Google Earth is a virtual globe, map, and geographical information program operated by Google. It maps the Earth by superimposing images obtained from satellite imagery, aerial photography, and a GIS 3D globe. With millions of users all around the globe, Google Earth has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, are also using it for surveying applications, a practice that raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of Google Earth imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the Google Earth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates, respectively.
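Horizontal and height RMSE figures like those above are computed from the residuals between image-derived and surveyed coordinates at checkpoints. A minimal sketch with hypothetical residuals (not the Riyadh data):

```python
import math

def rmse(errors):
    # root-mean-square error of a list of residuals
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical residuals (metres) between image-derived and surveyed checkpoints
de = [1.2, -0.8, 2.1, -1.5, 0.6]   # east
dn = [0.9, -1.7, 1.1, 0.4, -2.0]   # north
dh = [1.0, -1.3, 0.7, 1.8, -0.5]   # height

# Horizontal RMSE combines the east and north components per checkpoint
horizontal = rmse([math.hypot(e, n) for e, n in zip(de, dn)])
vertical = rmse(dh)
```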
Piezoresistive position microsensors with ppm-accuracy
NASA Astrophysics Data System (ADS)
Stavrov, Vladimir; Shulev, Assen; Stavreva, Galina; Todorov, Vencislav
2015-05-01
In this article, the relation between position accuracy and the number of simultaneously measured values, such as coordinates, has been analyzed. Based on this, a conceptual layout of MEMS devices (microsensors) for multidimensional position monitoring comprising a single anchored and a single actuated part has been developed. Both parts are connected by a plurality of micromechanical flexures, and each flexure includes position-detecting cantilevers. Microsensors having detecting cantilevers oriented in the X and Y directions have been designed and prototyped. Experimental characterization results for 1D, 2D and 3D position microsensors are reported as well. Exploiting different flexure layouts, a travel range between 50 μm and 1.8 mm and sensor sensitivities between 30 μV/μm and 5 mV/μm at 1 V DC supply voltage have been demonstrated. A method for accurate calculation of all three Cartesian coordinates, based on measurement of at least three microsensor signals, has also been described. The analysis of the experimental results proves the capability of position monitoring with ppm (parts per million) accuracy. The technology for fabricating MEMS devices with sidewall-embedded piezoresistors removes restrictions on strongly improving their usability for high-accuracy position sensing. The present study is also part of a common strategy for developing a novel MEMS-based platform for simultaneous accurate measurement of various physical values when they are transduced to a change of position.
Ultrasonic flowmeters undergo accuracy, repeatability tests
Grimley, T.A.
1996-12-23
Two commercially available multipath ultrasonic flowmeters have undergone tests at Gas Research Institute's metering research facility (MRF) at Southwest Research Institute in San Antonio. The tests were conducted in baseline and disturbed-flow installations to assess baseline accuracy and repeatability over a range of flowrates and pressures. Results show the test meters are capable of accuracies within a 1% tolerance and repeatability of better than 0.25% when the flowrate is greater than about 5% of capacity. The data also indicate that pressure may have an effect on meter error. Results further suggest that both the magnitude and character of errors introduced by flow disturbances are a function of meter design. Shifts of up to 0.6% were measured for meters installed 10D from a tee (1D = 1 pipe diameter). Better characterization of the effects of flow disturbances on measurement accuracy is needed to define more accurately the upstream piping requirements necessary to achieve meter performance within a specified tolerance. The paper discusses reduced station costs, test methods, baseline tests, effect of pressure, speed of sound, and disturbance tests.
Accuracy of Reduced and Extended Thin-Wire Kernels
Burke, G J
2008-11-24
Some results are presented comparing the accuracy of the reduced thin-wire kernel with that of an extended kernel using exact integration of the 1/R term of the Green's function; results are shown for simple wire structures.
Numerical simulation of dusty plasmas
Winske, D.
1995-09-01
The numerical simulation of physical processes in dusty plasmas is reviewed, with emphasis on recent results and unresolved issues. Three areas of research are discussed: grain charging, weak dust-plasma interactions, and strong dust-plasma interactions. For each area, we review the basic concepts that are tested by simulations, present some appropriate examples, and examine numerical issues associated with extending present work.
Assessment of the Thematic Accuracy of Land Cover Maps
NASA Astrophysics Data System (ADS)
Höhle, J.
2015-08-01
Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
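The accuracy measures named above (overall accuracy, user's and producer's accuracy, and the kappa coefficient) are all derived from an error (confusion) matrix. A minimal sketch with a made-up three-class matrix, not the urban test-site data:

```python
import numpy as np

# Rows: mapped class, columns: reference class (hypothetical checkpoint counts)
cm = np.array([[48,  2,  0],
               [ 3, 40,  7],
               [ 1,  4, 45]])

total = cm.sum()
overall = np.trace(cm) / total                 # overall accuracy
users = np.diag(cm) / cm.sum(axis=1)           # per-class user's accuracy
producers = np.diag(cm) / cm.sum(axis=0)       # per-class producer's accuracy

# Cohen's kappa: agreement corrected for chance agreement p_e
pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
kappa = (overall - pe) / (1 - pe)
```

Confidence intervals of the kind reported in the abstract are then attached to these point estimates, e.g. via the normal approximation for a proportion of correctly classified checkpoints.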
Combined numerical techniques for calculation of light and temperature distribution
NASA Astrophysics Data System (ADS)
Scherbakov, Yury N.; Yakunin, Alexander N.; Yaroslavsky, Ilya V.; Tuchin, Valery V.
1994-06-01
The absence of satisfactory criteria for choosing discrete model parameters during computer modeling of the thermal processes of laser-biotissue interaction may be the primary limitation on the accuracy of the numerical results obtained. An approach realizing a new concept of direct automatic adaptive grid construction is suggested. The intelligent program provides high calculation accuracy and is simple in practical use, so that a physician can prescribe treatment without the assistance of a specialist in mathematical modeling. A real possibility of controlling the hyperthermia process exists: the volume of the hyperthermia region, its depth, and the temperature levels can be changed by means of the free-convection boundary conditions on the outer tissue surface and the power, radius, and shape of the laser beam.
Interpersonal Deception: V. Accuracy in Deception Detection.
ERIC Educational Resources Information Center
Burgoon, Judee K.; And Others
1994-01-01
Investigates the influence of several factors on accuracy in detecting truth and deceit. Found that accuracy was much higher on truth than deception, novices were more accurate than experts, accuracy depended on type of deception and whether suspicion was present or absent, suspicion impaired accuracy for experts, and questions strategy…
On the numerical computation of nonlinear force-free magnetic fields
NASA Technical Reports Server (NTRS)
Wu, S. T.; Chang, H. M.; Hagyard, M. J.
1985-01-01
An algorithm has been developed to extrapolate nonlinear force-free magnetic fields from a source surface, given the proper boundary conditions. The results of this work are presented, describing the mathematical formalism that was developed, the numerical techniques employed, and the stability criteria established for these numerical schemes. An analytical solution is used as a test case; the results show that the computational accuracy for the case of a nonlinear force-free magnetic field was on the order of a few percent (~5%).
Total Variation Diminishing (TVD) schemes of uniform accuracy
NASA Technical Reports Server (NTRS)
Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.
1988-01-01
Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
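The blending of first-order upwind differences with second-order corrections described above can be illustrated with a standard minmod-limited TVD scheme for linear advection. This is a generic textbook construction, not the authors' exact uniform-accuracy scheme; grid size, CFL number, and the square-pulse test are illustrative choices.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller of two slopes, zero where they disagree in sign."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_step(u, c):
    """One step of a second-order TVD upwind scheme for u_t + a u_x = 0,
    a > 0, CFL number c = a*dt/dx <= 1, periodic boundaries."""
    du_minus = u - np.roll(u, 1)        # backward differences
    du_plus = np.roll(u, -1) - u        # forward differences
    slope = minmod(du_minus, du_plus)   # limited slope in each cell
    # first-order upwind flux plus a limited second-order correction
    flux = u + 0.5 * (1.0 - c) * slope
    return u - c * (flux - np.roll(flux, 1))

# advect a square pulse exactly one period; TVD => no new extrema, no overshoot
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
u = u0.copy()
for _ in range(200):   # CFL 0.5, 200 steps = one full period on 100 cells
    u = tvd_step(u, 0.5)
```

With the limiter switched off (slope equal to the forward difference) the update reduces to Lax-Wendroff, which exhibits exactly the Gibbs oscillations the abstract's schemes are built to avoid.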
Accuracy enhancements for overset grids using a defect correction approach
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Pulliam, Thomas H.
1994-01-01
A defect-correction approach is investigated as a means of enhancing the accuracy of flow computations on overset grids. Typically, overset-grid techniques process and pass information only at grid boundaries. In the current approach, error corrections at all overlapped interior points are injected between grids by using a defect-correction scheme. In some cases this is found to enhance the overall accuracy of the overset-grid method. Locally refined overset grids can be used to provide an efficient solution-adaptation method. The defect correction can also be utilized as an error-correction technique for a coarse grid by evaluating the residual using a fine base grid, but solving the implicit equations only on the coarse grid. Numerical examples include an accuracy and dissipation study of an unsteady decaying vortex flow, the flow over a NACA 0012 airfoil, and the flow over a multi-element high-lift airfoil.
Accurate numerical simulation of short fiber optical parametric amplifiers.
Marhic, M E; Rieznik, A A; Kalogerakis, G; Braimiotis, C; Fragnito, H L; Kazovsky, L G
2008-03-17
We improve the accuracy of numerical simulations for short fiber optical parametric amplifiers (OPAs). Instead of using the usual coarse-step method, we adopt a model for birefringence and dispersion which uses fine-step variations of the parameters. We also improve the split-step Fourier method by exactly treating the nonlinear ellipse rotation terms. We find that results obtained this way for two-pump OPAs can be significantly different from those obtained by using the usual coarse-step fiber model, and/or neglecting ellipse rotation terms.
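The split-step Fourier method the authors refine can be sketched in its basic symmetric form for the scalar nonlinear Schrödinger equation. This is a deliberately simplified toy: it has no birefringence, no pump-signal coupling, and none of the fine-step parameter variation or exact ellipse-rotation treatment the paper introduces.

```python
import numpy as np

def split_step_nlse(u0, dist, nsteps, beta2, gamma, dt):
    """Symmetric split-step Fourier integration of the scalar NLSE
    (simplified single-polarisation model, periodic time window):
    half dispersion step in Fourier space, full nonlinear step in
    the time domain, half dispersion step again."""
    u = u0.astype(complex)
    w = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dt)      # angular frequencies
    h = dist / nsteps
    half_disp = np.exp(0.5j * (beta2 / 2.0) * w**2 * h)  # half-step phase
    for _ in range(nsteps):
        u = np.fft.ifft(half_disp * np.fft.fft(u))       # D/2
        u = u * np.exp(1j * gamma * np.abs(u)**2 * h)    # full nonlinear step
        u = np.fft.ifft(half_disp * np.fft.fft(u))       # D/2
    return u

# propagate a sech pulse; both sub-steps are unitary, so pulse energy is conserved
t = np.linspace(-20.0, 20.0, 256, endpoint=False)
u0 = 1.0 / np.cosh(t)
u = split_step_nlse(u0, dist=1.0, nsteps=100, beta2=-1.0, gamma=1.0, dt=t[1] - t[0])
```

Because the dispersion step is a pure phase in Fourier space and the nonlinear step a pure phase in the time domain, the total energy is an exact invariant of the scheme, which makes a convenient sanity check on any implementation.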
Numerical solution of a semilinear elliptic equation via difference scheme
NASA Astrophysics Data System (ADS)
Beigmohammadi, Elif Ozturk; Demirel, Esra
2016-08-01
We consider the Bitsadze-Samarskii type nonlocal boundary value problem -d^2 v(t)/dt^2 + B v(t) = h(t, v(t)), 0 < t < 1, …
Numerical studies of the stochastic Korteweg-de Vries equation
Lin Guang; Grinberg, Leopold; Karniadakis, George Em . E-mail: gk@dam.brown.edu
2006-04-10
We present numerical solutions of the stochastic Korteweg-de Vries equation for three cases corresponding to additive time-dependent noise, multiplicative space-dependent noise and a combination of the two. We employ polynomial chaos for discretization in random space, and discontinuous Galerkin and finite difference for discretization in physical space. The accuracy of the stochastic solutions is investigated by comparing the first two moments against analytical and Monte Carlo simulation results. Of particular interest is the interplay of spatial discretization error with the stochastic approximation error, which is examined for different orders of spatial and stochastic approximation.
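The moment-comparison idea in this abstract can be illustrated in one random dimension. The hedged sketch below is a toy, not the stochastic KdV solver: it expands f(ξ) = exp(ξ), ξ ~ N(0, 1), in probabilists' Hermite polynomial chaos and recovers the first two moments, whose analytic values are e^(1/2) and e^2 - e.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, exp

def pc_moments(f, order, nquad=40):
    """Mean and variance of f(xi), xi ~ N(0,1), from a probabilists'
    Hermite chaos expansion: c_n = E[f(xi) He_n(xi)] / n!, since the
    He_n are orthogonal with E[He_n^2] = n!. Coefficients are computed
    by Gauss-Hermite quadrature."""
    x, w = hermegauss(nquad)            # weight exp(-x^2/2); weights sum to sqrt(2*pi)
    w = w / np.sqrt(2.0 * np.pi)        # normalise to the standard normal density
    coeffs = []
    for n in range(order + 1):
        basis = np.zeros(n + 1); basis[n] = 1.0
        hen = hermeval(x, basis)        # He_n evaluated at the quadrature nodes
        coeffs.append(float(np.sum(w * f(x) * hen)) / factorial(n))
    mean = coeffs[0]
    var = sum(c**2 * factorial(n) for n, c in enumerate(coeffs) if n > 0)
    return mean, var

mean, var = pc_moments(np.exp, order=10)
```

The stochastic truncation error is visible here exactly as in the paper's setting: the chaos variance approaches e^2 - e only as the expansion order grows, while the quadrature plays the role of the spatial discretization error.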
Calculation of free-fall trajectories using numerical optimization methods.
NASA Technical Reports Server (NTRS)
Hull, D. G.; Fowler, W. T.; Gottlieb, R. G.
1972-01-01
An important problem in space flight is the calculation of trajectories for nonthrusting vehicles between fixed points in a given time. A new procedure based on Hamilton's principle for solving such two-point boundary-value problems is presented. It employs numerical optimization methods to perform the extremization required by Hamilton's principle. This procedure is applied to the calculation of an Earth-Moon trajectory. The results show that the initial guesses required to obtain an iteration procedure which converges are not critical and that convergence can be obtained to any predetermined degree of accuracy.
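A two-point boundary-value problem of the kind described, reaching a fixed point in a given time with no thrust, can be sketched with a shooting iteration. The toy below uses uniform gravity and a single unknown initial velocity rather than the paper's Hamilton's-principle extremization or Earth-Moon dynamics; the target and flight time are invented.

```python
import numpy as np

G = 9.81  # m/s^2, uniform gravity (illustrative stand-in for orbital dynamics)

def integrate(v0, tf, n=1000):
    """RK4 integration of vertical free flight y'' = -G, y(0) = 0, y'(0) = v0;
    returns the final altitude y(tf)."""
    def rhs(s):                          # state s = [y, v]
        return np.array([s[1], -G])
    s = np.array([0.0, v0])
    h = tf / n
    for _ in range(n):
        k1 = rhs(s); k2 = rhs(s + h / 2 * k1)
        k3 = rhs(s + h / 2 * k2); k4 = rhs(s + h * k3)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s[0]

def shoot(target, tf, v_lo=0.0, v_hi=200.0, tol=1e-6):
    """Secant iteration on the miss distance y(tf) - target."""
    f_lo, f_hi = integrate(v_lo, tf) - target, integrate(v_hi, tf) - target
    for _ in range(100):
        v_new = v_hi - f_hi * (v_hi - v_lo) / (f_hi - f_lo)
        f_new = integrate(v_new, tf) - target
        if abs(f_new) < tol:
            return v_new
        v_lo, f_lo, v_hi, f_hi = v_hi, f_hi, v_new, f_new
    return v_hi

v0 = shoot(target=100.0, tf=10.0)   # initial speed to reach 100 m in 10 s
```

As in the abstract, the starting guesses (`v_lo`, `v_hi`) are not critical here, and the tolerance sets the degree of accuracy of the converged solution.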
Projected discrete ordinates methods for numerical transport problems
Larsen, E.W.
1985-01-01
A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
Numerical computation of gravitational field for general axisymmetric objects
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2016-10-01
We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (i) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (ii) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (i) finite uniform objects covering rhombic spindles and circular toroids, (ii) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (iii) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
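Ridder's algorithm, used in step (ii) above to differentiate the numerically integrated potential, is central differencing with Neville-style extrapolation as the step size shrinks. The sketch below follows the widely published form of the method; the starting step, contraction factor, and test functions are illustrative choices, not the paper's.

```python
import numpy as np

def ridder_derivative(f, x, h=0.1, con=1.4, ntab=10):
    """Ridder's method: central differences at steps h, h/con, h/con^2, ...
    combined by polynomial extrapolation; returns the tableau entry with
    the smallest estimated error."""
    a = np.zeros((ntab, ntab))
    a[0, 0] = (f(x + h) - f(x - h)) / (2.0 * h)
    best, err = a[0, 0], np.inf
    for i in range(1, ntab):
        h /= con
        a[0, i] = (f(x + h) - f(x - h)) / (2.0 * h)
        fac = con**2
        for j in range(1, i + 1):
            # extrapolate to zero step size, one order at a time
            a[j, i] = (a[j - 1, i] * fac - a[j - 1, i - 1]) / (fac - 1.0)
            fac *= con**2
            e = max(abs(a[j, i] - a[j - 1, i]), abs(a[j, i] - a[j - 1, i - 1]))
            if e <= err:
                err, best = e, a[j, i]
        if abs(a[i, i] - a[i - 1, i - 1]) >= 2.0 * err:
            break   # higher order no longer improves the estimate
    return best

d = ridder_derivative(np.sin, 1.0)   # should be close to cos(1)
```

The error-tracking logic is what makes the method attractive for the paper's use case: it yields near machine-precision derivatives from a potential that is itself only available through quadrature.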
A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.
Ling, Hong; Luo, Ercang; Dai, Wei
2006-12-22
Thermoacoustic prime movers can generate pressure oscillations without any moving parts via the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in the paper. First, a four-port network method is used to build the transcendental equation of complex frequency as a criterion for judging whether the temperature distribution of the whole thermoacoustic system is correct for the case with given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. It is shown that the numerical simulation code runs robustly and outputs the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE). The numerical simulation agrees with the experimental results with acceptable accuracy. PMID:16996099
Representing Functions in n Dimensions to Arbitrary Accuracy
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
2007-01-01
A method of approximating a scalar function of n independent variables (where n is a positive integer) to arbitrary accuracy has been developed. This method is expected to be attractive for use in engineering computations in which it is necessary to link global models with local ones or in which it is necessary to interpolate noiseless tabular data that have been computed from analytic functions or numerical models in n-dimensional spaces of design parameters.
The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. ...
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Modeling Individual Differences in Response Time and Accuracy in Numeracy
Ratcliff, Roger; Thompson, Clarissa A.; McKoon, Gail
2015-01-01
In the study of numeracy, some hypotheses have been based on response time (RT) as a dependent variable and some on accuracy, and considerable controversy has arisen about the presence or absence of correlations between RT and accuracy, between RT or accuracy and individual differences like IQ and math ability, and between various numeracy tasks. In this article, we show that an integration of the two dependent variables is required, which we accomplish with a theory-based model of decision making. We report data from four tasks: numerosity discrimination, number discrimination, memory for two-digit numbers, and memory for three-digit numbers. Accuracy correlated across tasks, as did RTs. However, the negative correlations that might be expected between RT and accuracy were not obtained; if a subject was accurate, it did not mean that they were fast (and vice versa). When the diffusion decision-making model was applied to the data (Ratcliff, 1978), we found significant correlations across the tasks between the quality of the numeracy information (drift rate) driving the decision process and between the speed-accuracy criterion settings, suggesting that similar numeracy skills and similar speed-accuracy settings are involved in the four tasks. In the model, accuracy is related to drift rate and RT is related to speed-accuracy criteria, but drift rate and criteria are not related to each other across subjects. This provides a theoretical basis for understanding why negative correlations were not obtained between accuracy and RT. We also manipulated criteria by instructing subjects to maximize either speed or accuracy, but still found correlations between the criteria settings between and within tasks, suggesting that the settings may represent an individual trait that can be modulated but not equated across subjects. Our results demonstrate that a decision-making model may provide a way to reconcile inconsistent and sometimes contradictory results in numeracy
Determining gas-meter accuracy
Valenti, M.
1997-03-01
This article describes how engineers at the Metering Research Facility are helping natural-gas companies improve pipeline efficiency by evaluating and refining the instruments used for measuring and setting prices. Accurate metering of natural gas is more important than ever as deregulation subjects pipeline companies to competition. To help improve that accuracy, the Gas Research Institute (GRI) in Chicago has sponsored the Metering Research Facility (MRF) at the Southwest Research Institute (SWRI) in San Antonio, Tex. The MRF evaluates and improves the performance of orifice, turbine, diaphragm, and ultrasonic meters as well as the gas-sampling methods that pipeline companies use to measure the flow of gas and determine its price.
NASA Astrophysics Data System (ADS)
Longoni, Laura; Papini, Monica; Brambilla, Davide; Arosio, Diego; Zanzi, Luigi
2016-01-01
In recent decades numerical models have been developed and extensively used for landslide hazard and risk assessment. The reliability of the outcomes of these numerical simulations must be evaluated carefully as it mainly depends on the soundness of the physical model of the landslide that in turn often requires the integration of several surface and subsurface surveys in order to achieve a satisfactory spatial resolution. Merging diverse sources of data may be particularly complex for large landslides, because of intrinsic heterogeneity and possible great data uncertainty. In this paper, we assess the spatial scale and data accuracy required for effective numerical landslide modeling. We focus on two particular aspects: the model extent and the accuracy of input datasets. The Ronco landslide, a deep-seated gravitational slope deformation (DSGSD) located in the North of Italy, was used as a test-bed. Geological, geomorphological and geophysical data were combined and, as a result, eight models with different spatial scales and data accuracies were obtained. The models were used to run a back analysis of an event in 2002, during which part of the slope moved after intense rainfalls. The results point to the key role of a proper geomorphological zonation to properly set the model extent. The accuracy level of the input datasets should also be tuned. We suggest applying the approach presented here to other DSGSDs with different geological and geomorphological settings to test the reliability of our findings.
Ground Truth Accuracy Tests of GPS Seismology
NASA Astrophysics Data System (ADS)
Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.
2005-12-01
As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. Analysis of the time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions either in the horizontal plane or the vertical axis. A second stationary GPS antenna at a distance of several meters was simultaneously collecting high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.
Arizona Vegetation Resource Inventory (AVRI) accuracy assessment
Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.
1982-01-01
A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
Time-Space Decoupled Explicit Method for Fast Numerical Simulation of Tsunami Propagation
NASA Astrophysics Data System (ADS)
Guo, Anxin; Xiao, Shengchao; Li, Hui
2015-02-01
This study presents a novel explicit numerical scheme for simulating tsunami propagation using the exact solution of the wave equations. The objective of this study is to develop a fast and stable numerical scheme by decoupling the wave equation in both the time and space domains. First, the finite difference scheme of the shallow-water equations for tsunami simulation are briefly introduced. The time-space decoupled explicit method based on the exact solution of the wave equation is given for the simulation of tsunami propagation without including frequency dispersive effects. Then, to consider wave dispersion, the second-order accurate numerical scheme to solve the shallow-water equations, which mimics the physical frequency dispersion with numerical dispersion, is derived. Lastly, the computation efficiency and the accuracy of the two types of numerical schemes are investigated by the 2004 Indonesia tsunami and the solution of the Boussinesq equation for a tsunami with Gaussian hump over both uniform and varying water depths. The simulation results indicate that the proposed numerical scheme can achieve a fast and stable tsunami propagation simulation while maintaining computation accuracy.
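The shallow-water building block behind both schemes in this abstract can be shown in its simplest form. Below is a hedged sketch of a forward-backward step for the *linear* 1D shallow-water equations on a staggered grid with periodic boundaries; it is not the authors' time-space decoupled scheme, and the grid, depth, and Gaussian hump are invented test values.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def sw_step(eta, u, h0, dt, dx):
    """One forward-backward step of the linear 1D shallow-water equations
        u_t = -g * eta_x      (momentum)
        eta_t = -h0 * u_x     (continuity)
    eta at cell centres, u at the faces to their right, periodic domain."""
    u = u - G * dt / dx * (np.roll(eta, -1) - eta)    # momentum first (forward)
    eta = eta - h0 * dt / dx * (u - np.roll(u, 1))    # continuity with new u (backward)
    return eta, u

# Gaussian hump splitting into two counter-propagating gravity waves
n, dx, h0 = 100, 1000.0, 100.0
dt = 0.5 * dx / np.sqrt(G * h0)          # Courant number 0.5
eta = np.exp(-(((np.arange(n) - 50.0) / 5.0) ** 2))
u = np.zeros(n)
mass0 = float(eta.sum())
for _ in range(200):
    eta, u = sw_step(eta, u, h0, dt, dx)
```

The flux-difference form makes total water mass an exact discrete invariant, one of the stability and accuracy checks any tsunami propagation code of this type should pass.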
Numerical recipes for mold filling simulation
Kothe, D.; Juric, D.; Lam, K.; Lally, B.
1998-07-01
Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue for the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques, must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper, hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous, so is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.
Numerical evaluation of uniform beam modes.
Tang, Y.; Reactor Analysis and Engineering
2003-12-01
The equation for calculating the normal modes of a uniform beam under transverse free vibration involves the hyperbolic sine and cosine functions. These functions grow exponentially without bound. Tables for the natural frequencies and the corresponding normal modes are available for the numerical evaluation up to the 16th mode. For modes higher than the 16th, the accuracy of the numerical evaluation will be lost due to the round-off errors in the floating-point math imposed by digital computers. Also, it is found that the functions of beam modes commonly presented in structural dynamics books are not suitable for numerical evaluation. In this paper, these functions are rearranged and expressed in a different form. With these new equations, one can calculate the normal modes accurately up to at least the 100th mode. Mike's Arbitrary Precision Math, an arbitrary precision math library, is used in the paper to verify the accuracy.
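The rearrangement idea can be sketched for the clamped-free beam. The textbook mode shape cosh(βx) − cos(βx) − σ·(sinh(βx) − sin(βx)) overflows and suffers catastrophic cancellation for high modes because σ → 1 while cosh and sinh blow up. Rewriting in decaying exponentials, with 1 − σ computed analytically rather than by subtraction, stays accurate. This is our own illustrative rearrangement, not necessarily the exact form the paper derives.

```python
import numpy as np

def clamped_free_mode(beta_l, xi):
    """Mode shape of a clamped-free uniform beam, evaluated stably.
    beta_l: eigenvalue beta*L (~1.8751 for mode 1; ~(2n-1)*pi/2 for high n).
    xi: normalised position x/L in [0, 1].
    The naive cosh/sinh form fails for high modes; here everything is
    expressed via bounded exponentials."""
    b = beta_l
    z = b * np.asarray(xi, dtype=float)
    # 1 - sigma = coef * exp(-b), with sigma = (cosh b + cos b)/(sinh b + sin b),
    # derived by multiplying numerator and denominator by 2*exp(-b)
    coef = 2.0 * (np.sin(b) - np.cos(b) - np.exp(-b)) / \
           (1.0 - np.exp(-2.0 * b) + 2.0 * np.exp(-b) * np.sin(b))
    sigma = 1.0 - coef * np.exp(-b)
    grow = 0.5 * coef * np.exp(z - b)           # (1-sigma)*exp(z)/2, always bounded
    decay = 0.5 * (1.0 + sigma) * np.exp(-z)
    return grow + decay - np.cos(z) + sigma * np.sin(z)

tip1 = clamped_free_mode(1.8751, 1.0)                      # mode-1 tip deflection
tip100 = clamped_free_mode((2 * 100 - 1) * np.pi / 2, 1.0)  # 100th mode, asymptotic root
```

The tip value has magnitude 2 with this normalization for every mode, so a finite, correct value at the 100th mode (where the naive form returns garbage from exp(312)-sized intermediates) demonstrates the point of the paper.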
Direct Numerical Simulation of Automobile Cavity Tones
NASA Technical Reports Server (NTRS)
Kurbatskii, Konstantin; Tam, Christopher K. W.
2000-01-01
The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.
Efficient numerical evaluation of Feynman integrals
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran
2016-03-01
Feynman loop integrals are a key ingredient in the calculation of higher order radiation effects, and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated in less than half a minute with good accuracy, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
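The quasi-Monte Carlo ingredient can be illustrated without any of the sector-decomposition machinery: replace pseudo-random points with a low-discrepancy sequence and average the integrand. The sketch below uses a Halton sequence on the unit square and a toy integrand with a known integral; it is a minimal illustration, not the paper's GPU implementation.

```python
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base
    (radical-inverse construction)."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

def qmc_integrate(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over the unit square,
    using a 2-D Halton point set (bases 2 and 3)."""
    pts = np.column_stack([halton(n, 2), halton(n, 3)])
    return float(f(pts[:, 0], pts[:, 1]).mean())

# toy integrand: integral of x*y over [0,1]^2 is exactly 1/4
est = qmc_integrate(lambda x, y: x * y, 4096)
```

For smooth integrands the error of such sequences decays roughly like (log n)^d / n rather than the n^(-1/2) of plain Monte Carlo, which is the source of the speedup the paper exploits.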
[Ovarian tumours--accuracy of frozen section diagnosis].
Ivanov, S; Ivanov, S; Khadzhiolov, N
2005-01-01
A retrospective study of 450 ovarian biopsy results was performed for the period 1998 to 2004 to evaluate the accuracy of frozen section diagnosis. In addition, we reviewed the literature for all previous studies in this field in order to compare the accuracy rates reported by different clinics throughout the world. The histopathological results of the frozen section diagnosis agreed with the diagnosis from the paraffin blocks in 90% of cases. The sensitivity rates for benign, malignant, and borderline tumours were 96%, 84%, and 60%, respectively. There were 10 (2.1%) false-positive (overdiagnosed) and 26 (5.2%) false-negative (underdiagnosed) results in the frozen section examinations. Frozen section examination of mucinous tumours showed higher underdiagnosis, at 18%. The review of the literature showed no significant change over time in the accuracy rates of frozen section diagnosis for benign and malignant ovarian tumours. We found low accuracy rates for borderline tumours, similar to most of the foreign publications; however, the accuracy of frozen section diagnosis is improving over time. We conclude that the accuracy of frozen section diagnosis for evaluating malignant and benign tumours is sufficient for correct diagnosis. Since accuracy rates for borderline ovarian tumours are low, care and attention must be devoted to improvement in this field.
A benchmark study of numerical schemes for one-dimensional arterial blood flow modelling.
Boileau, Etienne; Nithiarasu, Perumal; Blanco, Pablo J; Müller, Lucas O; Fossan, Fredrik Eikeland; Hellevik, Leif Rune; Donders, Wouter P; Huberts, Wouter; Willemet, Marie; Alastruey, Jordi
2015-10-01
Haemodynamical simulations using one-dimensional (1D) computational models exhibit many of the features of the systemic circulation under normal and diseased conditions. Recent interest in verifying 1D numerical schemes has led to the development of alternative experimental setups and the use of three-dimensional numerical models to acquire data not easily measured in vivo. In most studies to date, only one particular 1D scheme is tested. In this paper, we present a systematic comparison of six commonly used numerical schemes for 1D blood flow modelling: discontinuous Galerkin, locally conservative Galerkin, Galerkin least-squares finite element method, finite volume method, finite difference MacCormack method and a simplified trapezium rule method. Comparisons are made in a series of six benchmark test cases with an increasing degree of complexity. The accuracy of the numerical schemes is assessed by comparison with theoretical results, three-dimensional numerical data in compatible domains with distensible walls or experimental data in a network of silicone tubes. Results show a good agreement among all numerical schemes and their ability to capture the main features of pressure, flow and area waveforms in large arteries. All the information used in this study, including the input data for all benchmark cases, experimental data where available and numerical solutions for each scheme, is made publicly available online, providing a comprehensive reference data set to support the development of 1D models and numerical schemes.
Pan, Xintian; Zhang, Luming
2016-01-01
In this article, we develop a high-order efficient numerical scheme to solve the initial-boundary problem of the MRLW equation. The method is based on a combination between the requirement to have a discrete counterpart of the conservation of the physical "energy" of the system and finite difference method. The scheme consists of a fourth-order compact finite difference approximation in space and a version of the leap-frog scheme in time. The unique solvability of numerical solutions is shown. A priori estimate and fourth-order convergence of the finite difference approximate solution are discussed by using discrete energy method and some techniques of matrix theory. Numerical results are given to show the validity and the accuracy of the proposed method. PMID:27217989
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
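Technique (i) is compact enough to sketch. The following is our simplified single-input illustration (straight-line model, with the Gaussian 1.96 standing in for the Student-t quantile, so coverage is only approximate at small n); it is not the paper's implementation.

```python
# Least-squares prediction interval for y = a + b*x at a new input x_new.
import math

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    return a, b, s2, mx, sxx

def prediction_interval(xs, ys, x_new):
    n = len(xs)
    a, b, s2, mx, sxx = fit_line(xs, ys)
    # Gaussian approximation of the t quantile; widens with distance from mean x.
    half = 1.96 * math.sqrt(s2 * (1.0 + 1.0 / n + (x_new - mx) ** 2 / sxx))
    centre = a + b * x_new
    return centre - half, centre + half

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]     # roughly y = 2x with small noise
lo, hi = prediction_interval(xs, ys, 2.5)
```

The growth of the interval away from the data's centre of mass is one of the extrapolation behaviours the paper's experiments compare across techniques.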
NASA Astrophysics Data System (ADS)
Magnoli, M. V.; Maiwald, M.
2014-03-01
Francis turbines have been running more and more frequently in part load conditions, in order to satisfy the new market requirements for more dynamic and flexible energy generation, ancillary services and grid regulation. The turbines should be able to be operated for longer durations with flows below the optimum point, going from part load to deep part load and even speed-no-load. These operating conditions are characterised by important unsteady flow phenomena taking place at the draft tube cone and in the runner channels, in the respective cases of part load and deep part load. The current expectations are that new Francis turbines present appropriate hydraulic stability and moderate pressure pulsations at overload, part load, deep part load and speed-no-load, with high efficiency levels in the normal operating range. This study presents a series of investigations performed by Voith Hydro with the objective of improving the hydraulic stability of Francis turbines at overload, part load and deep part load, reducing pressure pulsations and enlarging the know-how about the transient fluid flow through the turbine at these challenging conditions. Model test measurements showed that distinct runner designs were able to influence the pressure pulsation level in the machine. Extensive experimental investigations focused on the runner deflector geometry and on runner features, and on how they could reduce the pressure oscillation level. The impact of design variants and machine configurations on the vortex rope at the draft tube cone at overload and part load, and on the runner channel vortex at deep part load, was experimentally observed and evaluated based on the measured pressure pulsation amplitudes. Numerical investigations were employed to improve the understanding of such dynamic fluid flow effects. As an example of the design and experimental investigations, model test observations and pressure pulsation curves for Francis machines in the mid specific speed range, around nqopt = 50 min
Convergence and accuracy of kernel-based continuum surface tension models
Williams, M.W.; Kothe, D.B.; Puckett, E.G.
1998-12-01
Numerical models for flows of immiscible fluids bounded by topologically complex interfaces possessing surface tension inevitably start with an Eulerian formulation. Here the interface is represented as a color function that abruptly varies from one constant value to another through the interface. This transition region, where the color function varies, is a thin O(h) band along the interface where surface tension forces are applied in continuum surface tension models. Although these models have been widely used since the introduction of the popular CSF method [BKZ92], properties such as absolute accuracy and uniform convergence are often not exhibited in interfacial flow simulations. These properties are necessary if surface tension-driven flows are to be reliably modeled, especially in three dimensions. Accuracy and convergence remain elusive because of difficulties in estimating first and second order spatial derivatives of color functions with abrupt transition regions. These derivatives are needed to approximate interface topology such as the unit normal and mean curvature. Modeling challenges are also presented when formulating the actual surface tension force and its local variation using numerical delta functions. In the following they introduce and incorporate kernels and convolution theory into continuum surface tension models. Here they convolve the discontinuous color function into a mollified function that can support accurate first and second order spatial derivatives. Design requirements for the convolution kernel and a new hybrid mix of convolution and discretization are discussed. The resulting improved estimates for interface topology, numerical delta functions, and surface force distribution are evidenced in an equilibrium static drop simulation where numerically-induced artificial parasitic currents are greatly mitigated.
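The mollification step can be sketched in one dimension. The quartic bump kernel and parameters below are generic illustrative choices, not the paper's kernel design; the point is only that convolving the discontinuous color function yields a field whose first derivative is well behaved.

```python
# Convolve a step-like "color" function with a compact smooth kernel.
def kernel(r, eps):
    """Compactly supported smooth bump; zero for |r| >= eps."""
    if abs(r) >= eps:
        return 0.0
    s = 1.0 - (r / eps) ** 2
    return s * s

def mollify(color, h, eps):
    n = len(color)
    m = int(eps / h)
    out = []
    for i in range(n):
        wsum, csum = 0.0, 0.0
        for j in range(max(0, i - m), min(n, i + m + 1)):
            w = kernel((i - j) * h, eps)
            wsum += w
            csum += w * color[j]
        out.append(csum / wsum)         # normalised weighted average
    return out

h = 0.01
color = [0.0 if i < 50 else 1.0 for i in range(100)]   # sharp interface at x = 0.5
smooth = mollify(color, h, eps=4 * h)
# Central differences of the mollified field give a finite, localised gradient
# (a usable interface normal) instead of a single-cell spike.
grad = [(smooth[i + 1] - smooth[i - 1]) / (2 * h) for i in range(1, 99)]
```

In the paper's setting the same smoothed field also supports the second derivatives needed for curvature; this sketch stops at the gradient.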
NASA Astrophysics Data System (ADS)
Hsieh, Chi-Ti; Hsieh, Tung-Han; Chang, Shu-Wei
2016-03-01
The spatial discontinuity of physical parameters at an abrupt interface may increase numerical errors when solving partial differential equations. Rather than generating boundary-adapted meshes for objects with complicated geometry in the finite-element method, subpixel smoothing (SPS) replaces discontinuous parameters inside square elements that are bisected by interfaces in, for example, the finite-difference (FD) method, with homogeneous counterparts and matches physical boundary conditions therein. In this work, we apply the idea of SPS to the eight-band effective-mass Luttinger-Kohn (LK) and Burt-Foreman (BF) Hamiltonians of semiconductor nanostructures. Two smoothing approaches are proposed. One stems from elimination of the first-order perturbation in energy, and the other is an application of the Hellmann-Feynman (HF) theorem. We employ the FD method to numerically solve the eigenvalue problem corresponding to the multiband Schrödinger equation for circular quantum wires (QWRs). The eigen-energies and envelope (wave) functions for valence and conduction states in III-V circular QWRs are examined. We find that while the perturbation-theory procedure appears to be more accurate than the HF-theorem one, the errors of both schemes are considerably lower than those obtained without smoothing or with direct but unjustified averages of parameters. On the other hand, even in the presence of SPS, the numerical results for the LK Hamiltonian of nanostructures can still contain nonphysical spurious solutions with extremely localized states near heterostructure interfaces. The proper operator ordering embedded in the BF Hamiltonian mitigates this problem. The proposed approaches may improve numerical accuracy and reduce computational cost for the modeling of nanostructures in optoelectronic devices.
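As a loose scalar illustration of why boundary pixels need special treatment (the paper's schemes act on eight-band Hamiltonians, where the correct effective parameter follows from perturbation theory or the HF theorem, not from these simple means): a pixel cut by an interface can be assigned an arithmetic or a harmonic average weighted by the fill fraction, and the two choices differ noticeably.

```python
# A pixel straddling a material interface, with fill fraction f of material 1.
def arithmetic_mean(p1, p2, f):
    return f * p1 + (1.0 - f) * p2

def harmonic_mean(p1, p2, f):
    return 1.0 / (f / p1 + (1.0 - f) / p2)

# Illustrative parameter jump 0.8 -> 0.2, interface cutting 30% into the pixel.
eff_a = arithmetic_mean(0.8, 0.2, 0.3)    # 0.38
eff_h = harmonic_mean(0.8, 0.2, 0.3)      # ~0.258
```

Which average preserves the physical matching condition depends on the equation and field component; the abstract's point is that an unjustified choice degrades accuracy even when the grid is refined.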
A method for improving time-stepping numerics
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-04-01
In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
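The RA and RAW updates are simple enough to sketch. This toy implementation (ours, following the published form of the RAW filter but not any model's code) applies them around a leapfrog step on the oscillation equation dx/dt = i*w*x, whose exact solution keeps amplitude 1.

```python
# Leapfrog with Robert-Asselin (alpha = 1) or RAW (alpha ~ 0.53) filtering.
import cmath

def leapfrog_filtered(w, dt, steps, nu=0.2, alpha=1.0):
    """alpha = 1.0 gives the classical RA filter; alpha ~ 0.53 gives RAW."""
    x_prev = 1.0 + 0.0j                     # exact values at t = 0 and t = dt
    x_curr = cmath.exp(1j * w * dt)
    for _ in range(steps):
        x_next = x_prev + 2.0 * dt * 1j * w * x_curr      # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)   # filter displacement
        x_curr += alpha * d                 # both filters nudge the middle level
        x_next += (alpha - 1.0) * d         # RAW also nudges the newest level
        x_prev, x_curr = x_curr, x_next
    return abs(x_curr)

amp_ra = leapfrog_filtered(w=1.0, dt=0.1, steps=500, alpha=1.0)
amp_raw = leapfrog_filtered(w=1.0, dt=0.1, steps=500, alpha=0.53)
# amp_ra is visibly damped below 1; amp_raw stays much closer to 1.
```

The single extra line, the nudge to the newest time level, is exactly the "one line of code" modification the abstract describes.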
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Ren, Liqiang; Li, Zheng; Ghani, Muhammad U; Hao, Ting; Liu, Hong
2015-01-01
The modulation transfer function (MTF) of a radiographic system is often evaluated by measuring the system's edge spread function (ESF) using an edge device. However, the numerical differentiation procedure of the traditional slanted-edge method amplifies noise in the line spread function (LSF) and limits the accuracy of the MTF measurement at low frequencies. The purpose of this study is to improve the accuracy of low-frequency MTF measurement for digital x-ray imaging systems. An ESF deconvolution technique was developed for MTF measurement based on the degradation model of slanted-edge images. Specifically, symmetric oversampled ESFs were constructed by subtracting a shifted version of the ESF from the original one. For validation, the proposed MTF technique was compared with the conventional slanted-edge method through computer simulations as well as experiments on two digital radiography systems. The simulation results show that the average errors of the proposed ESF deconvolution technique were 0.11% ± 0.09% and 0.23% ± 0.14%, outperforming the conventional edge method (0.64% ± 0.57% and 1.04% ± 0.82%, respectively) at low frequencies. On the experimental edge images, the proposed technique achieved better uncertainty performance than the conventional method. Both computer simulation and experiments have thus demonstrated that the accuracy of MTF measurement at low frequencies can be improved by using the proposed ESF deconvolution technique. PMID:26410662
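A miniature of the conventional slanted-edge pipeline (the baseline the paper improves on, not the proposed ESF-deconvolution itself): difference the oversampled ESF to get the LSF, then normalise the magnitude of its DFT to obtain the MTF. The Gaussian edge here is synthetic and noise-free.

```python
# Conventional ESF -> LSF -> MTF pipeline on a synthetic Gaussian edge.
import math, cmath

def mtf_from_esf(esf):
    lsf = [esf[i + 1] - esf[i] for i in range(len(esf) - 1)]  # numerical derivative
    n = len(lsf)
    spectrum = [abs(sum(lsf[k] * cmath.exp(-2j * math.pi * f * k / n)
                        for k in range(n)))
                for f in range(n // 2)]
    return [s / spectrum[0] for s in spectrum]                # MTF(0) = 1

# Noise-free ESF: a step blurred by a Gaussian of sigma = 4 pixels.
esf = [0.5 * (1.0 + math.erf((i - 64) / (4.0 * math.sqrt(2.0))))
       for i in range(128)]
mtf = mtf_from_esf(esf)
```

With real, noisy edge images the differencing step amplifies noise in the LSF, which is precisely the low-frequency degradation the paper's ESF-deconvolution technique is designed to avoid.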
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Cochrane diagnostic test accuracy reviews.
Leeflang, Mariska M G; Deeks, Jonathan J; Takwoingi, Yemisi; Macaskill, Petra
2013-10-07
In 1996, shortly after the founding of The Cochrane Collaboration, leading figures in test evaluation research established a Methods Group to focus on the relatively new and rapidly evolving methods for the systematic review of studies of diagnostic tests. Seven years later, the Collaboration decided it was time to develop a publication format and methodology for Diagnostic Test Accuracy (DTA) reviews, as well as the software needed to implement these reviews in The Cochrane Library. A meeting hosted by the German Cochrane Centre in 2004 brought together key methodologists in the area, many of whom became closely involved in the subsequent development of the methodological framework for DTA reviews. DTA reviews first appeared in The Cochrane Library in 2008 and are now an integral part of the work of the Collaboration.
Accuracy of Pressure Sensitive Paint
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Guille, M.; Sullivan, J. P.
2001-01-01
Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insight into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and for the upper bounds of the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flow is given to illustrate uncertainty estimates in PSP measurements.
Quantitative analysis of numerical solvers for oscillatory biomolecular system models
Quo, Chang F; Wang, May D
2008-01-01
Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges from 10-15 to 1010, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e. in terms of function evaluation, partial derivative, LU decomposition, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines in selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with quantitative performance assessment metric, we show that it is possible
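A toy analogue of the stiffness issue behind the solver guidelines (illustrative only; the paper benchmarks MATLAB's ode45/ode113/ode15s on the Oregonator model): on a stiff linear ODE, an explicit update is unstable at a step size where the implicit (backward) Euler update remains well behaved.

```python
# Explicit vs implicit Euler on the stiff ODE y' = -1000*(y - cos(t)).
import math

LAM = -1000.0

def explicit_euler(y0, dt, steps):
    y, t = y0, 0.0
    for _ in range(steps):
        y = y + dt * LAM * (y - math.cos(t))
        t += dt
    return y

def implicit_euler(y0, dt, steps):
    y, t = y0, 0.0
    for _ in range(steps):
        t += dt
        # Solve y_new = y + dt*LAM*(y_new - cos(t)) for y_new in closed form.
        y = (y - dt * LAM * math.cos(t)) / (1.0 - dt * LAM)
    return y

dt = 0.01          # amplification factor |1 + dt*LAM| = 9 > 1 for explicit Euler
bad = explicit_euler(1.0, dt, 200)      # diverges
good = implicit_euler(1.0, dt, 200)     # tracks the slow solution ~ cos(t)
```

This is why, as the abstract reports, different solvers can return drastically different answers on the same stiff biomolecular model: an inappropriate explicit method either diverges or is forced to take unreasonably small steps.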
Modeling versus accuracy in EEG and MEG data
Mosher, J.C.; Huang, M.; Leahy, R.M.; Spencer, M.E.
1997-07-30
The widespread availability of high-resolution anatomical information has placed a greater emphasis on accurate electroencephalography and magnetoencephalography (collectively, E/MEG) modeling. A more accurate representation of the cortex, inner skull surface, outer skull surface, and scalp should lead to a more accurate forward model and hence improve inverse modeling efforts. The authors examine a few topics in this paper that highlight some of the problems of forward modeling, then discuss the impacts these results have on the inverse problem. The authors begin by assuming a perfect head model, that of the sphere, then show the lower bounds on localization accuracy of dipoles within this perfect forward model. For more realistic anatomy, the boundary element method (BEM) is a common numerical technique for solving the boundary integral equations. For a three-layer BEM, the computational requirements can be too intensive for many inverse techniques, so they examine a few simplifications. They quantify errors in generating this forward model by defining a regularized percentage error metric. The authors then apply this metric to a single layer boundary element solution, a multiple sphere approach, and the common single sphere model. They conclude with an MEG localization demonstration on a novel experimental human phantom, using both BEM and multiple spheres.
Modal wavefront reconstruction over general shaped aperture by numerical orthogonal polynomials
NASA Astrophysics Data System (ADS)
Ye, Jingfei; Li, Xinhua; Gao, Zhishan; Wang, Shuai; Sun, Wenqing; Wang, Wei; Yuan, Qun
2015-03-01
In practical optical measurements, wavefront data are recorded by pixelated imaging sensors. Closed-form analytical base polynomials lose their orthogonality over such discrete wavefront data. For a wavefront with an irregularly shaped aperture, the corresponding analytical base polynomials must be laboriously derived. The use of numerical orthogonal polynomials for reconstructing a wavefront with a generally shaped aperture over the discrete data points is presented. Numerical polynomials are orthogonal over the discrete data points regardless of the boundary shape of the aperture. The performance of numerical orthogonal polynomials is confirmed by theoretical analysis and experiments. The results demonstrate the adaptability, validity, and accuracy of numerical orthogonal polynomials for estimating the wavefront over a generally shaped aperture, from regular to irregular boundaries.
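The core idea can be sketched in one dimension (a handful of irregular sample points standing in for the 2D aperture pixels; the method itself works over 2D apertures): orthogonalise the monomial basis over the actual sample points, here via modified Gram-Schmidt, so that fitted modal coefficients stay decoupled whatever the aperture shape.

```python
# Build polynomials orthonormal over an arbitrary discrete point set.
import math

def numerical_orthogonal_basis(xs, degree):
    basis = []
    for d in range(degree + 1):
        v = [x ** d for x in xs]                     # start from a monomial
        for b in basis:                              # remove projections (modified GS)
            coef = sum(vi * bi for vi, bi in zip(v, b)) / len(xs)
            v = [vi - coef * bi for vi, bi in zip(v, b)]
        norm = math.sqrt(sum(vi * vi for vi in v) / len(xs))
        basis.append([vi / norm for vi in v])
    return basis

# Irregularly spaced "aperture" samples:
xs = [0.0, 0.13, 0.2, 0.41, 0.55, 0.58, 0.77, 0.9, 1.0]
basis = numerical_orthogonal_basis(xs, 3)
# Any two distinct basis vectors are orthogonal over exactly these points,
# which an analytical basis (e.g. Legendre) would not be.
```

Modal fitting then reduces to independent inner products of the measured wavefront with each basis vector, with no cross-talk between coefficients.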
Adaptive numerical competency in a food-hoarding songbird.
Hunt, Simon; Low, Jason; Burns, K C
2008-10-22
Most animals can distinguish between small quantities (less than four) innately. Many animals can also distinguish between larger quantities after extensive training. However, the adaptive significance of numerical discriminations in wild animals is almost completely unknown. We conducted a series of experiments to test whether a food-hoarding songbird, the New Zealand robin Petroica australis, uses numerical judgements when retrieving and pilfering cached food. Different numbers of mealworms were presented sequentially to wild birds in a pair of artificial cache sites, which were then obscured from view. Robins frequently chose the site containing more prey, and the accuracy of their number discriminations declined linearly with the total number of prey concealed, remaining above chance in trials containing up to 12 prey items. A series of complementary experiments showed that these results could not be explained by time, volume, orientation, order or sensory confounds. Lastly, a violation of expectancy experiment, in which birds were allowed to retrieve a fraction of the prey they were originally offered, showed that birds searched for longer when they expected to retrieve more prey. Overall results indicate that New Zealand robins use a sophisticated numerical sense to retrieve and pilfer stored food, thus providing a critical link in understanding the evolution of numerical competency.
[History, accuracy and precision of SMBG devices].
Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V
2003-04-01
Self-monitoring of blood glucose began only about fifty years ago. Until then, metabolic control was evaluated by means of qualitative urinary glucose measurements, often of poor reliability. Reagent strips were the first semi-quantitative tests for monitoring blood glucose, and in the late seventies meters were launched on the market. Initially such devices were intended for medical staff, but improvements in ease of use made them increasingly suitable for patients, and they are now a necessary tool for self-monitoring of blood glucose. Advances in technology enabled photometric measurements and, more recently, electrochemical ones. In the nineties, improvements were made mainly in the miniaturisation of meters, the reduction of reaction and reading times, and the simplification of blood sampling and capillary blood application. Although accuracy and precision were central concerns from the beginning of self-monitoring of blood glucose, recommendations from diabetology societies appeared only in the late eighties. Now the French drug agency, AFSSAPS, requires that meters be evaluated before any launch on the market. According to recent publications, very few meters meet the reliability criteria set up by diabetology societies in the late nineties. Finally, because devices in hospitals may be handled by numerous persons, the possible role of meters as a source of nosocomial infections has recently been questioned and is the subject of very strict guidelines published by AFSSAPS.
Numerical simulations with a First order BSSN formulation of Einstein's field equations
NASA Astrophysics Data System (ADS)
Brown, David; Diener, Peter; Field, Scott; Hesthaven, Jan; Herrmann, Frank; Mroue, Abdul; Sarbach, Olivier; Schnetter, Erik; Tiglio, Manuel; Wagman, Michael
2012-03-01
We present a new fully first order strongly hyperbolic representation of the BSSN formulation of Einstein's equations with optional constraint damping terms. In particular, we describe the characteristic fields of the system, discuss its hyperbolicity properties, and present two numerical implementations and simulations: one using finite differences, adaptive mesh refinement and in particular binary black holes, and another one using the discontinuous Galerkin method in spherical symmetry. These results constitute a first step in an effort to combine the robustness of BSSN evolutions with very high accuracy numerical techniques, such as spectral collocation multi-domain or discontinuous Galerkin methods.
Numerical simulations with a first-order BSSN formulation of Einstein's field equations
NASA Astrophysics Data System (ADS)
Brown, J. David; Diener, Peter; Field, Scott E.; Hesthaven, Jan S.; Herrmann, Frank; Mroué, Abdul H.; Sarbach, Olivier; Schnetter, Erik; Tiglio, Manuel; Wagman, Michael
2012-04-01
We present a new fully first-order strongly hyperbolic representation of the Baumgarte-Shapiro-Shibata-Nakamura formulation of Einstein’s equations with optional constraint damping terms. We describe the characteristic fields of the system, discuss its hyperbolicity properties, and present two numerical implementations and simulations: one using finite differences, adaptive mesh refinement, and, in particular, binary black holes, and another one using the discontinuous Galerkin method in spherical symmetry. The results of this paper constitute a first step in an effort to combine the robustness of Baumgarte-Shapiro-Shibata-Nakamura evolutions with very high accuracy numerical techniques, such as spectral collocation multidomain or discontinuous Galerkin methods.
NASA Technical Reports Server (NTRS)
Han, S. M.; Wu, S. T.; Nakagawa, Y.
1982-01-01
Radial propagation of one-dimensional magnetohydrodynamic (MHD) waves is analyzed numerically on the basis of the Implicit-Continuous-Fluid-Eulerian (ICE) scheme. The accuracy of the numerical method and other properties are tested through the study of MHD wave propagation. The three different modes of MHD waves (i.e., the fast, slow and Alfven (transverse) modes) are generated by applying physically consistent boundary perturbations derived from MHD compatibility relations. It is shown that the resulting flow following these waves depends upon the relative configuration of the initial magnetic field and boundary perturbations.