Science.gov

Sample records for accuracy numerical results

  1. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced at an acceptable FLOP price. Within this context, the validation of numerical processes with respect to the well-known topics of consistency, stability, and convergence under mesh refinement must be done by numerical experimentation, because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.

  2. Numerical accuracy of mean-field calculations in coordinate space

    NASA Astrophysics Data System (ADS)

    Ryssens, W.; Heenen, P.-H.; Bender, M.

    2015-12-01

    Background: Mean-field methods based on an energy density functional (EDF) are powerful tools used to describe many properties of nuclei across the entire nuclear chart. The accuracy required of energies for nuclear physics and astrophysics applications is of the order of 500 keV, and much effort is undertaken to build EDFs that meet this requirement. Purpose: Mean-field calculations have to be accurate enough to preserve the accuracy of the EDF. We study this numerical accuracy in detail for a specific numerical choice of representation for the mean-field equations that can accommodate any kind of symmetry breaking. Method: The method that we use is a particular implementation of three-dimensional mesh calculations. Its numerical accuracy is governed by three main factors: the size of the box in which the nucleus is confined, the way numerical derivatives are calculated, and the distance between the points on the mesh. Results: We examine the dependence of the results on these three factors for spherical doubly magic nuclei, neutron-rich 34Ne, the fission barrier of 240Pu, and isotopic chains around Z = 50. Conclusions: Mesh calculations offer the user extensive control over the numerical accuracy of the solution scheme. When appropriate choices for the numerical scheme are made, the achievable accuracy is well below the model uncertainties of mean-field methods.
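
The abstract above attributes mesh-calculation accuracy to the box size, the derivative scheme, and the mesh spacing. As a generic illustration of the third factor (a toy sketch, not the authors' solver), the following shows how the error of a second-order central-difference derivative scales with the spacing h: halving h cuts the error by roughly a factor of four.

```python
import math

def central_diff(f, x, h):
    # Second-order central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2.0 * h)

def max_error(h):
    # Maximum derivative error for f(x) = exp(-x^2) on a fixed set of sample points.
    f = lambda x: math.exp(-x * x)
    fprime = lambda x: -2.0 * x * math.exp(-x * x)
    xs = [0.1 * i for i in range(-20, 21)]
    return max(abs(central_diff(f, x, h) - fprime(x)) for x in xs)

# For a second-order scheme the error scales like h^2,
# so halving the spacing should reduce it by ~4x.
print(max_error(0.1) / max_error(0.05))  # ~4
```

The same convergence-order reasoning is what lets mesh codes trade point spacing against accuracy in a controlled way.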

  3. Quantifying Numerical Model Accuracy and Variability

    NASA Astrophysics Data System (ADS)

    Montoya, L. H.; Lynett, P. J.

    2015-12-01

    The 2011 Tohoku tsunami event has changed the logic of how tsunami hazard to coastal communities is evaluated. Numerical models are a key component of the methodologies used to estimate tsunami risk. Model predictions are essential for the development of Tsunami Hazard Assessments (THA). By better understanding model bias and uncertainties, and if possible minimizing them, a more accurate and reliable THA will result. In this study we compare runup height, inundation lines and flow velocity field measurements between GeoClaw and the Method Of Splitting Tsunami (MOST) predictions in the Sendai plain. Runup elevation and average inundation distance were in general overpredicted by the models. However, both models agree relatively well with each other when predicting maximum sea surface elevation and maximum flow velocities. Furthermore, to explore the variability and uncertainties in numerical models, MOST is used to compare predictions from 4 different grid resolutions (30 m, 20 m, 15 m and 12 m). Our work shows that predictions of particular products (runup and inundation lines) do not require the use of high-resolution (finer than 30 m) Digital Elevation Maps (DEMs). When predicting runup heights and inundation lines, numerical convergence was achieved using the 30 m resolution grid. In contrast, poor convergence was found in the flow velocity predictions, particularly the 1 m depth maximum flow velocities. Also, runup height measurements and elevations from the DEM were used to estimate model bias. The results provided in this presentation will help understand the uncertainties in model predictions and locate possible sources of errors within a model.

  4. Accuracy improvement in digital holographic microtomography by multiple numerical reconstructions

    NASA Astrophysics Data System (ADS)

    Ma, Xichao; Xiao, Wen; Pan, Feng

    2016-11-01

    In this paper, we describe a method to improve the accuracy in digital holographic microtomography (DHMT) for measurement of thick samples. Two key factors impairing the accuracy, the deficiency of depth of focus and the rotational error, are considered and addressed simultaneously. The hologram is propagated to a series of distances by multiple numerical reconstructions so as to extend the depth of focus. The correction of the rotational error, implemented by numerical refocusing and image realigning, is merged into the computational process. The method is validated by tomographic results of a four-core optical fiber and a large mode optical crystal fiber. A sample as thick as 258 μm is accurately reconstructed and the quantitative three-dimensional distribution of refractive index is demonstrated.

  5. Results from Numerical General Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2011-01-01

    For several years numerical simulations have been revealing the details of general relativity's predictions for the dynamical interactions of merging black holes. I will review what has been learned of the rich phenomenology of these mergers and the resulting gravitational wave signatures. These wave forms provide a potentially observable record of the powerful astronomical events, a central target of gravitational wave astronomy. Asymmetric radiation can produce a thrust on the system which may accelerate the single black hole resulting from the merger to high relative velocity.

  6. Numerical experiments on the accuracy of ENO and modified ENO schemes

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1990-01-01

    Further numerical experiments are made assessing an accuracy degeneracy phenomenon. A modified essentially non-oscillatory (ENO) scheme is proposed, which recovers the correct order of accuracy for all the test problems with smooth initial conditions and gives comparable results with the original ENO schemes for discontinuous problems.

  7. Accuracy of results with NASTRAN modal synthesis

    NASA Technical Reports Server (NTRS)

    Herting, D. N.

    1978-01-01

    A new method for component mode synthesis was developed for installation in NASTRAN level 17.5. Results obtained from the new method are presented, and these results are compared with existing modal synthesis methods.

  8. Establishing precision and accuracy in PDV results

    SciTech Connect

    Briggs, Matthew E.; Howard, Marylesa; Diaz, Abel

    2016-04-19

    We need to know uncertainties and systematic errors because we create and compare against archival weapons data, constrain the models, and provide scientific results. Good estimates of precision from the data record are available and should be incorporated into existing results; reanalysis of valuable data is suggested. Estimates of systematic errors are largely absent. The original work by Jensen et al. using gun shots for window corrections, and the integrated velocity comparison with X-rays by Schultz, are two examples where any systematic errors appear to be at the <1% level.

  9. Numerical Considerations for Lagrangian Stochastic Dispersion Models: Eliminating Rogue Trajectories, and the Importance of Numerical Accuracy

    NASA Astrophysics Data System (ADS)

    Bailey, Brian N.

    2017-01-01

    When Lagrangian stochastic models for turbulent dispersion are applied to complex atmospheric flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behaviour in the numerical solution. Here we discuss numerical strategies for solving the non-linear Langevin-based particle velocity evolution equation that eliminate such unphysical behaviour in both Reynolds-averaged and large-eddy simulation applications. Extremely large or 'rogue' particle velocities are caused when the numerical integration scheme becomes unstable. Such instabilities can be eliminated by using a sufficiently small integration timestep, or in cases where the required timestep is unrealistically small, an unconditionally stable implicit integration scheme can be used. When the generalized anisotropic turbulence model is used, it is critical that the input velocity covariance tensor be realizable, otherwise unphysical behaviour can become problematic regardless of the integration scheme or size of the timestep. A method is presented to ensure realizability, and thus eliminate such behaviour. It was also found that the numerical accuracy of the integration scheme determined the degree to which the second law of thermodynamics or 'well-mixed condition' was satisfied. Perhaps more importantly, it also determined the degree to which modelled Eulerian particle velocity statistics matched the specified Eulerian distributions (which is the ultimate goal of the numerical solution). It is recommended that future models be verified by not only checking the well-mixed condition, but perhaps more importantly by checking that computed Eulerian statistics match the Eulerian statistics specified as inputs.
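
The stability argument above can be made concrete on the simplest Langevin-type equation, the Ornstein-Uhlenbeck process dU = -(U/tau) dt + sigma dW. This is a toy sketch of the idea, not the paper's generalized anisotropic model: forward Euler produces 'rogue' velocities once the timestep exceeds the 2*tau stability limit, while an implicit (backward Euler) update stays bounded for any timestep.

```python
import random

def simulate(dt, tau=1.0, sigma=1.0, steps=200, implicit=False, seed=1):
    # Integrate dU = -(U/tau) dt + sigma dW with a fixed timestep.
    rng = random.Random(seed)
    u = 0.0
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)
        if implicit:
            # Backward Euler: unconditionally stable for any dt.
            u = (u + sigma * dw) / (1.0 + dt / tau)
        else:
            # Forward Euler: amplification factor (1 - dt/tau) blows up when dt > 2*tau.
            u = u * (1.0 - dt / tau) + sigma * dw
    return u

dt = 3.0  # deliberately larger than the 2*tau stability limit
print(abs(simulate(dt, implicit=False)))  # explodes: a "rogue" velocity
print(abs(simulate(dt, implicit=True)))   # stays bounded
```

The alternative fix mentioned in the abstract, shrinking dt below the stability limit, also works; the implicit update is useful precisely when that limit would force an impractically small timestep.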

  10. Assessment of accuracy of CFD simulations through quantification of a numerical dissipation rate

    NASA Astrophysics Data System (ADS)

    Domaradzki, J. A.; Sun, G.; Xiang, X.; Chen, K. K.

    2016-11-01

    The accuracy of CFD simulations is typically assessed through a time-consuming process of multiple runs and comparisons with available benchmark data. We propose that the accuracy can be assessed in the course of actual runs using a simpler method based on a numerical dissipation rate, which is computed at each time step for arbitrary sub-domains using only information provided by the code in question (Schranner et al., 2015; Castiglioni and Domaradzki, 2015). Here, the method has been applied to analyze numerical simulation results obtained using OpenFOAM software for a flow around a sphere at a Reynolds number of 1000. Different mesh resolutions were used in the simulations. For the coarsest mesh, the ratio of the numerical dissipation to the viscous dissipation downstream of the sphere varies from 4.5% immediately behind the sphere to 22% further away. For the finest mesh, this ratio varies from 0.4% behind the sphere to 6% further away. The large numerical dissipation in the former case is a direct indicator that the simulation results are inaccurate, e.g., the predicted Strouhal number is 16% lower than the benchmark. Low numerical dissipation in the latter case is an indicator of acceptable accuracy, with the Strouhal number in the simulations matching the benchmark. Supported by NSF.
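
The idea of diagnosing accuracy from a numerical dissipation rate can be illustrated on a 1D toy problem (an illustration of the concept only, not the cited method): exact linear advection conserves the discrete "energy" sum(u^2), so any decay of that sum measures the scheme's numerical dissipation. A first-order upwind scheme loses energy rapidly; second-order Lax-Wendroff loses far less.

```python
import math

def advect(scheme_order, nx=100, steps=200, cfl=0.5):
    # Advect a sine wave (speed c = 1, periodic BCs); return sum(u^2) before/after.
    # Exact advection conserves sum(u^2); any decay is numerical dissipation.
    dx = 1.0 / nx
    u = [math.sin(2 * math.pi * i * dx) for i in range(nx)]
    e0 = sum(v * v for v in u)
    for _ in range(steps):
        if scheme_order == 1:
            # First-order upwind: strongly dissipative.
            u = [u[i] - cfl * (u[i] - u[i - 1]) for i in range(nx)]
        else:
            # Lax-Wendroff (second order): much less dissipative.
            u = [u[i] - 0.5 * cfl * (u[(i + 1) % nx] - u[i - 1])
                 + 0.5 * cfl * cfl * (u[(i + 1) % nx] - 2 * u[i] + u[i - 1])
                 for i in range(nx)]
    return e0, sum(v * v for v in u)

e0, e1 = advect(1)
print(1.0 - e1 / e0)   # large fractional energy loss: high numerical dissipation
e0, e2 = advect(2)
print(1.0 - e2 / e0)   # far smaller loss: low numerical dissipation
```

As in the abstract, a large measured dissipation flags an under-resolved, inaccurate computation without needing a benchmark comparison.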

  11. Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing

    SciTech Connect

    Bailey, David

    2005-01-25

    In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard! If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digit accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the
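
The flavor of high-accuracy numerical computing the book celebrates can be sampled with Python's standard decimal module (a minimal sketch, unrelated to the ten challenge problems themselves): Machin's formula, evaluated with guard digits, yields pi to 50 correct digits.

```python
from decimal import Decimal, getcontext

def arctan_inv(n, digits):
    # arctan(1/n) via its Taylor series, carried out in Decimal arithmetic.
    eps = Decimal(10) ** -(digits + 5)
    n2 = Decimal(n) * Decimal(n)
    term = Decimal(1) / Decimal(n)   # x^(2k+1) with x = 1/n
    total = Decimal(0)
    k = 0
    while term > eps:
        total += term / (2 * k + 1) if k % 2 == 0 else -term / (2 * k + 1)
        term /= n2
        k += 1
    return total

def pi_machin(digits=50):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    getcontext().prec = digits + 10  # working precision with guard digits
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return +pi  # unary plus rounds to the current context precision

print(pi_machin())  # 3.14159265358979323846...
```

Error bounding of this kind (guard digits plus a truncation tolerance) is a simple cousin of the rigorous error bounds the book derives for its solutions.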

  12. Assessing Accuracy of Waveform Models against Numerical Relativity Waveforms

    NASA Astrophysics Data System (ADS)

    Pürrer, Michael; LVC Collaboration

    2016-03-01

    We compare currently available phenomenological and effective-one-body inspiral-merger-ringdown models for gravitational waves (GW) emitted from coalescing black hole binaries against a set of numerical relativity waveforms from the SXS collaboration. Simplifications are used in the construction of some waveform models, such as restriction to spins aligned with the orbital angular momentum, no inclusion of higher harmonics in the GW radiation, no modeling of eccentricity and the use of effective parameters to describe spin precession. In contrast, NR waveforms provide us with a high-fidelity representation of the "true" waveform modulo small numerical errors. To focus on systematics we inject NR waveforms into zero noise for early advanced LIGO detector sensitivity at a moderately optimistic signal-to-noise ratio. We discuss where in the parameter space the above modeling assumptions lead to noticeable biases in recovered parameters.

  13. Accuracy of Numerical Simulations of Tip Clearance Flow in Transonic Compressor Rotors Improved Dramatically

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because, in high-speed machines, measurements acquired in sufficient detail to be useful are rare. Researchers at the NASA Glenn Research Center at Lewis Field compared measured tip clearance flow details (e.g., trajectory and radial extent) of the NASA Rotor 35 with results obtained from a numerical simulation. Previous investigations had focused on capturing the detailed development of the jetlike flow leaking through the clearance gap between the rotating blade tip and the stationary compressor shroud. However, we discovered that the simulation accuracy depends primarily on capturing the detailed development of a wall-bounded shear layer formed by the relative motion between the leakage jet and the shroud.

  14. Numerical taxonomy on data: Experimental results

    SciTech Connect

    Cohen, J.; Farach, M.

    1997-12-01

    The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. The first positive result for numerical taxonomy showed that if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T [L∞(T − D)], then it is possible to construct a tree T such that L∞(T − D) ≤ 3e; that is, a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.

  15. "Certified" Laboratory Practitioners and the Accuracy of Laboratory Test Results.

    ERIC Educational Resources Information Center

    Boe, Gerard P.; Fidler, James R.

    1988-01-01

    An attempt to replicate a study of the accuracy of test results of medical laboratories was unsuccessful. Limitations of the obtained data prevented the research from having satisfactory internal validity, so no formal report was published. External validity of the study was also limited because the systematic random sample of 78 licensed…

  16. Sheet Hydroforming Process Numerical Model Improvement Through Experimental Results Analysis

    NASA Astrophysics Data System (ADS)

    Gabriele, Papadia; Antonio, Del Prete; Alfredo, Anglani

    2010-06-01

    The increasing application of numerical simulation in the metal forming field has helped engineers solve one problem after another and manufacture qualified formed products in less time [1]. Accurate simulation results are fundamental for tooling and product design. The wide application of numerical simulation is encouraging the development of highly accurate simulation procedures to meet industrial requirements. Many factors can influence the final simulation results, and many studies have been carried out on materials [2], yield criteria [3], plastic deformation [4,5], and process parameters [6] and their optimization. In order to develop a reliable hydromechanical deep drawing (HDD) numerical model, the authors carried out specific activities based on the evaluation of the effective stiffness of the blankholder structure [7]. In this paper, after an appropriate tuning phase of the blankholder force distribution, experimental activity has been taken into account to improve the accuracy of the numerical model. In the first phase, the effective capability of the blankholder structure to transfer the load applied by the hydraulic actuators to the blank was explored. This phase ended with the definition of an appropriate subdivision of the blankholder active surface in order to take into account the effective pressure map obtained for the given load configuration. In the second phase, the numerical results obtained with the developed subdivision were compared with the experimental data of the studied model. The numerical model was then improved, finding the best solution for the blankholder force distribution.

  17. Numerical simulations of catastrophic disruption: Recent results

    NASA Technical Reports Server (NTRS)

    Benz, W.; Asphaug, E.; Ryan, E. V.

    1994-01-01

    Numerical simulations have been used to study high velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.

  18. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties and is calculated in the same way as the previous method. Generally, a small number of arithmetic operations, resulting in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  19. Numerical experiments to investigate the accuracy of broad-band moment magnitude, Mwp

    NASA Astrophysics Data System (ADS)

    Hara, Tatsuhiko; Nishimura, Naoki

    2011-12-01

    We perform numerical experiments to investigate the accuracy of broad-band moment magnitude, Mwp. We conduct these experiments by measuring Mwp from synthetic seismograms and comparing the resulting values to the moment magnitudes used in the calculation of synthetic seismograms. In the numerical experiments using point sources, we have found that there is a significant dependence of Mwp on focal mechanisms, and that depth phases have a large impact on Mwp estimates, especially for large shallow earthquakes. Numerical experiments using line sources suggest that the effects of source finiteness and rupture propagation on Mwp estimates are on the order of 0.2 magnitude units for vertical fault planes with pure dip-slip mechanisms and 45° dipping fault planes with pure dip-slip (thrust) mechanisms, but that the dependence is small for strike-slip events on a vertical fault plane. Numerical experiments for huge thrust faulting earthquakes on a fault plane with a shallow dip angle suggest that the Mwp estimates do not saturate in the moment magnitude range between 8 and 9, although they are underestimates. Our results are consistent with previous studies that compared Mwp estimates to moment magnitudes calculated from seismic moment tensors obtained by analyses of observed data.

  20. Numerical considerations for Lagrangian stochastic dispersion models: Eliminating rogue trajectories, and the importance of numerical accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    When Lagrangian stochastic models for turbulent dispersion are applied to complex flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behavior in the numerical solution. This paper discusses numerical considerations when solving the Langevin-based particle velo...

  1. Numerical Simulation of Micronozzles with Comparison to Experimental Results

    NASA Astrophysics Data System (ADS)

    Thornber, B.; Chesta, E.; Gloth, O.; Brandt, R.; Schwane, R.; Perigo, D.; Smith, P.

    2004-10-01

    A numerical analysis of conical micronozzle flows has been conducted using the commercial software package CFD-RC FASTRAN [13]. The numerical results have been validated by comparison with direct thrust and mass flow measurements recently performed in the ESTEC Propulsion Laboratory on Polyflex Space Ltd. 10 mN cold-gas thrusters in the frame of the ESA CryoSat mission. The flow is viscous-dominated, with a throat Reynolds number of 5000, and the relatively large length of the nozzle causes boundary layer effects larger than usual for nozzles of this size. This paper discusses in detail the flow physics, such as boundary layer growth and structure, and the effects of rarefaction. Furthermore, a number of different domain sizes and exit boundary conditions are used to determine the optimum combination of computational time and accuracy.

  2. Anisotropic halo model: implementation and numerical results

    NASA Astrophysics Data System (ADS)

    Sgró, Mario A.; Paz, Dante J.; Merchán, Manuel

    2013-07-01

    In the present work, we extend the classic halo model for the large-scale matter distribution including a triaxial model for the halo profiles and their alignments. In particular, we derive general expressions for the halo-matter cross-correlation function. In addition, by numerical integration, we obtain instances of the cross-correlation function depending on the directions given by halo shape axes. These functions are called anisotropic cross-correlations. With the aim of comparing our theoretical results with the simulations, we compute averaged anisotropic correlations in cones with their symmetry axis along each shape direction of the central halo. From these comparisons we characterize and quantify the alignment of dark matter haloes in the Λ cold dark matter (ΛCDM) context by means of the presented anisotropic halo model. Since our model requires multidimensional integral computation, we implement a Monte Carlo method on GPU hardware, which allows us to increase the precision of the results and improves the performance of the computation.
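
A minimal sketch of the Monte Carlo approach to multidimensional integrals mentioned above (CPU-only here; the paper's GPU implementation and halo-model integrands are far more involved): sample uniformly in the unit cube and average, with statistical error decreasing like 1/sqrt(N).

```python
import random

def mc_integrate(f, dim, n, seed=0):
    # Monte Carlo estimate of the integral of f over the unit cube [0,1]^dim.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n

# Integral of x*y*z over [0,1]^3 is exactly 1/8 = 0.125.
est = mc_integrate(lambda x: x[0] * x[1] * x[2], dim=3, n=200000)
print(est)  # ~0.125; the error shrinks like 1/sqrt(n)
```

Because the 1/sqrt(N) rate is dimension-independent, increasing the sample count (easy to parallelize on a GPU) is the natural way to raise the precision of such integrals.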

  3. Examination of Numerical Integration Accuracy and Modeling for GRACE-FO and GRACE-II

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S.

    2012-12-01

    As technological advances throughout the field of satellite geodesy improve the accuracy of satellite measurements, numerical methods and algorithms must keep pace. Currently, the Gravity Recovery and Climate Experiment's (GRACE) dual one-way microwave ranging system can determine changes in inter-satellite range to a precision of a few microns; however, with the advent of laser measurement systems, nanometer-precision ranging is a realistic possibility. With this increase in measurement accuracy, a reevaluation of the accuracy inherent in linear multi-step numerical integration methods is necessary. Two areas of primary concern are the ability of the numerical integration methods to accurately predict the satellite's state in the presence of numerous small accelerations due to operation of the spacecraft attitude control thrusters, and due to small, point-mass anomalies on the surface of the Earth. This study attempts to quantify and minimize these numerical errors in an effort to improve the accuracy of modeling and propagation of these perturbations, helping to provide further insight into the behavior and evolution of the Earth's gravity field from the more capable gravity missions of the future.

  4. The influence of data shape acquisition process and geometric accuracy of the mandible for numerical simulation.

    PubMed

    Relvas, C; Ramos, A; Completo, A; Simões, J A

    2011-08-01

    Computer-aided technologies have allowed new 3D modelling capabilities and engineering analyses based on experimental and numerical simulation. It has enormous potential for product development, such as biomedical instrumentation and implants. However, due to the complex shapes of anatomical structures, the accuracy of these technologies plays an important key role for adequate and accurate finite element analysis (FEA). The objective of this study was to determine the influence of the geometry variability between two digital models of a human model of the mandible. Two different shape acquisition techniques, CT scan and 3D laser scan, were assessed. A total of 130 points were controlled and the deviations between the measured points of the physical and 3D virtual models were assessed. The results of the FEA study showed a relative difference of 20% for the maximum displacement and 10% for the maximum strain between the two geometries.

  5. Determination of Solution Accuracy of Numerical Schemes as Part of Code and Calculation Verification

    SciTech Connect

    Blottner, F.G.; Lopez, A.R.

    1998-10-01

    This investigation is concerned with the accuracy of numerical schemes for solving partial differential equations used in science and engineering simulation codes. Richardson extrapolation methods for steady and unsteady problems with structured meshes are presented as part of the verification procedure to determine code and calculation accuracy. The local truncation error determination of a numerical difference scheme is shown to be a significant component of the verification procedure, as it determines the consistency of the numerical scheme, the order of the numerical scheme, and the restrictions on the mesh variation with a non-uniform mesh. Generation of a series of co-located, refined meshes with the appropriate variation of mesh cell size is investigated and is another important component of the verification procedure. The importance of mesh refinement studies is shown to be more significant than just a procedure to determine solution accuracy. It is suggested that mesh refinement techniques can be developed to determine consistency of numerical schemes and to determine if governing equations are well posed. The present investigation provides further insight into the conditions and procedures required to effectively use Richardson extrapolation with mesh refinement studies to achieve confidence that simulation codes are producing accurate numerical solutions.
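
Richardson extrapolation with mesh refinement, as used in the verification procedure above, can be sketched in a few lines: from solutions on three grids with a constant refinement ratio r, estimate the observed order of accuracy and extrapolate to zero spacing. The sample values below are synthetic, chosen so that f(h) = 1 + 0.5*h^2 exactly.

```python
import math

def observed_order(f1, f2, f3, r):
    # Observed order of accuracy from solutions on fine (f1), medium (f2),
    # and coarse (f3) grids with constant refinement ratio r.
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def richardson(f1, f2, p, r):
    # Richardson-extrapolated estimate of the exact (zero-spacing) solution.
    return f1 + (f1 - f2) / (r ** p - 1)

# Synthetic grid-convergence data: f(h) = 1 + 0.5*h^2 at h = 0.1, 0.2, 0.4.
f1, f2, f3 = 1.005, 1.02, 1.08
p = observed_order(f1, f2, f3, r=2)
print(p)                         # 2.0: the scheme is second order
print(richardson(f1, f2, p, 2))  # 1.0: extrapolated exact value
```

Comparing the observed order p against the scheme's theoretical order is exactly the consistency check the abstract describes; a mismatch signals coding errors, mesh-quality problems, or an ill-posed formulation.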

  6. Modeling extracellular electrical stimulation: II. Computational validation and numerical results.

    PubMed

    Tahayori, Bahman; Meffin, Hamish; Dokos, Socrates; Burkitt, Anthony N; Grayden, David B

    2012-12-01

    The validity of approximate equations describing the membrane potential under extracellular electrical stimulation (Meffin et al 2012 J. Neural Eng. 9 065005) is investigated through finite element analysis in this paper. To this end, the finite element method is used to simulate a cylindrical neurite under extracellular stimulation. Laplace's equations with appropriate boundary conditions are solved numerically in three dimensions and the results are compared to the approximate analytic solutions. Simulation results are in agreement with the approximate analytic expressions for longitudinal and transverse modes of stimulation. The range of validity of the equations describing the membrane potential for different values of stimulation and neurite parameters is presented as well. The results indicate that the analytic approach can be used to model extracellular electrical stimulation for realistic physiological parameters with a high level of accuracy.

  7. Testing Numerical Dynamo Models Against Experimental Results

    NASA Astrophysics Data System (ADS)

    Gissinger, C. J.; Fauve, S.; Dormy, E.

    2007-12-01

    Significant progress has been achieved over the past few years in describing the geomagnetic field using computer models for dynamo action. Such models are so far limited to parameter regimes that are very remote from actual values relevant to the Earth's core or any liquid metal (the magnetic Prandtl number is always overestimated by a factor of at least 10^4). While existing models successfully reproduce many of the magnetic observations, it is difficult to assert their validity. The recent success of an experimental homogeneous unconstrained dynamo (VKS) provides a new way to investigate dynamo action in turbulent conducting flows, but it also offers a chance to test the validity of existing numerical models. We use a code originally written for the geodynamo (Parody) and apply it to the experimental configuration. The direct comparison of simulations and experiments is of great interest to test the predictive value of numerical simulations for dynamo action. These turbulent simulations allow us to approach issues which are very relevant for geophysical dynamos, especially the competition between different magnetic modes and the dynamics of reversals.

  8. Results on fibre scrambling for high accuracy radial velocity measurements

    NASA Astrophysics Data System (ADS)

    Avila, Gerardo; Singh, Paul; Chazelas, Bruno

    2010-07-01

    We present experimental data on fibres and scramblers used to increase the photometric stability of the spectrograph PSF. We have tested round, square, and octagonal fibres as well as beam homogenizers. This study aims to enhance the accuracy of radial velocity measurements for the ESO ESPRESSO (VLT) and CODEX (E-ELT) instruments.

  9. Poor Metacomprehension Accuracy as a Result of Inappropriate Cue Use

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Griffin, Thomas D.; Wiley, Jennifer; Anderson, Mary C. M.

    2010-01-01

    Two studies attempt to determine the causes of poor metacomprehension accuracy and then, in turn, to identify interventions that circumvent these difficulties to support effective comprehension monitoring performance. The first study explored the cues that both at-risk and typical college readers use as a basis for their metacomprehension…

  10. Efficiency and Accuracy Verification of the Explicit Numerical Manifold Method for Dynamic Problems

    NASA Astrophysics Data System (ADS)

    Qu, X. L.; Wang, Y.; Fu, G. Y.; Ma, G. W.

    2015-05-01

    The original numerical manifold method (NMM) employs an implicit time integration scheme to achieve higher computational accuracy, but its efficiency is relatively low, especially when the open-close iterations of contact are involved. To improve its computational efficiency, a modified version of the NMM based on an explicit time integration algorithm is proposed in this study. The lumped mass matrix, internal force, and damping vectors are derived for the proposed explicit scheme. A calibration study on P-wave propagation along a rock bar is conducted to investigate the efficiency and accuracy of the developed explicit numerical manifold method (ENMM) for wave propagation problems. Various considerations in the numerical simulations are discussed, and parametric studies are carried out to gain insight into the factors influencing the efficiency and accuracy of wave propagation. To further verify the capability of the proposed ENMM, the dynamic stability of a fractured rock slope under seismic loading is analysed. It is shown that, compared to the original NMM, the computational efficiency of the proposed ENMM is significantly improved.

  11. The Relationship Between Accuracy of Numerical Magnitude Comparisons and Children's Arithmetic Ability: A Study in Iranian Primary School Children.

    PubMed

    Tavakoli, Hamdollah Manzari

    2016-11-01

    The relationship between children's accuracy during numerical magnitude comparisons and arithmetic ability has been investigated by many researchers. Contradictory results have been reported from these studies due to the use of many different tasks and indices to determine the accuracy of numerical magnitude comparisons. In the light of this inconsistency among measurement techniques, the present study aimed to investigate this relationship among Iranian second grade children (n = 113) using a pre-established test (known as the Numeracy Screener) to measure numerical magnitude comparison accuracy. The results revealed that both the symbolic and non-symbolic items of the Numeracy Screener significantly correlated with arithmetic ability. However, after controlling for the effect of working memory, processing speed, and long-term memory, only performance on symbolic items accounted for the unique variances in children's arithmetic ability. Furthermore, while working memory uniquely contributed to arithmetic ability in one- and two-digit arithmetic problem solving, processing speed uniquely explained only the variance in single-digit arithmetic skills, and long-term memory did not contribute any significant additional variance for one-digit or two-digit arithmetic problem solving.

  12. On the use of Numerical Weather Models for improving SAR geolocation accuracy

    NASA Astrophysics Data System (ADS)

    Nitti, D. O.; Chiaradia, M.; Nutricato, R.; Bovenga, F.; Refice, A.; Bruno, M. F.; Petrillo, A. F.; Guerriero, L.

    2013-12-01

    Precise estimation and correction of the Atmospheric Path Delay (APD) is needed to ensure sub-pixel accuracy of geocoded Synthetic Aperture Radar (SAR) products, in particular for the new generation of high resolution side-looking SAR satellite sensors (TerraSAR-X, COSMO/SkyMED). The present work aims to assess the performance of operational Numerical Weather Prediction (NWP) models as tools to routinely estimate the APD contribution, according to the specific acquisition beam of the SAR sensor for the selected scene on ground. The Regional Atmospheric Modeling System (RAMS) has been selected for this purpose. It is a finite-difference, primitive equation, three-dimensional non-hydrostatic mesoscale model, originally developed at Colorado State University [1]. In order to appreciate the improvement in target geolocation when accounting for APD, we need to rely on the SAR sensor orbital information. In particular, TerraSAR-X data are well suited for this experiment, since recent studies have confirmed the few-centimeter accuracy of their annotated orbital records (Science level data) [2]. A consistent dataset of TerraSAR-X stripmap images (Pol.: VV; Look side: Right; Pass Direction: Ascending; Incidence Angle: 34.0÷36.6 deg) acquired over Daunia, in southern Italy, has hence been selected for this study, thanks also to the availability of six trihedral corner reflectors (CR) recently installed in the area covered by the imaged scenes and properly oriented towards the TerraSAR-X satellite platform. The geolocation of the CR phase centers is surveyed with cm-level accuracy using differential GPS (DGPS). The results of the analysis are shown and discussed. Moreover, the quality of the APD values estimated through NWP models will be further compared to those annotated in the geolocation grid (GEOREF.xml), in order to evaluate whether the annotated corrections are sufficient for sub-pixel geolocation quality or not. Finally, the analysis will be extended to a limited number of

  13. On the accuracy and convergence of implicit numerical integration of finite element generated ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Soliman, M. O.

    1978-01-01

    A study of the accuracy and convergence of linear functional finite element solutions to linear parabolic and hyperbolic partial differential equations is presented. A variable-implicit integration procedure is employed for the resultant system of ordinary differential equations. Accuracy and convergence are compared for the consistent and two lumped assembly procedures for the identified initial-value matrix structure. Truncation error estimation is accomplished using Richardson extrapolation.
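    The two-grid Richardson extrapolation used above for truncation error estimation can be illustrated with a small sketch. This is a generic, hypothetical example (trapezoidal quadrature of sin x on [0, π]), not the paper's finite element setting:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (2nd-order accurate)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# Solve at step h and h/2; for a method of order p the leading error of the
# fine solution is approximately (fine - coarse) / (2**p - 1).
coarse = trapezoid(math.sin, 0.0, math.pi, 64)
fine = trapezoid(math.sin, 0.0, math.pi, 128)

p = 2                                          # order of the trapezoidal rule
error_estimate = (fine - coarse) / (2**p - 1)  # estimated error of `fine`
extrapolated = fine + error_estimate           # effectively 4th-order accurate
```

    The same two-solution recipe applies whether the discretization parameter is a quadrature step, a time step, or a mesh size.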

  14. Theoretical and numerical comparison of 3D numerical schemes for their accuracy with respect to P-wave to S-wave speed ratio

    NASA Astrophysics Data System (ADS)

    Moczo, P.; Kristek, J.; Galis, M.; Chaljub, E.; Chen, X.; Zhang, Z.

    2012-04-01

    Numerical modeling of earthquake ground motion in sedimentary basins and valleys often has to account for P-wave to S-wave speed ratios (VP/VS) as large as five or even larger, mainly in sediments below the groundwater level. The ratio can attain values larger than 10; the unconsolidated lake sediments in Ciudad de México are a good example. At the same time, the accuracy of numerical schemes with respect to VP/VS has not been sufficiently analyzed, and the schemes are often applied without an adequate check of accuracy. We present a theoretical analysis and numerical comparison of 18 explicit 3D time-domain numerical schemes for modeling seismic motion, with respect to their accuracy for varying VP/VS. The schemes are based on the finite-difference, spectral-element, finite-element, and discontinuous-Galerkin methods. All schemes are presented in a unified form. The theoretical analysis compares the accuracy of the schemes in terms of local errors in amplitude and vector difference. In addition to the analysis, we compare numerically simulated seismograms with exact solutions for canonical configurations. We compare the accuracy of the schemes in terms of local errors, grid dispersion, and full wavefield simulations with respect to the structure of the numerical schemes.

  15. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-Averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress turbulence model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
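    The Rayleigh-Plesset dynamics underlying the cavitation model can be integrated directly for a single bubble. The sketch below uses the classic (unmodified) equation with a polytropic gas law and assumed water-like parameters; it is illustrative only, not the modified form or the coupled CFD solver of the paper:

```python
import numpy as np

# Assumed water-like parameters (SI units)
RHO, SIGMA, MU, P_INF, KAPPA = 1000.0, 0.072, 1.002e-3, 101325.0, 1.4
R0 = 1.0e-3                                   # 1 mm equilibrium radius
PG0 = P_INF + 2.0 * SIGMA / R0                # equilibrium gas pressure

def rhs(state):
    """Rayleigh-Plesset: R*R'' + 1.5*R'^2 = (p_B - p_inf) / rho."""
    R, Rdot = state
    p_gas = PG0 * (R0 / R) ** (3.0 * KAPPA)   # polytropic gas law
    p_b = p_gas - 2.0 * SIGMA / R - 4.0 * MU * Rdot / R
    Rddot = (p_b - P_INF) / (RHO * R) - 1.5 * Rdot**2 / R
    return np.array([Rdot, Rddot])

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, state = 1.0e-7, np.array([1.01 * R0, 0.0])  # 1% initial perturbation
radii = []
for _ in range(5000):                           # 0.5 ms of bubble dynamics
    state = rk4(state, dt)
    radii.append(state[0])
```

    With the small perturbation chosen, the radius oscillates near the Minnaert frequency (a few kHz for a 1 mm bubble) and stays bounded close to R0.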

  16. Numerical results for extended field method applications. [thin plates

    NASA Technical Reports Server (NTRS)

    Donaldson, B. K.; Chander, S.

    1973-01-01

    This paper presents the numerical results obtained when a new method of analysis, called the extended field method, was applied to several thin plate problems including one with non-rectangular geometry, and one problem involving both beams and a plate. The numerical results show that the quality of the single plate solutions was satisfactory for all cases except those involving a freely deflecting plate corner. The results for the beam and plate structure were satisfactory even though the structure had a freely deflecting corner.

  17. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles, yet few quantitative studies on evaluating the accuracy of these simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy respectively, are first identified. Consequently, two quantitative indices derived from cross-correlation analysis between a simulated signal and a reference waveform are proposed to assess the position and shape errors of the simulated signal: the GVE (group velocity error) and the MACCC (maximum absolute value of the cross-correlation coefficient). In this way, the simulation accuracy in position and shape is quantitatively evaluated. To apply the proposed method to the selection of an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. The proper element size for different element types, and the proper time step for different time integration schemes, are then selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation.
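    A MACCC-style index can be sketched as the peak of the normalized cross-correlation between the simulated and reference signals. The paper's exact definition may differ in normalization details, so treat this as an illustrative implementation on a synthetic tone burst:

```python
import numpy as np

def maccc(sim, ref):
    """Maximum absolute value of the normalized cross-correlation
    coefficient between a simulated signal and a reference waveform.
    A value near 1 indicates the waveform shape is well reproduced."""
    sim = (sim - sim.mean()) / (sim.std() * len(sim))
    ref = (ref - ref.mean()) / ref.std()
    cc = np.correlate(sim, ref, mode="full")   # all relative lags
    return np.max(np.abs(cc))

t = np.linspace(0.0, 1.0, 500)
ref = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # reference tone burst
# "Simulated" signal: time-shifted copy of the reference plus weak noise
sim = np.roll(ref, 20) + 0.01 * np.random.default_rng(0).normal(size=t.size)
score = maccc(sim, ref)   # close to 1: same shape, only time-shifted
```

    The lag at which the correlation peaks measures the arrival-time offset, which is what a group-velocity-error index such as the GVE would quantify.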

  18. Forecasting Energy Market Contracts by Ambit Processes: Empirical Study and Numerical Results

    PubMed Central

    Di Persio, Luca; Marchesan, Michele

    2014-01-01

    In the present paper we exploit the theory of ambit processes to develop a model that can effectively forecast prices of forward contracts written on the Italian energy market. Both short-term and medium-term scenarios are considered, and proper calibration procedures as well as related numerical results are provided, showing a high degree of accuracy in the obtained approximations when compared with the empirical time series of interest. PMID:27437500

  19. Accuracy of the diffusion equation to describe photon migration through an infinite medium: numerical and experimental investigation.

    PubMed

    Martelli, F; Bassani, M; Alianelli, L; Zangheri, L; Zaccanti, G

    2000-05-01

    The accuracy of results obtained from the diffusion equation (DE) has been investigated for the case of an isotropic point source in a homogeneous, weakly absorbing, infinite medium. The results from the DE have been compared both with numerical solutions of the radiative transfer equation obtained with Monte Carlo (MC) simulations and with cw experimental results. Comparisons showed that for the cw fluence rate, discrepancies are of the same order as the statistical fluctuations on the MC results (within 1%) when the distance r from the source is > 2/mu_s' (mu_s' is the reduced scattering coefficient). For these values of r, discrepancies for the time-resolved fluence rate are of the same order as the statistical fluctuations (within 5%) when the time of flight t > 4t0, with t0 the time of flight of unscattered photons. For shorter times the DE overestimates the fluence; discrepancies are larger for larger values of the asymmetry factor. As to the specific intensity, for small values of r the MC results are more forward peaked than expected from the DE, and the forward peak is stronger for photons arriving at short times. We assumed r > 2/mu_s' and t > 4t0 for the domain of validity of the DE, and we determined the requirements under which the simplifying assumptions necessary to obtain the DE, expressed by two inequalities, are fulfilled. Comparisons with cw experimental results showed good agreement with MC results both at high and at small values of r*mu_s', while the comparison with the DE showed significant discrepancies for small values of r*mu_s'. Using MC results we also investigated the error made on the optical properties of the medium when they are retrieved using the solution of the DE. To obtain accuracy better than 1% from fitting procedures on time-resolved fluence rate data, it is necessary to disregard photons with time of flight < 4t0. Also from cw data it is possible to retrieve the optical properties with good accuracy: by using the added absorber
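    For reference, the cw fluence rate of an isotropic point source in an infinite homogeneous medium has a simple closed form under the DE. The sketch below uses one common convention for the diffusion coefficient (conventions vary, e.g. in whether mu_a enters D), and the parameter values are hypothetical tissue-like numbers:

```python
import math

def cw_fluence(r, mu_a, mu_s_prime, power=1.0):
    """CW fluence rate of an isotropic point source in an infinite
    homogeneous medium, from the diffusion equation.
    Convention assumed here: D = 1 / (3 * (mu_a + mu_s_prime))."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))
    mu_eff = math.sqrt(mu_a / D)               # effective attenuation coeff.
    return power * math.exp(-mu_eff * r) / (4.0 * math.pi * D * r)

# The abstract's validity condition r > 2/mu_s' for assumed tissue-like
# optical properties (mu_a = 0.01 /mm, mu_s' = 1.0 /mm):
r_min = 2.0 / 1.0                   # = 2 mm
phi = cw_fluence(5.0, 0.01, 1.0)    # r = 5 mm, well inside the DE domain
```

    Fitting this expression to measured fluence-versus-distance data is the basic route by which mu_a and mu_s' are retrieved from cw measurements.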

  20. Numerical Stability and Accuracy of Temporally Coupled Multi-Physics Modules in Wind-Turbine CAE Tools

    SciTech Connect

    Gasmi, A.; Sprague, M. A.; Jonkman, J. M.; Jones, W. B.

    2013-02-01

    In this paper we examine the stability and accuracy of numerical algorithms for coupling time-dependent multi-physics modules relevant to computer-aided engineering (CAE) of wind turbines. This work is motivated by an in-progress major revision of FAST, the National Renewable Energy Laboratory's (NREL's) premier aero-elastic CAE simulation tool. We employ two simple examples as test systems, while algorithm descriptions are kept general. Coupled-system governing equations are framed in monolithic and partitioned representations as differential-algebraic equations. Explicit and implicit loose partition coupling is examined. In explicit coupling, partitions are advanced in time from known information. In implicit coupling, there is dependence on other-partition data at the next time step; coupling is accomplished through a predictor-corrector (PC) approach. Numerical time integration of coupled ordinary-differential equations (ODEs) is accomplished with one of three, fourth-order fixed-time-increment methods: Runge-Kutta (RK), Adams-Bashforth (AB), and Adams-Bashforth-Moulton (ABM). Through numerical experiments it is shown that explicit coupling can be dramatically less stable and less accurate than simulations performed with the monolithic system. However, PC implicit coupling restored stability and fourth-order accuracy for ABM; only second-order accuracy was achieved with RK integration. For systems without constraints, explicit time integration with AB and explicit loose coupling exhibited desired accuracy and stability.

  1. Accuracy evaluation of numerical methods used in state-of-the-art simulators for spiking neural networks.

    PubMed

    Henker, Stephan; Partzsch, Johannes; Schüffny, René

    2012-04-01

    With the various simulators for spiking neural networks developed in recent years, a variety of numerical solution methods for the underlying differential equations are available. In this article, we introduce an approach to systematically assess the accuracy of these methods. In contrast to previous investigations, our approach focuses on a completely deterministic comparison and uses an analytically solved model as a reference. This enables the identification of typical sources of numerical inaccuracies in state-of-the-art simulation methods. In particular, with our approach we can separate the error of the numerical integration from the timing error of spike detection and propagation, the latter being prominent in simulations with fixed timestep. To verify the correctness of the testing procedure, we relate the numerical deviations to theoretical predictions for the employed numerical methods. Finally, we give an example of the influence of simulation artefacts on network behaviour and spike-timing-dependent plasticity (STDP), underlining the importance of spike-time accuracy for the simulation of STDP.

  2. On the accuracy of the two-fluid formulation in direct numerical simulation of bubble-laden turbulent boundary layers

    NASA Astrophysics Data System (ADS)

    Ferrante, Antonino; Elghobashi, Said

    2007-04-01

    The objective of the present paper is to examine the accuracy of the two-fluid (TF) formulation in direct numerical simulation (DNS) of a microbubble-laden spatially developing turbulent boundary layer over a flat plate by comparing the results with those of the Eulerian-Lagrangian (EL) formulation [A. Ferrante and S. Elghobashi, J. Fluid Mech. 543, 93 (2005); A. Ferrante and S. Elghobashi, J. Fluid Mech. 503, 345 (2004)]. Our results show that DNS with TF (TFDNS) does not reproduce the physical mechanisms responsible for drag reduction observed in the EL results. The reason is that TFDNS does not produce accurate instantaneous local bubble concentration C (x,t) gradients which are responsible for the generation of a positive ⟨∇•U⟩ that is essential for the drag reduction mechanism. The inaccuracy of the TFDNS in computing C (x,t) is due to the invalidity of the bubble-phase continuity equation in regions where the continuum assumption for the bubble-phase breaks down. It is recommended that if the real (experimental or DNS) instantaneous spatial distribution of bubble (or particle) concentration is discontinuous, and if this concentration discontinuity is crucial for the realization of the physical phenomenon of interest, then DNS should use the EL formulation. We propose a Knudsen number criterion for the validity of the two-fluid formulation in DNS of dispersed two-phase flows with strong unsteady preferential concentration.

  3. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

    USGS Publications Warehouse

    Hill, M.C.

    1989-01-01

    Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported.

  4. A Graph is Worth a Thousand Words: How Overconfidence and Graphical Disclosure of Numerical Information Influence Financial Analysts Accuracy on Decision Making.

    PubMed

    Cardoso, Ricardo Lopes; Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli

    2016-01-01

    Previous research supports the view that graphs are relevant decision aids in tasks involving the interpretation of numerical information. Moreover, the literature shows that different types of graphical information can help or harm the decision-making accuracy of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that, compared to text, column graphs enhanced decision-making accuracy, followed by line graphs. No difference was found between tabular and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph, and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a large sample (295) of financial analysts, rather than a smaller sample of students, that graphs are relevant decision aids in tasks involving the interpretation of numerical information. Second, it uses text as a baseline comparison to test how different modes of information disclosure (line and column graphs, and tables) can enhance the understandability of information. Third, it brings an internal factor into this process: overconfidence, a personal trait that harms individuals' decision making. At the end of the paper, several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts' accuracy in decisions regarding numerical information presented in graphical form. In addition, we offer suggestions concerning practical implications for professional accountants, auditors, financial analysts, and standard setters.

  5. A Graph is Worth a Thousand Words: How Overconfidence and Graphical Disclosure of Numerical Information Influence Financial Analysts Accuracy on Decision Making

    PubMed Central

    Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli

    2016-01-01

    Previous research supports the view that graphs are relevant decision aids in tasks involving the interpretation of numerical information. Moreover, the literature shows that different types of graphical information can help or harm the decision-making accuracy of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that, compared to text, column graphs enhanced decision-making accuracy, followed by line graphs. No difference was found between tabular and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph, and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a large sample (295) of financial analysts, rather than a smaller sample of students, that graphs are relevant decision aids in tasks involving the interpretation of numerical information. Second, it uses text as a baseline comparison to test how different modes of information disclosure (line and column graphs, and tables) can enhance the understandability of information. Third, it brings an internal factor into this process: overconfidence, a personal trait that harms individuals' decision making. At the end of the paper, several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts' accuracy in decisions regarding numerical information presented in graphical form. In addition, we offer suggestions concerning practical implications for professional accountants, auditors, financial analysts, and standard setters. PMID:27508519

  6. Path Integrals and Exotic Options:. Methods and Numerical Results

    NASA Astrophysics Data System (ADS)

    Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.

    2005-09-01

    In the framework of the Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path-dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and are found to be in good agreement. In particular, when pricing at-the-money (ATM) and out-of-the-money (OTM) options, the path integral approach exhibits competitive performance.
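    As a point of comparison for path-integral results, an arithmetic-average Asian call under Black-Scholes dynamics can be priced with a plain Monte Carlo sketch. The parameter values below are hypothetical, and this stand-in does not reproduce the paper's path-integral algorithms:

```python
import numpy as np

def asian_call_mc(s0, k, r, sigma, T, n_steps=50, n_paths=100_000, seed=0):
    """Monte Carlo price of an arithmetic-average Asian call under
    geometric Brownian motion (risk-neutral Black-Scholes dynamics)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Exact GBM discretization: cumulative sum of log-increments
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    s = s0 * np.exp(log_paths)                 # prices at monitoring dates
    payoff = np.maximum(s.mean(axis=1) - k, 0.0)
    return np.exp(-r * T) * payoff.mean()      # discounted expectation

# ATM example: S0 = K = 100, r = 5%, sigma = 20%, T = 1 year
price = asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, T=1.0)
```

    Because the payoff depends on the whole path, each sampled trajectory is one "path" in the path-integral sense; the Monte Carlo mean is a stochastic estimate of the same integral the paper evaluates with deterministic algorithms.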

  7. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.

  8. The Relationship Between Accuracy of Numerical Magnitude Comparisons and Children’s Arithmetic Ability: A Study in Iranian Primary School Children

    PubMed Central

    Tavakoli, Hamdollah Manzari

    2016-01-01

    The relationship between children’s accuracy during numerical magnitude comparisons and arithmetic ability has been investigated by many researchers. Contradictory results have been reported from these studies due to the use of many different tasks and indices to determine the accuracy of numerical magnitude comparisons. In the light of this inconsistency among measurement techniques, the present study aimed to investigate this relationship among Iranian second grade children (n = 113) using a pre-established test (known as the Numeracy Screener) to measure numerical magnitude comparison accuracy. The results revealed that both the symbolic and non-symbolic items of the Numeracy Screener significantly correlated with arithmetic ability. However, after controlling for the effect of working memory, processing speed, and long-term memory, only performance on symbolic items accounted for the unique variances in children’s arithmetic ability. Furthermore, while working memory uniquely contributed to arithmetic ability in one- and two-digit arithmetic problem solving, processing speed uniquely explained only the variance in single-digit arithmetic skills, and long-term memory did not contribute any significant additional variance for one-digit or two-digit arithmetic problem solving. PMID:27872667

  9. Research on Numerical Algorithms for the Three Dimensional Navier-Stokes Equations. I. Accuracy, Convergence & Efficiency.

    DTIC Science & Technology

    1979-09-01

    …ithm for Computational Fluid Dynamics," Ph.D. Dissertation, Univ. of Tennessee, Report ESM 78-1, 1978.
    18. Thames, F. C., Thompson, J. F., and Mastin, C. W., "Numerical Solution of the Navier-Stokes Equations for Arbitrary Two-Dimensional Airfoils," NASA SP-347, 1975.
    19. Thompson, J. F., Thames, … "…Number of Arbitrary Two-Dimensional Bodies," NASA CR-2729, 1976.
    20. Thames, F. C., Thompson, J. F., Mastin, C. W., and Walker, R. L., "Numerical

  10. Geopositioning accuracy prediction results for registration of imaging and nonimaging sensors using moving objects

    NASA Astrophysics Data System (ADS)

    Taylor, Charles R.; Dolloff, John T.; Lofy, Brian A.; Luker, Steve A.

    2003-08-01

    BAE SYSTEMS is developing a "4D Registration" capability for DARPA's Dynamic Tactical Targeting program. This will advance our automatic image registration capability to use moving objects for registration, and extend our current capability to include the registration of non-imaging sensors. Moving objects produce signals that are identifiable across multiple sensor types, such as radar moving target indicators, unattended ground sensors, and imaging sensors. Correspondences of those signals across sensor types make it possible to improve the support data accuracy for each of the sensors involved in the correspondence. The amount of accuracy improvement possible, and the effect of that improvement on geopositioning with the sensors, is a complex problem. The main factors contributing to this complexity are the sensor-to-target geometry, the a priori sensor support data accuracy, the sensor measurement accuracy, the distribution of identified objects in ground space, and the motion and motion uncertainty of the identified objects. As part of the 4D Registration effort, BAE SYSTEMS is conducting a sensitivity study to investigate the complexities and benefits of multisensor registration with moving objects. The results of the study will be summarized.

  11. Accuracy and stability of positioning in radiosurgery: Long term results of the Gamma Knife system

    SciTech Connect

    Heck, Bernhard; Jess-Hempen, Anja; Kreiner, Hans Juerg; Schoepgens, Hans; Mack, Andreas

    2007-04-15

    The primary aim of this investigation was to determine the long-term overall accuracy of the irradiation position of Gamma Knife systems. The mechanical accuracy of the system, as well as the overall accuracy of an irradiation position, was examined by irradiating radiosensitive films. To measure the mechanical accuracy, GafChromic® film was fixed by a special tool at the unit center point (UCP). For overall accuracy, the film was mounted inside a phantom at a target position marked by a two-dimensional cross. Its position was determined by CT or MRI scans, a treatment was planned to hit this target using the standard planning software, and the radiation was finally delivered. This procedure is called a "system test" according to DIN 6875-1 and is equivalent to a treatment simulation. The exposed GafChromic® films were evaluated by high-resolution densitometric measurements. The Munich Gamma Knife UCP coincided with the center of the dose distribution to within (x; y; z): −0.014 ± 0.09 mm; 0.013 ± 0.09 mm; −0.002 ± 0.06 mm (mean ± SD). No trend was observed in the measured data over more than ten years, and all measured data were within a sphere of 0.2 mm radius. When the target definition in the system test was based on MRI scans, we obtained an overall accuracy of the irradiation position of 0.21 ± 0.32 mm in the x direction and 0.15 ± 0.26 mm in the y direction (mean ± SD). When a CT-based target definition was used, we measured deviations of 0.06 ± 0.09 mm in the x direction and 0.04 ± 0.09 mm in the y direction (mean ± SD), respectively. These results were compared with those obtained with a Gamma Knife equipped with an automatic positioning system (APS), using a different phantom. This phantom was found to be slightly less accurate due to its mechanical construction and its soft fixation into the frame. The phantom-related position deviation was about ±0.2 mm, and therefore the measured accuracy of the APS Gamma Knife was evidently less

  12. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

    We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample both black-hole spins comprehensively up to spin magnitudes of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10⁻⁴. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of the finite-length numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ∼3 × 10⁻⁴. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
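
A noise-weighted mismatch of the kind used in such error analyses can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the flat noise PSD and the toy frequency-domain signals are assumptions, and no maximization over time or phase shifts is performed.

```python
import numpy as np

def noise_weighted_inner(h1f, h2f, psd, df):
    """Noise-weighted inner product <h1, h2> = 4 Re sum( h1 h2* / Sn ) df."""
    return 4.0 * df * np.real(np.sum(h1f * np.conj(h2f) / psd))

def mismatch(h1f, h2f, psd, df):
    """Mismatch = 1 - normalized overlap (no time/phase maximization here)."""
    n1 = noise_weighted_inner(h1f, h1f, psd, df)
    n2 = noise_weighted_inner(h2f, h2f, psd, df)
    o = noise_weighted_inner(h1f, h2f, psd, df)
    return 1.0 - o / np.sqrt(n1 * n2)

# Toy frequency-domain signals and a flat PSD (illustrative only).
f = np.linspace(20.0, 512.0, 2048)
df = f[1] - f[0]
psd = np.ones_like(f)
h1 = np.exp(2j * np.pi * 0.01 * f)
h2 = np.exp(2j * np.pi * 0.0101 * f)   # slightly dephased copy
print(mismatch(h1, h1, psd, df))       # identical signals: mismatch is 0
print(mismatch(h1, h2, psd, df))       # small positive mismatch
```

In practice the PSD would be a detector noise curve and the overlap would be maximized over relative time and phase shifts before quoting a mismatch.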

  13. Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments

    NASA Astrophysics Data System (ADS)

    Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang

    2016-06-01

    Accelerator grid structural and electron-backstreaming failures are the most important factors affecting an ion thruster's lifetime. During the thruster's operation, charge-exchange xenon (CEX) ions are generated by collisions between plasma and neutral atoms. These CEX ions frequently strike the grid's barrel and walls, causing failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies the North-South Station Keeping (NSSK) application requirement of China's communication satellite platform, this study analyzed the measured pit/groove depths on the accelerator grid's walls and the variation of the aperture diameters, and estimated the operating lifetime of the ion thruster. Unlike the previous method, this paper first presents the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200. The results obtained give a more accurate basis for calculating the reliability and analyzing the failure modes of the ion thruster. They indicate that the predicted lifetime of LIPS-200 is about 13218.1 h, which satisfies the required lifetime of 11000 h very well.

  14. A 3-D numerical study of pinhole diffraction to predict the accuracy of EUV point diffraction interferometry

    SciTech Connect

    Goldberg, K.A. |; Tejnil, E.; Bokor, J. |

    1995-12-01

    A 3-D electromagnetic field simulation is used to model the propagation of extreme ultraviolet (EUV), 13-nm, light through sub-1500 Å diameter pinholes in a highly absorptive medium. Deviations of the diffracted wavefront phase from an ideal sphere are studied within a 0.1 numerical aperture, to predict the accuracy of EUV point diffraction interferometers used in at-wavelength testing of nearly diffraction-limited EUV optical systems. Aberration magnitudes are studied for various 3-D pinhole models, including cylindrical and conical pinhole bores.

  15. Peaks, plateaus, numerical instabilities, and achievable accuracy in Galerkin and norm minimizing procedures for solving Ax=b

    SciTech Connect

    Cullum, J.

    1994-12-31

    Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.
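
The plateau behaviour of norm-minimizing methods can be illustrated with a minimal GMRES-style sketch. The helper name and the random test matrix below are hypothetical, not from the paper; the point is that at each Arnoldi step the minimum-residual norm is non-increasing, so stagnation shows up as plateaus rather than the peaks seen in Galerkin residual plots.

```python
import numpy as np

def arnoldi_gmres_residuals(A, b, m):
    """Run m Arnoldi steps; return the GMRES residual norms ||b - A x_k||."""
    n = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((n, m + 1)); Q[:, 0] = b / beta
    H = np.zeros((m + 1, m))
    res = []
    for k in range(m):
        w = A @ Q[:, k]
        for j in range(k + 1):                 # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = w / H[k + 1, k]
        # GMRES iterate k+1 solves the least-squares problem min ||beta e1 - H y||
        e1 = np.zeros(k + 2); e1[0] = beta
        y, _, _, _ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        res.append(np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y))
        if H[k + 1, k] <= 1e-14:               # lucky breakdown: exact solution
            break
    return res

rng = np.random.default_rng(0)
n = 60
A = np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # nonsymmetric
b = rng.standard_normal(n)
res = arnoldi_gmres_residuals(A, b, 40)
print(res[0], res[-1])   # residuals shrink; any flat stretches are plateaus
```

Plotting `res` on a log scale against the iteration index is the kind of residual-norm plot the abstract describes; the corresponding Galerkin (FOM) residuals computed from the same Hessenberg matrix would show peaks where the GMRES curve plateaus.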

  16. The effect of accuracy, conservation and filtering on numerical weather forecasting

    NASA Technical Reports Server (NTRS)

    Kalnay-Rivas, E.; Hoitsma, D.

    1979-01-01

    Considerations leading to the numerical design of the GLAS fourth-order global atmospheric model are discussed, including changes recently introduced into the model. The computation time and memory requirements for the fourth-order model are similar to those of the present second-order GLAS model with the same 4 deg latitude, 5 deg longitude, and 9 vertical-level resolution. However, the fourth-order model's forecast skill is significantly better than that of the current GLAS model; after three days it is comparable to the 2.5 by 3 deg version of the GLAS model in the sea level pressure maps, and has fewer phase errors in the 500 mb maps.

  17. In search of improving the numerical accuracy of the k - ɛ model by a transformation to the k - τ model

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.

    2016-08-01

    This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation and differ only in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousands of vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows a more monotonic convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and its better defined boundary condition.

  18. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

    Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.

  19. Numerical Results of 3-D Modeling of Moon Accumulation

    NASA Astrophysics Data System (ADS)

    Khachay, Yurie; Anfilogov, Vsevolod; Antipin, Alexandr

    2014-05-01

    For a long time the standard model of Moon formation has been the mega-impact model, in which the Earth and its satellite formed as a consequence of the Earth's collision with a body of Mercury's mass. But all dynamical models of the Earth's accumulation, and the estimates from the Pb-Pb system, lead to the conclusion that the duration of the planet's accumulation was about 1 billion years, whereas isotopic results from the W-Hf system testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,2] it is shown that the energy released by the decay of short-lived radioactive elements, above all ²⁶Al, is sufficient to heat even small bodies with dimensions of about 50-100 km up to the iron melting temperature, so that a principally new differentiation mechanism can be realized. The molten inner parts of the preplanetary bodies, mainly of iron composition, can merge, while the cold silicate fragments return to the supply zone and shift the composition of the forming Moon toward silicates. Only after the Earth's gravitational radius has grown can the accretion region of the future Earth's core also retain the silicate envelope fragments [3]. For understanding the further evolution of the Earth-Moon system it is important to trace the origin and evolution of heterogeneities that arise during the accumulation stage. In this paper we model the changes of temperature, pressure, and matter flow velocity in a block of a 3D spherical body with growing radius.
The boundary problem is solved by the finite-difference method for a system of equations describing the accumulation process: the Safronov equation, the momentum-balance equation, the Navier-Stokes equation, and equations for the excess (above-lithostatic) pressure and heat conduction, in velocity-pressure variables using the Boussinesq approximation. The numerical algorithm of the problem solution in velocity

  20. Busted Butte: Achieving the Objectives and Numerical Modeling Results

    SciTech Connect

    W.E. Soll; M. Kearney; P. Stauffer; P. Tseng; H.J. Turin; Z. Lu

    2002-10-07

    The Unsaturated Zone Transport Test (UZTT) at Busted Butte is a mesoscale field/laboratory/modeling investigation designed to address uncertainties associated with flow and transport in the UZ site-process models for Yucca Mountain. The UZTT test facility is located approximately 8 km southeast of the potential Yucca Mountain repository area. The UZTT was designed in two phases, to address five specific objectives in the UZ: the effect of heterogeneities, flow and transport (F&T) behavior at permeability-contrast boundaries, migration of colloids, transport models of sorbing tracers, and scaling issues in moving from laboratory scale to field scale. Phase 1A was designed to assess the influence of permeability-contrast boundaries in the hydrologic Calico Hills. Visualization of fluorescein movement, mineback rock analyses, and comparison with numerical models demonstrated that F&T are capillary dominated, with permeability-contrast boundaries distorting the capillary flow. Phase 1B was designed to assess the influence of fractures on F&T and colloid movement. The injector in Phase 1B was located at a fracture, while the collector, 30 cm below, was placed at what was assumed to be the same fracture. Numerical simulations of nonreactive (Br) and reactive (Li) tracers show the experimental data are best explained by a combination of molecular diffusion and advective flux. For Phase 2, a numerical model with homogeneous unit descriptions was able to qualitatively capture the general characteristics of the system. Numerical simulations and field observations revealed a capillary-dominated flow field. Although the tracers showed heterogeneity in the test block, simulation using heterogeneous fields did not significantly improve the data fit over homogeneous field simulations. In terms of scaling, simulations of field tracer data indicate a hydraulic conductivity two orders of magnitude higher than measured in the laboratory. Simulations of Li, a weakly sorbing tracer

  1. Results of a remote multiplexer/digitizer unit accuracy and environmental study

    NASA Technical Reports Server (NTRS)

    Wilner, D. O.

    1977-01-01

    A remote multiplexer/digitizer unit (RMDU), a part of the airborne integrated flight test data system, was subjected to an accuracy study. The study was designed to show the effects of temperature, altitude, and vibration on the RMDU. The RMDU was subjected to tests at temperatures from -54 C (-65 F) to 71 C (160 F), and the resulting data are presented here, along with a complete analysis of the effects. The methods and means used for obtaining correctable data and correcting the data are also discussed.

  2. Analytical and numerical investigations on the accuracy and robustness of geometric features extracted from 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Dittrich, André; Weinmann, Martin; Hinz, Stefan

    2017-04-01

    In photogrammetry, remote sensing, computer vision and robotics, the automatic analysis of 3D point cloud data is a topic of major interest. This task often relies on geometric features, among which those derived from the eigenvalues of the 3D structure tensor (e.g. the three dimensionality features of linearity, planarity and sphericity) have proven particularly descriptive and are therefore commonly used for classification tasks. Although these geometric features are meanwhile considered standard, very little attention has been paid to their accuracy and robustness. In this paper, we hence focus on the influence of discretization and noise on the most commonly used geometric features. More specifically, we investigate the accuracy and robustness of the eigenvalues of the 3D structure tensor and of the features derived from them. We provide both analytical and numerical considerations which clearly reveal that certain features are more susceptible to discretization and noise, whereas others are more robust.

  3. Improving the trust in results of numerical simulations and scientific data analytics

    SciTech Connect

    Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan

    2015-04-30

    This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results’ integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general

  4. Creating a Standard Set of Metrics to Assess Accuracy of Solar Forecasts: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Banunarayanan, V.; Brockway, A.; Marquis, M.; Haupt, S. E.; Brown, B.; Fowler, T.; Jensen, T.; Hamann, H.; Lu, S.; Hodge, B.; Zhang, J.; Florita, A.

    2013-12-01

    The U.S. Department of Energy (DOE) SunShot Initiative, launched in 2011, seeks to reduce the cost of solar energy systems by 75% from 2010 to 2020. In support of the SunShot Initiative, the DOE Office of Energy Efficiency and Renewable Energy (EERE) is partnering with the National Oceanic and Atmospheric Administration (NOAA) and solar energy stakeholders to improve solar forecasting. Through a funding opportunity announcement issued in April 2012, DOE is funding two teams, led by the National Center for Atmospheric Research (NCAR) and by IBM, to perform three key activities in order to improve solar forecasts. The teams will: (1) with DOE and NOAA's leadership and significant stakeholder input, develop a standardized set of metrics to evaluate forecast accuracy, and determine the baseline and target values for these metrics; (2) conduct research that yields a transformational improvement in weather models and methods for forecasting solar irradiance and power; and (3) incorporate solar forecasts into the system operations of the electric power grid, and evaluate the impact of forecast accuracy on the economics and reliability of operations using the defined, standard metrics. This paper will present preliminary results on the first activity: the development of a standardized set of metrics, baselines, and target values. The results will include a proposed framework for metrics development, key categories of metrics, descriptions of each of the proposed metrics for measuring forecast accuracy, feedback gathered from a range of stakeholders on the metrics, and processes to determine baselines and target values for each metric. The paper will also analyze the temporal and spatial resolutions under which these metrics would apply, and conclude with a summary of the work in progress on solar forecasting activities funded by DOE.
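
For illustration, a few deterministic accuracy metrics of the kind commonly considered for such a standard set (mean bias error, mean absolute error, root-mean-square error) can be sketched as follows. The function and the irradiance values are toy assumptions, not the DOE/NOAA metrics or data.

```python
import numpy as np

def forecast_metrics(forecast, observed):
    """Common deterministic accuracy metrics for a paired forecast/observation series."""
    err = forecast - observed
    return {
        "MBE": err.mean(),                   # mean bias error (systematic over/under)
        "MAE": np.abs(err).mean(),           # mean absolute error
        "RMSE": np.sqrt((err ** 2).mean()),  # root-mean-square error (penalizes outliers)
    }

# Toy hourly global horizontal irradiance values in W/m^2:
obs = np.array([0.0, 150.0, 420.0, 610.0, 480.0, 200.0])
fc = np.array([0.0, 180.0, 400.0, 650.0, 450.0, 210.0])
m = forecast_metrics(fc, obs)
print(m)
```

Baselines and targets would then be defined per metric, and typically per temporal and spatial resolution, which is why the choice of aggregation matters as much as the metric formulas themselves.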

  5. Numerical Results of Earth's Core Accumulation 3-D Modelling

    NASA Astrophysics Data System (ADS)

    Khachay, Yurie; Anfilogov, Vsevolod

    2013-04-01

    For a long time the most convenient model was the mega-impact model, in which the early formation of the Earth's core and mantle was the consequence of a collision between the formed protoplanet and a body of Mercury's mass. But all dynamical models of the Earth's accumulation, and the estimates from the Pb-Pb system, lead to the conclusion that the duration of the planet's accumulation was about 1 billion years, whereas isotopic results from the W-Hf system testify to a very early (5-10 million years) separation of the geochemical reservoirs of the core and mantle. In [1,3] it is shown that the energy released by the decay of short-lived radioactive elements, above all Al, is sufficient to heat even small bodies with dimensions of about 50-100 km up to the iron melting temperature, so that a principally new differentiation mechanism can be realized. The molten inner parts of the preplanetary bodies, mainly of iron composition, can merge, while the cold silicate fragments return to the supply zone. Only after the gravitational radius has increased can the growing region of the future core also retain the silicate envelope fragments. All existing dynamical accumulation models are constructed using a spherically symmetric model; hence, for understanding the further evolution of the planet it is important to trace the origin and evolution of heterogeneities that arise during the accumulation stage. In this paper we model the distributions of temperature, pressure, and matter flow velocity in a block of a 3D spherical body with growing radius. The boundary problem is solved by the finite-difference method for a system of equations describing the accumulation process: the Safronov equation, the momentum-balance equation, the Navier-Stokes equation, and equations for the excess (above-lithostatic) pressure and heat conduction, in velocity-pressure variables using the Boussinesq approximation.
The numerical algorithm of the problem solution in

  6. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When satellite differencing is used, the hardware delay bias is canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.
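
The cancellation of a common receiver hardware bias by between-satellite differencing can be sketched in a few lines; the delay values and bias below are toy numbers, not the paper's data.

```python
import numpy as np

# Toy slant ionosphere delays (m) for 4 satellites seen by one receiver.
true_delay = np.array([2.1, 3.4, 1.8, 2.9])
receiver_bias = 0.65                     # common receiver hardware delay bias (m)
estimated = true_delay + receiver_bias   # each estimate absorbs the same bias

# Between-satellite single differences (relative to satellite 0)
# subtract out the common receiver bias exactly:
sd_est = estimated[1:] - estimated[0]
sd_true = true_delay[1:] - true_delay[0]
print(sd_est - sd_true)   # zeros: the bias cancels in the differences
```

This is why the interpolated satellite-differenced delays agree better than the undifferenced estimates, which each carry the receiver's hardware delay bias.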

  7. Numerical calculations of high-altitude differential charging: Preliminary results

    NASA Technical Reports Server (NTRS)

    Laframboise, J. G.; Godard, R.; Prokopenko, S. M. L.

    1979-01-01

    A two-dimensional simulation program was constructed in order to obtain theoretical predictions of floating potential distributions on geostationary spacecraft. The geometry was infinite-cylindrical with angle dependence. Effects of finite spacecraft length on sheath potential profiles can be included in an approximate way. The program can treat either steady-state conditions or slowly time-varying situations involving external time scales much larger than particle transit times. Approximate, locally dependent expressions were used to provide space-charge density profiles, but numerical orbit-following was used to calculate surface currents. Ambient velocity distributions were assumed to be isotropic, beam-like, or some superposition of these.

  8. Development of a numerical simulator of human swallowing using a particle method (Part 2. Evaluation of the accuracy of a swallowing simulation using the 3D MPS method).

    PubMed

    Kamiya, Tetsu; Toyama, Yoshio; Michiwaki, Yukihiro; Kikuchi, Takahiro

    2013-01-01

    The aim of this study was to develop and evaluate the accuracy of a three-dimensional (3D) numerical simulator of the swallowing action using the 3D moving particle simulation (MPS) method, which can simulate splashes and rapid changes in the free surfaces of food materials. The 3D numerical simulator of the swallowing action using the MPS method was developed based on accurate organ models, which include forced transformation over elapsed time. The validity of the simulation results was evaluated qualitatively based on comparisons with videofluorography (VF) images. To evaluate the validity of the simulation results quantitatively, the normalized brightness around the vallecula was used as the evaluation parameter. The positions and configurations of the food bolus during each time step were compared in the simulated and VF images. The simulation results corresponded to the VF images at each time step in the visual evaluations, which suggested that the simulation was qualitatively correct. The normalized brightness of the simulated and VF images corresponded exactly at all time steps. This showed that the simulation results, which contained information on changes in the organs and the food bolus, were numerically correct. Based on these results, the accuracy of this simulator is high and it can be used to study the mechanism of disorders that cause dysphagia. This simulator also calculated the shear rate at a specific point and its timing with Newtonian and non-Newtonian fluids. We think that the information provided by this simulator could be useful for the development of food products and medicines, and in rehabilitation facilities.

  9. Numerical computation of the effective-one-body potential q using self-force results

    NASA Astrophysics Data System (ADS)

    Akcay, Sarp; van de Meent, Maarten

    2016-03-01

    The effective-one-body theory (EOB) describes the conservative dynamics of compact binary systems in terms of an effective Hamiltonian approach. The Hamiltonian for moderately eccentric motion of two nonspinning compact objects in the extreme mass-ratio limit is given in terms of three potentials: a(v), d̄(v), q(v). By generalizing the first law of mechanics for (nonspinning) black hole binaries to eccentric orbits, A. Le Tiec [Phys. Rev. D 92, 084021 (2015)] recently obtained new expressions for d̄(v) and q(v) in terms of quantities that can be readily computed using the gravitational self-force approach. Using these expressions we present a new computation of the EOB potential q(v) by combining results from two independent numerical self-force codes. We determine q(v) for inverse binary separations in the range 1/1200 ≤ v ≲ 1/6. Our computation thus provides the first-ever strong-field results for q(v). We also obtain d̄(v) in our entire domain to a fractional accuracy of ≳ 10⁻⁸. We find that our results are compatible with the known post-Newtonian expansions for d̄(v) and q(v) in the weak field, and agree with previous (less accurate) numerical results for d̄(v) in the strong field.

  10. The results of the campaign for evaluating sphygmomanometers accuracy and their physical conditions

    PubMed

    Mion; Pierin; Alavarce; Vasconcellos

    2000-01-01

    OBJECTIVE: To evaluate the calibration accuracy of sphygmomanometers and the physical conditions of the cuff-bladder, bulb, pump, and valve. METHODS: Six hundred and forty-five aneroid sphygmomanometers were evaluated, 521 used in private practice and 124 used in hospitals. Aneroid manometers were tested against a properly calibrated mercury manometer and were considered calibrated when the error was ≤ 3 mm Hg. The physical conditions of the cuff-bladders, bulbs, pumps, and valves were also evaluated. RESULTS: Of the aneroid sphygmomanometers tested, 51% of those used in private practice and 56% of those used in hospitals were found to be inaccurately calibrated. Of these, the magnitude of inaccuracy ranged from 4 to 8 mm Hg in 70% and 51% of the devices, respectively. The problems found in the cuff-bladders, bulbs, pumps, and valves of the private practice and hospital devices were bladder damage (34% vs. 21%, respectively), holes/leaks in the bulbs (22% vs. 4%, respectively), and rubber aging (15% vs. 12%, respectively). Of the devices tested, 72% revealed at least one problem interfering with blood pressure measurement accuracy. CONCLUSION: Most of the manometers evaluated, whether used in private practice or in hospitals, were found to be inaccurate and unreliable, and their use may jeopardize the diagnosis and treatment of arterial hypertension.

  12. Results of 17 Independent Geopositional Accuracy Assessments of Earth Satellite Corporation's GeoCover Landsat Thematic Mapper Imagery. Geopositional Accuracy Validation of Orthorectified Landsat TM Imagery: Northeast Asia

    NASA Technical Reports Server (NTRS)

    Smith, Charles M.

    2003-01-01

    This report provides results of an independent assessment of the geopositional accuracy of the Earth Satellite (EarthSat) Corporation's GeoCover, Orthorectified Landsat Thematic Mapper (TM) imagery over Northeast Asia. This imagery was purchased through NASA's Earth Science Enterprise (ESE) Scientific Data Purchase (SDP) program.

  13. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV.
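Tailoring post-test probabilities to the prevalence of a new population is an application of Bayes' theorem to the summary sensitivity and specificity. A hedged sketch of that computation (not the authors' cross-validation machinery):

```python
def post_test_probabilities(sensitivity, specificity, prevalence):
    """PPV and NPV for a test applied in a population with the given
    prevalence, from summary sensitivity and specificity (Bayes' theorem)."""
    tp = sensitivity * prevalence            # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Illustrative summary values: sensitivity 0.9, specificity 0.8,
# applied in a new population with 10% prevalence.
ppv, npv = post_test_probabilities(0.9, 0.8, 0.1)
```

The same summary accuracy can yield very different PPV/NPV as prevalence changes, which is why the paper argues for prevalence-tailored post-test probabilities.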

  14. Accuracy of relative positioning by interferometry with GPS Double-blind test results

    NASA Technical Reports Server (NTRS)

    Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.

    1983-01-01

    MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Counselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.

  15. Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results

    SciTech Connect

    Kujawska, Tamara; Wojcik, Janusz; Nowicki, Andrzej

    2010-03-09

    the theoretical and measurement results for all cases considered has verified the validity and accuracy of our numerical model. Quantitative analysis of the obtained results enabled to find how the ultrasound-induced temperature rises in the rat liver could be controlled by adjusting the source parameters and exposure time.

  16. Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results

    NASA Astrophysics Data System (ADS)

    Kujawska, Tamara; Wójcik, Janusz; Nowicki, Andrzej

    2010-03-01

    theoretical and measurement results for all cases considered has verified the validity and accuracy of our numerical model. Quantitative analysis of the obtained results enabled to find how the ultrasound-induced temperature rises in the rat liver could be controlled by adjusting the source parameters and exposure time.

  17. Accuracy of the water column approximation in numerically simulating propagation of teleseismic PP waves and Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian

    2016-08-01

    Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modelling 410/660 km discontinuity and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of water column approximation on amplitude and phase shift of the PP waves. We also study the effects of water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error of PP amplitude and phase shift is less than 5 per cent and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate up to 10 per cent in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1 per cent at periods greater than 30 s in most oceanic regions, but the error is up to 2 per cent for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and it needs to be improved at shorter periods.

  18. Numerical wave modelling for seismo-acoustic noise sources: wave model accuracy issues and evidence for variable seismic attenuation

    NASA Astrophysics Data System (ADS)

    Ardhuin, F.; Lavanant, T.; Obrebski, M. J.; Marié, L.; Royer, J.

    2012-12-01

    Nonlinear wave-wave interactions generate noise that numerical ocean wave models may simulate. The accuracy of the noise source predicted by the theory of Longuet-Higgins (1950) and Hasselmann (1963) depends on the realism of the directional wave distribution, which is generally not very well known. Numerical noise models developed by Kedar et al. (2008) and Ardhuin et al. (2010) also suffer from poorly known seismic wave propagation and attenuation properties. Here, several seismic and ocean pressure records are used to assess the effects of wave modelling errors on the magnitude of noise sources. Measurements within 200 m of the sea surface are dominated by acoustic-gravity modes, for which bottom effects are negligible. These data show that directional wave spectra are reproduced well enough to estimate seismo-acoustic noise sources at frequencies below 0.3 Hz, with an underestimation of the noise level by about 50%. In larger water depths, the comparison of a numerical noise model with hydrophone records from two open-ocean sites near the Hawaii and Kerguelen islands reveals that (a) deep-ocean acoustic noise at frequencies of 0.1 to 1 Hz is consistent with Rayleigh wave theory and is well predicted up to 0.4 Hz; (b) in particular, evidence of the theoretically expected vertical modes is given by the local maxima in the noise spectrum; (c) noise above 0.6 Hz is not well modelled, probably due to a poor estimate of the directional properties of high-frequency wind waves; and (d) the noise level is strongly influenced by bottom properties, in particular the presence of sediments. Further, for continental coastal seismic stations, an accurate model of noise level variability near the noise spectral peak requires an accurate modelling of coastal reflection (Ardhuin and Roland, JGR 2012). In cases where noise sources are confined to a small area (e.g., Obrebski et al., GRL 2012), the source amplitude may be factored out, allowing an estimate of seismic attenuation rates

  19. How well do people recall risk factor test results? Accuracy and bias among cholesterol screening participants.

    PubMed

    Croyle, Robert T; Loftus, Elizabeth F; Barger, Steven D; Sun, Yi-Chun; Hart, Marybeth; Gettig, JoAnn

    2006-05-01

    The authors conducted a community-based cholesterol screening study to examine accuracy of recall for self-relevant health information in long-term autobiographical memory. Adult community residents (N = 496) were recruited to participate in a laboratory-based cholesterol screening and were also provided cholesterol counseling in accordance with national guidelines. Participants were subsequently interviewed 1, 3, or 6 months later to assess their memory for their test results. Participants recalled their exact cholesterol levels inaccurately (38.0% correct) but their cardiovascular risk category comparatively well (88.7% correct). Recall errors showed a systematic bias: Individuals who received the most undesirable test results were most likely to remember their cholesterol scores and cardiovascular risk categories as lower (i.e., healthier) than those actually received. Recall bias was unrelated to age, education, knowledge, self-rated health status, and self-reported efforts to reduce cholesterol. The findings provide evidence that recall of self-relevant health information is susceptible to self-enhancement bias.

  20. Aeolian Simulations: A Comparison of Numerical and Experimental Results

    NASA Astrophysics Data System (ADS)

    Mathews, O.; Burr, D. M.; Bridges, N. T.; Lyne, J. E.; Marshall, J. R.; Greeley, R.; White, B. R.; Hills, J.; Smith, K.; Prissel, T. C.; Aliaga-Caro, J. F.

    2010-12-01

    Aeolian processes are a major geomorphic agent on solid planetary bodies with atmospheres (Earth, Mars, Venus, and Titan). This paper describes preliminary efforts to model aeolian saltation using computational fluid dynamics (CFD) and to compare the results with those obtained in wind tunnel testing conducted in the Planetary Aeolian Laboratory at NASA Ames Research Center at ambient pressure. The end goal of the project is to develop an experimentally validated CFD approach for modeling aeolian sediment transport on Titan and other planetary bodies. The MARSWIT open-circuit tunnel in this work was specifically designed for atmospheric boundary layer studies. It is a variable-speed, continuous flow tunnel with a test section 1.0 m by 1.2 m in size; the tunnel is able to operate at pressures from 10 millibar to one atmosphere. Flow trips near the tunnel inlet ensure a fully developed, turbulent boundary layer in the test section. Wind speed and axial velocity profiles can be measured with a traversing pitot tube. In this study, sieved walnut shell particles (Greeley et al. 1976) with a density of ~1.1 g/cm3 were used to correlate the low gravity conditions and low sediment density on a body of interest to that of Earth. This sediment was placed in the tunnel, and the freestream airspeed raised to 5.4 m/s. A Phantom v12 camera imaged the resulting particle motion at 1000 frames per second, which was analyzed with ImageJ open-source software (Fig. 1). Airflow in the tunnel was modeled with FLUENT, a commercial CFD program. The turbulent scheme used in FLUENT to obtain closed-form solutions to the Navier-Stokes equations was a 1st Order, k-epsilon model. These methods produced computational velocity profiles that agree with experimental data to within 5-10%. Once modeling of the flow field had been achieved, a Euler-Lagrangian scheme was employed, treating the particles as spheres and tracking each particle at its center. The particles are assumed to interact with

  1. Sediment Pathways Across Trench Slopes: Results From Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Cormier, M. H.; Seeber, L.; McHugh, C. M.; Fujiwara, T.; Kanamatsu, T.; King, J. W.

    2015-12-01

    Until the 2011 Mw9.0 Tohoku earthquake, the role of earthquakes as agents of sediment dispersal and deposition at erosional trenches was largely under-appreciated. A series of cruises carried out after the 2011 event has revealed a variety of unsuspected sediment transport mechanisms, such as tsunami-triggered sheet turbidites, suggesting that great earthquakes may in fact be important agents for dispersing sediments across trench slopes. To complement these observational data, we have modeled the pathways of sediments across the trench slope based on bathymetric grids. Our approach assumes that transport direction is controlled by slope azimuth only, and ignores obstacles smaller than 0.6-1 km; these constraints are meant to approximate the behavior of turbidites. Results indicate that (1) most pathways issued from the upper slope terminate near the top of the small frontal wedge, and thus do not reach the trench axis; (2) in turn, sediments transported to the trench axis are likely derived from the small frontal wedge or from the subducting Pacific plate. These results are consistent with the stratigraphy imaged in seismic profiles, which reveals that the slope apron does not extend as far as the frontal wedge, and that the thickness of sediments at the trench axis is similar to that of the incoming Pacific plate. We further applied this modeling technique to the Cascadia, Nankai, Middle-America, and Sumatra trenches. Where well-defined canyons carve the trench slopes, sediments from the upper slope may routinely reach the trench axis (e.g., off Costa Rica and Cascadia). In turn, slope basins that are isolated from the canyons drainage systems must mainly accumulate locally-derived sediments. Therefore, their turbiditic infill may be diagnostic of seismic activity only - and not from storm or flood activity. If correct, this would make isolated slope basins ideal targets for paleoseismological investigation.
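The pathway rule described (transport direction set by slope azimuth alone) amounts to a steepest-descent walk on the bathymetric grid. A minimal sketch under that assumption, leaving out the authors' 0.6-1 km obstacle smoothing (names illustrative):

```python
import numpy as np

def steepest_descent_path(elev, start, max_steps=10000):
    """Follow the steepest downhill direction on an elevation grid
    (8-connected), stopping at a local minimum or when no cell is lower."""
    path = [start]
    r, c = start
    for _ in range(max_steps):
        best, best_drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < elev.shape[0] and 0 <= nc < elev.shape[1]:
                    # Drop per unit horizontal distance (diagonals cost sqrt(2))
                    drop = (elev[r, c] - elev[nr, nc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best, best_drop = (nr, nc), drop
        if best is None:
            break  # local minimum: pathway terminates (e.g. a slope basin)
        r, c = best
        path.append(best)
    return path

# Example: a grid sloping down toward a low point at the lower-right corner
elev = np.array([[3.0, 2.0, 1.0],
                 [2.0, 1.0, 0.0],
                 [1.0, 0.0, -1.0]])
path = steepest_descent_path(elev, (0, 0))
```

On this toy grid the walk runs down the diagonal and stops at the corner minimum, illustrating how pathways can terminate before reaching a given target cell.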

  2. Results of the 2015 Spitzer Exoplanet Data Challenge: Repeatability and Accuracy of Exoplanet Eclipse Depths

    NASA Astrophysics Data System (ADS)

    Ingalls, James G.; Krick, Jessica E.; Carey, Sean J.; Stauffer, John R.; Grillmair, Carl J.; Lowrance, Patrick

    2016-06-01

    We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. At infrared wavelengths secondary eclipses and phase curves are powerful tools for studying a planet’s atmosphere. Extracting information about atmospheres, however, is extremely challenging due to the small differential signals, which are often at the level of 100 parts per million (ppm) or smaller, and require the removal of significant instrumental systematics. For the IRAC 3.6 and 4.5μm InSb detectors that remain active on post-cryogenic Spitzer, the interplay of residual telescope pointing fluctuations with intrapixel gain variations in the moderately undersampled camera is the largest source of time-correlated noise. Over the past decade, a suite of techniques for removing this noise from IRAC data has been developed independently by various investigators. In summer 2015, the Spitzer Science Center hosted a Data Challenge in which seven exoplanet expert teams, each using a different noise-removal method, were invited to analyze 10 eclipse measurements of the hot Jupiter XO-3 b, as well as a complementary set of 10 simulated measurements. In this contribution we review the results of the Challenge. We describe statistical tools to assess the repeatability, reliability, and validity of data reduction techniques, and to compare and (perhaps) choose between techniques.

  3. First experimental results of very high accuracy centroiding measurements for the neat astrometric mission

    NASA Astrophysics Data System (ADS)

    Crouzier, A.; Malbet, F.; Preis, O.; Henault, F.; Kern, P.; Martin, G.; Feautrier, P.; Stadler, E.; Lafrasse, S.; Delboulbé, A.; Behar, E.; Saint-Pe, M.; Dupont, J.; Potin, S.; Cara, C.; Donati, M.; Doumayrou, E.; Lagage, P. O.; Léger, A.; LeDuigou, J. M.; Shao, M.; Goullioud, R.

    2013-09-01

    NEAT is an astrometric mission proposed to ESA with the objectives of detecting Earth-like exoplanets in the habitable zone of nearby solar-type stars. NEAT requires the capability to measure stellar centroids at the precision of 5e-6 pixel. Current state-of-the-art methods for centroid estimation have reached a precision of about 2e-5 pixel at two times Nyquist sampling; this was shown at JPL by the VESTA experiment. A metrology system was used to calibrate intra- and inter-pixel quantum efficiency variations in order to correct pixelation errors. The European part of the NEAT consortium is building a testbed in vacuum in order to achieve 5e-6 pixel precision for the centroid estimation. The goal is to provide a proof of concept for the precision requirement of the NEAT spacecraft. In this paper we present the metrology and the pseudo stellar sources sub-systems, we present a performance model and an error budget of the experiment, and we report the present status of the demonstration. Finally we also present our first results: the experiment had its first light in July 2013 and a first set of data was taken in air. The analysis of this first set of data showed that we can already measure the pixel positions with an accuracy of about 1e-4 pixel.
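For orientation, a baseline centroid estimator is the intensity-weighted center of mass of the spot image; the methods discussed above refine far beyond it, but it illustrates what "measuring a centroid in pixel units" means. A self-contained sketch on a synthetic Gaussian spot (all values illustrative):

```python
import numpy as np

def centroid(image):
    """Intensity-weighted center of mass, returned as (x, y) in pixel units."""
    image = np.asarray(image, dtype=float)
    total = image.sum()
    ys, xs = np.indices(image.shape)  # row indices = y, column indices = x
    return (xs * image).sum() / total, (ys * image).sum() / total

# Synthetic Gaussian spot centered at (x0, y0) on a 32x32 pixel grid
y, x = np.indices((32, 32))
x0, y0 = 15.3, 16.7
spot = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * 2.0 ** 2))
cx, cy = centroid(spot)
```

The center of mass recovers the injected position to well under a pixel here; reaching 1e-4 pixel and beyond, as in the testbed, additionally requires the pixel-response calibration described in the abstract.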

  4. Consolidated numerical temperature/pressure modelling to assess the accuracy of optoacoustic temperature determination during retinal photocoagulation

    NASA Astrophysics Data System (ADS)

    Baade, Alexander; Schlott, Kerstin; Birngruber, Reginald; Brinkmann, Ralf

    2014-02-01

    Retinal photocoagulation is an established treatment for various retinal diseases. The temperature development during a treatment can be monitored by applying short laser pulses in addition to the treatment laser light. The laser pulses induce temperature-dependent thermoelastic pressure waves that can be detected at the cornea. When determining the temperature from the detected pressure waves, the static tissue parameters are assumed to equal their mean literature values. However, this assumption is unlikely to hold in a treatment, as the tissue parameters vary from one irradiation site to another. In order to investigate the inaccuracies that are introduced by the assumption of ideal conditions, a numerical model was devised to examine the temperature development during the treatment as well as the formation and propagation of the ultrasonic waves. Using the model, it is possible to determine the peak temperature during retinal photocoagulation from the measured signal, and to investigate the behaviour of the temperature profile and the accuracy of the temperature determination under varying conditions such as changes in the irradiation beam profile. It is shown that there is an error of 15% in determining the peak temperature when the irradiation beam profile changes from a top hat profile to a Gaussian profile. Furthermore, the model was extended in order to incorporate the photoacoustic pressure generation and wave propagation. It was shown that for an irradiation pulse duration of 75 ns there is a difference in pressure amplitude of a factor of 2 between a top hat and a Gaussian shaped irradiation profile due to the difference in energy deposition in the fundus layers.

  5. Flow and transport in highly heterogeneous formations: 3. Numerical simulations and comparison with theoretical results

    NASA Astrophysics Data System (ADS)

    Janković, I.; Fiori, A.; Dagan, G.

    2003-09-01

    In parts 1 and 2 (both 2003) a multi-indicator model of heterogeneous formations is devised in order to solve flow and transport in highly heterogeneous formations. The isotropic medium is made up from circular (2-D) or spherical (3-D) inclusions of different conductivities K, submerged in a matrix of effective conductivity. This structure is different from the multi-Gaussian one, even for equal log conductivity distribution and integral scale. A snapshot of a two-dimensional plume in a highly heterogeneous medium of lognormal conductivity distribution shows that the model leads to a complex transport picture. The present study was limited, however, to investigating the statistical moments of ergodic plumes. Two approximate semianalytical solutions, based on a self-consistent model (SC) and on a first-order perturbation in the log conductivity variance (FO), are used in parts 1 and 2 in order to compute the statistical moments of flow and transport variables for a lognormal conductivity pdf. In this paper an efficient and accurate numerical procedure, based on the analytic-element method (1989), is used in order to validate the approximate results. The solution satisfies the continuity equation exactly and, to high accuracy, the continuity of heads at inclusion boundaries. The dimensionless dependent variables depend on two parameters: the volume fraction n of inclusions in the medium and the log conductivity variance σY2. For inclusions of uniform radius, the largest n was 0.9 (2-D) and 0.7 (3-D), whereas the largest σY2 was equal to 10. The SC approximation underestimates the longitudinal Eulerian velocity variance for increasing n and increasing σY2 in 2-D and, to a lesser extent, in 3-D, as compared to numerical results. The FO approximation overestimates these variances, and these effects are larger in the transverse direction. The longitudinal velocity pdf is highly skewed and negative velocities are present at high σY2, especially in 2-D. The main

  6. Accuracy Rates of Sex Estimation by Forensic Anthropologists through Comparison with DNA Typing Results in Forensic Casework.

    PubMed

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2016-09-01

    A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases.

  7. Speed and Accuracy of Absolute Pitch Judgments: Some Latter-Day Results.

    ERIC Educational Resources Information Center

    Carroll, John B.

    Nine subjects, 5 of whom claimed absolute pitch (AP) ability were instructed to rapidly strike notes on the piano to match randomized tape-recorded piano notes. Stimulus set sizes were 64, 16, or 4 consecutive semitones, or 7 diatonic notes of a designated octave. A control task involved motor movements to notes announced in advance. Accuracy,…

  8. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    SciTech Connect

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R.; National Inst. of Standards and Technology, Gaithersburg, MD )

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs which are fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The error mapping procedure was originally very complicated and did not make any assumptions about the rigidness of the machine as it moved; each of the possible error motions was measured at each point of the error map independently. A simpler mapping procedure was developed during the early 1980s which assumed rigid body motion of the machine. This method has been used to calibrate lower accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid body model has not yet been used on highly repeatable CMMs such as the M48. In this report we present early mapping data for the two M48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.

  9. A high-order numerical algorithm for DNS of low-Mach-number reactive flows with detailed chemistry and quasi-spectral accuracy

    NASA Astrophysics Data System (ADS)

    Motheau, E.; Abraham, J.

    2016-05-01

    A novel and efficient algorithm is presented in this paper to deal with DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and a quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition method of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine precision accuracy. It is demonstrated that a second-order accuracy is reached in time, while the spatial accuracy ranges from fourth-order to sixth-order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to deal with DNS of reacting flows to understand complex turbulent and chemical phenomena in flames.
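The constant-coefficient spectral Poisson step can be illustrated with a periodic FFT solve: in Fourier space the Laplacian becomes multiplication by -k², so the solve is a pointwise division. A generic sketch (not the HOLOMAC implementation or its pencil decomposition):

```python
import numpy as np

def solve_poisson_periodic(f, L=2 * np.pi):
    """Solve  laplacian(u) = f  on a periodic square [0, L)^2 with FFTs.
    The k = 0 mode is set to zero (u is defined only up to a constant)."""
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    f_hat = np.fft.fft2(f)
    u_hat = np.zeros_like(f_hat)
    nz = k2 != 0
    u_hat[nz] = -f_hat[nz] / k2[nz]              # -k^2 u_hat = f_hat
    return np.real(np.fft.ifft2(u_hat))

# Verify with u = sin(x) cos(y), for which laplacian(u) = -2 sin(x) cos(y)
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.cos(Y)
u = solve_poisson_periodic(-2 * u_exact)
```

For a smooth periodic right-hand side the result is accurate to machine precision, which is the "spectral" part of a quasi-spectral scheme; the variable-coefficient equation in the paper requires the additional pressure-correction iteration the abstract mentions.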

  10. Some Numerical Results of Multipoint Boundary Value Problems Arising in Environmental Protection

    NASA Astrophysics Data System (ADS)

    Pop, Daniel N.

    2016-12-01

    In this paper, we investigate two problems arising in pollutant transport in rivers, and we give some numerical results approximating their solutions. We determined the approximate solutions using two numerical methods: 1. B-splines combined with Runge-Kutta methods; 2. the BVP4C solver of MATLAB. We then compare the run-times.
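A finite-difference stand-in shows the kind of two-point boundary value problem involved: steady advection-dispersion of a pollutant, D c'' - v c' = 0 with c(0) = 1, c(L) = 0. This is an illustrative sketch, not the paper's B-spline/Runge-Kutta or BVP4C codes, and the equation and values are assumptions:

```python
import numpy as np

# D c'' - v c' = 0 on [0, L], c(0) = 1, c(L) = 0, on a uniform grid.
D, v, L, n = 1.0, 1.0, 1.0, 200
h = L / n
A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
A[0, 0] = 1.0; b[0] = 1.0        # Dirichlet condition c(0) = 1
A[n, n] = 1.0; b[n] = 0.0        # Dirichlet condition c(L) = 0
for i in range(1, n):
    # Central differences: D (c[i-1] - 2 c[i] + c[i+1]) / h^2
    #                      - v (c[i+1] - c[i-1]) / (2 h) = 0
    A[i, i - 1] = D / h ** 2 + v / (2 * h)
    A[i, i] = -2 * D / h ** 2
    A[i, i + 1] = D / h ** 2 - v / (2 * h)
c = np.linalg.solve(A, b)
```

The exact solution is c(x) = (e^(vx/D) - e^(vL/D)) / (1 - e^(vL/D)), so the scheme's second-order accuracy can be checked directly at any grid point.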

  11. Effects of heterogeneity in aquifer permeability and biomass on biodegradation rate calculations - Results from numerical simulations

    USGS Publications Warehouse

    Scholl, M.A.

    2000-01-01

    Numerical simulations were used to examine the effects of heterogeneity in hydraulic conductivity (K) and intrinsic biodegradation rate on the accuracy of contaminant plume-scale biodegradation rates obtained from field data. The simulations were based on steady-state biodegradation of a BTEX contaminant plume under sulfate-reducing conditions, with the electron acceptor in excess. Biomass was either uniform or correlated with K to model spatially variable intrinsic biodegradation rates. A hydraulic conductivity data set from an alluvial aquifer was used to generate three sets of 10 realizations with different degrees of heterogeneity, and contaminant transport with biodegradation was simulated with BIOMOC. Biodegradation rates were calculated from the steady-state contaminant plumes using decreases in concentration with distance downgradient and a single flow velocity estimate, as is commonly done in site characterization to support the interpretation of natural attenuation. The observed rates were found to underestimate the actual rate specified in the heterogeneous model in all cases. The discrepancy between the observed rate and the 'true' rate depended on the ground water flow velocity estimate, and increased with increasing heterogeneity in the aquifer. For a lognormal K distribution with variance of 0.46, the estimate was no more than a factor of 1.4 slower than the true rate. For an aquifer with 20% silt/clay lenses, the rate estimate was as much as nine times slower than the true rate. Homogeneous-permeability, uniform-degradation-rate simulations were used to generate predictions of remediation time with the rates estimated from heterogeneous models. The homogeneous models generally overestimated the extent of remediation or underestimated remediation time, due to delayed degradation of contaminants in the low-K areas. Results suggest that aquifer characterization for natural attenuation at contaminated sites should include assessment of the presence
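The field estimate the simulations evaluate is the standard first-order fit to a steady-state centerline decline, C(x) = C0 exp(-k x / v), rearranged for k using a single velocity value. A sketch of that calculation (names and numbers illustrative):

```python
import math

def first_order_rate(c_upgradient, c_downgradient, distance_m, velocity_m_per_day):
    """Plume-scale first-order biodegradation rate (1/day) from the
    concentration decline over a downgradient distance, assuming steady
    state and a single uniform flow velocity -- the common field practice."""
    return velocity_m_per_day * math.log(c_upgradient / c_downgradient) / distance_m

# A tenfold decline over 100 m at an assumed 0.5 m/day flow velocity
k = first_order_rate(10.0, 1.0, 100.0, 0.5)
```

Because k scales linearly with the assumed velocity, any error in the single velocity estimate propagates directly into the rate, which is one reason the heterogeneous simulations find the field estimate biased low.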

  12. Numerical results using the conforming VEM for the convection-diffusion-reaction equation with variable coefficients.

    SciTech Connect

    Manzini, Gianmarco; Cangiani, Andrea; Sutton, Oliver

    2014-10-02

    This document presents the results of a set of preliminary numerical experiments using several possible conforming virtual element approximations of the convection-reaction-diffusion equation with variable coefficients.

  13. Thermodiffusion in concentrated ferrofluids: Experimental and numerical results on magnetic thermodiffusion

    SciTech Connect

    Sprenger, Lisa Lange, Adrian; Odenbach, Stefan

    2014-02-15

    Ferrofluids consist of magnetic nanoparticles dispersed in a carrier liquid. Their strong thermodiffusive behaviour, characterised by the Soret coefficient, coupled with the dependency of the fluid's parameters on magnetic fields is dealt with in this work. It is known from former experimental investigations on the one hand that the Soret coefficient itself is magnetic field dependent and on the other hand that the accuracy of the coefficient's experimental determination highly depends on the volume concentration of the fluid. The thermally driven separation of particles and carrier liquid is carried out with a concentrated ferrofluid (φ = 0.087) in a horizontal thermodiffusion cell and is compared to equally detected former measurement data. The temperature gradient (1 K/mm) is applied perpendicular to the separation layer. The magnetic field is either applied parallel or perpendicular to the temperature difference. For three different magnetic field strengths (40 kA/m, 100 kA/m, 320 kA/m) the diffusive separation is detected. It reveals a sign change of the Soret coefficient with rising field strength for both field directions which stands for a change in the direction of motion of the particles. This behaviour contradicts former experimental results with a dilute magnetic fluid, in which a change in the coefficient's sign could only be detected for the parallel setup. An anisotropic behaviour in the current data is measured referring to the intensity of the separation being more intense in the perpendicular position of the magnetic field: S{sub T‖} = −0.152 K{sup −1} and S{sub T⊥} = −0.257 K{sup −1} at H = 320 kA/m. The ferrofluiddynamics-theory (FFD-theory) describes the thermodiffusive processes thermodynamically and a numerical simulation of the fluid's separation depending on the two transport parameters ξ{sub ‖} and ξ{sub ⊥} used within the FFD-theory can be implemented. In the case of a parallel aligned magnetic field, the parameter can

  14. Comparison of results of experimental research with numerical calculations of a model one-sided seal

    NASA Astrophysics Data System (ADS)

    Joachimiak, Damian; Krzyślak, Piotr

    2015-06-01

    The paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculation, the size of the mesh, defined by the parameter y+, has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and of the distribution of static pressure in the seal chambers obtained during the measurements and calculated numerically in a model seal segment at different levels of wear.

  15. Exact and numerical results for a dimerized coupled spin- 1/2 chain

    PubMed

    Martins; Nienhuis

    2000-12-04

    We establish exact results for coupled spin-1/2 chains for special values of the four-spin interaction V and dimerization parameter delta. The first exact result is at delta = 1/2 and V = -2. Because we find a very small but finite gap in this dimerized chain, this can serve as a very strong test case for numerical and approximate analytical techniques. The second result is for the homogeneous chain with V = -4 and gives evidence that the system has a spontaneously dimerized ground state. Numerical diagonalization and bosonization techniques indicate that the interplay between dimerization and interaction could result in gapless phases in the regime 0

  16. Role of numerical scheme choice on the results of mathematical modeling of combustion and detonation

    NASA Astrophysics Data System (ADS)

    Yakovenko, I. S.; Kiverin, A. D.; Pinevich, S. G.; Ivanov, M. F.

    2016-11-01

    The present study discusses the capabilities of the dissipation-free CABARET numerical method as applied to the modeling of unsteady reactive gasdynamic flows. In the framework of the present research, the method was adapted to reactive flows governed by a real-gas equation of state and applied to several typical problems of unsteady gas dynamics and combustion modeling, such as ignition and detonation initiation by localized energy sources. The solutions were thoroughly analyzed and compared with those derived using the modified Euler-Lagrange method of "coarse" particles. The results obtained allowed us to distinguish the range of phenomena where artificial effects of the numerical approach may counterfeit their physical nature, and to develop guidelines for selecting a numerical approach appropriate for the modeling of unsteady reactive gasdynamic flows.

  17. Thematic accuracy of the 1992 National Land-Cover Data for the eastern United States: Statistical methodology and regional results

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Smith, J.H.; Yang, L.

    2003-01-01

    The accuracy of the 1992 National Land-Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or alternate reference label determined for a sample pixel and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Results are reported for each of the four regions that make up the eastern United States, for both Anderson Level I and Level II classifications. Overall accuracies for Levels I and II are 80% and 46% for New England, 82% and 62% for New York/New Jersey (NY/NJ), 70% and 43% for the Mid-Atlantic, and 83% and 66% for the Southeast.
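The agreement rule described above (a match between the primary or alternate reference label and a mode class of the mapped 3×3 block) can be sketched as follows; the array values and labels are hypothetical toy classes, not actual NLCD data:

```python
import numpy as np
from collections import Counter

def block_modes(mapped, r, c):
    """Classes attaining the modal count in the 3x3 block centered on (r, c)."""
    block = mapped[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].ravel()
    counts = Counter(block)
    top = max(counts.values())
    return {cls for cls, n in counts.items() if n == top}

def agrees(mapped, r, c, primary, alternate):
    """Agreement: primary or alternate reference label matches a mode class."""
    modes = block_modes(mapped, r, c)
    return primary in modes or alternate in modes

# Toy 5x5 map with two classes (hypothetical data)
mapped = np.array([[1, 1, 2, 2, 2],
                   [1, 1, 2, 2, 2],
                   [1, 1, 1, 2, 2],
                   [1, 1, 1, 1, 2],
                   [1, 1, 1, 1, 1]])
print(agrees(mapped, 2, 2, primary=1, alternate=2))
```

Overall accuracy is then the proportion of sample pixels for which this agreement holds, estimated with the design weights of the stratified two-stage sample.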

  18. Post-glacial landforms dating by lichenometry in Iceland - the accuracy of relative results and conversely

    NASA Astrophysics Data System (ADS)

    Decaulne, Armelle

    2014-05-01

    Lichenometry studies have been carried out all over Iceland since 1970, using various techniques to address a range of geomorphological issues, from moraine dating and glacial advances, outwash timing, proglacial river incision, soil erosion, rock-glacier development, and climate variations, to debris-flow occurrence and extreme snow-avalanche frequency. Most users have sought to date proglacial landforms in two main areas: around the southern ice caps of Vatnajökull and Myrdalsjökull, and in Tröllaskagi in northern Iceland. Based on the results of over thirty-five published studies, lichenometry is deemed a successful dating tool in Iceland, and it seems to approach an absolute dating technique, at least over the last hundred years and under well-constrained environmental conditions at the local scale. With increasing awareness of the methodological limitations of the technique, together with more sophisticated data treatments, predicted lichenometric 'ages' are supposedly gaining in robustness and precision. However, comparisons between regions, and even between studies in the same area, are hindered by the use of different measurement techniques and data processing. These issues are exacerbated in Iceland by rapid environmental changes across short distances and, more generally, by the common problem of lichen species mis-identification in the field, not to mention the age discrepancies relative to other dating tools, such as tephrochronology. Some authors claim lichenometry allows a precise reconstruction of landforms and geomorphic processes in Iceland, proposing yearly dating; others include error margins in their reconstructions; while some limit its use to identifying generations of landforms, declining to push the gathered data beyond their nature for further interpretation. Finally, can lichenometry be a relatively accurate dating technique, or rather an accurate relative dating tool, in Iceland?

  19. Comparison of experimental results with numerical simulations for pulsed thermographic NDE

    NASA Astrophysics Data System (ADS)

    Sripragash, Letchuman; Sundaresan, Mannur

    2017-02-01

    This paper examines pulsed thermographic nondestructive evaluation of flat-bottom holes in isotropic materials. Different combinations of defect diameters and depths are considered. The Thermographic Signal Reconstruction (TSR) method is used to analyze the results. In addition, a new normalization procedure is used to remove the dependence of the thermographic results on the material properties and instrumentation settings used during the experiments; the normalized results then depend only on the geometry of the specimen and the defects. These thermographic NDE procedures were also simulated using the finite element technique for a variety of defect configurations, and the data obtained from the numerical simulations were processed with the same normalization scheme. Excellent agreement was seen between the results obtained from experiments and numerical simulations. The scheme is therefore extended to a correlation technique by which numerical simulations are used to quantify the defect parameters.

  20. Accuracy of 4D Flow Measurement of Cerebrospinal Fluid Dynamics in the Cervical Spine: An In Vitro Verification Against Numerical Simulation.

    PubMed

    Heidari Pahlavian, Soroush; Bunck, Alexander C; Thyagaraj, Suraj; Giese, Daniel; Loth, Francis; Hedderich, Dennis M; Kröger, Jan Robert; Martin, Bryn A

    2016-11-01

    Abnormal alterations in cerebrospinal fluid (CSF) flow are thought to play an important role in the pathophysiology of various craniospinal disorders such as hydrocephalus and Chiari malformation. Three-directional phase-contrast MRI (4D Flow) has been proposed as one method for quantification of CSF dynamics in healthy and disease states, but prior to further implementation of this technique, its accuracy in measuring CSF velocity magnitude and distribution must be evaluated. In this study, an MR-compatible experimental platform was developed based on an anatomically detailed 3D-printed model of the cervical subarachnoid space and subject-specific flow boundary conditions. The accuracy of 4D Flow measurements was assessed by comparing CSF velocities obtained within the in vitro model with the numerically predicted velocities calculated from a spatially averaged computational fluid dynamics (CFD) model based on the same geometry and flow boundary conditions. Good agreement was observed between CFD and 4D Flow in terms of the spatial distribution and peak magnitude of through-plane velocities, with average differences of 7.5% and 10.6% for peak systolic and diastolic velocities, respectively. Regression analysis showed lower accuracy of the 4D Flow measurements at the timeframes corresponding to low CSF flow rate, and poor correlation between CFD and 4D Flow in-plane velocities.

  1. Numerical modeling of on-orbit propellant motion resulting from an impulsive acceleration

    NASA Technical Reports Server (NTRS)

    Aydelott, John C.; Mjolsness, Raymond C.; Torrey, Martin D.; Hochstein, John I.

    1987-01-01

    In-space docking and separation maneuvers of spacecraft that have large fluid mass fractions may cause undesirable spacecraft motion in response to the impulsive-acceleration-induced fluid motion. An example of this potential low gravity fluid management problem arose during the development of the shuttle/Centaur vehicle. Experimentally verified numerical modeling techniques were developed to establish the propellant dynamics, and subsequent vehicle motion, associated with the separation of the Centaur vehicle from the shuttle orbiter cargo bay. Although the shuttle/Centaur development activity was suspended, the numerical modeling techniques are available to predict on-orbit liquid motion resulting from impulsive accelerations for other missions and spacecraft.

  2. Numerical modeling of on-orbit propellant motion resulting from an impulsive acceleration

    NASA Technical Reports Server (NTRS)

    Aydelott, John C.; Mjolsness, Raymond C.; Torrey, Martin D.; Hochstein, John I.

    1986-01-01

    In-space docking and separation maneuvers of spacecraft that have large fluid mass fractions may cause undesirable spacecraft motion in response to the impulsive-acceleration-induced fluid motion. An example of this potential low gravity fluid management problem arose during the development of the shuttle/Centaur vehicle. Experimentally verified numerical modeling techniques were developed to establish the propellant dynamics, and subsequent vehicle motion, associated with the separation of the Centaur vehicle from the shuttle orbiter cargo bay. Although the shuttle/Centaur development activity was suspended, the numerical modeling techniques are available to predict on-orbit liquid motion resulting from impulsive accelerations for other missions and spacecraft.

  3. Exploring vortex dynamics in the presence of dissipation: Analytical and numerical results

    NASA Astrophysics Data System (ADS)

    Yan, D.; Carretero-González, R.; Frantzeskakis, D. J.; Kevrekidis, P. G.; Proukakis, N. P.; Spirn, D.

    2014-04-01

    In this paper, we examine the dynamical properties of vortices in atomic Bose-Einstein condensates in the presence of phenomenological dissipation, used as a basic model for the effect of finite temperatures. In the context of this so-called dissipative Gross-Pitaevskii model, we derive analytical results for the motion of single vortices and, importantly, for vortex dipoles, which have become very relevant experimentally. Our analytical results are shown to compare favorably to the full numerical solution of the dissipative Gross-Pitaevskii equation where appropriate. We also present results on the stability of vortices and vortex dipoles, revealing good agreement between numerical and analytical results for the internal excitation eigenfrequencies, which extends even beyond the regime of validity of this equation for cold atoms.

  4. Accuracy of Korean-Mini-Mental Status Examination Based on Seoul Neuro-Psychological Screening Battery II Results

    PubMed Central

    Kang, In-Woong; Beom, In-Gyu; Cho, Ji-Yeon

    2016-01-01

    Background The Korean-Mini-Mental Status Examination (K-MMSE) is a dementia-screening test that can be easily applied in both community and clinical settings. However, in 20% to 30% of cases, the K-MMSE produces a false-negative response. This suggests that it is necessary to evaluate the accuracy of the K-MMSE as a screening test for dementia, which can be achieved through comparison of K-MMSE and Seoul Neuropsychological Screening Battery (SNSB)-II results. Methods The study included 713 subjects (534 male, 179 female; mean age, 69.3±6.9 years). All subjects were assessed using the K-MMSE and SNSB-II tests, the results of which were classified as normal or abnormal using the 15th percentile as the cutoff. Results The sensitivity of the K-MMSE was 48.7%, with a specificity of 89.9%. The rates of false-positive and false-negative results were 10.1% and 51.2%, respectively. In addition, the positive predictive value of the K-MMSE was 87.1%, while the negative predictive value was 55.6%. The false-negative group showed cognitive impairments in the domains of memory and executive function. The false-positive group demonstrated reduced performance on the memory recall, time orientation, attention, and calculation items of the K-MMSE. Conclusion The results obtained in the study suggest that cognitive function might still be impaired even if an individual obtains a normal score on the K-MMSE. If the K-MMSE were combined with tests of memory or executive function, the accuracy of dementia diagnosis could be greatly improved. PMID:27274389
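The reported figures follow from the standard 2×2 screening-test definitions. A minimal sketch; the cell counts below are hypothetical, chosen only to illustrate the calculation and roughly match the reported percentages, not the study's raw table:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among the diseased
        "specificity": tn / (tn + fp),   # true negatives among the healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration
m = screening_metrics(tp=80, fp=12, fn=84, tn=107)
print({k: round(v, 3) for k, v in m.items()})
```

Note that the false-negative rate quoted in the abstract (51.2%) is simply 1 − sensitivity, i.e. the share of truly impaired subjects whom the test clears.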

  5. Parametric Evaluation of Absorption Losses and Comparison of Numerical Results to Boeing 707 Aircraft Experimental HIRF Results

    NASA Astrophysics Data System (ADS)

    Kitaygorsky, J.; Amburgey, C.; Elliott, J. R.; Fisher, R.; Perala, R. A.

    A broadband (100 MHz-1.2 GHz) plane-wave electric field source was used to evaluate electric field penetration inside a simplified Boeing 707 aircraft model with a finite-difference time-domain (FDTD) method using EMA3D. The role of absorption losses inside the simplified aircraft was investigated. It was found that, in this frequency range, none of the cavities inside the Boeing 707 model are truly reverberant when frequency stirring is applied, and a purely statistical electromagnetics approach cannot be used to predict or analyze the field penetration or shielding effectiveness (SE). Our goal was thus to understand the nature of losses in such a quasi-statistical environment by adding various numbers of absorbing objects inside the simplified aircraft and evaluating the SE, the decay-time constant τ, and the quality factor Q. We then compare our numerical results with experimental results obtained by D. Mark Johnson et al. on a decommissioned Boeing 707 aircraft.

  6. Impulse propagation over a complex site: a comparison of experimental results and numerical predictions.

    PubMed

    Dragna, Didier; Blanc-Benon, Philippe; Poisson, Franck

    2014-03-01

    Results from outdoor acoustic measurements performed at a railway site near Reims, France, in May 2010 are compared to those obtained from a finite-difference time-domain solver of the linearized Euler equations. During the experiments, the ground profile and the different ground surface impedances were determined. Meteorological measurements were also performed to deduce mean vertical profiles of wind and temperature. An alarm pistol was used as a source of impulse signals, and three microphones were located along a propagation path. The various measured parameters are introduced as input data into the numerical solver. In the frequency domain, the numerical results are in good agreement with the measurements up to a frequency of 2 kHz. In the time domain, apart from a time shift, the predicted waveforms match the measured waveforms closely.

  7. Landau-Zener transitions in a dissipative environment: numerically exact results.

    PubMed

    Nalbach, P; Thorwart, M

    2009-11-27

    We study Landau-Zener transitions in a dissipative environment by means of the numerically exact quasiadiabatic propagator path integral, which allows us to cover the full range of the involved parameters. We discover a nonmonotonic dependence of the transition probability on the sweep velocity, which is explained in terms of a simple phenomenological model. This feature, not captured by perturbative approaches, results from a nontrivial competition between relaxation and the external sweep.
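For context, the dissipation-free (coherent) limit has a closed-form Landau-Zener probability that is strictly monotonic in the sweep velocity; the nonmonotonic dependence reported above is precisely what this formula cannot capture. A sketch, using one common convention for the gap Δ (convention-dependent prefactors vary between references):

```python
import numpy as np

def lz_survival_probability(delta, v, hbar=1.0):
    """Coherent Landau-Zener probability of remaining in the diabatic state,
    P = exp(-pi * delta**2 / (2 * hbar * v)), for gap delta and sweep
    velocity v. Dissipation-free limit only; one common convention."""
    return np.exp(-np.pi * delta**2 / (2.0 * hbar * v))

# Strictly monotonic: faster sweeps always raise the diabatic survival probability
for v in (0.1, 1.0, 10.0):
    print(v, lz_survival_probability(1.0, v))
```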

  8. Macroscopic laws for immiscible two-phase flow in porous media: Results From numerical experiments

    NASA Astrophysics Data System (ADS)

    Rothman, Daniel H.

    1990-06-01

    Flow through porous media may be described at either of two length scales. At the scale of a single pore, fluids flow according to the Navier-Stokes equations and the appropriate boundary conditions. At a larger, volume-averaged scale, the flow is usually thought to obey a linear Darcy law relating flow rates to pressure gradients and body forces via phenomenological permeability coefficients. Aside from the value of the permeability coefficient, the slow flow of a single fluid in a porous medium is well-understood within this framework. The situation is considerably different, however, for the simultaneous flow of two or more fluids: not only are the phenomenological coefficients poorly understood, but the form of the macroscopic laws themselves is subject to question. I describe a numerical study of immiscible two-phase flow in an idealized two-dimensional porous medium constructed at the pore scale. Results show that the macroscopic flow is a nonlinear function of the applied forces for sufficiently low levels of forcing, but linear thereafter. The crossover, which is not predicted by conventional models, occurs when viscous forces begin to dominate capillary forces; i.e., at a sufficiently high capillary number. In the linear regime, the flow may be described by the linear phenomenological law ui = ΣjLijfj, where the flow rate ui of the ith fluid is related to the force fj applied to the jth fluid by the matrix of phenomenological coefficients Lij which depends on the relative concentrations of the two fluids. The diagonal terms are proportional to quantities commonly referred to as "relative permeabilities." The cross terms represent viscous coupling between the two fluids; they are conventionally assumed to be negligible and require special experimental procedures to observe in a laboratory. In contrast, in this numerical study the cross terms are straightforward to measure and are found to be of significant size. 
The cross terms are additionally observed to
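The linear phenomenological law quoted above is a small matrix-vector relation. A minimal sketch with hypothetical coefficients; the off-diagonal entries model the viscous cross coupling the study finds to be significant:

```python
import numpy as np

# Hypothetical phenomenological matrix L_ij: the diagonal entries play the
# role of relative permeabilities; the off-diagonal entries are the viscous
# cross terms (taken symmetric here, in the spirit of Onsager reciprocity).
L = np.array([[0.30, 0.08],
              [0.08, 0.20]])
f = np.array([1.0, 0.5])   # body forces applied to fluid 1 and fluid 2
u = L @ f                  # flow rates: u_i = sum_j L_ij f_j
print(u)
```

Setting the off-diagonal entries to zero recovers the conventional uncoupled relative-permeability description.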

  9. A method for data handling numerical results in parallel OpenFOAM simulations

    SciTech Connect

    Anton, Alin; Muntean, Sebastian

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh-partitioning scenarios using the OpenFOAM toolkit{sup ®}[1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating-point data. Our method is most efficient on large simulation meshes and is much better suited to compressing large-scale simulation results than the regular algorithms.

  10. Numerical results of the shape optimization problem for the insulation barrier

    NASA Astrophysics Data System (ADS)

    Salač, Petr

    2016-12-01

    The contribution deals with the numerical results for the shape optimization problem of the system of mould, glass piece, plunger, insulation barrier, and plunger cavity used in the glass forming industry, which was formulated in detail at AMEE'15. We used the software FreeFem++ to compute a numerical example for a real vase made from lead crystal glassware with a height of 267 [mm] and a mass of 1.55 [kg]. The plunger and the mould were made from steel; the insulation barrier was made from Murpec with a coefficient of thermal conductivity k = 2.5 [W/m.K], and the coefficient of heat transfer between the mould and the environment was chosen to be α = 14 [W/m{sup 2}.K]. The cooling was implemented by a flow V = 10 [l/min] of water with a temperature of 15°C at the entrance and 100°C at the exit. The results of the numerical optimization to the required target temperature of 800°C on the outward plunger surface, together with the distribution of temperatures on the interface between the plunger and the heat source before and after the optimization process, are presented.

  11. Model of stacked long Josephson junctions: Parallel algorithm and numerical results in case of weak coupling

    NASA Astrophysics Data System (ADS)

    Zemlyanaya, E. V.; Bashashin, M. V.; Rahmonov, I. R.; Shukrinov, Yu. M.; Atanasova, P. Kh.; Volokhova, A. V.

    2016-10-01

    We consider a model of a system of long Josephson junctions (LJJ) with inductive and capacitive coupling. The corresponding system of nonlinear partial differential equations is solved by means of a standard three-point finite-difference approximation in the spatial coordinate, with the Runge-Kutta method used for the solution of the resulting Cauchy problem. A parallel algorithm is developed and implemented on the basis of the MPI (Message Passing Interface) technology. The effect of the coupling between the junctions on the properties of the LJJ system is demonstrated. The numerical results are discussed from the viewpoint of the effectiveness of the parallel implementation.
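The discretization described (three-point finite differences in space, Runge-Kutta in time) is a standard method-of-lines setup. Below is a minimal single-junction sketch of the semi-discrete right-hand side for a perturbed sine-Gordon equation, under my own assumptions about damping α and bias current γ; it is not the authors' coupled-stack code:

```python
import numpy as np

def rhs(t, y, N, dx, alpha, gamma):
    """Method-of-lines RHS for a perturbed sine-Gordon junction:
        phi_tt = phi_xx - alpha*phi_t - sin(phi) + gamma,
    with three-point finite differences in x and open (Neumann) ends.
    State y = [phi, phi_t]; feed this to any Runge-Kutta integrator."""
    phi, v = y[:N], y[N:]
    lap = np.empty(N)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    lap[0] = 2.0 * (phi[1] - phi[0]) / dx**2       # Neumann: phi_x = 0
    lap[-1] = 2.0 * (phi[-2] - phi[-1]) / dx**2
    return np.concatenate([v, lap - alpha * v - np.sin(phi) + gamma])
```

The resulting Cauchy problem can then be advanced with an adaptive Runge-Kutta scheme, e.g. `scipy.integrate.solve_ivp(lambda t, y: rhs(t, y, N, dx, 0.1, 0.5), (0, T), y0)`.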

  12. Electrostatic modes in dense dusty plasmas with high fugacity: Numerical results

    NASA Astrophysics Data System (ADS)

    Rao, N. N.

    2000-08-01

    The existence of ultra low-frequency wave modes in dusty plasmas has been investigated over a wide range of dust fugacity [defined by f ≡ 4πn{sub d0}λ{sub D}{sup 2}R, where n{sub d0} is the dust number density, λ{sub D} is the plasma Debye length, and R is the grain size (radius)] and of the grain charging frequency (ω{sub 1}) by numerically solving the dispersion relation obtained from the kinetic (Vlasov) theory. A detailed comparison between the numerical and the analytical results applicable for the tenuous (low fugacity, f ≪ 1), the dilute (medium fugacity, f ∼ 1), and the dense (high fugacity, f ≫ 1) regimes has been carried out. In the long wavelength limit and for frequencies ω ≪ ω{sub 1}, the dispersion curves obtained from the numerical solutions of the real as well as the complex (kinetic) dispersion relations agree, both qualitatively and quantitatively, with the analytical expressions derived from the fluid and the kinetic theories, and are thus identified with the ultra low-frequency electrostatic dust modes, namely, the dust-acoustic wave (DAW), the dust charge-density wave (DCDW), and the dust-Coulomb wave (DCW) discussed earlier [N. N. Rao, Phys. Plasmas 6, 4414 (1999); 7, 795 (2000)]. In particular, the analytical scaling between the phase speeds of the DCWs and the DAWs predicted from theoretical considerations, namely, (ω/k){sub DCW} = (ω/k){sub DAW}/√(fδ) (where δ is the ratio of the charging frequencies), is in excellent agreement with the numerical results. A simple physical picture of the DCWs has been proposed by defining an effective pressure called "Coulomb pressure" as P{sub C} ≡ n{sub d0}q{sub d0}{sup 2}/R, where q{sub d0} is the grain surface charge. Accordingly, the DCW dispersion relation is given, in the lowest order, by (ω/k){sub DCW} = √(P{sub C}/(ρ{sub d}δ)), where ρ{sub d} ≡ n{sub d0}m{sub d} is the dust mass density. Thus, the DCWs, which are driven by the Coulomb pressure, can be considered the electrostatic analogue of the hydromagnetic (Alfvén or magnetoacoustic) waves, which are driven by the magnetic field pressure. For the frequency
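The lowest-order DCW phase speed quoted in the abstract can be transcribed directly; the sketch below is a plain transcription of those formulas, with placeholder values rather than the paper's parameters:

```python
import numpy as np

def dcw_phase_speed(n_d0, q_d0, R, m_d, delta):
    """Dust-Coulomb wave phase speed (omega/k) = sqrt(P_C / (rho_d * delta)),
    with Coulomb pressure P_C = n_d0 * q_d0**2 / R and dust mass density
    rho_d = n_d0 * m_d (direct transcription of the abstract's formulas)."""
    P_C = n_d0 * q_d0**2 / R
    rho_d = n_d0 * m_d
    return np.sqrt(P_C / (rho_d * delta))
```

Note that n{sub d0} cancels, so the lowest-order phase speed depends on the grain charge, size, mass, and the charging-frequency ratio δ, but not on the dust number density itself.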

  13. The diagnostic accuracy of a single CEA blood test in detecting colorectal cancer recurrence: Results from the FACS trial

    PubMed Central

    Nicholson, Brian D.; Primrose, John; Perera, Rafael; James, Timothy; Pugh, Sian; Mant, David

    2017-01-01

    Objective To evaluate the diagnostic accuracy of a single CEA (carcinoembryonic antigen) blood test in detecting colorectal cancer recurrence. Background Patients who have undergone curative resection for primary colorectal cancer are typically followed up with scheduled CEA testing for 5 years. Decisions to investigate further (usually by CT imaging) are based on single test results, reflecting international guidelines. Methods A secondary analysis was undertaken of data from the FACS trial (two arms included CEA testing). The composite reference standard applied included CT-CAP imaging, clinical assessment, and colonoscopy. Accuracy in detecting recurrence was evaluated in terms of sensitivity, specificity, likelihood ratios, predictive values, and time-dependent area under the ROC curve; operational performance when the test was used prospectively in clinical practice is also reported. Results Of 582 patients, 104 (17.9%) developed recurrence during the 5-year follow-up period. Applying the recommended threshold of 5 μg/L achieves at best 50.0% sensitivity (95% CI: 40.1–59.9%); in prospective use in clinical practice it would lead to 56 missed recurrences (53.8%; 95% CI: 44.2–64.4%) and 89 false alarms (56.7% of 157 patients referred for investigation). Applying a lower threshold of 2.5 μg/L would reduce the proportion of missed recurrences to 36.5% (95% CI: 26.5–46.5%) but would increase the false alarms to 84.2% (924/1097 referred). Some patients are more prone to false alarms than others: at the 5 μg/L threshold, the 89 episodes of unnecessary investigation were clustered in 29 individuals. Conclusion Our results demonstrate very low sensitivity for CEA, bringing into question whether it could ever be used as an independent triage test. It is not feasible to improve the diagnostic performance of a single test result by reducing the recommended action threshold because of the workload and false alarms generated. Current national and international guidelines merit re

  14. Network model to study physiological processes of hypobaric decompression sickness: New numerical results

    NASA Astrophysics Data System (ADS)

    Zueco, Joaquín; López-González, Luis María

    2016-04-01

    We have studied decompression processes arising from the pressure changes that take place in blood and tissues, using a numerical technique based on an electrical analogy for the parameters involved in the problem. The particular problem analyzed is the dynamic behavior of the extravascular bubbles formed in the intercellular cavities of a hypothetical tissue undergoing decompression. Numerical solutions are given for a system of equations simulating the gas exchange of bubbles after decompression, with particular attention paid to the effect of bubble size, nitrogen tension, nitrogen diffusivity in the intercellular fluid and in the tissue cell layer in the radial direction, nitrogen solubility, ambient pressure, and specific blood flow through the tissue on the different molar diffusion fluxes of nitrogen per unit time (through the bubble surface, between the intercellular fluid layer and blood, and between the intercellular fluid layer and the tissue cell layer). The system of nonlinear equations is solved using the Network Simulation Method, in which the electrical analogy is applied to convert these equations into a network-electrical model solved with a computer code (the electric circuit simulator Pspice). In this paper, new numerical results are provided, together with a network model improved through interdisciplinary electrical analogies.

  15. Some analytical and numerical approaches to understanding trap counts resulting from pest insect immigration.

    PubMed

    Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei

    2015-05-01

    Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and their subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms such as multiplication or immigration. Identification of the true factor causing an increase in trap counts is important, as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward, as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem, where different types of boundary forcing describe different scenarios of pest insect immigration, and reveal the corresponding patterns in the trap count growth.
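A minimal explicit finite-difference sketch of the 1D version of such a model (diffusion toward an absorbing trap, with the trap count accumulated as the boundary flux); all parameters are illustrative, not the paper's, and the paper's realistic case is 2D:

```python
import numpy as np

# u_t = D * u_xx on (0, L): absorbing trap at x = 0 (u = 0), reflecting far wall.
# Trap count = time-accumulated diffusive flux D * u_x(0, t) through the trap.
D, Lx, N = 1.0, 10.0, 200
dx = Lx / N
dt = 0.4 * dx**2 / D          # explicit stability requires dt <= dx^2 / (2D)
u = np.ones(N + 1)            # initially uniform population density
u[0] = 0.0                    # trap boundary
count = 0.0
for _ in range(5000):
    flux = D * (u[1] - u[0]) / dx                          # flux into the trap
    count += flux * dt
    u[1:-1] += D * dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u[-1] = u[-2]                                          # reflecting boundary
    u[0] = 0.0
print(count)
```

Replacing the uniform initial condition with boundary forcing at the far wall would mimic the immigration scenarios discussed above.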

  16. Laboratory simulations of lidar returns from clouds - Experimental and numerical results

    NASA Astrophysics Data System (ADS)

    Zaccanti, Giovanni; Bruscaglioni, Piero; Gurioli, Massimo; Sansoni, Paola

    1993-03-01

    The experimental results of laboratory simulations of lidar returns from clouds are presented. Measurements were carried out on laboratory-scaled cloud models by using a picosecond laser and a streak-camera system. The turbid structures simulating clouds were suspensions of polystyrene spheres in water. The geometrical situation was similar to that of an actual lidar sounding a cloud 1000 m distant and with a thickness of 300 m. Measurements were repeated for different concentrations and different sizes of spheres. The results show how the effect of multiple scattering depends on the scattering coefficient and on the phase function of the diffusers. The depolarization introduced by multiple scattering was also investigated. The results were also compared with numerical results obtained by Monte Carlo simulations. Substantially good agreement between numerical and experimental results was found. The measurements showed the adequacy of modern electro-optical systems to study the features of multiple-scattering effects on lidar echoes from atmosphere or ocean by means of experiments on well-controlled laboratory-scaled models. This adequacy provides the possibility of studying the influence of different effects in the laboratory in well-controlled situations.

  17. Laboratory simulations of lidar returns from clouds: experimental and numerical results.

    PubMed

    Zaccanti, G; Bruscaglioni, P; Gurioli, M; Sansoni, P

    1993-03-20

    The experimental results of laboratory simulations of lidar returns from clouds are presented. Measurements were carried out on laboratory-scaled cloud models by using a picosecond laser and a streak-camera system. The turbid structures simulating clouds were suspensions of polystyrene spheres in water. The geometrical situation was similar to that of an actual lidar sounding a cloud 1000 m distant and with a thickness of 300 m. Measurements were repeated for different concentrations and different sizes of spheres. The results show how the effect of multiple scattering depends on the scattering coefficient and on the phase function of the diffusers. The depolarization introduced by multiple scattering was also investigated. The results were also compared with numerical results obtained by Monte Carlo simulations. Substantially good agreement between numerical and experimental results was found. The measurements showed the adequacy of modern electro-optical systems to study the features of multiple-scattering effects on lidar echoes from atmosphere or ocean by means of experiments on well-controlled laboratory-scaled models. This adequacy provides the possibility of studying the influence of different effects in the laboratory in well-controlled situations.

  18. Heat Transfer Enhancement for Finned-Tube Heat Exchangers with Vortex Generators: Experimental and Numerical Results

    SciTech Connect

    O'Brien, James Edward; Sohal, Manohar Singh; Huff, George Albert

    2002-08-01

    A combined experimental and numerical investigation is under way to identify heat transfer enhancement techniques that may be applicable to large-scale air-cooled condensers such as those used in geothermal power applications. The research is focused on whether air-side heat transfer can be improved through the use of fin-surface vortex generators (winglets) while maintaining low heat exchanger pressure drop. A transient heat transfer visualization and measurement technique has been employed in order to obtain detailed distributions of local heat transfer coefficients on model fin surfaces. Pressure drop measurements have also been acquired in a separate multiple-tube-row apparatus. In addition, numerical modeling techniques have been developed to allow prediction of local and average heat transfer for these low-Reynolds-number flows with and without winglets. Representative experimental and numerical results presented in this paper reveal quantitative details of local fin-surface heat transfer in the vicinity of a circular tube with a single delta winglet pair downstream of the cylinder. The winglets were triangular (delta) with a 1:2 height/length aspect ratio and a height equal to 90% of the channel height. Overall mean fin-surface Nusselt-number results indicate a significant level of heat transfer enhancement (average enhancement ratio 35%) associated with the deployment of the winglets with oval tubes. Pressure drop measurements have also been obtained for a variety of tube and winglet configurations using a single-channel flow apparatus that includes four tube rows in a staggered array. Comparisons of heat transfer and pressure drop results for the elliptical tube versus a circular tube with and without winglets are provided. Heat transfer and pressure-drop results have been obtained for flow Reynolds numbers based on channel height and mean flow velocity ranging from 700 to 6500.
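    The Reynolds-number definition quoted at the end of the abstract (channel height and mean flow velocity) and the reported enhancement ratio are simple enough to restate in code. The sketch below uses hypothetical numbers for illustration; the fluid properties are assumptions, not the study's.

```python
def reynolds_number(mean_velocity_m_s, channel_height_m, kinematic_viscosity_m2_s):
    """Re based on channel height and mean flow velocity, the definition
    behind the 700-6500 range quoted in the abstract."""
    return mean_velocity_m_s * channel_height_m / kinematic_viscosity_m2_s

def enhancement_ratio(nu_with_winglets, nu_baseline):
    """Mean fin-surface Nusselt number relative to the no-winglet case;
    a 35% average enhancement corresponds to a ratio of 1.35."""
    return nu_with_winglets / nu_baseline

# Hypothetical example: air (nu ~ 1.5e-5 m^2/s) in a 10 mm channel at 1 m/s.
re_channel = reynolds_number(1.0, 0.01, 1.5e-5)  # ~667, near the low end
```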

  19. Simulation of human atherosclerotic femoral plaque tissue: the influence of plaque material model on numerical results

    PubMed Central

    2015-01-01

    Background Due to the limited number of experimental studies that mechanically characterise human atherosclerotic plaque tissue from the femoral arteries, a recent trend has emerged in current literature whereby one set of material data based on aortic plaque tissue is employed to numerically represent diseased femoral artery tissue. This study aims to generate novel vessel-appropriate material models for femoral plaque tissue and assess the influence of using material models based on experimental data generated from aortic plaque testing to represent diseased femoral arterial tissue. Methods Novel material models based on experimental data generated from testing of atherosclerotic femoral artery tissue are developed, and a computational analysis of the revascularisation of a quarter-model idealised diseased femoral artery from a 90% diameter stenosis to a 10% diameter stenosis is performed using these novel material models. The simulation is also performed using material models based on experimental data obtained from aortic plaque testing in order to examine the effect of employing vessel-appropriate material models versus those currently employed in literature to represent femoral plaque tissue. Results Simulations that employ material models based on atherosclerotic aortic tissue exhibit much higher maximum principal stresses within the plaque than simulations that employ material models based on atherosclerotic femoral tissue. Specifically, employing a material model based on calcified aortic tissue, instead of one based on heavily calcified femoral tissue, to represent diseased femoral arterial vessels results in a 487-fold increase in maximum principal stress within the plaque at a depth of 0.8 mm from the lumen. Conclusions Large differences are induced on numerical results as a consequence of employing material models based on aortic plaque, in place of material models based on femoral plaque, to represent a diseased femoral vessel. Due to these large

  20. Improved Accuracy of Continuous Glucose Monitoring Systems in Pediatric Patients with Diabetes Mellitus: Results from Two Studies

    PubMed Central

    2016-01-01

    Abstract Objective: This study was designed to evaluate accuracy, performance, and safety of the Dexcom (San Diego, CA) G4® Platinum continuous glucose monitoring (CGM) system (G4P) compared with the Dexcom G4 Platinum with Software 505 algorithm (SW505) when used as adjunctive management to blood glucose (BG) monitoring over a 7-day period in youth, 2–17 years of age, with diabetes. Research Design and Methods: Youth wore either one or two sensors placed on the abdomen or upper buttocks for 7 days, calibrating the device twice daily with a uniform BG meter. Participants had one in-clinic session on Day 1, 4, or 7, during which fingerstick BG measurements (self-monitoring of blood glucose [SMBG]) were obtained every 30 ± 5 min for comparison with CGM, and in youth 6–17 years of age, reference YSI glucose measurements were obtained from arterialized venous blood collected every 15 ± 5 min for comparison with CGM. The sensor was removed by the participant/family after 7 days. Results: In comparison of 2,922 temporally paired points of CGM with the reference YSI measurement for G4P and 2,262 paired points for SW505, the mean absolute relative difference (MARD) was 17% for G4P versus 10% for SW505 (P < 0.0001). In comparison of 16,318 temporally paired points of CGM with SMBG for G4P and 4,264 paired points for SW505, MARD was 15% for G4P versus 13% for SW505 (P < 0.0001). Similarly, error grid analyses indicated superior performance with SW505 compared with G4P in comparison of CGM with YSI and CGM with SMBG results, with greater percentages of SW505 results falling within error grid Zone A or the combined Zones A plus B. There were no serious adverse events or device-related serious adverse events for either the G4P or the SW505, and there was no sensor breakoff. Conclusions: The updated algorithm offers substantial improvements in accuracy and performance in pediatric patients with diabetes. Use of CGM with improved performance has
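    The headline metric in this study, the mean absolute relative difference (MARD), has a simple definition that is easy to restate in code. The sketch below is a generic implementation, not Dexcom's; the variable names are invented.

```python
def mard_percent(cgm_readings, reference_readings):
    """Mean absolute relative difference (%) over temporally paired
    CGM and reference (YSI or SMBG) glucose values; lower is better."""
    if len(cgm_readings) != len(reference_readings):
        raise ValueError("readings must be paired")
    rel_errors = [abs(c - r) / r
                  for c, r in zip(cgm_readings, reference_readings)]
    return 100.0 * sum(rel_errors) / len(rel_errors)
```

For example, sensor readings of 90 and 110 mg/dL against references of 100 mg/dL both times give a MARD of 10%.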

  1. Velocity distribution of meteoroids colliding with planets and satellites. II. Numerical results

    NASA Astrophysics Data System (ADS)

    Kholshevnikov, K. V.; Shor, V. A.

    In the first part of the paper we proposed an algorithm for describing the velocity distribution of meteoroids colliding with planets and satellites. In the present part we present numerical characteristics of the distribution function. Namely, for each of the terrestrial planets and their satellites we consider a swarm of encountering particles of asteroidal origin. They form a field of relative collisional velocities v. We consider the moments (mathematical expectations of v^k) for k = -1, 1, 2, 3, 4. The data are calculated under two different assumptions: taking into account the gravitation of the target body, or neglecting it. The main results are presented in a series of tables, each containing five numbers and several useful functions of them.
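    The tabulated quantities are moments of the relative collision speed, which are straightforward to estimate from a sample of speeds. The sketch below also includes the standard gravitational-focusing relation for the speed at impact, since the paper's two assumptions differ in whether the target's gravity is included. Function names are invented.

```python
import math

def speed_moments(speeds, ks=(-1, 1, 2, 3, 4)):
    """Sample estimates of E[v^k] for the exponents used in the paper."""
    n = len(speeds)
    return {k: sum(v ** k for v in speeds) / n for k in ks}

def impact_speed(v_infinity, v_escape):
    """Speed at the target after gravitational focusing:
    v_impact = sqrt(v_inf^2 + v_esc^2)."""
    return math.sqrt(v_infinity ** 2 + v_escape ** 2)
```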

  2. Propagation of CMEs in the interplanetary medium: Numerical and analytical results

    NASA Astrophysics Data System (ADS)

    González-Esparza, J. A.; Cantó, J.; González, R. F.; Lara, A.; Raga, A. C.

    2003-08-01

    We study the propagation of coronal mass ejections (CMEs) from near the Sun to 1 AU by comparing results from two different models: a 1-D, hydrodynamic, single-fluid, numerical model (González-Esparza et al., 2003a) and an analytical model of the dynamical evolution of supersonic velocity fluctuations at the base of the solar wind, applied to the propagation of CMEs (Cantó et al., 2002). Both models predict that a fast CME initially moves through the inner heliosphere with a quasi-constant velocity (intermediate between the initial CME velocity and the velocity of the ambient solar wind ahead) until a 'critical distance' at which the CME begins to decelerate, approaching the ambient solar wind velocity. This critical distance depends on the characteristics of the CME (initial velocity, density, and temperature) as well as on the ambient solar wind. Given typical parameters based on observations, this critical distance can vary from 0.3 AU to beyond 1 AU from the Sun. These results explain the radial evolution of the velocity of fast CMEs in the inner heliosphere inferred from interplanetary scintillation (IPS) observations (Manoharan et al., 2001, 2003; Tokumaru et al., 2003). On the other hand, the numerical results show that a fast CME and its associated interplanetary (IP) shock follow different heliocentric evolutions: the IP shock always propagates faster than its CME driver, and the latter begins to decelerate well before the shock.
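    The qualitative behaviour both models predict, quasi-constant speed out to a critical distance followed by deceleration toward the ambient wind, can be captured with a toy kinematic profile. The exponential relaxation below is an illustrative assumption of this sketch, not the functional form of either model.

```python
import math

def cme_speed(r_au, v_cme, v_sw, r_critical_au, decay_scale_au=0.5):
    """Toy radial speed profile: constant at v_cme inside the critical
    distance, then relaxing exponentially toward the solar-wind speed."""
    if r_au <= r_critical_au:
        return v_cme
    return v_sw + (v_cme - v_sw) * math.exp(-(r_au - r_critical_au) / decay_scale_au)
```

For an 800 km/s CME in a 400 km/s wind with a 0.3 AU critical distance, the profile is flat out to 0.3 AU and approaches 400 km/s well beyond 1 AU.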

  3. Theoretical and numerical results on effects of attenuation on correlation functions of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Ben-Zion, Yehuda

    2013-09-01

    We study analytically and numerically the effects of attenuation on cross-correlation functions of ambient noise in a 2-D model with different attenuation constants between and outside a pair of stations. The attenuation is accounted for by a quality factor Q(ω) and a complex phase velocity. The analytical results are derived for an isotropic far-field source distribution assuming the Fresnel approximation and mild attenuation. More general situations, including cases with non-isotropic source distributions, are examined with numerical simulations. The results show that homogeneous attenuation in the interstation region produces symmetric amplitude decay of the causal and anticausal parts of the noise cross-correlation function. The attenuation between the receivers and far-field sources generates symmetric exponential amplitude decay and may also cause asymmetric reduction of the causal/anticausal parts that increases with frequency. This frequency dependence can be used to distinguish asymmetric amplitudes due to attenuation from frequency-independent asymmetry in noise correlations generated by a non-isotropic source distribution. Attenuation both between and outside the station pair also produces phase shifts that could affect measurements of group and phase velocities. In terms of noise cross-spectra, the interstation attenuation is governed by Struve functions, while the attenuation between the far-field sources and receivers is associated with exponential decay and the imaginary part of the complex Bessel function. These results are fundamentally different from previous studies of attenuated coherency that append the Bessel function with an exponential decay that depends on the interstation distance.

  4. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset [Thompson et al., JGR, 108, 8238, 2003], we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined here in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing, and instrument type (manufacturer).

  5. Swinging Atwood Machine: Experimental and numerical results, and a theoretical study

    NASA Astrophysics Data System (ADS)

    Pujol, O.; Pérez, J. P.; Ramis, J. P.; Simó, C.; Simon, S.; Weil, J. A.

    2010-06-01

    A Swinging Atwood Machine (SAM) is built and some experimental results concerning its dynamic behaviour are presented. Experiments clearly show that pulleys play a role in the motion of the pendulum, since they can rotate and have non-negligible radii and masses. Equations of motion must therefore take into account the moment of inertia of the pulleys, as well as the winding of the rope around them. Their influence is compared to previous studies. A preliminary discussion of the role of dissipation is included. The theoretical behaviour of the system with pulleys is illustrated numerically, and the relevance of different parameters is highlighted. Finally, the integrability of the dynamic system is studied, the main result being that the machine with pulleys is non-integrable. The status of the results on integrability of the pulley-less machine is also recalled.

  6. Parameter sampling capabilities of sequential and simultaneous data assimilation: II. Statistical analysis of numerical results

    NASA Astrophysics Data System (ADS)

    Fossum, Kristian; Mannseth, Trond

    2014-11-01

    We assess and compare parameter sampling capabilities of one sequential and one simultaneous Bayesian, ensemble-based, joint state-parameter (JS) estimation method. In the companion paper, part I (Fossum and Mannseth 2014 Inverse Problems 30 114002), analytical investigations lead us to propose three claims, essentially stating that the sequential method can be expected to outperform the simultaneous method for weakly nonlinear forward models. Here, we assess the reliability and robustness of these claims through statistical analysis of results from a range of numerical experiments. Samples generated by the two approximate JS methods are compared to samples from the posterior distribution generated by a Markov chain Monte Carlo method, using four approximate measures of distance between probability distributions. Forward-model nonlinearity is assessed from a stochastic nonlinearity measure allowing for sufficiently large model dimensions. Both toy models (with low computational complexity, and where the nonlinearity is fairly easy to control) and two-phase porous-media flow models (corresponding to down-scaled versions of problems to which the JS methods have been frequently applied recently) are considered in the numerical experiments. Results from the statistical analysis show strong support of all three claims stated in part I.
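    The statistical comparison in this study rests on approximate distances between sampled distributions (the MCMC posterior versus each JS method's ensemble). As a minimal stand-in for such a measure, the sketch below computes the two-sample Kolmogorov-Smirnov statistic; it is not necessarily one of the paper's four measures, which the abstract does not specify.

```python
import bisect

def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs, a simple distance between sampled
    distributions (0 = indistinguishable, 1 = disjoint supports)."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_xs, x):
        # fraction of samples <= x
        return bisect.bisect_right(sorted_xs, x) / len(sorted_xs)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))
```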

  7. Noninvasive assessment of mitral inertness: clinical results with numerical model validation

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

    Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LA volume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
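    The decomposition used here, an unsteady Bernoulli equation in which the transmitral gradient is the sum of a convective term and an inertial term M dv/dt, can be sketched numerically. The blood density, inertance value, and finite-difference scheme below are illustrative assumptions, not the study's protocol.

```python
RHO_BLOOD = 1060.0          # kg/m^3, typical blood density (assumed)
MMHG_PER_PA = 1.0 / 133.322

def convective_pressure_mmhg(v_m_s):
    """Convective Bernoulli term 0.5*rho*v^2, converted to mmHg."""
    return 0.5 * RHO_BLOOD * v_m_s ** 2 * MMHG_PER_PA

def inertial_pressure_mmhg(velocities_m_s, dt_s, inertance_mmhg_s2_per_m):
    """Inertial term M*dv/dt at each interior sample of a transmitral
    velocity trace, estimated with central finite differences."""
    M = inertance_mmhg_s2_per_m
    return [M * (velocities_m_s[i + 1] - velocities_m_s[i - 1]) / (2.0 * dt_s)
            for i in range(1, len(velocities_m_s) - 1)]
```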

  8. Re-Computation of Numerical Results Contained in NACA Report No. 496

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III

    2015-01-01

    An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a MATLAB (Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.

  9. MicroRNA-155 Hallmarks Promising Accuracy for the Diagnosis of Various Carcinomas: Results from a Meta-Analysis

    PubMed Central

    Wu, Chuancheng; Liu, Qiuyan; Liu, Baoying

    2015-01-01

    Background. Recent studies have shown that microRNAs (miRNAs) have diagnostic values in various cancers. This meta-analysis seeks to summarize the global diagnostic role of miR-155 in patients with a variety of carcinomas. Methods. Eligible studies were retrieved by searching the online databases, and the bivariate meta-analysis model was employed to generate the summary receiver operator characteristic (SROC) curve. Results. A total of 17 studies dealing with various carcinomas were finally included. The results showed that single miR-155 testing allowed for the discrimination between cancer patients and healthy donors with a sensitivity of 0.82 (95% CI: 0.73–0.88) and specificity of 0.77 (95% CI: 0.70–0.83), corresponding to an area under curve (AUC) of 0.85, while a panel comprising expressions of miR-155 yielded a sensitivity of 0.76 (95% CI: 0.68–0.82) and specificity of 0.82 (95% CI: 0.77–0.86) in diagnosing cancers. The subgroup analysis displayed that serum miR-155 test harvested higher accuracy than plasma-based assay (the AUC, sensitivity, and specificity were, resp., 0.87 versus 0.73, 0.78 versus 0.74, and 0.77 versus 0.70). Conclusions. Our data suggest that single miR-155 profiling has a potential to be used as a screening test for various carcinomas, and parallel testing of miR-155 confers an improved specificity compared to single miR-155 analysis. PMID:25918453
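    The summary quantities pooled in this meta-analysis derive from per-study 2x2 tables. The sketch below restates the basic definitions (it does not reproduce the bivariate SROC model, which requires a random-effects fit); the counts in the example test are invented.

```python
def sensitivity(true_pos, false_neg):
    """Fraction of cancer patients correctly classified as positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of healthy donors correctly classified as negative."""
    return true_neg / (true_neg + false_pos)

def youden_index(sens, spec):
    """Simple single-number accuracy summary: sens + spec - 1."""
    return sens + spec - 1.0
```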

  10. Carbon fiber composites inspection and defect characterization using active infrared thermography: numerical simulations and experimental results.

    PubMed

    Fernandes, Henrique; Zhang, Hai; Figueiredo, Alisson; Ibarra-Castanedo, Clemente; Guimaraes, Gilmar; Maldague, Xavier

    2016-12-01

    Composite materials are widely used in the aeronautic industry. One of the reasons is because they have strength and stiffness comparable to metals, with the added advantage of significant weight reduction. Infrared thermography (IT) is a safe nondestructive testing technique that has a fast inspection rate. In active IT, an external heat source is used to stimulate the material being inspected in order to generate a thermal contrast between the feature of interest and the background. In this paper, carbon-fiber-reinforced polymers are inspected using IT. More specifically, carbon/PEEK (polyether ether ketone) laminates with square Kapton inserts of different sizes and at different depths are tested with three different IT techniques: pulsed thermography, vibrothermography, and line scan thermography. The finite element method is used to simulate the pulsed thermography experiment. Numerical results displayed a very good agreement with experimental results.

  11. Asymptotic expansion for stellarator equilibria with a non-planar magnetic axis: Numerical results

    NASA Astrophysics Data System (ADS)

    Freidberg, Jeffrey; Cerfon, Antoine; Parra, Felix

    2012-10-01

    We have recently presented a new asymptotic expansion for stellarator equilibria that generalizes the classic Greene-Johnson expansion [1] to allow for 3D equilibria with a non-planar magnetic axis [2]. Our expansion achieves the two goals of reducing the complexity of the three-dimensional MHD equilibrium equations and of describing equilibria in modern stellarator experiments. The end result of our analysis is a set of two coupled partial differential equations for the plasma pressure and the toroidal vector potential which fully determine the stellarator equilibrium. Both equations are advection equations in which the toroidal angle plays the role of time. We show that the method of characteristics, following magnetic field lines, is a convenient way of solving these equations, avoiding the difficulties associated with the periodicity of the solution in the toroidal angle. By combining the method of characteristics with Green's function integrals for the evaluation of the magnetic field due to the plasma current, we obtain an efficient numerical solver for our expansion. Numerical equilibria thus calculated will be given. [1] J.M. Greene and J.L. Johnson, Phys. Fluids 4, 875 (1961). [2] A.J. Cerfon, J.P. Freidberg, and F.I. Parra, Bull. Am. Phys. Soc. 56, 16 GP9.00081 (2011)

  12. A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results

    NASA Astrophysics Data System (ADS)

    Carrano, Charles S.; Rino, Charles L.

    2016-06-01

    We extend the power law phase screen theory for ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component inverse power law spectrum. The two-component model includes, as special cases, an unmodified power law and a modified power law with spectral break that may assume the role of an outer scale, intermediate break scale, or inner scale. As such, it provides a framework for investigating the effects of a spectral break on the scintillation statistics. Using this spectral model, we solve the fourth moment equation governing intensity variations following propagation through two-dimensional field-aligned irregularities in the ionosphere. A specific normalization is invoked that exploits self-similar properties of the structure to achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. The numerical algorithm is validated using new theoretical predictions for the behavior of the scintillation index and intensity correlation length under strong scatter conditions. A series of numerical experiments are conducted to investigate the morphologies of the intensity spectrum, scintillation index, and intensity correlation length as functions of the spectral indices and strength of scatter; retrieve phase screen parameters from intensity scintillation observations; explore the relative contributions to the scintillation due to large- and small-scale ionospheric structures; and quantify the conditions under which a general spectral break will influence the scintillation statistics.
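    A one-dimensional version of the two-component spectral model is easy to write down: one inverse power-law index below the break wavenumber and a second index above it, with the amplitude matched so the spectrum is continuous at the break. The parameterization below is a sketch of that idea, not the paper's exact normalization.

```python
def two_component_spectrum(q, amplitude, p1, p2, q_break):
    """Two-component inverse power law: spectral index p1 for
    q <= q_break and p2 above it, continuous at the break."""
    if q <= q_break:
        return amplitude * q ** (-p1)
    return amplitude * q_break ** (p2 - p1) * q ** (-p2)
```

Setting p1 = p2 recovers an unmodified power law; depending on where q_break sits relative to the observed scales, the break plays the role of an outer scale, intermediate break scale, or inner scale.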

  13. Verification of Numerical Weather Prediction Model Results for Energy Applications in Latvia

    NASA Astrophysics Data System (ADS)

    Sīle, Tija; Cepite-Frisfelde, Daiga; Sennikovs, Juris; Bethers, Uldis

    2014-05-01

    A resolution to increase the production and consumption of renewable energy has been made by EU governments. Most of the renewable energy in Latvia is produced by Hydroelectric Power Plants (HPP), followed by bio-gas, wind power, and bio-mass energy production. Wind and HPP power production is sensitive to meteorological conditions. Currently the basis of weather forecasting is Numerical Weather Prediction (NWP) models. There are numerous methodologies for evaluating the quality of NWP results (Wilks 2011), and their application can be conditional on the forecast end user. The goal of this study is to evaluate the performance of a Weather Research and Forecasting model (Skamarock 2008) implementation over the territory of Latvia, focusing on wind speed and quantitative precipitation forecasts. The target spatial resolution is 3 km. Observational data from the Latvian Environment, Geology and Meteorology Centre are used. A number of standard verification metrics are calculated. The sensitivity to the model output interpretation (output spatial interpolation versus nearest gridpoint) is investigated. For the precipitation verification the dichotomous verification metrics are used. Sensitivity to different precipitation accumulation intervals is examined. Skamarock, William C. and Klemp, Joseph B. A time-split nonhydrostatic atmospheric model for weather research and forecasting applications. Journal of Computational Physics. 227, 2008, pp. 3465-3485. Wilks, Daniel S. Statistical Methods in the Atmospheric Sciences. Third Edition. Academic Press, 2011.
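    For the dichotomous (event/no-event) precipitation verification mentioned above, the standard contingency-table scores (Wilks 2011) can be sketched as follows; the example counts in the test are invented.

```python
def dichotomous_scores(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 contingency-table verification scores: probability
    of detection (POD), false-alarm ratio (FAR), frequency bias, and
    equitable threat score (ETS)."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    a_random = (a + b) * (a + c) / n  # hits expected by chance
    return {
        "POD": a / (a + c),
        "FAR": b / (a + b),
        "bias": (a + b) / (a + c),
        "ETS": (a - a_random) / (a + b + c - a_random),
    }
```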

  14. Experimental and numerical results for CO2 concentration and temperature profiles in an occupied room

    NASA Astrophysics Data System (ADS)

    Cotel, Aline; Junghans, Lars; Wang, Xiaoxiang

    2014-11-01

    In recent years, a recognition of the scope of the negative environmental impact of existing buildings has spurred academic and industrial interest in transforming existing building design practices and disciplinary knowledge. For example, buildings alone consume 72% of the electricity produced annually in the United States; this share is expected to rise to 75% by 2025 (EPA, 2009). Significant reductions in overall building energy consumption can be achieved using green building methods such as natural ventilation. An office was instrumented on campus to acquire CO2 concentrations and temperature profiles at multiple locations while a single occupant was present. Using openFOAM, numerical calculations were performed to allow for comparisons of the CO2 concentration and temperature profiles for different ventilation strategies. Ultimately, these results will be the inputs into a real time feedback control system that can adjust actuators for indoor ventilation and utilize green design strategies. Funded by UM Office of Vice President for Research.

  15. Lateral and axial resolutions of an angle-deviation microscope for different numerical apertures: experimental results

    NASA Astrophysics Data System (ADS)

    Chiu, Ming-Hung; Lai, Chin-Fa; Tan, Chen-Tai; Lin, Yi-Zhi

    2011-03-01

    This paper presents a study of the lateral and axial resolutions of a transmission laser-scanning angle-deviation microscope (TADM) with different numerical aperture (NA) values. The TADM is based on geometric optics and surface plasmon resonance principles. The surface height is proportional to the phase difference between two marginal rays of the test beam, which is passed through the test medium. We used common-path heterodyne interferometry to measure the phase difference in real time, and used a personal computer to calculate and plot the surface profile. The experimental results showed that the best lateral and axial resolutions for NA = 0.41 were 0.5 μm and 3 nm, respectively; the lateral resolution thus surpasses the diffraction limit.

  16. Numerical simulation and experimental results of filament wound CFRP tubes tested under biaxial load

    NASA Astrophysics Data System (ADS)

    Amaldi, A.; Giannuzzi, M.; Marchetti, M.; Miliozzi, A.

    1992-10-01

    The analysis of angle-ply carbon/epoxy laminated composites subjected to uniaxial and biaxial stresses is presented. Three classes of interwoven-pattern filament-wound cylindrical specimens are studied in order to compare the influence of winding angle on the mechanical behavior of the laminate. Three-dimensional finite-element and thin-shell analyses were first applied to the problem in order to predict the global elastic behavior of specimens subjected to uniaxial loads. Different failure criteria were then adopted to investigate specimen failure, and experimental tests were carried out for comparison with numerical results. Biaxial stress conditions were produced by applying combinations of internal pressure and axial tensile and compressive loads to the specimens.

  17. Dynamics of Tachyon Fields and Inflation - Comparison of Analytical and Numerical Results with Observation

    NASA Astrophysics Data System (ADS)

    Milošević, M.; Dimitrijević, D. D.; Djordjević, G. S.; Stojanović, M. D.

    2016-06-01

    The role tachyon fields may play in the evolution of the early universe is discussed in this paper. We consider the evolution of a flat and homogeneous universe governed by a tachyon scalar field with the DBI-type action and calculate the slow-roll parameters of inflation, the scalar spectral index (n), and the tensor-to-scalar ratio (r) for the given potentials. We pay special attention to the inverse power potential, first of all to V(x) ~ x^{-4}, and compare the available results obtained by analytical and numerical methods with those obtained by observation. It is shown that the computed values of the observational parameters and the observed ones are in good agreement for high values of the constant X_0. The possibility that the influence of the radion field can extend the range of acceptable values of the constant X_0 to the string-theory-motivated sector of its values is briefly considered.

  18. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    PubMed

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and a sensitivity analysis, similar to the ones presented here, should be employed.

  20. Interacting steps with finite-range interactions: Analytical approximation and numerical results

    NASA Astrophysics Data System (ADS)

    Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.

    2013-05-01

    We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
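Terrace-width distributions in this literature are commonly fitted with the generalized Wigner surmise, P(s) = a s^ρ exp(-b s²), where a and b are fixed by unit normalization and unit mean spacing. A minimal sketch (the choice ρ = 2, the free-fermion case, is illustrative, not a value from the paper):

```python
import math

# Hedged sketch: the generalized Wigner surmise often fitted to
# terrace-width distributions, P(s) = a * s**rho * exp(-b * s**2),
# with a and b fixed by unit normalization and unit mean spacing.
def wigner_surmise(rho):
    b = (math.gamma((rho + 2) / 2) / math.gamma((rho + 1) / 2)) ** 2
    a = 2.0 * b ** ((rho + 1) / 2) / math.gamma((rho + 1) / 2)
    return lambda s: a * s ** rho * math.exp(-b * s * s)

P = wigner_surmise(2.0)   # rho = 2: free-fermion (inverse-square) case
# Crude Riemann-sum check of normalization and mean spacing:
ds = 1e-4
norm = sum(P(i * ds) * ds for i in range(1, 200000))
mean = sum(i * ds * P(i * ds) * ds for i in range(1, 200000))
```

Fitting ρ to measured P(s) is one standard way of extracting the step-interaction strength mentioned in the abstract.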

  1. Ultimate tensile strength of embedded I-sections: a comparison of experimental and numerical results

    NASA Astrophysics Data System (ADS)

    Heristchian, Mahmoud; Pourakbar, Pouyan; Imeni, Saeed; Ramezani, M. Reza Adib

    2014-12-01

    Exposed baseplates together with anchor bolts are the customary method of connecting steel structures to concrete footings. Post-Kobe studies revealed that embedded column bases respond better to earthquake uplift forces. Embedded column bases also offer greater freedom in achieving the required strength, rigidity, and ductility. The paper presents the results of pullout failure tests of three embedded IPE140 sections under different conditions. Numerical models are then generated in Abaqus 6.10-1. It is concluded that steel profiles can be anchored directly in concrete without the anchor bolts used in conventional exposed column bases. Such embedded column bases can develop the required resistance against pullout forces at lower construction cost.

  2. Effects of boundary conditions and partial drainage on cyclic simple shear test results - a numerical study

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Popescu, Radu; Prevost, Jean H.

    2004-08-01

    Owing to imperfect boundary conditions in laboratory soil tests and the possibility of water diffusion inside the soil specimen in undrained tests, the assumption of uniform stress/strain over the sample is not valid. This study presents a qualitative assessment of the effects of non-uniformities in stresses and strains, as well as effects of water diffusion within the soil sample, on the global results of undrained cyclic simple shear tests. The possible implications of those phenomena for the results of liquefaction strength assessment are also discussed. A state-of-the-art finite element code for transient analysis of multi-phase systems is used to compare results of so-called element tests (numerical constitutive experiments assuming uniform stress/strain/pore pressure distribution throughout the sample) with results of actual simulations of undrained cyclic simple shear tests using a finite element mesh and realistic boundary conditions. The finite element simulations are performed under various conditions, covering the entire range of practical situations: (1) perfectly drained soil specimen with constant volume, (2) perfectly undrained specimen, and (3) undrained test with the possibility of water diffusion within the sample. The results presented here are restricted to strain-driven tests performed for a loose uniform fine sand with relative density Dr = 40%. Effects of system compliance in undrained laboratory simple shear tests are not investigated here.
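For orientation, liquefaction strength in cyclic tests of this kind is often summarized by the pore-pressure ratio r_u as a function of cycle number. A common empirical curve of the Seed type can sketch this (an illustration only, not the multi-phase finite element model used in the study; theta = 0.7 is a typical assumed value):

```python
import math

# Hedged sketch: Seed-type empirical pore-pressure buildup curve,
# r_u = (2/pi) * asin((N / N_L) ** (1 / (2*theta))),
# where N_L is the number of cycles to liquefaction and theta ~ 0.7
# is a typical assumed fitting parameter. Illustration only.
def pore_pressure_ratio(N, N_L, theta=0.7):
    return (2.0 / math.pi) * math.asin((N / N_L) ** (1.0 / (2.0 * theta)))

r_half = pore_pressure_ratio(5, 10)   # halfway to liquefaction
r_full = pore_pressure_ratio(10, 10)  # r_u -> 1 at N = N_L
```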

  3. Electron Beam Return-Current Losses in Solar Flares: Initial Comparison of Analytical and Numerical Results

    NASA Technical Reports Server (NTRS)

    Holman, Gordon

    2010-01-01

    Accelerated electrons play an important role in the energetics of solar flares. Understanding the process or processes that accelerate these electrons to high, nonthermal energies also depends on understanding the evolution of these electrons between the acceleration region and the region where they are observed through their hard X-ray or radio emission. Energy losses in the co-spatial electric field that drives the current-neutralizing return current can flatten the electron distribution toward low energies. This in turn flattens the corresponding bremsstrahlung hard X-ray spectrum toward low energies. The lost electron beam energy also enhances heating in the coronal part of the flare loop. Extending earlier work by Knight & Sturrock (1977), Emslie (1980), Diakonov & Somov (1988), and Litvinenko & Somov (1991), I have derived analytical and semi-analytical results for the nonthermal electron distribution function and the self-consistent electric field strength in the presence of a steady-state return current. I review these results, presented previously at the 2009 SPD Meeting in Boulder, CO, and compare them and the computed X-ray spectra with numerical results obtained by Zharkova & Gordovskii (2005, 2006). The physical significance of similarities and differences in the results is emphasized. This work is supported by NASA's Heliophysics Guest Investigator Program and the RHESSI Project.

  4. Comparison Between Numerical and Experimental Results on Mechanical Stirrer and Bubbling in a Cylindrical Tank - 13047

    SciTech Connect

    Lima da Silva, M.; Sauvage, E.; Brun, P.; Gagnoud, A.; Fautrelle, Y.; Riva, R.

    2013-07-01

    The process of vitrification in a cold crucible heated by direct induction is used in the fusion of oxides. Its distinctive feature is the production of high-purity materials: a high level of purity of the melt is achieved because this melting technique excludes contamination of the charge by the crucible. The aim of the present paper is to analyze the hydrodynamics of the vitrification process by direct induction, with a focus on the effects associated with the interaction between the mechanical stirrer and bubbling. Considering the complexity of the analyzed system and the goal of the present work, we simplified the system by not taking into account thermal and electromagnetic phenomena. Based on the concept of hydraulic similitude, we performed an experimental study and numerical modeling of the simplified model. The results of these two studies were compared and showed good agreement. The results presented in this paper, in conjunction with previous work, contribute to a better understanding of the hydrodynamic effects resulting from the interaction between the mechanical stirrer and air bubbling in a cold crucible heated by direct induction. Further work will take into account thermal and electromagnetic phenomena in the presence of the mechanical stirrer and air bubbling. (authors)

  5. Newest Results from the Investigation of Polymer-Induced Drag Reduction through Direct Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Dimitropoulos, Costas D.; Beris, Antony N.; Sureshkumar, R.; Handler, Robert A.

    1998-11-01

    This work continues our attempts to elucidate theoretically the mechanism of polymer-induced drag reduction through direct numerical simulations of turbulent channel flow, using an independently evaluated rheological model for the polymer stress. Using appropriate scaling to accommodate effects due to viscoelasticity reveals a great consistency in the results for different combinations of polymer concentration and chain extension. This helps demonstrate that our observations are applicable to very dilute systems, which are currently not possible to simulate. It also reinforces the hypothesis that one of the prerequisites for the phenomenon of drag reduction is a sufficiently enhanced extensional viscosity, corresponding to the level of intensity and duration of extensional rates typically encountered during the turbulent flow. Moreover, these results motivate a study of the turbulence structure at larger Reynolds numbers and for different periodic computational cell sizes. In addition, the Reynolds stress budgets demonstrate that flow elasticity adversely affects the activities represented by the pressure-strain correlations, leading to a redistribution of turbulent kinetic energy amongst all directions. Finally, we discuss the influence of viscoelasticity in reducing the production of streamwise vorticity.

  6. Experimental and numerical investigations of internal heat transfer in an innovative trailing edge blade cooling system: stationary and rotation effects, part 2: numerical results

    NASA Astrophysics Data System (ADS)

    Beniaiche, Ahmed; Ghenaiet, Adel; Carcasci, Carlo; Facchini, Bruno

    2017-02-01

    This paper presents a numerical validation of the aero-thermal study of a 30:1 scaled model reproducing an innovative trailing edge with one row of enlarged pedestals under stationary and rotating conditions. A CFD analysis was performed with commercial ANSYS-Fluent, modeling an isothermal air flow with the k-ω SST turbulence model for both static and rotating conditions (Ro up to 0.23). The numerical model is first validated by comparing the numerical velocity profile distributions with those obtained experimentally by the PIV technique for Re = 20,000 and Ro = 0-0.23. The second validation compares the numerical 2D HTC maps over the heated plate with TLC experimental data for a smooth surface at Reynolds numbers of 20,000 and 40,000 and Ro = 0-0.23. Two tip conditions were considered: open tip and closed tip. Results for the average Nusselt number inside the pedestal duct region are presented as well. The obtained results help predict the flow field and evaluate the aero-thermal performance of the studied blade cooling system during the design step.

  7. Tsunami Hazards along the Eastern Australian Coast from Potential Earthquakes: Results from Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Xing, H. L.; Ding, R. W.; Yuen, D. A.

    2015-08-01

    Australia is surrounded by the Pacific Ocean and the Indian Ocean and thus may suffer from tsunamis due to its proximity to subduction earthquakes around the boundary of the Australian Plate. Potential tsunami risks along the eastern coast, where more and more people currently live, are numerically investigated through a scenario-based method to provide an estimate of the tsunami hazard in this region. We have calculated the tsunami waves generated at the New Hebrides Trench and the Puysegur Trench, and we further investigated the relevant tsunami hazards along the eastern coast and their sensitivity to various sea floor frictions and earthquake parameters (i.e., the strike, dip, and slip angles and the earthquake magnitude/rupture length). The results indicate that the Puysegur Trench poses a seismic threat capable of producing wave amplitudes over 1.5 m along the coast of Tasmania, Victoria, and New South Wales, even reaching over 2.6 m near Sydney, Maria Island, and Gabo Island in a certain worst case, while the cities along the coast of Queensland are potentially less vulnerable than those on the southeastern Australian coast.
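A useful back-of-envelope check on such simulations: long tsunami waves travel at c = sqrt(g·h), so deep-ocean crossing times can be estimated from bathymetry alone. The path length and depths below are illustrative assumptions, not values from the study:

```python
import math

# Hedged sketch: shallow-water (long-wave) travel-time estimate, c = sqrt(g*h).
# The segment lengths and depths are illustrative, not from the study.
def travel_time_hours(segments_km, depths_m, g=9.81):
    return sum(d_km * 1000.0 / math.sqrt(g * h)
               for d_km, h in zip(segments_km, depths_m)) / 3600.0

# e.g. a ~2000 km path: 1800 km of 4000 m deep ocean plus a 200 km shelf at 100 m
t = travel_time_hours([1800.0, 200.0], [4000.0, 100.0])
```

The shallow shelf segment dominates far more than its share of the distance, which is why near-shore bathymetry and friction matter so much for arrival times and amplitudes.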

  8. Analysis of formation pressure test results in the Mount Elbert methane hydrate reservoir through numerical simulation

    USGS Publications Warehouse

    Kurihara, M.; Sato, A.; Funatsu, K.; Ouchi, H.; Masuda, Y.; Narita, H.; Collett, T.S.

    2011-01-01

    Targeting the methane hydrate (MH) bearing units C and D at the Mount Elbert prospect on the Alaska North Slope, four MDT (Modular Dynamic Formation Tester) tests were conducted in February 2007. The C2 MDT test was selected for history matching simulation in the MH Simulator Code Comparison Study. Through history matching simulation, the physical and chemical properties of unit C were adjusted, which suggested the most likely reservoir properties of this unit. Based on these tuned properties, numerical models replicating a "Mount Elbert C2 zone like reservoir", a "PBU L-Pad like reservoir", and a "PBU L-Pad down dip like reservoir" were constructed. The long-term production performances of wells in these reservoirs were then forecast assuming MH dissociation and production by depressurization, a combination of depressurization and wellbore heating, and hot water huff and puff. The predicted cumulative gas production ranges from 2.16 × 10^6 m^3/well to 8.22 × 10^8 m^3/well, depending mainly on the initial temperature of the reservoir and on the production method. This paper describes the details of the modeling and history matching simulation. It also presents the results of examinations of the effects of reservoir properties on MH dissociation and production performance under the depressurization and thermal methods. © 2010 Elsevier Ltd.

  9. Numerical Results for a Polytropic Cosmology Interpreted as a Dust Universe Producing Gravitational Waves

    NASA Astrophysics Data System (ADS)

    Klapp, J.; Cervantes-Cota, J.; Chauvet, P.

    1990-11-01

    A common belief in cosmology is that gravitational radiation in considerable quantities is being produced within the galaxies. If gravitational radiation production has been occurring since at least the epoch of galaxy formation, its cosmological effects can be assessed with simplicity and elegance by representing the production of radiation and, therefore, its interaction with ordinary matter phenomenologically through a polytropic equation of state, as we have shown elsewhere. We present in this paper the numerical results of such a model. Key words: COSMOLOGY - GRAVITATION

  10. On the accuracy of a video-based drill-guidance solution for orthopedic and trauma surgery: preliminary results

    NASA Astrophysics Data System (ADS)

    Magaraggia, Jessica; Kleinszig, Gerhard; Wei, Wei; Weiten, Markus; Graumann, Rainer; Angelopoulou, Elli; Hornegger, Joachim

    2014-03-01

    Over the last years, several methods have been proposed to guide the physician during reduction and fixation of bone fractures. Available solutions often use bulky instrumentation inside the operating room (OR). The latter usually consist of a stereo camera, placed outside the operative field, and optical markers attached directly to both the patient and the surgical instrument held by the surgeon. Recently proposed techniques try to reduce the required additional instrumentation as well as the radiation exposure to both patient and physician. In this paper, we present the adaptation and first implementation of our recently proposed video camera-based solution for screw fixation guidance. Based on the simulations conducted in our previous work, we mounted a small camera on a drill in order to recover its tip position and axis orientation w.r.t. our custom-made drill sleeve with attached markers. Since drill-position accuracy is critical, we thoroughly evaluated the accuracy of our implementation. We used an optical tracking system for ground truth data collection. For this purpose, we built a custom plate reference system and attached reflective markers to both the instrument and the plate. Free drilling was then performed 19 times. The position of the drill axis was continuously recovered using both our video camera solution and the tracking system for comparison. The recorded data covered targeting, perforation of the surface bone by the drill bit, and bone drilling. The orientation of the instrument axis and the position of the instrument tip were recovered with an accuracy of 1.60 ± 1.22° and 2.03 ± 1.36 mm, respectively.
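The accuracy metrics reported above reduce to two standard computations: the angle between the recovered and ground-truth axis vectors, and the Euclidean distance between recovered and ground-truth tip positions. A minimal sketch (the example vectors are illustrative, not measured data):

```python
import math

# Hedged sketch: angular error between two instrument-axis direction vectors
# (arc cosine of the normalized dot product) and tip position error
# (Euclidean distance). Example inputs are illustrative only.
def axis_angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp against rounding error
    return math.degrees(math.acos(c))

def tip_error_mm(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

ang = axis_angle_deg((0.0, 0.0, 1.0), (0.02, 0.0, 1.0))  # slight tilt, ~1.1 deg
tip = tip_error_mm((0.0, 0.0, 0.0), (1.0, 2.0, 2.0))
```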

  11. A Hydrodynamic Theory for Spatially Inhomogeneous Semiconductor Lasers. 2; Numerical Results

    NASA Technical Reports Server (NTRS)

    Li, Jianzhong; Ning, C. Z.; Biegel, Bryan A. (Technical Monitor)

    2001-01-01

    We present numerical results of the diffusion coefficients (DCs) in the coupled diffusion model derived in the preceding paper for a semiconductor quantum well. These include self and mutual DCs in the general two-component case, as well as density- and temperature-related DCs under the single-component approximation. The results are analyzed from the viewpoint of free Fermi gas theory with many-body effects incorporated. We discuss in detail the dependence of these DCs on densities and temperatures in order to identify different roles played by the free carrier contributions including carrier statistics and carrier-LO phonon scattering, and many-body corrections including bandgap renormalization and electron-hole (e-h) scattering. In the general two-component case, it is found that the self- and mutual- diffusion coefficients are determined mainly by the free carrier contributions, but with significant many-body corrections near the critical density. Carrier-LO phonon scattering is dominant at low density, but e-h scattering becomes important in determining their density dependence above the critical electron density. In the single-component case, it is found that many-body effects suppress the density coefficients but enhance the temperature coefficients. The modification is of the order of 10% and reaches a maximum of over 20% for the density coefficients. Overall, temperature elevation enhances the diffusive capability or DCs of carriers linearly, and such an enhancement grows with density. Finally, the complete dataset of various DCs as functions of carrier densities and temperatures provides necessary ingredients for future applications of the model to various spatially inhomogeneous optoelectronic devices.

  12. Numerical and experimental results on the spectral wave transfer in finite depth

    NASA Astrophysics Data System (ADS)

    Benassai, Guido

    2016-04-01

    Determination of the form of the one-dimensional surface gravity wave spectrum in water of finite depth is important for many scientific and engineering applications. Spectral parameters of deep-water and intermediate-depth waves serve as input data for the design of all coastal structures and for the description of many coastal processes. Moreover, wave spectra are given as input for the response and seakeeping calculations of high-speed vessels in extreme sea conditions and for reliable calculations of the amount of energy to be extracted by wave energy converters (WEC). Available data on finite-depth spectral form are generally extrapolated from parametric forms applicable in deep water (e.g., JONSWAP) [Hasselmann et al., 1973; Mitsuyasu et al., 1980; Kahma, 1981; Donelan et al., 1992; Zakharov, 2005]. The present paper contributes to this field through validation of the offshore energy spectrum transfer from given spectral forms against measured inshore wave heights and spectra. The deep-water wave spectra were recorded offshore Ponza by the Wave Measurement Network (Piscopia et al., 2002). Field regressions of the spectral parameters fp and the nondimensional energy against fetch length were evaluated for fetch-limited sea conditions. These regressions gave the values of the spectral parameters for the site of interest. The offshore wave spectra were transferred from the measurement station offshore Ponza to a site located offshore the Gulf of Salerno. The offshore local wave spectra so obtained were transferred to the coastline with the TMA model (Bouws et al., 1985). Finally, the numerical results, in terms of significant wave heights, were compared with the wave data recorded by a meteo-oceanographic station owned by the Naples Hydrographic Office on the coastline of Salerno at 9 m depth. Some considerations about the wave energy potentially extractable by wave energy converters are also discussed.
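The TMA transformation cited above scales a deep-water (JONSWAP-type) spectrum by the Kitaigorodskii depth factor φ(f, h). A sketch of the standard piecewise approximation to φ (this is the textbook Thompson-Vincent form, not necessarily the exact implementation used in the paper; the example frequency is an assumption):

```python
import math

# Hedged sketch: Kitaigorodskii depth factor used by the TMA model,
# S_TMA(f, h) = S_JONSWAP(f) * phi(f, h), in its common piecewise
# approximation. Illustration only.
def kitaigorodskii_phi(f, h, g=9.81):
    wh = 2.0 * math.pi * f * math.sqrt(h / g)  # nondimensional frequency
    if wh <= 1.0:
        return 0.5 * wh * wh
    if wh < 2.0:
        return 1.0 - 0.5 * (2.0 - wh) ** 2
    return 1.0

phi_deep = kitaigorodskii_phi(0.1, 1000.0)  # essentially deep water -> 1
phi_9m = kitaigorodskii_phi(0.1, 9.0)       # strong attenuation at 9 m depth
```

At the 9 m gauge depth mentioned in the abstract, low-frequency energy is strongly attenuated relative to deep water, which is the depth-limiting effect the TMA transfer captures.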

  13. Numerical results on noise-induced dynamics in the subthreshold regime for thermoacoustic systems

    NASA Astrophysics Data System (ADS)

    Gupta, Vikrant; Saurabh, Aditya; Paschereit, Christian Oliver; Kabiraj, Lipika

    2017-03-01

    Thermoacoustic instability is a serious issue in practical combustion systems. Such systems are inherently noisy, and hence the influence of noise on the dynamics of thermoacoustic instability is an aspect of practical importance. The present work is motivated by a recent report on the experimental observation of coherence resonance, or noise-induced coherence with a resonance-like dependence on the noise intensity as the system approaches the stability margin, for a prototypical premixed laminar flame combustor (Kabiraj et al., Phys. Rev. E, 4 (2015)). We numerically investigate representative thermoacoustic models for such noise-induced dynamics. Similar to the experiments, we study variation in system dynamics in response to variations in the noise intensity and in a critical control parameter as the systems approach their stability margins. The qualitative match identified between experimental results and observations in the representative models investigated here confirms that coherence resonance is a feature of thermoacoustic systems. We also extend the experimental results, which were limited to the case of subcritical Hopf bifurcation, to the case of supercritical Hopf bifurcation. We identify that the phenomenon has qualitative differences for the systems undergoing transition via subcritical and supercritical Hopf bifurcations. Two important practical implications are associated with the findings. Firstly, the increase in noise-induced coherence as the system approaches the onset of thermoacoustic instability can be considered as a precursor to the instability. Secondly, the dependence of noise-induced dynamics on the bifurcation type can be utilised to distinguish between subcritical and supercritical bifurcation prior to the onset of the instability.
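The subthreshold mechanism can be illustrated with a minimal stand-in: a damped linear oscillator (an acoustic mode below its Hopf point) driven by white noise and integrated with an Euler-Maruyama step. All parameter values are illustrative assumptions, not values from the paper or its models:

```python
import math
import random

# Hedged sketch: noise-driven damped oscillator as a toy subthreshold model.
# Below the Hopf point the deterministic system decays, but additive noise
# sustains oscillations whose regularity depends on damping and noise level.
def simulate(damping, sigma, n=20000, dt=1e-3, seed=1):
    random.seed(seed)
    x, v = 0.0, 0.0
    omega = 2.0 * math.pi          # natural frequency of 1 Hz (assumed)
    xs = []
    for _ in range(n):
        a = -2.0 * damping * v - omega * omega * x
        v += a * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        x += v * dt
        xs.append(x)
    return xs

xs = simulate(damping=0.5, sigma=5.0)
rms = math.sqrt(sum(x * x for x in xs) / len(xs))  # noise-sustained amplitude
```

Reducing the damping (approaching the stability margin) raises the noise-sustained amplitude and its coherence, which is the precursor behaviour the abstract describes.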

  14. Recombination in liquid filled ionisation chambers with multiple charge carrier species: Theoretical and numerical results

    NASA Astrophysics Data System (ADS)

    Aguiar, P.; González-Castaño, D. M.; Gómez, F.; Pardo-Montero, J.

    2014-10-01

    Liquid-filled ionisation chambers (LICs) are used in radiotherapy for dosimetry and quality assurance. Volume recombination can be quite important in LICs at moderate dose rates, causing non-linearities in the dose rate response of these detectors, and needs to be corrected for. This effect is usually described with the Greening and Boag models for continuous and pulsed radiation, respectively. Such models assume that the charge is carried by two different species, positive and negative ions, each with a given mobility. However, LICs operating in non-ultrapure mode can contain different types of electronegative impurities with different mobilities, thus increasing the number of different charge carriers. If this is the case, the Greening and Boag models may no longer be valid and need to be reformulated. In this work we present a theoretical and numerical study of volume recombination in parallel-plate LICs with multiple charge carrier species, extending the Boag and Greening models. Results from a recent publication that reported three different mobilities in an isooctane-filled LIC have been used to study the effect of extra carrier species on recombination. We have found that in pulsed beams the inclusion of extra mobilities does not much affect volume recombination, a behaviour that was expected because the Boag formula for charge collection efficiency does not depend on the mobilities of the charge carriers if the Debye relationship between mobilities and the recombination constant holds. This is not the case for continuous radiation, where the presence of extra charge carrier species significantly affects the amount of volume recombination.
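The classical two-species results that the paper generalizes are compact enough to sketch. Boag (pulsed beams) gives a collection efficiency f = ln(1 + u)/u, and Greening (continuous beams, near saturation) f ≈ 1/(1 + ξ²); u and ξ bundle chamber geometry, voltage, mobilities, and the recombination constant. The example value u = 0.5 is an assumption for illustration:

```python
import math

# Hedged sketch: classical two-species collection efficiencies.
# Boag (pulsed): f = ln(1 + u) / u; Greening (continuous): f ~ 1 / (1 + xi^2).
# u and xi are dimensionless lumped parameters (geometry, voltage, mobilities).
def boag_efficiency(u):
    return math.log1p(u) / u if u > 0 else 1.0

def greening_efficiency(xi):
    return 1.0 / (1.0 + xi * xi)

f_pulsed = boag_efficiency(0.5)       # ~0.81, i.e. ~19% recombination loss
f_continuous = greening_efficiency(1.0)
```

The paper's observation follows from the form of these expressions: the pulsed-beam formula has no explicit mobility dependence (given the Debye relation), while the continuous-beam parameter does depend on the carrier mobilities.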

  15. Evaluating the velocity accuracy of an integrated GPS/INS system: Flight test results. [Global positioning system/inertial navigation systems (GPS/INS)]

    SciTech Connect

    Owen, T.E.; Wardlaw, R.

    1991-01-01

    Verifying the velocity accuracy of a GPS receiver or an integrated GPS/INS system in a dynamic environment is a difficult proposition when many of the commonly used reference systems have velocity uncertainties of the same order of magnitude as, or greater than, the GPS system. The results of flight tests in which multiple reference systems aboard an aircraft simultaneously collected data to evaluate the accuracy of an integrated GPS/INS system are reported. Emphasis is placed on obtaining high-accuracy estimates of the velocity error of the integrated system in order to verify that velocity accuracy is maintained during both linear and circular trajectories. Three different reference systems operating in parallel during the flight tests were used to independently determine the position and velocity of the aircraft in flight: a transponder/interrogator ranging system, a laser tracker, and GPS carrier phase processing. Results obtained from these reference systems are compared against each other and against an integrated real-time differential GPS/INS system to arrive at a set of conclusions about the accuracy of the integrated system.

  16. Modeling the Fracturing of Rock by Fluid Injection - Comparison of Numerical and Experimental Results

    NASA Astrophysics Data System (ADS)

    Heinze, Thomas; Galvan, Boris; Miller, Stephen

    2013-04-01

    Fluid-rock interactions are mechanically fundamental to many earth processes, including fault zones and hydrothermal/volcanic systems, and to future green energy solutions such as enhanced geothermal systems and carbon capture and storage (CCS). Modeling these processes is challenging because of the strong coupling between rock fracture evolution and the consequent large changes in the hydraulic properties of the system. In this talk, we present results of a numerical model that includes poro-elastic plastic rheology (with hardening, softening, and damage) coupled to a non-linear diffusion model for fluid pressure propagation and two-phase fluid flow. Our plane strain model is based on the poro-elastic plastic behavior of porous rock and is extended with hardening, softening, and damage using the Mohr-Coulomb failure criterion. The effective stress model of Biot (1944) is used for coupling the pore pressure and the rock behavior. Frictional hardening and cohesion softening are introduced following Vermeer and de Borst (1984), with the angle of internal friction and the cohesion as functions of the principal strain rates. The scalar damage coefficient is assumed to be a linear function of the hardening parameter. Fluid injection is modeled as a two-phase mixture of water and air using the Richards equation. The theoretical model is solved using finite differences on a staggered grid. The model is benchmarked against laboratory-scale experiments in which fluid is injected from below into a critically stressed, dry sandstone (Stanchits et al. 2011). We simulate three experiments: a) the failure of a dry specimen under biaxial compressive loading, b) the propagation of a low-pressure fluid front induced from the bottom in a critically stressed specimen, and c) the failure of a critically stressed specimen due to a high-pressure fluid intrusion. Comparison of model results with the fluid injection experiments shows that the model captures most of the experimental observations.
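The failure mechanism at the heart of this model is compact: Mohr-Coulomb failure occurs when the shear stress on a plane reaches c + σ'_n tan(φ), with pore pressure entering through the Biot effective stress σ'_n = σ_n - α·p. A minimal sketch (the stress values are illustrative assumptions, not data from the experiments):

```python
import math

# Hedged sketch: Mohr-Coulomb failure check with Biot effective stress.
# Failure when tau >= cohesion + (sigma_n - alpha * p) * tan(phi).
# Stress values below are illustrative only.
def mohr_coulomb_fails(tau, sigma_n, pore_p, cohesion, friction_deg, alpha=1.0):
    sigma_eff = sigma_n - alpha * pore_p   # Biot effective normal stress
    strength = cohesion + sigma_eff * math.tan(math.radians(friction_deg))
    return tau >= strength

# Same shear load; raising pore pressure alone drives the rock to failure:
stable = mohr_coulomb_fails(tau=30e6, sigma_n=60e6, pore_p=0.0,
                            cohesion=5e6, friction_deg=30)
failed = mohr_coulomb_fails(tau=30e6, sigma_n=60e6, pore_p=30e6,
                            cohesion=5e6, friction_deg=30)
```

This is why fluid injection into a critically stressed specimen triggers failure: the injection leaves the applied shear unchanged but reduces the effective normal stress that provides frictional strength.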

  17. Water-waves on linear shear currents. A comparison of experimental and numerical results.

    NASA Astrophysics Data System (ADS)

    Simon, Bruno; Seez, William; Touboul, Julien; Rey, Vincent; Abid, Malek; Kharif, Christian

    2016-04-01

    The propagation of water waves can be described analytically under uniformly sheared current conditions: some mathematical simplifications applicable in the absence of current remain valid for a linearly sheared current. However, the widespread use of mathematical wave theories including shear has rarely been backed by experimental studies of such flows. New experimental and numerical methods were both recently developed to study wave-current interactions for constant vorticity. On one hand, the numerical code can simulate, in two dimensions, arbitrary non-linear waves. On the other hand, the experimental methods can be used to generate waves with various shear conditions. Taking advantage of the simplicity of the experimental protocol and the versatility of the numerical code, comparisons between experimental and numerical data are discussed and compared with linear theory for validation of the methods. ACKNOWLEDGEMENTS The DGA (Direction Générale de l'Armement, France) is acknowledged for its financial support through the ANR grant N° ANR-13-ASTR-0007.

  18. Chaotic structures of nonlinear magnetic fields. I - Theory. II - Numerical results

    NASA Technical Reports Server (NTRS)

    Lee, Nam C.; Parks, George K.

    1992-01-01

    A study of the evolutionary properties of nonlinear magnetic fields in flowing MHD plasmas is presented to illustrate that nonlinear magnetic fields may involve chaotic dynamics. It is shown how a suitable transformation of the coupled equations leads to Duffing's form, suggesting that the behavior of the general solution can also be chaotic. Numerical solutions of the nonlinear magnetic field equations that have been cast in the form of Duffing's equation are presented.

  19. Preclinical Evaluation of the Accuracy of HIFU Treatments Using a Tumor-Mimic Model. Results of Animal Experiments

    NASA Astrophysics Data System (ADS)

    Melodelima, D.; N'Djin, W. A.; Parmentier, H.; Rivoire, M.; Chapelon, J. Y.

    2009-04-01

    Presented in this paper is a tumor-mimic model that allows the evaluation, at a preclinical stage, of the targeting accuracy of HIFU treatments in the liver. The tumor-mimics were made by injecting a warm mixture of agarose, cellulose, and glycerol that polymerizes immediately in hepatic tissue and forms a 1 cm discrete lesion that is detectable by ultrasound imaging and gross pathology. Three studies were conducted: (i) in vitro experiments to study the acoustical properties of the tumor-mimics, (ii) animal experiments in ten pigs to evaluate the tolerance of the tumor-mimics at mid-term (30 days), and (iii) ultrasound-guided HIFU ablation in ten pigs with tumor-mimics to demonstrate that it is possible to treat a predetermined zone accurately. The attenuation of the tumor-mimics was 0.39 dB/cm at 1 MHz, the ultrasound propagation velocity was 1523 m/s, and the acoustic impedance was 1.8 MRayl. The pigs tolerated the tumor-mimics and treatment well over the experimental period. Tumor-mimics were visible with high contrast on ultrasound images. In addition, it was demonstrated, using the tumor-mimic as a reference target, that the tissue destruction induced by HIFU and observed on gross pathology corresponded to the targeted area on the ultrasound images. The average difference between the predetermined location of the HIFU ablation and the actual coagulated area was 16%. These tumor-mimics are identifiable by ultrasound imaging and do not modify the geometry of HIFU lesions, and thus constitute a viable mimic of tumors indicated for HIFU therapy.
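The reported acoustic values can be sanity-checked with Z = ρ·c: the measured impedance and sound speed imply a tissue-like density, and the amplitude reflection at an interface follows R = (Z2 - Z1)/(Z2 + Z1). The comparison impedance of 1.65 MRayl below is an assumed liver-like value for illustration, not a measurement from the paper:

```python
# Hedged sketch: consistency check on the reported acoustic properties.
# Z = rho * c, so the measured impedance and sound speed imply a density;
# R = (Z2 - Z1) / (Z2 + Z1) gives the amplitude reflection coefficient.
Z = 1.8e6          # measured impedance, Pa.s/m (1.8 MRayl)
c = 1523.0         # measured sound speed, m/s
rho = Z / c        # implied density, kg/m^3 (tissue-like, ~1180)

def reflection_coefficient(Z1, Z2):
    return (Z2 - Z1) / (Z2 + Z1)

# Against an assumed liver-like impedance of ~1.65 MRayl (illustrative):
R = reflection_coefficient(1.65e6, 1.8e6)   # small mismatch, a few percent
```

The small implied reflection coefficient is consistent with the claim that the mimic does not perturb HIFU lesion geometry, while still providing ultrasound image contrast.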

  20. Experimental and numerical results on a shear layer excited by a sound pulse

    NASA Technical Reports Server (NTRS)

    Maestrello, L.; Bayliss, A.; Turkel, E.

    1979-01-01

The behavior of a sound pulse in a jet was investigated. It is verified that the far-field acoustic power increases with flow velocity in the low and medium frequency range; experimentally, an attenuation at higher frequencies is also observed. The increase is found numerically to be due primarily to interactions between the mean vorticity and the fluctuating velocities. Spectral decomposition of the real-time data indicates that the power increase occurs in the low and middle frequency range, where the local instability waves have the largest spatial growth rate. The connection between this amplification and the local instability waves is discussed.

  1. Ponderomotive stabilization of flute modes in mirrors: Feedback control and numerical results

    NASA Technical Reports Server (NTRS)

    Similon, P. L.

    1987-01-01

Ponderomotive stabilization of rigid plasma flute modes is numerically investigated by use of a variational principle, for a simple geometry, without eikonal approximation. While the near field of the studied antenna can be stabilizing, the far field contributes only weakly because of large cancellation by quasi-mode-coupling terms. The field energy required for stabilization is evaluated and is a nonnegligible fraction of the plasma thermal energy. A new antenna design is proposed and feedback stabilization is investigated; together they drastically reduce the power requirements.

  2. Estimation of geopotential from satellite-to-satellite range rate data: Numerical results

    NASA Technical Reports Server (NTRS)

    Thobe, Glenn E.; Bose, Sam C.

    1987-01-01

    A technique for high-resolution geopotential field estimation by recovering the harmonic coefficients from satellite-to-satellite range rate data is presented and tested against both a controlled analytical simulation of a one-day satellite mission (maximum degree and order 8) and then against a Cowell method simulation of a 32-day mission (maximum degree and order 180). Innovations include: (1) a new frequency-domain observation equation based on kinetic energy perturbations which avoids much of the complication of the usual Keplerian element perturbation approaches; (2) a new method for computing the normalized inclination functions which unlike previous methods is both efficient and numerically stable even for large harmonic degrees and orders; (3) the application of a mass storage FFT to the entire mission range rate history; (4) the exploitation of newly discovered symmetries in the block diagonal observation matrix which reduce each block to the product of (a) a real diagonal matrix factor, (b) a real trapezoidal factor with half the number of rows as before, and (c) a complex diagonal factor; (5) a block-by-block least-squares solution of the observation equation by means of a custom-designed Givens orthogonal rotation method which is both numerically stable and tailored to the trapezoidal matrix structure for fast execution.
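
Item (5) relies on the numerical stability of Givens rotations for least squares. As a generic illustration (not the authors' trapezoid-tailored implementation), the sketch below solves a small dense least-squares problem by annihilating subdiagonal entries with Givens rotations and then back-substituting, and checks the result against `numpy.linalg.lstsq`:

```python
import numpy as np

def givens_lstsq(A, b):
    """Solve min ||Ax - b|| by zeroing subdiagonal entries with Givens rotations.
    Rotations act on row pairs, which suits structured (e.g. trapezoidal)
    matrices where most entries below the diagonal are already zero."""
    R = A.astype(float).copy()
    y = b.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, c_ = R[j, j], R[i, j]
            if c_ == 0.0:
                continue
            r = np.hypot(a, c_)
            c, s = a / r, c_ / r
            # Rotate rows j and i to annihilate R[i, j].
            Rj, Ri = R[j, j:].copy(), R[i, j:].copy()
            R[j, j:] = c*Rj + s*Ri
            R[i, j:] = -s*Rj + c*Ri
            yj, yi = y[j], y[i]
            y[j], y[i] = c*yj + s*yi, -s*yj + c*yi
    # Back-substitution on the upper-triangular n-by-n block.
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (y[k] - R[k, k+1:n] @ x[k+1:]) / R[k, k]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 5))
b = rng.standard_normal(12)
x = givens_lstsq(A, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because each rotation is orthogonal, the residual norm is preserved exactly, which is the source of the method's numerical stability.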

  3. Nonlinear instability and chaos in plasma wave-wave interactions. II. Numerical methods and results

    SciTech Connect

    Kueny, C.S.; Morrison, P.J.

    1995-05-01

In Part I of this work [Physics of Plasmas, June 1995], the behavior of linearly stable, integrable systems of waves in a simple plasma model was described using a Hamiltonian formulation. It was shown that explosive instability arises from nonlinear coupling between modes of positive and negative energy, with well-defined threshold amplitudes depending on the physical parameters. In this concluding paper, the nonintegrable case is treated numerically. Several sets of waves are considered, comprising systems of two and three degrees of freedom. The time evolution is modelled with an explicit symplectic integration algorithm derived using Lie algebraic methods. When initial wave amplitudes are large enough to support two-wave decay interactions, strongly chaotic motion destroys the separatrix bounding the stable region for explosive triplets. Phase-space orbits then experience diffusive growth to amplitudes sufficient for explosive instability, thus effectively reducing the threshold amplitude. For initial amplitudes too small to drive decay instability, small perturbations might still grow to arbitrary size via Arnold diffusion. Numerical experiments do not show diffusion in this case, although the actual diffusion rate is probably underestimated due to the simplicity of the model.
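
The cited integrator is derived with Lie algebraic methods; as a simpler stand-in, the sketch below applies the explicit, symplectic Störmer-Verlet (leapfrog) scheme to a pendulum Hamiltonian and checks that the energy error stays bounded instead of drifting:

```python
import numpy as np

def leapfrog(grad_V, q, p, dt, steps):
    """Stormer-Verlet (leapfrog): explicit, symplectic, second-order accurate
    for separable Hamiltonians H(q, p) = p**2/2 + V(q)."""
    qs, ps = [q], [p]
    for _ in range(steps):
        p = p - 0.5*dt*grad_V(q)   # half kick
        q = q + dt*p               # drift
        p = p - 0.5*dt*grad_V(q)   # half kick
        qs.append(q)
        ps.append(p)
    return np.array(qs), np.array(ps)

# Pendulum as a stand-in system: V(q) = 1 - cos(q).
grad_V = lambda q: np.sin(q)
H = lambda q, p: 0.5*p**2 + (1.0 - np.cos(q))

qs, ps = leapfrog(grad_V, 1.0, 0.0, 0.05, 4000)
drift = np.abs(H(qs, ps) - H(qs[0], ps[0])).max()
```

A non-symplectic scheme of the same order would show secular energy growth over a run this long; the symplectic one keeps `drift` small and bounded.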

  4. Elasticity of mechanical oscillators in nonequilibrium steady states: Experimental, numerical, and theoretical results

    NASA Astrophysics Data System (ADS)

    Conti, Livia; De Gregorio, Paolo; Bonaldi, Michele; Borrielli, Antonio; Crivellari, Michele; Karapetyan, Gagik; Poli, Charles; Serra, Enrico; Thakur, Ram-Krishna; Rondoni, Lamberto

    2012-06-01

We study experimentally, numerically, and theoretically the elastic response of mechanical resonators along which the temperature is not uniform, as a consequence of the onset of steady-state thermal gradients. Two experimental setups and designs are employed, both using low-loss materials. In both cases, we monitor the resonance frequencies of specific modes of vibration, as they vary along with variations of temperatures and of temperature differences. In one case, we consider the first longitudinal mode of vibration of an aluminum alloy resonator; in the other case, we consider the antisymmetric torsion modes of a silicon resonator. By defining the average temperature as the volume-weighted mean of the temperatures of the respective elastic sections, we find that the elastic response of an object depends solely on this average, regardless of whether a thermal gradient exists and, up to a 10% imbalance, regardless of its magnitude. The numerical model employs a chain of anharmonic oscillators, with first- and second-neighbor interactions and temperature profiles satisfying Fourier's law to a good degree. Its analysis largely confirms the experimental findings, which are explained theoretically from a statistical mechanics perspective using a loose notion of local equilibrium.

  5. Interaction of a mantle plume and a segmented mid-ocean ridge: Results from numerical modeling

    NASA Astrophysics Data System (ADS)

    Georgen, Jennifer E.

    2014-04-01

    Previous investigations have proposed that changes in lithospheric thickness across a transform fault, due to the juxtaposition of seafloor of different ages, can impede lateral dispersion of an on-ridge mantle plume. The application of this “transform damming” mechanism has been considered for several plume-ridge systems, including the Reunion hotspot and the Central Indian Ridge, the Amsterdam-St. Paul hotspot and the Southeast Indian Ridge, the Cobb hotspot and the Juan de Fuca Ridge, the Iceland hotspot and the Kolbeinsey Ridge, the Afar plume and the ridges of the Gulf of Aden, and the Marion/Crozet hotspot and the Southwest Indian Ridge. This study explores the geodynamics of the transform damming mechanism using a three-dimensional finite element numerical model. The model solves the coupled steady-state equations for conservation of mass, momentum, and energy, including thermal buoyancy and viscosity that is dependent on pressure and temperature. The plume is introduced as a circular thermal anomaly on the bottom boundary of the numerical domain. The center of the plume conduit is located directly beneath a spreading segment, at a distance of 200 km (measured in the along-axis direction) from a transform offset with length 100 km. Half-spreading rate is 0.5 cm/yr. In a series of numerical experiments, the buoyancy flux of the modeled plume is progressively increased to investigate the effects on the temperature and velocity structure of the upper mantle in the vicinity of the transform. Unlike earlier studies, which suggest that a transform always acts to decrease the along-axis extent of plume signature, these models imply that the effect of a transform on plume dispersion may be complex. Under certain ranges of plume flux modeled in this study, the region of the upper mantle undergoing along-axis flow directed away from the plume could be enhanced by the three-dimensional velocity and temperature structure associated with ridge

  6. The evolution of misoscale circulations in a downburst-producing storm and comparison to numerical results

    NASA Technical Reports Server (NTRS)

    Kessinger, C. J.; Wilson, J. W.; Weisman, M.; Klemp, J.

    1984-01-01

Data from three NCAR radars are used in both single- and dual-Doppler analyses to trace the evolution of a June 30, 1982 Colorado convective storm containing downburst-type winds and strong vortices 1-2 km in diameter. The analyses show that a series of small circulations formed along a persistent cyclonic shear boundary; at times as many as three misocyclones were present, with vertical vorticity values as large as 0.1/s on a 0.25 km grid interval. The strength of the circulations suggests the possibility of accompanying tornadoes or funnels, although none were observed. Dual-Doppler analyses show that strong, small-scale downdrafts develop in close proximity to the misocyclones. A midlevel mesocyclone formed in the same general region of the storm where the misocyclones later developed. The observations are compared with numerical simulations from a three-dimensional cloud model initialized with sounding data from the same day.

  7. The spectroscopic search for the trace aerosols in the planetary atmospheres - the results of numerical simulations

    NASA Astrophysics Data System (ADS)

    Blecka, Maria I.

    2010-05-01

Passive remote spectrometric methods are important in examining the atmospheres of planets. Radiance spectra inform us about the thermodynamical parameters and the composition of atmospheres and surfaces. Spectral techniques can be useful for detecting trace aerosols, such as biological substances (if present), in planetary environments. We discuss here some aspects of the spectroscopic search for aerosols and dust in planetary atmospheres, including the possibility of detecting and identifying biological aerosols with a passive infrared spectrometer in an open-air environment. We present spectroscopic observations of the Earth's atmosphere simulated numerically on the basis of radiative transfer theory. Laboratory measurements of the transmittance of various kinds of aerosols, pollens, and bacteria were used in the modeling.

  8. Three-Dimensional Numerical Simulations of Equatorial Spread F: Results and Observations in the Pacific Sector

    NASA Technical Reports Server (NTRS)

    Aveiro, H. C.; Hysell, D. L.; Caton, R. G.; Groves, K. M.; Klenzing, J.; Pfaff, R. F.; Stoneback, R.; Heelis, R. A.

    2012-01-01

    A three-dimensional numerical simulation of plasma density irregularities in the postsunset equatorial F region ionosphere leading to equatorial spread F (ESF) is described. The simulation evolves under realistic background conditions including bottomside plasma shear flow and vertical current. It also incorporates C/NOFS satellite data which partially specify the forcing. A combination of generalized Rayleigh-Taylor instability (GRT) and collisional shear instability (CSI) produces growing waveforms with key features that agree with C/NOFS satellite and ALTAIR radar observations in the Pacific sector, including features such as gross morphology and rates of development. The transient response of CSI is consistent with the observation of bottomside waves with wavelengths close to 30 km, whereas the steady state behavior of the combined instability can account for the 100+ km wavelength waves that predominate in the F region.

  9. Estimation of catchment-scale evapotranspiration from baseflow recession data: Numerical model and practical application results

    NASA Astrophysics Data System (ADS)

    Szilagyi, Jozsef; Gribovszki, Zoltan; Kalicz, Peter

    2007-03-01

By applying a nonlinear reservoir approach for groundwater drainage, catchment-scale evapotranspiration (ET) during flow recessions can be expressed with the help of the lumped version of the water balance equation for the catchment. The attractiveness of the approach is that ET, in theory, can be obtained from observed flow values alone, for which relatively abundant and long records are available. A 2D finite element numerical model of subsurface flow in the unsaturated and saturated zones, capable of simulating moisture removal by vegetation, was first successfully employed to verify the water balance approach under ideal conditions. Subsequent practical applications over four catchments with widely varying climatic conditions, however, showed large disparities in comparison with monthly ET estimates of Morton's WREVAP model.
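
The nonlinear-reservoir idea can be sketched with synthetic numbers. Assuming an illustrative storage-discharge relation Q = a*S**b (the coefficients and flow series below are hypothetical, not from the paper), storage is inverted from observed recession flows, and ET follows from the lumped water balance dS/dt = -Q - ET:

```python
import numpy as np

# Hypothetical storage-discharge relation Q = a * S**b for the groundwater reservoir.
a, b = 1e-3, 1.5

def storage_from_flow(Q):
    """Invert Q = a * S**b for the storage S."""
    return (Q / a)**(1.0 / b)

# Synthetic daily recession flows (illustrative units, rain-free period assumed).
t = np.arange(30.0)            # days since the start of the recession
Q = 5.0 * np.exp(-0.05 * t)    # observed discharge during the recession

S = storage_from_flow(Q)
dSdt = np.gradient(S, t)
# Lumped water balance during recession: dS/dt = -Q - ET  =>  ET = -dS/dt - Q.
ET = -dSdt - Q
```

Storage declines faster than discharge alone can explain, and the difference is attributed to ET, which is why recession data alone can yield an ET estimate.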

  10. Distribution of Steps with Finite-Range Interactions: Analytic Approximations and Numerical Results

    NASA Astrophysics Data System (ADS)

    GonzáLez, Diego Luis; Jaramillo, Diego Felipe; TéLlez, Gabriel; Einstein, T. L.

    2013-03-01

While most Monte Carlo simulations assume that only nearest-neighbor steps interact elastically, most analytic frameworks (especially the generalized Wigner distribution) posit that each step elastically repels all others. In addition to the elastic repulsions, we allow for possible surface-state-mediated interactions. We investigate analytically and numerically how next-nearest-neighbor (NNN) interactions and, more generally, interactions out to the q-th nearest neighbor alter the form of the terrace-width distribution and of pair correlation functions (i.e., the sum over n-th neighbor distribution functions), which we investigated recently [2]. For physically plausible interactions, we find modest changes when NNN interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.

  11. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability, and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example, Chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability, and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is
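
As a minimal stand-in for a single facet of the segmentation method (the actual robust algorithm is more involved), the sketch below fits one plane z = a*x + b*y + c to noisy synthetic points by ordinary least squares and extracts the residuals on which normality tests such as Kolmogorov-Smirnov would then be run; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic planar facet z = 0.3x - 0.7y + 2 plus Gaussian measurement noise.
n, sigma = 500, 0.1
x = rng.uniform(0.0, 10.0, n)
y = rng.uniform(0.0, 10.0, n)
z = 0.3*x - 0.7*y + 2.0 + rng.normal(0.0, sigma, n)

# Ordinary least-squares plane fit z ~ a*x + b*y + c.
A = np.column_stack([x, y, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
residuals = z - A @ coef
```

With normally distributed input errors, the residual spread recovers the noise level, which is the behavior the paper's simulation tests check on the artificial data.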

  12. Sound absorption of porous substrates covered by foliage: experimental results and numerical predictions.

    PubMed

    Ding, Lei; Van Renterghem, Timothy; Botteldooren, Dick; Horoshenkov, Kirill; Khan, Amir

    2013-12-01

The influence of loose plant leaves on the acoustic absorption of a porous substrate is experimentally and numerically studied. Such systems are typical of vegetative walls, where the substrate has strong acoustical absorbing properties. Both experiments in an impedance tube and theoretical predictions show that when a leaf is placed in front of such a porous substrate, its absorption characteristics markedly change (for normally incident sound). Typically, the low-frequency absorption coefficient (below 250 Hz) is unaffected, the middle-frequency absorption coefficient (500-2000 Hz) increases, and the absorption at higher frequencies decreases. The influence of leaves becomes most pronounced when the substrate has a low mass density. A combination of Biot's elastic-frame porous model, viscous damping in the leaf boundary layers, and plate vibration theory is implemented via a finite-difference time-domain model, which is able to predict accurately the absorption spectrum of a leaf above a porous substrate system. The change in the absorption spectrum caused by the leaf vibration can be modeled reasonably well assuming the leaf and porous substrate properties are uniform.

  13. Mode analysis for a microwave driven plasma discharge: A comparison between analytical and numerical results

    NASA Astrophysics Data System (ADS)

    Szeremley, Daniel; Mussenbrock, Thomas; Brinkmann, Ralf Peter; Zimmermanns, Marc; Rolfes, Ilona; Eremin, Denis; Ruhr-University Bochum, Theoretical Electrical Engineering Team; Ruhr-University Bochum, Institute of Microwave Systems Team

    2015-09-01

Recent years have seen a growing market demand for bottles made of polyethylene terephthalate (PET). Fast and efficient sterilization processes as well as barrier coatings to decrease gas permeation are therefore required. A specialized microwave plasma source, referred to as the plasmaline, has been developed to allow thin films of, e.g., silicon oxide to be deposited on the inner surface of such PET bottles. The plasmaline is a coaxial waveguide combined with a gas inlet which is inserted into the empty bottle and initiates a reactive plasma. To optimize and control the different surface processes, it is essential to fully understand the microwave power coupling to the plasma, the related heating of electrons inside the bottle, and thus the electromagnetic wave propagation along the plasmaline. In this contribution, we present a detailed dispersion analysis based on a numerical approach. We study how, and whether, modes of guided waves propagate under different conditions. The authors gratefully acknowledge the financial support of the German Research Foundation (DFG) within the framework of the collaborative research centre TRR87.

  14. Displacement-Based Seismic Design Procedure for Framed Buildings with Dissipative Braces Part II: Numerical Results

    NASA Astrophysics Data System (ADS)

    Mazza, Fabio; Vulcano, Alfonso

    2008-07-01

For a widespread application of dissipative braces to protect framed buildings against seismic loads, practical and reliable design procedures are needed. In this paper a design procedure based on the Direct Displacement-Based Design approach is adopted, assuming the elastic lateral storey stiffness of the damped braces proportional to that of the unbraced frame. To check the effectiveness of the design procedure, presented in an associated paper, a six-storey reinforced concrete plane frame, representative of a medium-rise symmetric framed building, is considered as the primary test structure; this structure, designed for a medium-risk region, is supposed to be retrofitted as in a high-risk region by insertion of diagonal braces equipped with hysteretic dampers. A numerical investigation is carried out to study the nonlinear static and dynamic responses of the primary and the damped braced test structures, using the step-by-step procedures described in the associated paper mentioned above; the behaviour of frame members and hysteretic dampers is idealized by bilinear models. Real and artificial accelerograms, matching the EC8 response spectrum for a medium soil class, are considered for the dynamic analyses.

  15. Preliminary Results from Numerical Experiments on the Summer 1980 Heat Wave and Drought

    NASA Technical Reports Server (NTRS)

    Wolfson, N.; Atlas, R.; Sud, Y. C.

    1985-01-01

During the summer of 1980, a prolonged heat wave and drought affected the United States. A preliminary set of experiments has been conducted to study the effect of varying boundary conditions on the GLA model simulation of the heat wave. Five 10-day numerical integrations with three different specifications of boundary conditions were carried out: a control experiment which utilized climatological boundary conditions, an SST experiment which utilized summer 1980 sea-surface temperatures in the North Pacific but climatological values elsewhere, and a Soil Moisture experiment which utilized the values of Mintz-Serafini for the summer of 1980. The starting dates for the five forecasts were 11 June, 7 July, 21 July, 22 August, and 6 September of 1980. These dates were specifically chosen as days when a heat wave was already established, in order to investigate the effect of soil moisture or North Pacific sea-surface temperatures on the model's ability to maintain the heat wave pattern. The experiments were evaluated in terms of the heat wave index for the South Plains, North Plains, Great Plains, and the entire U.S. In addition, a subjective comparison of map patterns has been performed.

  16. Reynolds number effects on shock-wave turbulent boundary-layer interactions - A comparison of numerical and experimental results

    NASA Technical Reports Server (NTRS)

    Horstman, C. C.; Settles, G. S.; Vas, I. E.; Bogdonoff, S. M.; Hung, C. M.

    1977-01-01

    An experiment is described that tests and guides computations of a shock-wave turbulent boundary-layer interaction flow over a 20-deg compression corner at Mach 2.85. Numerical solutions of the time-averaged Navier-Stokes equations for the entire flow field, employing various turbulence models, are compared with the data. Each model is critically evaluated by comparisons with the details of the experimental data. Experimental results for the extent of upstream pressure influence and separation location are compared with numerical predictions for a wide range of Reynolds numbers and shock-wave strengths.

  17. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples were utilized as trabecular bone-mimicking phantoms for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy-dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.

  18. The vertical age profile in sea ice: Theory and numerical results

    NASA Astrophysics Data System (ADS)

    Lietaer, Olivier; Deleersnijder, Eric; Fichefet, Thierry; Vancoppenolle, Martin; Comblen, Richard; Bouillon, Sylvain; Legat, Vincent

The sea ice age is an interesting diagnostic tool because it may provide a proxy for the sea ice thickness and is easier to infer from observations than the sea ice thickness. Remote sensing algorithms and modeling approaches proposed in the literature indicate significant methodological uncertainties, leading to different ice age values and physical interpretations. In this work, we focus on the vertical age distribution in sea ice. Based on the age theory developed for marine modeling, we propose a vertically variable sea ice age definition which gives a measure of the time elapsed since the accretion of the ice particle under consideration. An analytical solution is derived from Stefan's law for a horizontally homogeneous ice layer with a periodic seasonal cycle of ice thickness. Two numerical methods to solve the age equation are proposed. In the first one, the domain is discretized adaptively in space by means of Lagrangian particles in order to capture the age profile and its discontinuities. The second one focuses on the mean age of the ice using as few degrees of freedom as possible and is based on an Arbitrary Lagrangian-Eulerian (ALE) spatial discretization and the finite element method. We observe an excellent agreement between the Lagrangian particles and the analytical solution. The mean value and the standard deviation of the finite element solution agree with the analytical solution, and a linear approximation is found to represent the age profile better as the ice gets older. Both methods are finally applied to a stand-alone thermodynamic sea ice model of the Arctic. Computing the vertically averaged ice age reduces the simulated ice age by a factor of about 2 compared to the oldest particle of the ice columns. A high correlation is found between the ice thickness and the age of the oldest particle. However, whether or not this will remain valid once ice dynamics is included should be investigated. In addition, the present study, based on
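
The analytical age profile implied by Stefan's law can be sketched directly. Under a constant temperature deficit across the ice, thickness grows as h(t) = sqrt(2*k*dT*t/(rho*L)); inverting this gives the accretion time of the ice at depth z, and hence its age. The constants below are illustrative textbook values, not the paper's:

```python
import numpy as np

# Illustrative constants (textbook values, not from the paper):
k = 2.0        # thermal conductivity of ice, W/(m K)
dT = 20.0      # temperature difference across the ice layer, K
rho = 917.0    # ice density, kg/m^3
L = 3.34e5     # latent heat of fusion, J/kg

C = 2.0 * k * dT / (rho * L)           # Stefan growth constant, m^2/s

def thickness(t):
    """Stefan's law: h(t) = sqrt(C * t)."""
    return np.sqrt(C * t)

def accretion_time(z):
    """Time at which the ice base reached depth z (inverse of Stefan's law)."""
    return z**2 / C

t_now = 120 * 86400.0                  # after 120 days of growth
h = thickness(t_now)                   # current ice thickness
z = np.linspace(0.0, h, 50)            # depth below the ice surface
age = t_now - accretion_time(z)        # vertical age profile: oldest ice on top
```

The profile captures the key feature exploited in the paper: ice is oldest at the surface and has zero age at the growing base.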

  19. Image restoration by the method of convex projections: part 2 applications and numerical results.

    PubMed

    Sezan, M I; Stark, H

    1982-01-01

The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
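
The projection-onto-convex-sets idea can be sketched in one dimension. Assuming two convex sets, signals that match the observed samples and signals band-limited to low frequencies, alternating projections (the Gerchberg-Papoulis pattern) recover missing samples of a band-limited signal; the setup below is a hypothetical example, not the paper's simulated image:

```python
import numpy as np

def pocs_bandlimited(known, mask, bandwidth, iters=300):
    """Alternating projections onto two convex sets:
    C1 = signals agreeing with the known samples (where mask is True),
    C2 = signals band-limited to the lowest `bandwidth` frequency bins."""
    x = np.where(mask, known, 0.0)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[bandwidth:-bandwidth] = 0.0       # project onto C2 (band limit)
        x = np.fft.ifft(X).real
        x = np.where(mask, known, x)        # project onto C1 (data consistency)
    return x

n = 128
t = np.arange(n)
truth = np.cos(2*np.pi*3*t/n) + 0.5*np.sin(2*np.pi*5*t/n)   # band-limited signal

rng = np.random.default_rng(1)
mask = np.ones(n, dtype=bool)
mask[rng.choice(n, size=28, replace=False)] = False          # 28 missing samples

recon = pocs_bandlimited(truth, mask, bandwidth=8)
err = np.linalg.norm(recon - truth)
err0 = np.linalg.norm(np.where(mask, truth, 0.0) - truth)
```

Because each projection is nonexpansive with respect to any point in the intersection of the sets, the iterates can only move closer to the true signal, which is the convergence guarantee underlying POCS.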

  20. Multi-Country Experience in Delivering a Joint Course on Software Engineering--Numerical Results

    ERIC Educational Resources Information Center

    Budimac, Zoran; Putnik, Zoran; Ivanovic, Mirjana; Bothe, Klaus; Zdravkova, Katerina; Jakimovski, Boro

    2014-01-01

    A joint course, created as a result of a project under the auspices of the "Stability Pact of South-Eastern Europe" and DAAD, has been conducted in several Balkan countries: in Novi Sad, Serbia, for the last six years in several different forms, in Skopje, FYR of Macedonia, for two years, for several types of students, and in Tirana,…

  1. Results from a limited area mesoscale numerical simulation for 10 April 1979

    NASA Technical Reports Server (NTRS)

    Kalb, M. W.

    1985-01-01

    Results are presented from a nine-hour limited area fine mesh (35-km) mesoscale model simulation initialized with SESAME-AVE I radiosonde data for Apr. 10, 1979 at 2100 GMT. Emphasis is on the diagnosis of mesoscale structure in the mass and precipitation fields. Along the Texas/Oklahoma border, independent of the short wave, convective precipitation formed several hours into the simulation and was organized into a narrow band suggestive of the observed April 10 squall line.

  2. Numerical simulations of soft and hard turbulence - Preliminary results for two-dimensional convection

    NASA Technical Reports Server (NTRS)

    Deluca, E. E.; Werne, J.; Rosner, R.; Cattaneo, F.

    1990-01-01

    Results on the transition from soft to hard turbulence in simulations of two-dimensional Boussinesq convection are reported. The computed probability densities for temperature fluctuations are exponential in form in both soft and hard turbulence, unlike what is observed in experiments. In contrast, a change is obtained in the Nusselt number scaling on Rayleigh number in good agreement with the three-dimensional experiments.

  3. Increased heat transfer to elliptical leading edges due to spanwise variations in the freestream momentum: Numerical and experimental results

    NASA Technical Reports Server (NTRS)

    Rigby, D. L.; Vanfossen, G. J.

    1992-01-01

A study of the effect of spanwise variation in momentum on leading edge heat transfer is discussed. Numerical and experimental results are presented for both a circular leading edge and a 3:1 elliptical leading edge. Reynolds numbers in the range of 10,000 to 240,000 based on leading edge diameter are investigated. The surface of the body is held at a constant uniform temperature. Numerical and experimental results with and without spanwise variations are presented. Direct comparison of the two-dimensional results, that is, with no spanwise variations, to the analytical results of Frossling is very good. The numerical calculation, which uses the PARC3D code, solves the three-dimensional Navier-Stokes equations, assuming steady laminar flow on the leading edge region. Experimentally, increases in the spanwise-averaged heat transfer coefficient as high as 50 percent above the two-dimensional value were observed. Numerically, the heat transfer coefficient was seen to increase by as much as 25 percent. In general, under the same flow conditions, the circular leading edge produced a higher heat transfer rate than the elliptical leading edge. As a percentage of the respective two-dimensional values, the circular and elliptical leading edges showed similar sensitivity to spanwise variations in momentum. By equating the root mean square of the amplitude of the spanwise variation in momentum to the turbulence intensity, a qualitative comparison between the present work and turbulent results was possible. It is shown that increases in leading edge heat transfer due to spanwise variations in freestream momentum are comparable to those due to freestream turbulence.

  4. CRSP, numerical results for an electrical resistivity array to detect underground cavities

    NASA Astrophysics Data System (ADS)

    Amini, Amin; Ramazi, Hamidreza

    2017-01-01

This paper is devoted to the application of the Combined Resistivity Sounding and Profiling (CRSP) electrode configuration to the detection of underground cavities. Electrical resistivity surveying is among the most popular geophysical methods across a wide range of geosciences because it is nondestructive and economical. Several types of electrode array are applied to detect different targets. The electrode array plays an important role in determining the output resolution and depth of investigation of any resistivity survey, and each array has its own merits and drawbacks in terms of depth of investigation, signal strength, and sensitivity to resistivity variations. In this article, several synthetic models simulating different conditions of cavity occurrence were used to examine the responses of some conventional electrode arrays as well as the CRSP array. The results showed that the CRSP electrode configuration can detect the desired targets with higher resolution than some other types of arrays. A field case study is also discussed, in which an electrical resistivity survey was conducted at the Abshenasan expressway (Tehran, Iran) U-turn bridge site to detect potential cavities and/or loose fill materials. The survey led to the detection of an aqueduct tunnel passing beneath the study area.
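As background to the array comparison, every resistivity survey reduces its voltage and current readings to an apparent resistivity through a geometric factor. A minimal sketch for the classical four-electrode Wenner array (not the CRSP geometry itself, and with assumed field values) is:

```python
import math

# Generic Wenner-array apparent resistivity: rho_a = 2*pi*a*(dV/I)
# for equal electrode spacing a. Values below are assumed, not the paper's.
def wenner_apparent_resistivity(spacing_m, delta_v, current_a):
    """Apparent resistivity (ohm-m) from spacing (m), voltage (V), current (A)."""
    return 2.0 * math.pi * spacing_m * delta_v / current_a

# Assumed readings: a = 10 m spacing, 50 mV measured, 100 mA injected.
rho_a = wenner_apparent_resistivity(10.0, 0.050, 0.100)
print(f"apparent resistivity = {rho_a:.1f} ohm-m")
```

A low-resistivity anomaly in a profile of such values is the kind of signature an air- or water-filled cavity survey looks for; the CRSP configuration differs in electrode layout, not in this basic reduction.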

  5. Geometrical optics approach to the nematic liquid crystal grating: numerical results.

    PubMed

    Kosmopoulos, J A; Zenginoglou, H M

    1987-05-01

The problem of the grating action of a periodically distorted nematic liquid crystal layer is considered in the geometrical optics ray approximation, and a theory for the calculation of the fringe powers is proposed. A nonabsorbing nematic phase is assumed, and the direction of incidence is taken to be normal to the layer. The powers of the resulting diffraction fringes are related to the spatial and angular deviation of the rays propagating across the layer and to the perturbation of the phase of the wave associated with each ray. The theory is applied to the simple case of a harmonically distorted nematic layer. In the case of a weakly distorted layer the results agree with the predictions of Carroll's model, in which only even-order fringes are important. As the distortion becomes larger, odd-order fringes (with the exception of the first order) become equally important; in particular, those at relatively large orders (e.g., seven and nine) exhibit maxima greater than those of their even-order neighbors. Finally, the dependence of the powers of the odd-order fringes on the distortion angle is quite different from that of the even-order fringes.
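For orientation, a much simpler textbook analogue shows how fringe powers redistribute with modulation depth: for an idealized thin sinusoidal phase grating, the Jacobi-Anger expansion of t(x) = exp(i·φ·sin(Kx)) puts power J_n(φ)² into diffraction order n. This is a hedged illustration only, not the paper's geometrical-optics theory for nematic layers (which distinguishes even and odd orders differently).

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 200001)

def trapezoid(y, x):
    """Trapezoidal quadrature (kept explicit for self-containment)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bessel_j(n, x):
    """J_n(x) from its integral representation, by quadrature."""
    return trapezoid(np.cos(n * theta - x * np.sin(theta)), theta) / np.pi

phi = 2.0                                  # assumed modulation depth, radians
powers = {n: bessel_j(n, phi) ** 2 for n in range(-20, 21)}
# Energy conservation: the order powers sum to 1 for a lossless phase grating.
print({n: round(p, 4) for n, p in powers.items() if p > 1e-3})
```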

  6. Numerical predictions and experimental results of a dry bay fire environment.

    SciTech Connect

    Suo-Anttila, Jill Marie; Gill, Walter; Black, Amalia Rebecca

    2003-11-01

The primary objective of the Safety and Survivability of Aircraft Initiative is to improve the safety and survivability of systems by using validated computational models to predict the hazard posed by a fire. To meet this need, computational model predictions and experimental data have been obtained to provide insight into the thermal environment inside an aircraft dry bay. The calculations were performed using the Vulcan fire code, and the experiments were completed using a specially designed full-scale fixture. The focus of this report is to present comparisons of the Vulcan results with experimental data for a selected test scenario and to assess the capability of the Vulcan fire field model to accurately predict dry bay fire scenarios. Also included is an assessment of the sensitivity of the fire model predictions to boundary condition distribution and grid resolution. To facilitate the comparison with experimental results, a brief description of the dry bay fire test fixture and a detailed specification of the geometry and boundary conditions are included. Overall, the Vulcan fire field model has shown the capability to predict the thermal hazard posed by a sustained pool fire within a dry bay compartment of an aircraft, although more extensive experimental data and more rigorous comparisons are required for model validation.

  7. Analytical and Numerical Results for an Adhesively Bonded Joint Subjected to Pure Bending

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S., III; Lundgren, Eric

    2006-01-01

A one-dimensional, semi-analytical methodology previously developed for evaluating adhesively bonded joints composed of anisotropic adherends and adhesives that exhibit inelastic material behavior is further verified in the present paper. A summary of the first-order differential equations and the applied joint loading used to determine the adhesive response is also presented. The method was previously verified against a variety of single-lap joint configurations from the literature that subjected the joints to axial tension and pure bending. Using the same joint configuration and applied bending load presented in a study by Yang, the finite element analysis software ABAQUS was used to further verify the semi-analytical method. Linear static ABAQUS results are presented for two models, one with a coarse and one with a fine element mesh, that were used to verify convergence of the finite element analyses. Close agreement between the finite element results and the semi-analytical methodology was obtained for both the shear and normal stress responses of the adhesive bondline. Thus, the semi-analytical methodology was successfully verified using the ABAQUS finite element software and a single-lap joint configuration subjected to pure bending.

  8. [Implementation results of emission standards of air pollutants for thermal power plants: a numerical simulation].

    PubMed

    Wang, Zhan-Shan; Pan, Li-Bo

    2014-03-01

An emission inventory of air pollutants from thermal power plants in the year 2010 was compiled. Based on this inventory, air quality under scenarios implementing the 2003-version emission standard and the new emission standard was simulated using Models-3/CMAQ. The concentrations of NO2, SO2, and PM2.5, and the deposition of nitrogen and sulfur in 2015 and 2020, were predicted to quantify the regional air quality improvement attributable to the new emission standard. The results showed that the new emission standard could effectively improve air quality in China. Compared with continued implementation of the 2003-version standard, by 2015 and 2020 the area with NO2 concentration exceeding the standard would be reduced by 53.9% and 55.2%, the area with SO2 concentration exceeding the standard would be reduced by 40.0%, the area with nitrogen deposition higher than 1.0 t x km(-2) would be reduced by 75.4% and 77.9%, and the area with sulfur deposition higher than 1.6 t x km(-2) would be reduced by 37.1% and 34.3%, respectively.

  9. Numerical Predictions and Experimental Results of Air Flow in a Smooth Quarter-Scale Nacelle

    SciTech Connect

    BLACK, AMALIA R.; SUO-ANTTILA, JILL M.; GRITZO, LOUIS A.; DISIMILE, PETER J.; TUCKER, JAMES R.

    2002-06-01

    Fires in aircraft engine nacelles must be rapidly suppressed to avoid loss of life and property. The design of new and retrofit suppression systems has become significantly more challenging due to the ban on production of Halon 1301 for environmental concerns. Since fire dynamics and the transport of suppressants within the nacelle are both largely determined by the available air flow, efforts to define systems using less effective suppressants greatly benefit from characterization of nacelle air flow fields. A combined experimental and computational study of nacelle air flow therefore has been initiated. Calculations have been performed using both CFD-ACE (a Computational Fluid Dynamics (CFD) model with a body-fitted coordinate grid) and WLCAN (a CFD-based fire field model with a Cartesian ''brick'' shaped grid). The flow conditions examined in this study correspond to the same Reynolds number as test data from the full-scale nacelle simulator at the 46 Test Wing. Pre-test simulations of a quarter-scale test fixture were performed using CFD-ACE and WLCAN prior to fabrication. Based on these pre-test simulations, a quarter-scale test fixture was designed and fabricated for the purpose of obtaining spatially-resolved measurements of velocity and turbulence intensity in a smooth nacelle. Post-test calculations have been performed for the conditions of the experiment and compared with experimental results obtained from the quarter-scale test fixture. In addition, several different simulations were performed to assess the sensitivity of the predictions to the grid size, to the turbulence models, and to the use of wall functions. In general, the velocity predictions show very good agreement with the data in the center of the channel but deviate near the walls. The turbulence intensity results tend to amplify the differences in velocity, although most of the trends are in agreement. 
In addition, there were some differences between WLCAN and CFD-ACE results in the angled

  10. Experimental and numerical results for a generic axisymmetric single-engine afterbody with tails at transonic speeds

    NASA Technical Reports Server (NTRS)

    Burley, J. R., II; Carlson, J. R.; Henderson, W. P.

    1986-01-01

Static pressure measurements were made on the afterbody, nozzle, and tails of a generic single-engine axisymmetric fighter configuration. Data were recorded at Mach numbers of 0.6, 0.9, and 1.2. Nozzle pressure ratio (NPR) was varied from 1.0 to 8.0, and angle of attack was varied from -3 deg to 9 deg. Experimental data were compared with numerical results from two state-of-the-art computer codes.

  11. New numerical results and novel effective string predictions for Wilson loops

    NASA Astrophysics Data System (ADS)

    Billó, M.; Caselle, M.; Pellegrini, R.

    2012-01-01

We compute the prediction of the Nambu-Goto effective string model for a rectangular Wilson loop up to three loops. This is done through the use of an operatorial, first-order formulation and of the open string analogues of boundary states. This result is interesting since there are universality theorems stating that the predictions up to three loops are common to all effective string models. To test the effective string prediction, we use a Monte Carlo evaluation, in the 3d Ising gauge model, of an observable (the ratio of two Wilson loops with the same perimeter) for which boundary effects are relatively small. Our simulation attains a level of precision which is sufficient to test the two-loop correction. The three-loop correction seems to go in the right direction, but is still beyond the reach of our simulation, since its effect is comparable to the statistical errors of the latter.

  12. Bathymetry Determination via X-Band Radar Data: A New Strategy and Numerical Results

    PubMed Central

    Serafino, Francesco; Lugni, Claudio; Borge, Jose Carlos Nieto; Zamparelli, Virginia; Soldovieri, Francesco

    2010-01-01

This work addresses sea state monitoring using marine X-band radar images and focuses on the problem of sea depth estimation. We present and discuss a technique to estimate bathymetry by exploiting the dispersion relation for surface gravity waves. The estimation technique is based on the correlation between the measured and the theoretical sea wave spectra, and a simple analysis of the approach is performed through test cases with synthetic data. In more detail, the reliability of the estimation technique is verified using simulated data sets covering different values of bathymetry and surface current for two types of sea spectrum: JONSWAP and Pierson-Moskowitz. The results show that the estimated bathymetry is fairly accurate at shallow depths, while the estimate becomes less accurate as the depth increases, because the bathymetry plays a progressively smaller role in shaping the sea surface waves. PMID:22163565
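The inversion idea can be sketched directly from the linear dispersion relation ω² = gk·tanh(kh): given a wavenumber-frequency pair extracted from the radar spectra, the depth follows by inverting the tanh. This is a minimal sketch, not the paper's spectral-correlation method; the wave parameters below are assumed, and surface current is neglected.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """Invert omega^2 = G*k*tanh(k*h) for the water depth h (no current)."""
    ratio = omega ** 2 / (G * k)
    if ratio >= 1.0:
        return float("inf")   # deep-water limit: tanh saturates, h unresolvable
    return math.atanh(ratio) / k

# Assumed example: a 0.785 rad/s swell component with a 60 m wavelength.
k = 2.0 * math.pi / 60.0
h = depth_from_dispersion(0.785, k)
print(f"estimated depth = {h:.1f} m")
```

The saturation of tanh(kh) toward 1 in deep water is also why the inversion loses sensitivity there, consistent with the abstract's observation that accuracy degrades as depth increases.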

  13. Analysis and design of numerical schemes for gas dynamics 1: Artificial diffusion, upwind biasing, limiters and their effect on accuracy and multigrid convergence

    NASA Technical Reports Server (NTRS)

    Jameson, Antony

    1994-01-01

    The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrary high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
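The local-extremum-diminishing idea can be illustrated with a generic minmod-limited upwind scheme for scalar advection (a MUSCL-type sketch, not Jameson's SLIP/CUSP formulation itself): the limiter switches off the second-order correction at extrema, so an advected square pulse develops no over- or undershoots.

```python
import numpy as np

def minmod(a, b):
    """Limited slope: zero at extrema, smaller-magnitude difference otherwise."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def step(u, cfl):
    """One update of limited second-order upwind advection (a > 0)."""
    du = np.diff(u)                                      # u[i+1] - u[i]
    slope = minmod(np.r_[du[0], du], np.r_[du, du[-1]])  # limited cell slopes
    # Upwind face value with a limited second-order correction.
    face = u + 0.5 * (1.0 - cfl) * slope
    return u - cfl * np.diff(np.r_[face[0], face])

x = np.linspace(0.0, 1.0, 200)
u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)   # square pulse
for _ in range(100):
    u = step(u, cfl=0.5)

# LED property: maxima do not increase, minima do not decrease.
print(u.min(), u.max())
```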

  14. Tsunami hazard assessment in the Ionian Sea due to potential tsunamogenic sources - results from numerical simulations

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-A.; Stavrakakis, G.; Sokos, E.; Gkika, F.; Serpetsidaki, A.

    2010-05-01

Although the great majority of seismic tsunamis are generated in ocean domains, smaller basins like the Ionian Sea sometimes experience this phenomenon. In this investigation, we study the tsunami hazard associated with the Ionian Sea fault system. A scenario-based method is used to provide an estimate of the tsunami hazard in this region for the first time. Realistic faulting parameters for four probable seismic sources with tsunami potential are used to model the expected coseismic deformation, which is translated directly to the water surface and used as an initial condition for the tsunami propagation. We calculate tsunami propagation snapshots and mareograms for the four seismic sources in order to estimate the expected tsunami maximum amplitudes and arrival times at eleven tourist resorts along the Ionian shorelines. The results indicate that, of the four examined sources, only one poses a serious threat, causing wave amplitudes of up to 4 m at some tourist resorts along the Ionian shoreline.
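For scale, a back-of-envelope long-wave estimate (independent of the paper's simulations) relates arrival time to basin depth through the shallow-water phase speed c = √(gh). The depth and source distance below are assumed round numbers, not values from the paper.

```python
import math

def arrival_time_minutes(distance_km, depth_m, g=9.81):
    """Travel time of a long wave over a path of roughly constant depth."""
    c = math.sqrt(g * depth_m)            # shallow-water wave speed, m/s
    return distance_km * 1000.0 / c / 60.0

# e.g. a source 100 km offshore over a basin assumed to be ~2000 m deep:
t = arrival_time_minutes(100.0, 2000.0)
print(f"~{t:.0f} min to shore")
```

Travel times of order ten minutes over such distances are why near-field warning margins in small basins are short.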

  15. Insight into collision zone dynamics from topography: numerical modelling results and observations

    NASA Astrophysics Data System (ADS)

    Bottrill, A. D.; van Hunen, J.; Allen, M. B.

    2012-07-01

Dynamic models of subduction and continental collision are used to predict dynamic topography changes on the overriding plate. The modelling results show a distinct evolution of topography on the overriding plate, during subduction, continental collision and slab break-off. A prominent topographic feature is a temporary (few Myrs) deepening in the area of the back-arc basin after initial collision. This collisional mantle dynamic basin (CMDB) is caused by slab steepening drawing material away from the base of the overriding plate. Also during this initial collision phase, surface uplift is predicted on the overriding plate between the suture zone and the CMDB, due to the subduction of buoyant continental material and its isostatic compensation. After slab detachment, redistribution of stresses and underplating of the overriding plate causes the uplift to spread further into the overriding plate. This topographic evolution fits the stratigraphy found on the overriding plate of the Arabia-Eurasia collision zone in Iran and south east Turkey. The sedimentary record from the overriding plate contains Upper Oligocene-Lower Miocene marine carbonates deposited between terrestrial clastic sedimentary rocks, in units such as the Qom Formation and its lateral equivalents. This stratigraphy shows that during the Late Oligocene-Early Miocene the surface of the overriding plate sank below sea level before rising back above sea level, without major compressional deformation recorded in the same area. This uplift and subsidence pattern correlates well with our modelled topography changes.

  16. Insight into collision zone dynamics from topography: numerical modelling results and observations

    NASA Astrophysics Data System (ADS)

    Bottrill, A. D.; van Hunen, J.; Allen, M. B.

    2012-11-01

Dynamic models of subduction and continental collision are used to predict dynamic topography changes on the overriding plate. The modelling results show a distinct evolution of topography on the overriding plate, during subduction, continental collision and slab break-off. A prominent topographic feature is a temporary (few Myrs) basin on the overriding plate after initial collision. This "collisional mantle dynamic basin" (CMDB) is caused by slab steepening drawing material away from the base of the overriding plate. Also, during this initial collision phase, surface uplift is predicted on the overriding plate between the suture zone and the CMDB, due to the subduction of buoyant continental material and its isostatic compensation. After slab detachment, redistribution of stresses and underplating of the overriding plate cause the uplift to spread further into the overriding plate. This topographic evolution fits the stratigraphy found on the overriding plate of the Arabia-Eurasia collision zone in Iran and south east Turkey. The sedimentary record from the overriding plate contains Upper Oligocene-Lower Miocene marine carbonates deposited between terrestrial clastic sedimentary rocks, in units such as the Qom Formation and its lateral equivalents. This stratigraphy shows that during the Late Oligocene-Early Miocene the surface of the overriding plate sank below sea level before rising back above sea level, without major compressional deformation recorded in the same area. Our modelled topography changes fit well with this observed uplift and subsidence.

  17. Linking stress field deflection to basement structures in southern Ontario: Results from numerical modelling

    NASA Astrophysics Data System (ADS)

    Baird, Alan F.; McKinnon, Stephen D.

    2007-03-01

Analysis of stress measurement data from the near surface to crustal depths in southern Ontario shows a misalignment between the direction of tectonic loading and the orientation of the major horizontal principal stress. The compressive stress field instead appears to be oriented sub-parallel to the major terrane boundaries such as the Grenville Front, the Central Metasedimentary Belt boundary zone, and the Elzevir-Frontenac boundary zone. This suggests that the stress field has been modified by these deep, crustal-scale deformation zones. In order to test this hypothesis, a geomechanical model was constructed using the three-dimensional discontinuum stress analysis code 3DEC. The model consists of the 45 km thick crust of southern Ontario, in which the major crustal-scale deformation zones are represented as discrete faults. Lateral velocity boundary conditions were applied to the sides of the model in the direction of tectonic loading in order to generate the horizontal compressive stress field. Modelling results show that for low fault strength (low friction angle and cohesion), fault slip causes the stress field to rotate toward the strike of the faults, consistent with the observed direction of misalignment with the tectonic loading direction. Observed distortions of the regional stress field may thus be explained by a relatively simple mechanism: slip on deep first-order structures in response to neotectonic driving forces.

  18. Reinforcing mechanism of anchors in slopes: a numerical comparison of results of LEM and FEM

    NASA Astrophysics Data System (ADS)

    Cai, Fei; Ugai, Keizo

    2003-06-01

This paper reports the limitations of the conventional Bishop's simplified method for calculating the safety factor of slopes stabilized with anchors, and proposes a new approach for incorporating the reinforcing effect of anchors into the safety factor. The reinforcing effect of anchors can be represented as an additional shearing resistance on the slip surface. A three-dimensional shear strength reduction finite element method (SSRFEM), in which soil-anchor interaction was simulated by three-dimensional zero-thickness elasto-plastic interface elements, was used to calculate the safety factor of slopes stabilized with anchors and to verify the reinforcing mechanism. The results of SSRFEM were compared with those of the conventional and proposed approaches for Bishop's simplified method over various orientations, positions, and spacings of anchors, and various shear strengths of the soil-grouted body interface. For the safety factor, the proposed approach agreed better with SSRFEM than the conventional approach. The additional shearing resistance can explain the influence of the orientation, position, and spacing of anchors, and of the interface shear strength, on the safety factor of slopes stabilized with anchors.
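The conventional baseline the paper modifies, Bishop's simplified method, is a fixed-point iteration on the factor of safety F, since F appears inside the m_α term. A minimal sketch without any anchor terms, using made-up slice data for a homogeneous dry slope (all values assumed, not from the paper), is:

```python
import math

# Assumed toy slice data for a dry homogeneous slope: weights W (kN/m),
# base inclinations alpha (rad), and base widths b (m).
c, phi = 10.0, math.radians(30.0)          # cohesion (kPa), friction angle
W     = [120.0, 260.0, 300.0, 220.0, 90.0]
alpha = [math.radians(a) for a in (-5.0, 8.0, 22.0, 38.0, 55.0)]
b     = [2.0] * 5

def bishop_fs(n_iter=50):
    """Fixed-point iteration: F = sum[(c*b + W*tan(phi))/m_alpha] / sum[W*sin(alpha)]."""
    F = 1.0                                 # initial guess
    driving = sum(w * math.sin(a) for w, a in zip(W, alpha))
    for _ in range(n_iter):
        resisting = 0.0
        for w, a, bi in zip(W, alpha, b):
            m_alpha = math.cos(a) * (1.0 + math.tan(a) * math.tan(phi) / F)
            resisting += (c * bi + w * math.tan(phi)) / m_alpha
        F = resisting / driving
    return F

print(f"factor of safety = {bishop_fs():.3f}")
```

The paper's proposed approach adds the anchor contribution as an extra resisting term on the slip surface; the iteration structure is unchanged.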

  19. Restricted diffusion in a model acinar labyrinth by NMR: Theoretical and numerical results

    NASA Astrophysics Data System (ADS)

    Grebenkov, D. S.; Guillot, G.; Sapoval, B.

    2007-01-01

The branched geometrical structure of mammalian lungs is known to be crucial for rapid access of oxygen to blood. An important pulmonary disease like emphysema, however, results in partial destruction of the alveolar tissue and enlargement of the distal airspaces, which may reduce the total oxygen transfer. This effect has been intensively studied during the last decade by MRI of hyperpolarized gases like helium-3, but the relation between geometry and signal attenuation remained obscure due to the lack of a realistic geometrical model of the acinar morphology. In this paper, we use Monte Carlo simulations of restricted diffusion in a realistic model acinus to compute the signal attenuation in a diffusion-weighted NMR experiment. We demonstrate that this technique should be sensitive to destruction of the branched structure: partial removal of the interalveolar tissue creates loops in the tree-like acinar architecture that enhance diffusive motion and the consequent signal attenuation. The role of the local geometry and related practical applications are discussed.
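The core Monte Carlo idea, that confinement lowers the apparent diffusion coefficient below its free value, can be sketched in one dimension, with a reflecting interval standing in for the paper's 3-D acinar labyrinth. All parameters below are assumed, in arbitrary units.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: random walkers confined to [0, L] with reflecting walls.
# Restriction lowers ADC = <(x - x0)^2> / (2t) below the free value D.
D, L = 1.0, 1.0                   # free diffusion coefficient, pore size (a.u.)
n_steps, dt, n_walkers = 2000, 1e-4, 20000
sigma = np.sqrt(2.0 * D * dt)     # Brownian step size in 1-D

x = rng.uniform(0.0, L, n_walkers)        # start uniformly inside the pore
x0 = x.copy()
for _ in range(n_steps):
    x += rng.normal(0.0, sigma, n_walkers)
    x = np.abs(x)                          # reflect at x = 0
    x = L - np.abs(L - x)                  # reflect at x = L

t = n_steps * dt
adc = np.mean((x - x0) ** 2) / (2.0 * t)
print(f"ADC/D = {adc / D:.2f}")            # < 1: restriction is detected
```

In the paper's setting, removing interalveolar walls creates loops that let spins wander further, pushing the apparent diffusivity (and hence the signal attenuation) back up; this is what makes the measurement sensitive to tissue destruction.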

  20. The Formation of Asteroid Satellites in Catastrophic Impacts: Results from Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Durda, D. D.; Bottke, W. F., Jr.; Enke, B. L.; Asphaug, E.; Richardson, D. C.; Leinhardt, Z. M.

    2003-01-01

    We have performed new simulations of the formation of asteroid satellites by collisions, using a combination of hydrodynamical and gravitational dynamical codes. This initial work shows that both small satellites and ejected, co-orbiting pairs are produced most favorably by moderate-energy collisions at more direct, rather than oblique, impact angles. Simulations so far seem to be able to produce systems qualitatively similar to known binaries. Asteroid satellites provide vital clues that can help us understand the physics of hypervelocity impacts, the dominant geologic process affecting large main belt asteroids. Moreover, models of satellite formation may provide constraints on the internal structures of asteroids beyond those possible from observations of satellite orbital properties alone. It is probable that most observed main-belt asteroid satellites are by-products of cratering and/or catastrophic disruption events. Several possible formation mechanisms related to collisions have been identified: (i) mutual capture following catastrophic disruption, (ii) rotational fission due to glancing impact and spin-up, and (iii) re-accretion in orbit of ejecta from large, non-catastrophic impacts. Here we present results from a systematic investigation directed toward mapping out the parameter space of the first and third of these three collisional mechanisms.

  1. Multi-temperature representation of electron velocity distribution functions. I. Fits to numerical results

    SciTech Connect

    Haji Abolhassani, A. A.; Matte, J.-P.

    2012-10-15

Electron energy distribution functions are expressed as a sum of 6-12 Maxwellians, or as a sum of 3 Maxwellians each multiplied by a finite series of generalized Laguerre polynomials. We fitted several distribution functions obtained from the finite difference Fokker-Planck code 'FPI' [Matte and Virmont, Phys. Rev. Lett. 49, 1936 (1982)] to these forms by matching the moments, and showed that they can represent very well the coexistence of hot and cold populations, with a temperature ratio as high as 1000. This was performed for two types of problems: (1) the collisional relaxation of a minority hot component in a uniform plasma and (2) electron heat flow down steep temperature gradients, from a hot to a much colder plasma. We find that the multi-Maxwellian representation is particularly good if we accept complex temperatures and coefficients, and it is always better than the representation with generalized Laguerre polynomials for an equal number of moments. For the electron heat flow problem, the method was modified to also fit the first-order anisotropy f_1(x,v,t), again with excellent results. We conclude that this multi-Maxwellian representation can provide a viable alternative to the finite difference speed or energy grid in kinetic codes.
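The starting point of such fits can be sketched by building a two-temperature distribution as an explicit sum of Maxwellians and checking its low-order moments. This illustrates the representation only, not the paper's moment-matching procedure; units are normalized (m = k_B = 1) and the hot fraction is an assumed value.

```python
import numpy as np

def maxwell_speed(v, T):
    """Isotropic 3-D Maxwellian speed distribution, normalized to unit density."""
    return (2.0 / np.pi) ** 0.5 * v ** 2 * T ** -1.5 * np.exp(-v ** 2 / (2.0 * T))

v = np.linspace(0.0, 400.0, 200001)
dv = v[1] - v[0]
# Temperature ratio 1000, as in the paper's hardest case; 5% hot fraction assumed.
T_cold, T_hot, frac_hot = 1.0, 1000.0, 0.05
f = (1.0 - frac_hot) * maxwell_speed(v, T_cold) + frac_hot * maxwell_speed(v, T_hot)

# Moments: density (normalization) and mean energy <v^2>/2 per electron.
density = f.sum() * dv
mean_energy = (0.5 * v ** 2 * f).sum() * dv / density
expected = 1.5 * ((1.0 - frac_hot) * T_cold + frac_hot * T_hot)
print(density, mean_energy, expected)
```

The fitting problem the paper solves runs in the opposite direction: given the moments of a numerically computed distribution, find the weights and temperatures of the Maxwellian sum that reproduce them.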

  2. A comparative study between experimental results and numerical predictions of multi-wall structural response to hypervelocity impact

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.; Peck, Jeffrey A.

    1992-01-01

Over the last three decades, multi-wall structures have been analyzed extensively, primarily through experiment, as a means of increasing the protection afforded to spacecraft structures. However, as structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative, numerical modeling of high-speed impact phenomena is often used to predict the response of a variety of structural systems under impact loading conditions. This paper presents the results of a preliminary numerical/experimental investigation of the hypervelocity impact response of multi-wall structures. The results of experimental high-speed impact tests are compared against the predictions of the HULL hydrodynamic computer code. It is shown that the hypervelocity impact response characteristics of a specific system cannot be accurately predicted from a limited number of HULL code impact simulations. However, if a wide range of impact loading conditions is considered, then the ballistic limit curve of the system based on the entire series of numerical simulations can be used as a relatively accurate indication of actual system response.

  3. Numerical Modeling of Anti-icing Systems and Comparison to Test Results on a NACA 0012 Airfoil

    NASA Technical Reports Server (NTRS)

    Al-Khalil, Kamel M.; Potapczuk, Mark G.

    1993-01-01

A series of experimental tests was conducted in the NASA Lewis IRT on an electro-thermally heated NACA 0012 airfoil. Quantitative comparisons between the experimental results and those predicted by a computer simulation code were made to assess the validity of a recently developed anti-icing model. An infrared camera was utilized to scan the instantaneous temperature contours of the skin surface. Despite some experimental difficulties, good agreement between the numerical predictions and the experimental results was generally obtained for the surface temperature and for the possibility of the runback water freezing. Some recommendations were given for efficient operation of a thermal anti-icing system.

  4. Recent Experimental and Numerical Results on Turbulence, Flows and Global Stability Under Biasing in a Magnetized Linear Plasma

    NASA Astrophysics Data System (ADS)

    Gilmore, M.; Desjardins, T. R.; Fisher, D. M.

    2016-10-01

Ongoing experiments and numerical modeling on the effects of flow shear on electrostatic turbulence in the presence of electrode biasing are being conducted in helicon plasmas in the linear HelCat (Helicon-Cathode) device. It is found that changes in flow shear, effected by electrode biasing through Er × Bz rotation, can strongly affect fluctuation dynamics, including fully suppressing the fluctuations or inducing chaos. The fundamental underlying instability, at least in the case of low magnetic field, is identified as a hybrid resistive drift-Kelvin-Helmholtz mode. At higher magnetic fields, multiple modes (resistive drift, rotation-driven interchange, and/or Kelvin-Helmholtz) are present and interact nonlinearly. At high positive electrode bias (V > 10 Te), a large-amplitude global instability, identified as the potential relaxation instability, is observed. Numerical modeling is also being conducted, using a three-fluid global Braginskii solver for the no-bias and moderate-bias cases, and a 1D PIC code for the high-bias cases. Recent experimental and numerical results will be presented. Supported by U.S. National Science Foundation Award 1500423.

  5. Linear and Logarithmic Speed-Accuracy Trade-Offs in Reciprocal Aiming Result from Task-Specific Parameterization of an Invariant Underlying Dynamics

    ERIC Educational Resources Information Center

    Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.

    2009-01-01

    The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…

  6. Dosimetric accuracy assessment of a treatment plan verification system for scanned proton beam radiotherapy: one-year experimental results and Monte Carlo analysis of the involved uncertainties.

    PubMed

    Molinelli, S; Mairani, A; Mirandola, A; Vilches Freixas, G; Tessonnier, T; Giordanengo, S; Parodi, K; Ciocca, M; Orecchia, R

    2013-06-07

    During one year of clinical activity at the Italian National Center for Oncological Hadron Therapy 31 patients were treated with actively scanned proton beams. Results of patient-specific quality assurance procedures are presented here which assess the accuracy of a three-dimensional dose verification technique with the simultaneous use of multiple small-volume ionization chambers. To investigate critical cases of major deviations between treatment planning system (TPS) calculated and measured data points, a Monte Carlo (MC) simulation tool was implemented for plan verification in water. Starting from MC results, the impact of dose calculation, dose delivery and measurement set-up uncertainties on plan verification results was analyzed. All resulting patient-specific quality checks were within the acceptance threshold, which was set at 5% for both mean deviation between measured and calculated doses and standard deviation. The mean deviation between TPS dose calculation and measurement was less than ±3% in 86% of the cases. When all three sources of uncertainty were accounted for, simulated data sets showed a high level of agreement, with mean and maximum absolute deviation lower than 2.5% and 5%, respectively.
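The pass/fail logic described for the patient-specific checks amounts to simple deviation statistics against the 5% thresholds. A sketch with made-up dose values (not the paper's measurements) is:

```python
import numpy as np

# Assumed illustration data: TPS-calculated vs chamber-measured doses (Gy).
tps      = np.array([1.98, 2.05, 2.10, 1.95, 2.02, 2.00])
measured = np.array([2.00, 2.01, 2.15, 1.92, 2.06, 1.97])

dev = (measured - tps) / tps * 100.0        # per-point deviation, %
mean_dev, std_dev = dev.mean(), dev.std(ddof=1)

# Acceptance threshold from the abstract: 5% on both mean and standard deviation.
passes = abs(mean_dev) < 5.0 and std_dev < 5.0
print(f"mean = {mean_dev:+.2f}%, sd = {std_dev:.2f}%, pass = {passes}")
```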

  7. Dosimetric accuracy assessment of a treatment plan verification system for scanned proton beam radiotherapy: one-year experimental results and Monte Carlo analysis of the involved uncertainties

    NASA Astrophysics Data System (ADS)

    Molinelli, S.; Mairani, A.; Mirandola, A.; Vilches Freixas, G.; Tessonnier, T.; Giordanengo, S.; Parodi, K.; Ciocca, M.; Orecchia, R.

    2013-06-01

    During one year of clinical activity at the Italian National Center for Oncological Hadron Therapy 31 patients were treated with actively scanned proton beams. Results of patient-specific quality assurance procedures are presented here which assess the accuracy of a three-dimensional dose verification technique with the simultaneous use of multiple small-volume ionization chambers. To investigate critical cases of major deviations between treatment planning system (TPS) calculated and measured data points, a Monte Carlo (MC) simulation tool was implemented for plan verification in water. Starting from MC results, the impact of dose calculation, dose delivery and measurement set-up uncertainties on plan verification results was analyzed. All resulting patient-specific quality checks were within the acceptance threshold, which was set at 5% for both mean deviation between measured and calculated doses and standard deviation. The mean deviation between TPS dose calculation and measurement was less than ±3% in 86% of the cases. When all three sources of uncertainty were accounted for, simulated data sets showed a high level of agreement, with mean and maximum absolute deviation lower than 2.5% and 5%, respectively.

  8. Comparison of numerical simulation with experimental result for small scale one seater wing in ground effect (WIG) craft

    NASA Astrophysics Data System (ADS)

    Baharun, A. Tarmizi; Maimun, Adi; Ahmed, Yasser M.; Mobassher, M.; Nakisa, M.

    2015-05-01

    In this paper, three-dimensional data and the behavior of incompressible, steady air flow around a small-scale Wing in Ground Effect (WIG) craft were investigated numerically and then compared with the experimental results and published data. The computational simulation (CFD) adopted two turbulence models, k-ɛ and k-ω, in order to determine which model produces the minimum difference from the experimental results for the small-scale WIG tested in a wind tunnel. An unstructured mesh was used in the simulation, and data on the drag coefficient (Cd) and lift coefficient (Cl) were obtained with the angle of attack (AoA) of the WIG model as the parameter. Ansys ICEM was used for the meshing process while Ansys Fluent was used as the solver. The aerodynamic forces Cl, Cd and Cl/Cd, along with the fluid flow pattern of the small-scale WIG craft, were shown and discussed.

  9. PINTEX Data: Numeric results from the Polarized Internal Target Experiments (PINTEX) at the Indiana University Cyclotron Facility

    DOE Data Explorer

    Meyer, H. O.

    The PINTEX group studied proton-proton and proton-deuteron scattering and reactions between 100 and 500 MeV at the Indiana University Cyclotron Facility (IUCF). More than a dozen experiments made use of electron-cooled polarized proton or deuteron beams, orbiting in the 'Indiana Cooler' storage ring, and of a polarized atomic-beam target of hydrogen or deuterium in the path of the stored beam. The collaboration involved researchers from several midwestern universities, as well as a number of European institutions. The PINTEX program ended when the Indiana Cooler was shut down in August 2002. The website contains links to some of the numerical results, descriptions of experiments, and a complete list of publications resulting from PINTEX.

  10. Validation and Analysis of Numerical Results for a Two-Pass Trapezoidal Channel With Different Cooling Configurations of Trailing Edge.

    PubMed

    Siddique, Waseem; El-Gabry, Lamyaa; Shevchuk, Igor V; Fransson, Torsten H

    2013-01-01

    High inlet temperatures in a gas turbine lead to an increase in the thermal efficiency of the gas turbine. This results in the requirement of cooling of gas turbine blades/vanes. Internal cooling of gas turbine blades/vanes with the help of two-pass channels is one of the effective methods to reduce metal temperatures. In particular, the trailing edge of a turbine vane is a critical area where effective cooling is required. The trailing edge can be modeled as a trapezoidal channel. This paper describes the numerical validation of the heat transfer and pressure drop in a trapezoidal channel with and without orthogonal ribs at the bottom surface. A new concept of a ribbed trailing edge is introduced in this paper, which presents a numerical study of several trailing-edge cooling configurations based on the placement of ribs at different walls. The baseline geometries are two-pass trapezoidal channels with and without orthogonal ribs at the bottom surface of the channel. Ribs induce secondary flow, which results in enhancement of heat transfer; therefore, for enhancement of heat transfer at the trailing edge, ribs are placed at the trailing-edge surface in three different configurations: first without ribs at the bottom surface, then with ribs at the trailing-edge surface in line with the ribs at the bottom surface, and finally with staggered ribs. Heat transfer and pressure drop are calculated at a Reynolds number of 9400 for all configurations. Different turbulence models are used for the validation of the numerical results. For the smooth channel, the low-Re k-ɛ, realizable k-ɛ, RNG k-ω, low-Re k-ω, and SST k-ω models are compared, whereas for the ribbed channel, the low-Re k-ɛ and SST k-ω models are compared. The results show that the low-Re k-ɛ model, which predicts the heat transfer in the outlet pass of the smooth channels with a difference of +7%, underpredicts the heat transfer by -17% in the case of the ribbed channel compared to

  11. Role of the sample thickness on the performance of cholesteric liquid crystal lasers: Experimental, numerical, and analytical results

    NASA Astrophysics Data System (ADS)

    Sanz-Enguita, G.; Ortega, J.; Folcia, C. L.; Aramburu, I.; Etxebarria, J.

    2016-02-01

    We have studied the performance characteristics of a dye-doped cholesteric liquid crystal (CLC) laser as a function of the sample thickness. The study has been carried out both from the experimental and theoretical points of view. The theoretical model is based on the kinetic equations for the population of the excited states of the dye and for the power of light generated within the laser cavity. From the equations, the threshold pump radiation energy Eth and the slope efficiency η are numerically calculated. Eth is rather insensitive to thickness changes, except for small thicknesses. In comparison, η shows a much more pronounced variation, exhibiting a maximum that determines the sample thickness for optimum laser performance. The predictions are in good accordance with the experimental results. Approximate analytical expressions for Eth and η as a function of the physical characteristics of the CLC laser are also proposed. These expressions present an excellent agreement with the numerical calculations. Finally, we comment on the general features of CLC layer and dye that lead to the best laser performance.
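The two performance figures discussed above, the threshold pump energy Eth and the slope efficiency η, are commonly read off a straight-line fit of laser output versus pump energy above threshold: the slope is η and the x-intercept is Eth. A minimal sketch with synthetic data points (not from the paper):

```python
# Least-squares line E_out = eta * (E_pump - Eth) fitted to synthetic
# (pump, output) pairs; eta is the slope efficiency, Eth the x-intercept.

def fit_laser_line(pump, out):
    n = len(pump)
    mx = sum(pump) / n
    my = sum(out) / n
    sxx = sum((x - mx) ** 2 for x in pump)
    sxy = sum((x - mx) * (y - my) for x, y in zip(pump, out))
    eta = sxy / sxx              # slope efficiency
    eth = mx - my / eta          # x-intercept = threshold pump energy
    return eta, eth

pump_uJ = [10, 20, 30, 40, 50]          # pump energies (microjoules), invented
out_uJ  = [0.0, 2.0, 4.0, 6.0, 8.0]     # laser output above threshold, invented
eta, eth = fit_laser_line(pump_uJ, out_uJ)
```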

  12. Results from a round-robin study assessing the precision and accuracy of LA-ICPMS U/Pb geochronology of zircon

    NASA Astrophysics Data System (ADS)

    Hanchar, J. M.

    2009-12-01

    A round-robin study was undertaken to assess the current state of precision and accuracy that can be achieved in LA-ICPMS U/Pb geochronology of zircon. The initial plan was to select abundant, well-characterized zircon samples to distribute to participants in the study. Three suitable samples were found, evaluated, and dated using ID-TIMS. Twenty-five laboratories in North America and Europe were asked to participate in the study. Eighteen laboratories agreed to participate, of which seventeen submitted final results. It was decided at the outset of the project that the identities of the participating researchers and laboratories not be revealed until the manuscript stemming from the project was completed. Participants were sent either fragments of zircon crystals or whole zircon crystals, selected randomly after being thoroughly mixed. Participants were asked to conform to specific requirements. These included providing all analytical conditions and equipment used, submitting all data acquired, and submitting their preferred data and preferred ages for the three samples. The participating researchers used a wide range of analytical methods (e.g., instrumentation, data reduction, error propagation) for the LA-ICPMS U/Pb geochronology. These combined factors made direct comparison of the submitted results difficult. Most of the LA-ICPMS results submitted were within 2% r.s.d. of the ID-TIMS values for the three samples in the study. However, the error bars for the majority of the LA-ICPMS results for the three samples did not overlap with the ID-TIMS results. These results suggest a general underestimation of the errors calculated for the LA-ICPMS U/Pb zircon analyses.
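The agreement test behind the conclusion above can be sketched as an interval-overlap check: a laboratory's age can sit within 2% of the reference value and yet fail to overlap it once its quoted uncertainty is too small, which is exactly the signature of underestimated errors. All ages and uncertainties below are hypothetical.

```python
# Two results agree (at the quoted uncertainty level) only if their
# uncertainty intervals overlap, i.e. the age difference does not
# exceed the sum of the two uncertainties.

def intervals_overlap(age1, err1, age2, err2):
    return abs(age1 - age2) <= err1 + err2

id_tims_age, id_tims_err = 416.8, 0.3   # reference ID-TIMS age (Ma), hypothetical
lab_age, lab_err = 412.5, 2.0           # submitted LA-ICPMS result (Ma), hypothetical

rel_dev = 100.0 * abs(lab_age - id_tims_age) / id_tims_age  # percent deviation
agrees = intervals_overlap(lab_age, lab_err, id_tims_age, id_tims_err)
```

Here the relative deviation is about 1%, comfortably inside 2%, yet the intervals do not overlap.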

  13. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; ...

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
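The extrapolation to the thermodynamic limit emphasized above can be sketched as a finite-size fit: compute the energy per site on lattices of increasing linear size L, fit against a chosen finite-size ansatz (here 1/L², one common choice; the appropriate power depends on the method and observable), and read off the intercept. The energies below are synthetic illustration values, not benchmark data.

```python
# Fit E(L) = E_inf + a / L**2 by least squares and return the intercept,
# i.e. the thermodynamic-limit estimate E(L -> infinity).

def extrapolate(sizes, energies):
    xs = [1.0 / L ** 2 for L in sizes]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(energies) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, energies)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx   # intercept of the fitted line

L_values = [4, 6, 8, 10]
E_per_site = [-0.85 + 0.4 / L ** 2 for L in L_values]   # fabricated trend
E_inf = extrapolate(L_values, E_per_site)
```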

  14. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  15. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower- and aircraft-based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  16. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced to 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  17. Equations of State for Mixtures: Results from DFT Simulations of Xenon/Ethane Mixtures Compared to High Accuracy Validation Experiments on Z

    NASA Astrophysics Data System (ADS)

    Magyar, Rudolph

    2013-06-01

    We report a computational and validation study of equation of state (EOS) properties of liquid / dense plasma mixtures of xenon and ethane to explore and to illustrate the physics of the molecular scale mixing of light elements with heavy elements. Accurate EOS models are crucial to achieve high-fidelity hydrodynamics simulations of many high-energy-density phenomena such as inertial confinement fusion and strong shock waves. While the EOS is often tabulated for separate species, the equation of state for arbitrary mixtures is generally not available, requiring properties of the mixture to be approximated by combining physical properties of the pure systems. The main goal of this study is to assess how accurate this approximation is under shock conditions. Density functional theory molecular dynamics (DFT-MD) at elevated-temperature and pressure is used to assess the thermodynamics of the xenon-ethane mixture. The simulations are unbiased as to elemental species and therefore provide comparable accuracy when describing total energies, pressures, and other physical properties of mixtures as they do for pure systems. In addition, we have performed shock compression experiments using the Sandia Z-accelerator on pure xenon, ethane, and various mixture ratios thereof. The Hugoniot results are compared to the DFT-MD results and the predictions of different rules for combining EOS tables. The DFT-based simulation results compare well with the experimental points, and it is found that a mixing rule based on pressure equilibration performs reliably well for the mixtures considered. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
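The pressure-equilibration mixing rule found to perform well above can be sketched as follows: at a given pressure each component takes the specific volume given by its own pure-species EOS, and the mixture volume is the mass-weighted sum of those volumes. The two toy EOS functions below are placeholders standing in for tabulated xenon and ethane equations of state, not the real ones.

```python
# Additive-volume (pressure-equilibration) mixing: both components sit at
# the same pressure, and their specific volumes add with mass-fraction
# weights. Toy inverse-pressure EOS forms are used purely for illustration.

def v_xenon(P):       # placeholder pure-xenon EOS: specific volume vs pressure
    return 1.0 / (2.0 * P)

def v_ethane(P):      # placeholder pure-ethane EOS
    return 1.0 / (0.5 * P)

def v_mixture(P, x_xe):
    """Mixture specific volume for xenon mass fraction x_xe at pressure P."""
    return x_xe * v_xenon(P) + (1.0 - x_xe) * v_ethane(P)

v_mix = v_mixture(P=10.0, x_xe=0.6)
```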

  18. Numerical study of RF exposure and the resulting temperature rise in the foetus during a magnetic resonance procedure

    NASA Astrophysics Data System (ADS)

    Hand, J. W.; Li, Y.; Hajnal, J. V.

    2010-02-01

    Numerical simulations of specific absorption rate (SAR) and temperature changes in a 26-week pregnant woman model within typical birdcage body coils as used in 1.5 T and 3 T MRI scanners are described. Spatial distributions of SAR and the resulting spatial and temporal changes in temperature are determined using a finite difference time domain method and a finite difference bio-heat transfer solver that accounts for discrete vessels. Heat transfer from foetus to placenta via the umbilical vein and arteries as well as that across the foetal skin/amniotic fluid/uterine wall boundaries is modelled. Results suggest that for procedures compliant with IEC normal mode conditions (maternal whole-body averaged SAR_MWB ≤ 2 W kg⁻¹ (continuous or time-averaged over 6 min)), whole foetal SAR, local foetal SAR_10g and average foetal temperature are within international safety limits. For continuous RF exposure at SAR_MWB = 2 W kg⁻¹ over periods of 7.5 min or longer, a maximum local foetal temperature >38 °C may occur. However, assessment of the risk posed by such maximum temperatures predicted in a static model is difficult because of frequent foetal movement. Results also confirm that when SAR_MWB = 2 W kg⁻¹, some local SAR_10g values in the mother's trunk and extremities exceed recommended limits.

  19. Numerical study of RF exposure and the resulting temperature rise in the foetus during a magnetic resonance procedure.

    PubMed

    Hand, J W; Li, Y; Hajnal, J V

    2010-02-21

    Numerical simulations of specific absorption rate (SAR) and temperature changes in a 26-week pregnant woman model within typical birdcage body coils as used in 1.5 T and 3 T MRI scanners are described. Spatial distributions of SAR and the resulting spatial and temporal changes in temperature are determined using a finite difference time domain method and a finite difference bio-heat transfer solver that accounts for discrete vessels. Heat transfer from foetus to placenta via the umbilical vein and arteries as well as that across the foetal skin/amniotic fluid/uterine wall boundaries is modelled. Results suggest that for procedures compliant with IEC normal mode conditions (maternal whole-body averaged SAR(MWB) < or = 2 W kg(-1) (continuous or time-averaged over 6 min)), whole foetal SAR, local foetal SAR(10 g) and average foetal temperature are within international safety limits. For continuous RF exposure at SAR(MWB) = 2 W kg(-1) over periods of 7.5 min or longer, a maximum local foetal temperature >38 degrees C may occur. However, assessment of the risk posed by such maximum temperatures predicted in a static model is difficult because of frequent foetal movement. Results also confirm that when SAR(MWB) = 2 W kg(-1), some local SAR(10g) values in the mother's trunk and extremities exceed recommended limits.
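The IEC normal-mode condition referenced in both records above allows the whole-body SAR limit to be met either continuously or as a 6-minute time average, so compliance can be checked as a running mean over the exposure history. The sample values and the 1-minute sampling interval below are invented for illustration.

```python
# Sliding 6-min window check of whole-body averaged SAR: brief peaks above
# the limit are acceptable as long as no 6-min mean exceeds it.

def six_min_average_ok(sar_samples, dt_s=60.0, limit=2.0, window_s=360.0):
    """sar_samples: whole-body SAR readings (W/kg) at dt_s spacing."""
    n = int(window_s / dt_s)
    for i in range(len(sar_samples) - n + 1):
        window = sar_samples[i:i + n]
        if sum(window) / n > limit:
            return False
    return True

# 1-min samples (W/kg): short excursions above 2 W/kg, compliant on average.
samples = [1.0, 2.5, 3.0, 1.0, 1.5, 1.0, 1.0, 1.0]
ok = six_min_average_ok(samples)
```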

  20. Optimization of SiO2-TiNxOy-Cu interference absorbers: numerical and experimental results

    NASA Astrophysics Data System (ADS)

    Lazarov, Michel P.; Sizmann, R.; Frei, Ulrich

    1993-10-01

    SiO2-TiNxOy-Cu absorbers were prepared with activated reactive evaporation (ARE). The deposition parameters for the ARE process were adjusted according to the results of numerical optimization by a genetic algorithm. We present spectral reflectance, calorimetric and grazing-incidence X-ray reflection (GXR) measurements. The best coatings for applications as selective absorbers in the range of T = 100-200 °C exhibit a solar absorptance of 0.94 and a near-normal emittance of 0.044 at 100 °C. This emittance is correlated with the hemispherical emittance of 0.061 obtained from calorimetric measurements at 200 °C. First results of lifetime studies show that the coatings are thermally stable under vacuum up to 400 °C. The SiO2 film passivates the absorber; a substantial slowdown of degradation in dry air is observed. Our tests demonstrate that the coating will withstand breakdown in cooling fluid and vacuum if mounted in an evacuated collector.

  1. Experimental results and numerical modeling of a high-performance large-scale cryopump. I. Test particle Monte Carlo simulation

    SciTech Connect

    Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos

    2011-07-15

    For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the pumping speeds measured. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields which holds for higher throughputs is calculated using this generic approach. This means that the test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also for the supply of the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.

  2. The Trichoderma harzianum demon: complex speciation history resulting in coexistence of hypothetical biological species, recent agamospecies and numerous relict lineages

    PubMed Central

    2010-01-01

    Background The mitosporic fungus Trichoderma harzianum (Hypocrea, Ascomycota, Hypocreales, Hypocreaceae) is a ubiquitous species in the environment, with some strains commercially exploited for the biological control of plant pathogenic fungi. Although T. harzianum is asexual (or anamorphic), its sexual stage (or teleomorph) has been described as Hypocrea lixii. Since recombination would be an important issue for the efficacy of an agent of biological control in the field, we investigated the phylogenetic structure of the species. Results Using DNA sequence data from three unlinked loci for each of 93 strains collected worldwide, we detected a complex speciation process revealing overlapping reproductively isolated biological species, recent agamospecies and numerous relict lineages with unresolved phylogenetic positions. Genealogical concordance and recombination analyses confirm the existence of two genetically isolated agamospecies including T. harzianum sensu stricto and two hypothetical holomorphic species related to but different from H. lixii. The exact phylogenetic position of the majority of strains was not resolved and therefore attributed to a diverse network of recombining strains conventionally called 'pseudoharzianum matrix'. Since H. lixii and T. harzianum are evidently genetically isolated, the anamorph - teleomorph combination comprising H. lixii/T. harzianum in one holomorph must be rejected in favor of two separate species. Conclusions Our data illustrate a complex speciation within the H. lixii - T. harzianum species group, which is based on coexistence and interaction of organisms with different evolutionary histories and on the absence of strict genetic borders between them. PMID:20359347

  3. Time-dependent thermocapillary convection in a Cartesian cavity - Numerical results for a moderate Prandtl number fluid

    NASA Technical Reports Server (NTRS)

    Peltier, L. J.; Biringen, S.

    1993-01-01

    The present numerical simulation explores a thermal-convective mechanism for oscillatory thermocapillary convection in a shallow Cartesian cavity for a Prandtl number 6.78 fluid. The computer program developed for this simulation integrates the two-dimensional, time-dependent Navier-Stokes equations and the energy equation by a time-accurate method on a stretched, staggered mesh. Flat free surfaces are assumed. The instability is shown to depend upon temporal coupling between large scale thermal structures within the flow field and the temperature sensitive free surface. A primary result of this study is the development of a stability diagram presenting the critical Marangoni number separating steady from the time-dependent flow states as a function of aspect ratio for the range of values between 2.3 and 3.8. Within this range, a minimum critical aspect ratio near 2.3 and a minimum critical Marangoni number near 20,000 are predicted below which steady convection is found.

  4. THEMATIC ACCURACY OF THE 1992 NATIONAL LAND-COVER DATA (NLCD) FOR THE EASTERN UNITED STATES: STATISTICAL METHODOLOGY AND REGIONAL RESULTS

    EPA Science Inventory

    The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...

  5. Numerical analysis of intensity signals resulting from genotyping pooled DNA samples in beef cattle and broiler chicken.

    PubMed

    Reverter, A; Henshall, J M; McCulloch, R; Sasazaki, S; Hawken, R; Lehnert, S A

    2014-05-01

    Pooled genomic DNA has been proposed as a cost-effective approach in genomewide association studies (GWAS). However, algorithms for genotype calling of biallelic SNP are not adequate with pooled DNA samples because they assume the presence of 2 fluorescent signals, 1 for each allele, and operate under the expectation that at most 2 copies of the variant allele can be found for any given SNP and DNA sample. We adapt analytical methodology from 2-channel gene expression microarray technology to SNP genotyping of pooled DNA samples. Using 5 datasets from beef cattle and broiler chicken of varying degrees of complexity in terms of design and phenotype, continuous and dichotomous, we show that both differential hybridization (M = green minus red intensity signal) and abundance (A = average of red and green intensities) provide useful information in the prediction of SNP allele frequencies. This is predominantly true when making inference about extreme SNP that are either nearly fixed or highly polymorphic. We propose the use of model-based clustering via mixtures of bivariate normal distributions as an optimal framework to capture the relationship between hybridization intensity and allele frequency from pooled DNA samples. The range of M and A values observed here are in agreement with those reported within the context of gene expression microarray and also with those from SNP array data within the context of analytical methodology for the identification of copy number variants. In particular, we confirm that highly polymorphic SNP yield a strong signal from both channels (red and green) while lowly or nonpolymorphic SNP yield a strong signal from 1 channel only. We further confirm that when the SNP allele frequencies are known, either because the individuals in the pools or from a closely related population are themselves genotyped, a multiple regression model with linear and quadratic components can be developed with high prediction accuracy. We conclude that when
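The regression model described at the end of the abstract above (a multiple regression with linear and quadratic components, fitted when allele frequencies are known for a training set) can be sketched as follows. Only the M component is used here for brevity; the study also uses the A component. All intensity and frequency values are synthetic.

```python
# Fit frequency ~ b0 + b1*M + b2*M^2 by ordinary least squares via the
# normal equations, then predict the allele frequency of a new pool.

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [a - f * c for a, c in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_quadratic(M, freq):
    X = [[1.0, x, x * x] for x in M]                       # design matrix
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(X, freq)) for i in range(3)]
    return solve3(XtX, Xty)

M_train = [-2.0, -1.0, 0.0, 1.0, 2.0]            # hybridization signal M, invented
f_train = [0.05, 0.25, 0.50, 0.75, 0.95]         # known allele frequencies, invented
b0, b1, b2 = fit_quadratic(M_train, f_train)
pred = b0 + b1 * 1.0 + b2 * 1.0                  # predicted frequency at M = 1.0
```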

  6. Numerical Methods For Chemically Reacting Flows

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1990-01-01

    Issues related to numerical stability, accuracy, and resolution discussed. Technical memorandum presents issues in numerical solution of hyperbolic conservation laws containing "stiff" (relatively large and rapidly changing) source terms. Such equations often used to represent chemically reacting flows. Usually solved by finite-difference numerical methods. Source terms generally necessitate use of small time and/or space steps to obtain sufficient resolution, especially at discontinuities, where incorrect mathematical modeling results in unphysical solutions.
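The stiffness problem named above (source terms forcing impractically small steps for a plain explicit scheme) is commonly handled by operator splitting: advance the advection term explicitly under its own CFL limit, and integrate the stiff reaction term exactly or implicitly within each step. A minimal sketch, not the memorandum's own scheme, for the model equation u_t + a·u_x = -k·u with large k:

```python
# First-order splitting: explicit upwind advection (periodic boundary via
# Python's negative indexing), then exact decay for the stiff source.
import math

def step_split(u, a, k, dx, dt):
    n = len(u)
    # explicit first-order upwind for advection (assumes a > 0)
    adv = [u[i] - a * dt / dx * (u[i] - u[i - 1]) for i in range(n)]
    # exact integration of the stiff source du/dt = -k*u over dt
    decay = math.exp(-k * dt)
    return [v * decay for v in adv]

nx = 50
dx = 1.0 / nx
u = [1.0 if 0.2 <= i * dx < 0.4 else 0.0 for i in range(nx)]  # square pulse
a, k = 1.0, 1.0e4          # k makes the source term stiff
dt = 0.5 * dx / a          # time step set by the advection CFL limit, not by k
for _ in range(10):
    u = step_split(u, a, k, dx, dt)
```

The step size is governed by the advection CFL condition alone; the stiff decay is handled exactly, so the solution stays bounded and nonnegative.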

  7. Source contributions to PM2.5 in Guangdong province, China by numerical modeling: Results and implications

    NASA Astrophysics Data System (ADS)

    Yin, Xiaohong; Huang, Zhijiong; Zheng, Junyu; Yuan, Zibing; Zhu, Wenbo; Huang, Xiaobo; Chen, Duohong

    2017-04-01

    As one of the most populous and developed provinces in China, Guangdong province (GD) has been experiencing regional haze problems. Identification of source contributions to the ambient PM2.5 level is essential for developing effective control strategies. In this study, using the most up-to-date emission inventory and a validated numerical model, source contributions to ambient PM2.5 from eight emission source sectors (agriculture, biogenic, dust, industry, power plant, residential, mobile and others) in GD in 2012 were quantified. Results showed that mobile sources are the dominant contributors to the ambient PM2.5 (24.0%) in the Pearl River Delta (PRD) region, the central and most developed area of GD, while industry sources are the major contributors (21.5%-23.6%) in the Northeastern GD (NE-GD) region and the Southwestern GD (SW-GD) region. Although many industries have been encouraged to move from the central GD to peripheral areas such as NE-GD and SW-GD, their emissions still have an important impact on the PM2.5 level in the PRD. In addition, agriculture sources are responsible for 17.5% of the ambient PM2.5 in GD, indicating the importance of regulations on agricultural activities, which has been largely ignored in the current air quality management. Super-regional contributions were also quantified and their contributions to the ambient PM2.5 in GD are significant with notable seasonal differences. But they might be overestimated and further studies are needed to better quantify the transport impacts.

  8. A Study of The Eastern Mediterranean Hydrology and Circulation By Comparing Observation and High Resolution Numerical Model Results.

    NASA Astrophysics Data System (ADS)

    Alhammoud, B.; Béranger, K.; Mortier, L.; Crépon, M.

    The Eastern Mediterranean hydrology and circulation are studied by comparing the results of a high resolution primitive equation model (described in a dedicated session: Béranger et al.) with observations. The model has a horizontal grid mesh of 1/16° and 43 z-levels in the vertical. The model was initialized with the MODB5 climatology and has been forced during 11 years by the daily sea surface fluxes provided by the European Centre for Medium-range Weather Forecasts analysis in a perpetual year mode corresponding to the year March 1998-February 1999. At the end of the run, the numerical model is able to accurately reproduce the major water masses of the Eastern Mediterranean Basin (Levantine Surface Water, modified Atlantic Water, Levantine Intermediate Water, and Eastern Mediterranean Deep Water). Comparisons with the POEM observations reveal good agreement. While the initial conditions of the model are somewhat different from the POEM observations, during the last year of the simulation we found that the water mass stratification matches that of the observations quite well in the seasonal mean. During the 11 years of simulation, the model drifts slightly in the layers below the thermocline. Nevertheless, many important physical processes were reproduced. One example is that the dispersal of Adriatic Deep Water into the Levantine Basin is represented. In addition, convective activity located in the northern part of the Levantine Basin occurs in spring as expected. The surface circulation is in agreement with in-situ and satellite observations. Some well known mesoscale features of the upper thermocline circulation are shown. Seasonal variability of transports through the Sicily, Otranto and Cretan straits is investigated as well. This work was supported by the French MERCATOR project and SHOM.

  9. Assessment of the improvements in accuracy of aerosol characterization resulted from additions of polarimetric measurements to intensity-only observations using GRASP algorithm (Invited)

    NASA Astrophysics Data System (ADS)

    Dubovik, O.; Litvinov, P.; Lapyonok, T.; Herman, M.; Fedorenko, A.; Lopatin, A.; Goloub, P.; Ducos, F.; Aspetsberger, M.; Planer, W.; Federspiel, C.

    2013-12-01

    During the last few years we have been developing the GRASP (Generalized Retrieval of Aerosol and Surface Properties) algorithm, designed for the enhanced characterization of aerosol properties from spectral, multi-angular polarimetric remote sensing observations. The concept of GRASP essentially relies on the accumulated positive research heritage from previous remote sensing aerosol retrieval developments, in particular those from the AERONET and POLDER retrieval activities. The details of the algorithm are described by Dubovik et al. (Atmos. Meas. Tech., 4, 975-1018, 2011). GRASP retrieves properties of both aerosol and land surface reflectance in cloud-free environments. It is based on highly advanced statistically optimized fitting and deduces nearly 50 unknowns for each observed site. The algorithm derives a similar set of aerosol parameters as AERONET, including the detailed particle size distribution, the spectrally dependent complex index of refraction and the fraction of non-spherical particles. The algorithm uses detailed aerosol and surface models and fully accounts for all multiple interactions of scattered solar light with aerosol, gases and the underlying surface. All calculations are done on-line without using traditional look-up tables. In addition, the algorithm uses the new multi-pixel retrieval concept - a simultaneous fitting of a large group of pixels with additional constraints limiting the time variability of surface properties and the spatial variability of aerosol properties. This principle is expected to result in higher consistency and accuracy of aerosol products compared to conventional approaches, especially over bright surfaces where the information content of satellite observations with respect to aerosol properties is limited. GRASP is a highly versatile algorithm that allows input from both satellite and ground-based measurements. It also has essential flexibility in measurement processing. For example, if the observation data set includes spectral

  10. Assessing effects of the e-Chasqui laboratory information system on accuracy and timeliness of bacteriology results in the Peruvian tuberculosis program.

    PubMed

    Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish

    2007-10-11

    We created a web-based laboratory information system, e-Chasqui, to connect public laboratories to health centers in order to improve communication and analysis. After one year, we performed pre- and post-implementation assessments of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui.

  11. Numerical analysis of wellbore integrity: results from a field study of a natural CO2 reservoir production well

    NASA Astrophysics Data System (ADS)

    Crow, W.; Gasda, S. E.; Williams, D. B.; Celia, M. A.; Carey, J. W.

    2008-12-01

    An important aspect of the risk associated with geological CO2 sequestration is the integrity of existing wellbores that penetrate geological layers targeted for CO2 injection. CO2 leakage may occur through multiple pathways along a wellbore, including through micro-fractures and micro-annuli within the "disturbed zone" surrounding the well casing. The effective permeability of this zone is a key parameter of wellbore integrity required for validation of numerical models. This parameter depends on a number of complex factors, including long-term attack by aggressive fluids, poor well completion and actions related to production of fluids through the wellbore. Recent studies have sought to replicate downhole conditions in the laboratory to identify the mechanisms and rates at which cement deterioration occurs. However, field tests are essential to understanding the in situ leakage properties of the millions of wells that exist in the mature sedimentary basins of North America. In this study, we present results from a field study of a 30-year-old production well from a natural CO2 reservoir. The wellbore was potentially exposed to a 96% CO2 fluid from the time of cement placement, and therefore cement degradation may be a significant factor leading to leakage pathways along this wellbore. A series of downhole tests was performed, including bond logs and extraction of sidewall cores. The cores were analyzed in the laboratory for mineralogical and hydrologic properties. A pressure test was conducted over an 11-ft section of well to determine the extent of hydraulic communication along the exterior of the well casing. Through analysis of this pressure test data, we are able to estimate the effective permeability of the disturbed zone along the exterior of the wellbore over this 11-ft section. We find the estimated range of effective permeability from the field test is consistent with laboratory analysis and bond log data.
The cement interfaces with casing and/or formation are

  12. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  13. An open-loop ground-water heat pump system: transient numerical modeling and site experimental results

    NASA Astrophysics Data System (ADS)

    Lo Russo, S.; Taddia, G.; Gnavi, L.

    2012-04-01

    KEY WORDS: Open-loop groundwater heat pump; FEFLOW; Low-enthalpy; Thermal Affected Zone; Turin; Italy. The increasing diffusion of low-enthalpy geothermal open-loop groundwater heat pumps (GWHP) providing building air conditioning requires a careful assessment of the overall effects on the groundwater system, especially in urban areas where several plants can be close together and interfere. One of the fundamental aspects in the realization of an open-loop low-enthalpy geothermal system is therefore the capacity to forecast the thermal alteration produced in the ground by the geothermal system itself. The impact on the groundwater temperature in the area surrounding the re-injection well (Thermal Affected Zone - TAZ) is directly linked to the aquifer properties. The transient dynamics of groundwater discharge and temperature variations should also be considered to assess the subsurface environmental effects of the plant. The experimental groundwater heat pump system used in this study is installed at the "Politecnico di Torino" (NW Italy, Piedmont Region). This plant provides summer cooling for the university buildings. The system is composed of a pumping well, a downgradient injection well and a control piezometer, and is constantly monitored by multiparameter probes measuring the dynamics of groundwater temperature. A finite element subsurface flow and transport simulator (FEFLOW) was used to investigate the thermal alteration of the aquifer. Simulations were performed continuously for May-October 2010 (the cooling period). The numerical simulation of heat transport in the aquifer was solved under transient conditions. The simulation considered only heat transfer within the saturated aquifer, without any heat dispersion above or below the saturated zone, due to the lack of detailed information regarding the unsaturated zone.
Model results were compared with experimental temperature data derived from groundwater

  14. On the Improvement of Numerical Weather Prediction by Assimilation of Hub Height Wind Information in Convection-Resolving Models

    NASA Astrophysics Data System (ADS)

    Declair, Stefan; Stephan, Klaus; Potthast, Roland

    2015-04-01

    Determining the amount of weather dependent renewable energy is a demanding task for transmission system operators (TSOs). In the project EWeLiNE, funded by the German government, the German Weather Service and the Fraunhofer Institute for Wind Energy and Energy System Technology strongly support the TSOs by developing innovative weather and power forecasting models and tools for grid integration of weather dependent renewable energy. The key element in the energy prediction process chain is the numerical weather prediction (NWP) system. With focus on wind energy, we address the model errors in the planetary boundary layer, which is characterized by strong spatial and temporal fluctuations in wind speed, to improve the basis of the weather dependent renewable energy prediction. Model data can be corrected by postprocessing techniques such as model output statistics and calibration using historical observational data. On the other hand, the latest observations can be used in a preprocessing technique called data assimilation (DA). In DA, the model output from a previous time step is combined with observational data such that the new model state used to initialize the model integration (the analysis) best fits both the latest model data and the observations. Model errors can therefore already be reduced before the model integration. In this contribution, the results of an impact study are presented. A so-called OSSE (Observing System Simulation Experiment) is performed using the convection-resolving COSMO-DE model of the German Weather Service and a 4D-DA technique, a Newtonian relaxation method also called nudging. Starting from a nature run (treated as the truth), conventional observations and artificial wind observations at hub height are generated. In a control run, the basic model setup of the nature run is slightly perturbed to drag the model away from the previously generated truth, and a free forecast is computed based on the analysis using only conventional
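    The Newtonian relaxation (nudging) idea described above can be sketched in a few lines: the model state is pulled toward observed values with a chosen relaxation time scale before the next forecast step. This is a minimal illustration, not the COSMO-DE implementation; the toy wind field, time step, and relaxation constant are invented for the example.

```python
import numpy as np

def nudge_step(x, x_obs, obs_mask, dt, tau=3600.0):
    """One forward-Euler step of Newtonian relaxation (nudging).

    The model state x is relaxed toward observed values x_obs wherever
    obs_mask is True, with relaxation time scale tau (seconds).
    Operational nudging adds spatial and temporal weighting functions.
    """
    increment = np.where(obs_mask, (x_obs - x) / tau, 0.0)
    return x + dt * increment

# Toy example: a 1-D wind field pulled toward a single hub-height observation.
x = np.full(5, 8.0)                      # model wind speed (m/s)
x_obs = np.array([0.0, 0.0, 10.0, 0.0, 0.0])
mask = np.array([False, False, True, False, False])

for _ in range(100):
    x = nudge_step(x, x_obs, mask, dt=60.0)

print(x[2])  # the observed grid point drifts from 8.0 toward 10.0
```

Unobserved grid points are left untouched here; in a real NWP system the analysis increment is spread to neighbouring points by the weighting functions.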

  15. Summary Results of the Neptun Boil-Off Experiments to Investigate the Accuracy and Cooling Influence of LOFT Cladding-Surface Thermocouples (System 00)

    SciTech Connect

    E. L. Tolman; S. N. Aksan

    1981-10-01

    Nine boil-off experiments were conducted in the Swiss NEPTUN Facility primarily to obtain experimental data for assessing the perturbation effects of LOFT thermocouples during simulated small-break core uncovery conditions. The data will also be useful in assessing computer model capability to predict thermal hydraulic response data for this type of experiment. System parameters that were varied for these experiments included heater rod power, system pressure, and initial coolant subcooling. The experiments showed that the LOFT thermocouples do not cause a significant cooling influence in the rods to which they are attached. Furthermore, the accuracy of the LOFT thermocouples is within 20 K at the peak cladding temperature zone.

  16. Methods for improving accuracy and extending results beyond periods covered by traditional ground-truth in remote sensing classification of a complex landscape

    NASA Astrophysics Data System (ADS)

    Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.

    2015-06-01

    Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year, because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using normal same-year ground-truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat 7 scan line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent-year ground-truth data to classify prior-year landuse, an
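    The majority-rule reclassification of pixels within polygons mentioned above can be sketched as follows. The class codes and polygon IDs are invented for illustration; the real workflow operates on 30 m rasters and the CLU polygon layer.

```python
import numpy as np
from collections import Counter

def majority_reclassify(pixel_classes, polygon_ids):
    """Replace each pixel's class with the majority class of its polygon.

    pixel_classes, polygon_ids: 1-D integer arrays of equal length.
    A simplified stand-in for the CLU-polygon majority rule in the text.
    """
    out = pixel_classes.copy()
    for pid in np.unique(polygon_ids):
        member = polygon_ids == pid
        majority = Counter(pixel_classes[member]).most_common(1)[0][0]
        out[member] = majority
    return out

# Two toy polygons: the stray classes inside each are voted away.
classes = np.array([1, 1, 2, 1, 3, 3, 3, 2])
polys   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(majority_reclassify(classes, polys))  # → [1 1 1 1 3 3 3 3]
```

This cleans up isolated misclassified pixels at the cost of suppressing genuine sub-polygon variation, which is why the study applies it only within field-boundary polygons.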

  17. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  18. Thermodiffusion in concentrated ferrofluids: A review and current experimental and numerical results on non-magnetic thermodiffusion

    SciTech Connect

    Sprenger, Lisa; Lange, Adrian; Odenbach, Stefan

    2013-12-15

    Ferrofluids are colloidal suspensions consisting of magnetic nanoparticles dispersed in a carrier liquid. Their thermodiffusive behaviour is rather strong compared to molecular binary mixtures, leading to a Soret coefficient (S_T) of 0.16 K⁻¹. Former experiments with dilute magnetic fluids were performed with thermogravitational columns or horizontal thermodiffusion cells by different research groups. Considering the horizontal thermodiffusion cell, a former analytical approach has been used to solve the phenomenological diffusion equation in one dimension, assuming a constant concentration gradient over the cell's height. The current experimental work is based on the horizontal separation cell and emphasises the comparison of the concentration development in magnetic fluids of different concentrations and at different temperature gradients. The ferrofluid investigated is the kerosene-based EMG905 (Ferrotec), to be compared with APG513A (Ferrotec), both containing magnetite nanoparticles. The experiments prove that the separation process depends linearly on the temperature gradient and that a constant concentration gradient develops in the setup due to the separation. Analytical one-dimensional and numerical three-dimensional approaches to solving the diffusion equation are derived and compared with the solution used so far for dilute fluids, to see if the assumptions made previously also hold for more highly concentrated fluids. Both the analytical and the numerical solutions, in either a phenomenological or a thermodynamic description, are able to reproduce the separation signal obtained from the experiments. The Soret coefficient can then be determined to be 0.184 K⁻¹ in the analytical case and 0.29 K⁻¹ in the numerical case. Former theoretical approaches for dilute magnetic fluids underestimate the strength of the separation in the case of a concentrated ferrofluid.
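    In steady state the one-dimensional phenomenological description referred to above reduces to dc/dz = -S_T c(1-c) dT/dz. A minimal numerical integration of this equation (forward Euler, linear temperature profile over a cell of unit height) looks like the sketch below; the volume fraction and temperature difference are loosely inspired by the abstract, not taken from the paper's data.

```python
import numpy as np

def steady_concentration(c0, S_T, dT, n=1000):
    """Steady-state concentration profile across a thermodiffusion cell.

    Integrates dc/dz = -S_T * c * (1 - c) * dT/dz by forward Euler for a
    linear temperature profile with total difference dT (K) over unit height,
    starting from concentration c0 at the cold wall (z = 0).
    """
    dz = 1.0 / n
    c = np.empty(n + 1)
    c[0] = c0
    for i in range(n):
        c[i + 1] = c[i] - S_T * c[i] * (1.0 - c[i]) * dT * dz
    return c

# Illustrative values: particle fraction ~0.07, S_T = 0.184 1/K, 10 K across
# the cell. A positive Soret coefficient drives particles to the cold side.
profile = steady_concentration(c0=0.07, S_T=0.184, dT=10.0)
print(profile[-1] - profile[0])  # negative: depletion at the warm end
```

For small separations the profile is nearly linear, recovering the constant-gradient assumption of the earlier analytical approach; for the stronger separations discussed in the abstract the nonlinear c(1-c) factor matters.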

  19. Urinary Biomarker Panel to Improve Accuracy in Predicting Prostate Biopsy Result in Chinese Men with PSA 4–10 ng/mL

    PubMed Central

    Zhou, Yongqiang; Li, Yun; Li, Xiangnan

    2017-01-01

    This study aims to evaluate the effectiveness and clinical performance of a panel of urinary biomarkers to diagnose prostate cancer (PCa) in Chinese men with PSA levels between 4 and 10 ng/mL. A total of 122 patients with PSA levels between 4 and 10 ng/mL who underwent consecutive prostate biopsy at three hospitals in China were recruited. First-catch urine samples were collected after an attentive prostate massage. Urinary mRNA levels were measured by quantitative real-time polymerase chain reaction (qRT-PCR). The predictive accuracy of these biomarkers and prediction models was assessed by the area under the curve (AUC) of the receiver-operating characteristic (ROC) curve. The diagnostic accuracy of PCA3, PSGR, and MALAT-1 was superior to that of PSA. PCA3 performed best, with an AUC of 0.734 (95% CI: 0.641, 0.828), followed by MALAT-1 with an AUC of 0.727 (95% CI: 0.625, 0.829) and PSGR with an AUC of 0.666 (95% CI: 0.575, 0.749). The diagnostic panel combining age, prostate volume, %fPSA, PCA3 score, PSGR score, and MALAT-1 score yielded an AUC of 0.857 (95% CI: 0.780, 0.933). At a threshold probability of 20%, 47.2% of unnecessary biopsies could be avoided while only 6.2% of PCa cases would be missed. This urinary panel may improve the current diagnostic modality in Chinese men with PSA levels between 4 and 10 ng/mL. PMID:28293631
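    The AUC values quoted above are areas under ROC curves, which can be computed directly from scores and labels via the Mann-Whitney statistic: the probability that a randomly chosen positive case outranks a randomly chosen negative one. A self-contained sketch with invented toy data (not the study's panel scores):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: predicted risk (e.g. a combined biomarker panel score);
    labels: 1 for biopsy-confirmed disease, 0 otherwise.
    Ties between a positive and a negative score count as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy panel scores: higher score should indicate cancer.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # → 0.8125
```

An AUC of 0.5 corresponds to a useless marker and 1.0 to perfect ranking, which is why the panel's 0.857 represents a meaningful gain over PCA3's 0.734.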

  20. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended missions are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit, where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.

  1. Investigating role of ice-ocean interaction on glacier dynamic: Results from numerical modeling applied to Petermann Glacier

    NASA Astrophysics Data System (ADS)

    Nick, F. M.; van der Veen, C. J.; Vieli, A.; Pattyn, F.; Hubbard, A.; Box, J. E.

    2010-12-01

    Calving of icebergs and bottom melting from ice shelves account for roughly half the ice transferred from the Greenland Ice Sheet into the surrounding ocean, and virtually all of the ice loss from the Antarctic Ice Sheet. Petermann Glacier (north Greenland), with its ~17 km wide and ~60 km long floating ice shelf, is experiencing high rates of bottom melting. The recent partial disintegration of its shelf (in August 2010) presents a natural experiment to investigate the dynamic response of the ice sheet to shelf retreat. We apply a numerical ice flow model with a physically-based calving criterion based on crevasse depth to investigate the contribution of processes such as shelf disintegration, bottom melting, sea ice or sikkusak disintegration and surface runoff to the mass balance of Petermann Glacier, and assess its stability. Our modeling study provides insights into the role of ice-ocean interaction and into the response of Petermann Glacier to its recent massive ice loss.

  2. Role of ice-ocean interaction on glacier instability: Results from numerical modelling applied to Petermann Glacier

    NASA Astrophysics Data System (ADS)

    Nick, Faezeh M.; Hubbard, Alun; van der Veen, Kees; Vieli, Andreas

    2010-05-01

    Calving of icebergs and bottom melting from ice shelves account for roughly half the ice transferred from the Greenland Ice Sheet into the surrounding ocean, and virtually all of the ice loss from the Antarctic Ice Sheet. Petermann Glacier (north Greenland), with its 16 km wide and 80 km long floating tongue, experiences massive bottom melting. We apply a numerical ice flow model with a physically-based calving criterion based on crevasse depth to investigate the contribution of processes such as bottom melting, sea ice or sikkusak disintegration, surface runoff and iceberg calving to the mass balance and instability of Petermann Glacier and its ice shelf. Our modelling study provides insights into the role of ice-ocean interaction and into how to incorporate calving in ice sheet models, improving our ability to predict future ice sheet change.

  3. Role of ice-ocean interaction on glacier instability: Results from numerical modeling applied to Petermann Glacier (Invited)

    NASA Astrophysics Data System (ADS)

    Nick, F.; Hubbard, A.; Vieli, A.; van der Veen, C. J.; Box, J. E.; Bates, R.; Luckman, A. J.

    2009-12-01

    Calving of icebergs and bottom melting from ice shelves account for roughly half the ice transferred from the Greenland Ice Sheet into the surrounding ocean, and virtually all of the ice loss from the Antarctic Ice Sheet. Petermann Glacier (north Greenland), with its 16 km wide and 80 km long floating tongue, experiences massive bottom melting. We apply a numerical ice flow model with a physically-based calving criterion based on crevasse depth to investigate the contribution of processes such as bottom melting, sea ice or sikkusak disintegration, surface runoff and iceberg calving to the mass balance and instability of Petermann Glacier and its ice shelf. Our modelling study provides insights into the role of ice-ocean interaction and into how to incorporate calving in ice sheet models, improving our ability to predict future ice sheet change.

  4. Numerical simulations - Some results for the 2- and 3-D Hubbard models and a 2-D electron phonon model

    NASA Technical Reports Server (NTRS)

    Scalapino, D. J.; Sugar, R. L.; White, S. R.; Bickers, N. E.; Scalettar, R. T.

    1989-01-01

    Numerical simulations on the half-filled three-dimensional Hubbard model clearly show the onset of Neel order. Simulations of the two-dimensional electron-phonon Holstein model show the competition between the formation of a Peierls-CDW state and a superconducting state. However, the behavior of the partly filled two-dimensional Hubbard model is more difficult to determine. At half-filling, the antiferromagnetic correlations grow as T is reduced. Doping away from half-filling suppresses these correlations, and it is found that there is a weak attractive pairing interaction in the d-wave channel. However, the strength of the pair field susceptibility is weak at the temperatures and lattice sizes that have been simulated, and the nature of the low-temperature state of the nearly half-filled Hubbard model remains open.

  5. Preliminary numerical simulations of the 27 February 2010 Chile tsunami: first results and hints in a tsunami early warning perspective

    NASA Astrophysics Data System (ADS)

    Tinti, S.; Tonini, R.; Armigliato, A.; Zaniboni, F.; Pagnoni, G.; Gallazzi, Sara; Bressan, Lidia

    2010-05-01

    The tsunamigenic earthquake (M 8.8) that occurred offshore central Chile on 27 February 2010 can be classified as a typical subduction-zone earthquake. The effects of the ensuing tsunami were devastating along the Chilean coast, especially between the cities of Valparaiso and Talcahuano and in the Juan Fernandez islands. The tsunami propagated across the entire Pacific Ocean, hitting with variable intensity almost all the coasts facing the basin. While the far-field propagation was quite well tracked almost in real time by the warning centres and reasonably well reproduced by the forecast models, the toll of lives and the severity of the damage caused by the tsunami in the near field occurred with no local alert or warning, which sadly confirms that the protection of communities close to tsunami sources is still an unresolved problem in the tsunami early warning field. The purpose of this study is twofold. On the one hand, we perform numerical simulations of the tsunami starting from different earthquake models, which we built on the basis of the preliminary seismic parameters (location, magnitude and focal mechanism) made available by the seismological agencies immediately after the event, or retrieved from more detailed and refined studies published online in the following days and weeks. The comparison with the available records of both offshore DART buoys and coastal tide-gauges is used to put some preliminary constraints on the best-fitting fault model. The numerical simulations are performed by means of the finite-difference code UBO-TSUFD, developed and maintained by the Tsunami Research Team of the University of Bologna, Italy, which can solve both the linear and non-linear versions of the shallow-water equations on nested grids. The second purpose of this study is to use the conclusions drawn in the previous part in a tsunami early warning perspective.
In the framework of the EU-funded project DEWS (Distant Early Warning System), we will
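    The shallow-water equations solved by codes such as UBO-TSUFD can be illustrated with a toy one-dimensional linear solver on a staggered finite-difference grid. All parameters below are illustrative; real tsunami codes add variable bathymetry, nonlinearity, bottom friction, and nested grids.

```python
import numpy as np

# Minimal 1-D linear shallow-water solver (staggered grid, explicit stepping).
g, h = 9.81, 4000.0            # gravity (m/s2), uniform ocean depth (m)
nx, dx = 200, 5000.0           # grid cells and spacing (m)
c = np.sqrt(g * h)             # long-wave speed, ~198 m/s
dt = 0.5 * dx / c              # CFL-stable time step

eta = np.exp(-((np.arange(nx) - nx / 2) ** 2) / 50.0)  # initial sea-surface hump (m)
u = np.zeros(nx + 1)           # velocities at cell faces; closed boundaries

for _ in range(100):
    # momentum: du/dt = -g * d(eta)/dx  (interior faces only)
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # continuity: d(eta)/dt = -h * du/dx
    eta -= dt * h * (u[1:] - u[:-1]) / dx

print(eta.max())  # the hump splits into two outgoing waves
```

Because the continuity update telescopes over the closed boundaries, the scheme conserves total surface displacement exactly, a property worth checking in any such solver.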

  6. Quasi-Periodic Oscillations and Frequencies in an Accretion Disk and Comparison with the Numerical Results from a Non-Rotating Black Hole Computed by the GRH Code

    NASA Astrophysics Data System (ADS)

    Donmez, Orhan

    The shock wave created on the accretion disk by different physical phenomena (accretion flows with pressure gradients, star-disk interaction, etc.) may be responsible for the observed Quasi-Periodic Oscillations (QPOs) in X-ray binaries. We present the set of characteristic frequencies associated with an accretion disk around rotating and non-rotating black holes for the one-particle case. These persistent frequencies are the result of the rotating pattern in an accretion disk. We compare the frequencies from two different numerical results for fluid flow around a non-rotating black hole with the one-particle case. The numerical results are taken from Refs. 1 and 2, obtained using a fully general relativistic hydrodynamical code with a non-self-gravitating disk. While the first numerical result has a relativistic torus around the black hole, the second one includes a one-armed spiral shock wave produced by star-disk interaction. Some physical modes present in the QPOs can be excited in numerical simulations of relativistic tori and spiral waves on the accretion disk. The effects of these different dynamical structures on the accretion disk, responsible for QPOs, are discussed in detail.

  7. Static correlations in macro-ionic suspensions: Analytic and numerical results in a hypernetted-chain-mean-spherical approximation

    NASA Astrophysics Data System (ADS)

    Khan, Sheema; Morton, Thomas L.; Ronis, David

    1987-05-01

    The static correlations in highly charged colloidal and micellar suspensions, with and without added electrolyte, are examined using the hypernetted-chain approximation (HNC) for the macro-ion-macro-ion correlations and the mean-spherical approximation for the other correlations. By taking the point-ion limit for the counter-ions, an analytic solution for the counter-ion part of the problem can be obtained; this maps the macro-ion part of the problem onto a one-component problem where the macro-ions interact via a screened Coulomb potential with the Gouy-Chapman form for the screening length and an effective charge that depends on the macro-ion-macro-ion pair correlations. Numerical solutions of the effective one-component equation in the HNC approximation are presented, and in particular, the effects of macro-ion charge, nonadditive core diameters, and added electrolyte are examined. As we show, there can be a strong renormalization of the effective macro-ion charge and reentrant melting in colloidal crystals.
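    The screened Coulomb interaction mentioned above has the generic Yukawa form u(r) ∝ Z_eff² exp(-κr)/r. The sketch below evaluates a DLVO-like variant of it; the prefactor convention, effective charge, and screening parameters are assumptions for illustration, not the paper's HNC-derived values.

```python
import numpy as np

def screened_coulomb(r, Z_eff, kappa, sigma, lB=0.72e-9):
    """Screened Coulomb (Yukawa) pair potential between macro-ions, in kT units.

    u(r) = lB * Z_eff^2 * [exp(kappa*sigma/2) / (1 + kappa*sigma/2)]^2
                        * exp(-kappa*r) / r
    lB is the Bjerrum length of water at room temperature (m), kappa the
    inverse screening length (1/m), sigma the macro-ion diameter (m).
    """
    prefac = (np.exp(kappa * sigma / 2) / (1 + kappa * sigma / 2)) ** 2
    return lB * Z_eff**2 * prefac * np.exp(-kappa * r) / r

r = np.linspace(1e-7, 5e-7, 5)          # centre-to-centre distances (m)
u = screened_coulomb(r, Z_eff=200, kappa=1e7, sigma=1e-7)
print(u)  # repulsion decays roughly exponentially with distance
```

The charge renormalization discussed in the abstract amounts to replacing the bare Z_eff in this expression with a smaller value that depends on the macro-ion pair correlations.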

  8. Higher-order numerical solutions using cubic splines

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1976-01-01

    A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower-derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
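    The second-derivative values of a cubic spline at the mesh points, which are central to collocation procedures of this kind, can be sketched for a natural cubic spline via the standard tridiagonal (Thomas) solve. This is a generic textbook construction on a possibly nonuniform mesh, not the authors' specific reformulation:

    ```python
    import math

    def spline_second_derivs(x, y):
        """Second derivatives M_i of the natural cubic spline through (x_i, y_i),
        from the standard tridiagonal system solved by the Thomas algorithm.
        Natural boundary conditions: M_0 = M_{n-1} = 0."""
        n = len(x)
        h = [x[i + 1] - x[i] for i in range(n - 1)]
        a = [0.0] * n  # sub-diagonal
        b = [1.0] * n  # diagonal
        c = [0.0] * n  # super-diagonal
        d = [0.0] * n  # right-hand side
        for i in range(1, n - 1):
            a[i] = h[i - 1]
            b[i] = 2.0 * (h[i - 1] + h[i])
            c[i] = h[i]
            d[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
        # Thomas algorithm: forward elimination, then back substitution
        for i in range(1, n):
            w = a[i] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        m = [0.0] * n
        m[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):
            m[i] = (d[i] - c[i] * m[i + 1]) / b[i]
        return m
    ```

    For sin(x) on [0, pi] the natural boundary conditions happen to be exact, and the interior values M_i approach f''(x_i) at second order as the mesh is refined.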

  9. Ephemeral liquid water at the surface of the martian North Polar Residual Cap: Results of numerical modelling

    NASA Astrophysics Data System (ADS)

    Losiak, Anna; Czechowski, Leszek; Velbel, Michael A.

    2015-12-01

    Gypsum, a mineral that requires water to form, is common on the surface of Mars. Most of it originated more than 3.5 Gyr ago, when the Red Planet was more humid than it is now. However, occurrences of gypsum dune deposits around the North Polar Residual Cap (NPRC) seem to be surprisingly young: late Amazonian in age. This shows that liquid water was present on Mars even at times when surface conditions were as cold and dry as they are today. A recently proposed mechanism for gypsum formation involves weathering of dust within ice (e.g., Niles, P.B., Michalski, J. [2009]. Nat. Geosci. 2, 215-220.). However, none of the previous studies have determined whether this process is possible under current martian conditions. Here, we use numerical modelling of heat transfer to show that during the warmest days of the summer, solar irradiation may be sufficient to melt pure water ice located below a layer of dark dust particles (albedo ⩽ 0.13) lying on the steepest sections of the equator-facing slopes of the spiral troughs within the martian NPRC. During times of high irradiance at the north pole (every 51 ka; caused by variation of the orbital and rotational parameters of Mars, e.g., Laskar, J. et al. [2002]. Nature 419, 375-377.) this process could have taken place over larger parts of the spiral troughs. The existence of small amounts of liquid water close to the surface, even under current martian conditions, fulfils one of the main requirements necessary to explain the formation of the extensive gypsum deposits around the NPRC. It also changes our understanding of the degree of current geological activity on Mars and has important implications for estimating the astrobiological potential of Mars.

  10. Analysis of the global free infra-gravity wave climate for the SWOT mission, and preliminary results of numerical modelling

    NASA Astrophysics Data System (ADS)

    Rawat, A.; Aucan, J.; Ardhuin, F.

    2012-12-01

    All sea level variations of the order of 1 cm at scales under 30 km are of great interest for the future Surface Water Ocean Topography (SWOT) satellite mission. That satellite should provide high-resolution maps of the sea surface height for analysis of meso- to sub-mesoscale currents, but that will require filtering of all gravity wave motions in the data. Free infragravity waves (FIGWs) are generated and radiate offshore when swells and/or wind seas and their associated bound infragravity waves impact exposed coastlines. Free infragravity waves have dominant periods between 1 and 10 minutes and horizontal wavelengths of up to tens of kilometers. Given these wavelengths and amplitudes, the free infragravity wave field can constitute a significant fraction of the signal measured by the future SWOT mission. In this study, we analyze the data from recovered bottom pressure recorders of the Deep-ocean Assessment and Reporting of Tsunami (DART) program. This analysis includes data spanning several years between 2006 and 2010, from stations at different latitudes in the North and South Pacific, the North Atlantic, the Gulf of Mexico and the Caribbean Sea. We present and discuss the following conclusions: (1) The amplitude of free infragravity waves can reach several centimeters, higher than the precision sought for the SWOT mission. (2) The free infragravity signal is higher in the Eastern North Pacific than in the Western North Pacific, possibly due to smaller incident swell and seas impacting the nearby coastlines. (3) Free infragravity waves are higher in the North Pacific than in the North Atlantic, possibly owing to different average continental shelf configurations in the two basins. (4) There is a clear seasonal cycle at the high-latitude North Atlantic and Pacific stations that is much less pronounced or absent at the tropical stations, consistent with the generation mechanism of free infragravity waves. 
Our numerical model

  11. C13 urea breath test accuracy analysis against former C14 urea breath test technique: is there still a need for an indeterminate result category?

    PubMed

    Charest, Mathieu; Belair, Marc-Andre

    2017-03-09

    Helicobacter pylori (H. pylori) infection is the leading cause of peptic ulcer disease. Purpose: To assess the difference in distribution of negative versus positive breath test results between the former C14 urea breath test (UBT) and the newer C13 UBT. Second, to determine whether the use of an indeterminate category is still meaningful and what type of results should trigger repeat testing. Methods: A retrospective survey was performed of all consecutive patients referred to our service for a UBT. We analysed 562 patients with the C14 UBT and 454 patients with the C13 UBT. Results: C13 negative results are distributed farther away from the cut-off value and grouped more tightly around the mean negative value, as compared to the more widely distributed C14 negative results. Distribution analysis of the negative results of the C13 UBT compared to the negative results of the C14 UBT reveals a statistically significant difference. Within the C13 UBT group, only 1 patient could have been classified as having an indeterminate result using the same indeterminate zone previously used with the C14 UBT. This is significantly less frequent than what was previously found with the C14 UBT. Discussion: Borderline negative results do occur with the C13 UBT, although less frequently than with the C14 UBT, and we will carefully monitor results falling between 3.0 and 3.5 %delta. The C13 UBT is a safe and simple test for the patient, and provides a clearer positive or negative test result for the clinician in the majority of cases.

  12. Evaluation of ground-penetrating radar to detect free-phase hydrocarbons in fractured rocks - Results of numerical modeling and physical experiments

    USGS Publications Warehouse

    Lane, J.W.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.

    2000-01-01

    The suitability of common-offset ground-penetrating radar (GPR) to detect free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and the polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in-phase with the transmitted pulse. Water-filled fractures create reflections with greater amplitude and opposite polarity than those created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate that it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections; nevertheless, subsurface heterogeneity, changes in antenna coupling, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties as demonstrated by the numerical and experimental results suggests the potential of using GPR methods as a monitoring tool. 
GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
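    The amplitude and polarity contrasts described above can be illustrated with the normal-incidence reflection coefficient R = (sqrt(eps1) - sqrt(eps2)) / (sqrt(eps1) + sqrt(eps2)) for low-loss media. The relative permittivities below are assumed round numbers for rock, air, hydrocarbon, and water, not values from the study:

    ```python
    import math

    def reflection_coefficient(eps_host, eps_fill):
        """Normal-incidence amplitude reflection coefficient at the top of a
        fracture, from the permittivity contrast alone (low-loss media)."""
        n1, n2 = math.sqrt(eps_host), math.sqrt(eps_fill)
        return (n1 - n2) / (n1 + n2)

    # Illustrative relative permittivities (assumed round numbers)
    EPS = {"rock": 6.0, "air": 1.0, "hydrocarbon": 2.0, "water": 80.0}

    if __name__ == "__main__":
        for fill in ("air", "hydrocarbon", "water"):
            r = reflection_coefficient(EPS["rock"], EPS[fill])
            print(f"{fill:12s} R = {r:+.3f}")
    ```

    With these assumed values, the water-filled fracture gives the largest |R| and a sign opposite to the air- and hydrocarbon-filled cases, matching the amplitude and polarity behavior reported above.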

  13. Numerical orbit generators of artificial earth satellites

    NASA Astrophysics Data System (ADS)

    Kugar, H. K.; Dasilva, W. C. C.

    1984-04-01

    A numerical orbit integrator is presented that contains updates and improvements relative to previous versions used by the Departamento de Mecanica Espacial e Controle (DMC) of INPE, and that incorporates newer models resulting from the experience acquired over time. Flexibility and modularity were taken into account in order to allow future extensions and modifications. Numerical accuracy, processing speed, memory savings and usability were also considered. A user's handbook, a complete program listing and a qualitative analysis of accuracy, processing time and orbit perturbation effects are included.

  14. Relaxation dynamics of Sierpinski hexagon fractal polymer: Exact analytical results in the Rouse-type approach and numerical results in the Zimm-type approach

    NASA Astrophysics Data System (ADS)

    Jurjiu, Aurel; Galiceanu, Mircea; Farcasanu, Alexandru; Chiriac, Liviu; Turcu, Flaviu

    2016-12-01

    In this paper, we focus on the relaxation dynamics of Sierpinski hexagon fractal polymer. The relaxation dynamics of this fractal polymer is investigated in the framework of the generalized Gaussian structure model using both Rouse and Zimm approaches. In the Rouse-type approach, by performing real-space renormalization transformations, we determine analytically the complete eigenvalue spectrum of the connectivity matrix. Based on the eigenvalues obtained through iterative algebraic relations we calculate the averaged monomer displacement and the mechanical relaxation moduli (storage modulus and loss modulus). The evaluation of the dynamical properties in the Rouse-type approach reveals that they obey scaling in the intermediate time/frequency domain. In the Zimm-type approach, which includes the hydrodynamic interactions, the relaxation quantities do not show scaling. The theoretical findings with respect to scaling in the intermediate domain of the relaxation quantities are well supported by experimental results.
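    In the generalized Gaussian structure framework mentioned above, the storage and loss moduli follow directly from the nonzero eigenvalues of the connectivity matrix. A minimal sketch in reduced units, using a short hypothetical eigenvalue list rather than the Sierpinski hexagon spectrum, is:

    ```python
    def moduli(omega, eigenvalues, sigma=1.0):
        """Reduced storage (G') and loss (G'') moduli of a generalized
        Gaussian structure at frequency omega.

        eigenvalues: nonzero eigenvalues lambda_i of the connectivity matrix
        (the zero mode, uniform translation, is excluded); the relaxation
        times are tau_i = 1 / (sigma * lambda_i).
        """
        n = len(eigenvalues) + 1  # total number of beads, zero mode included
        gp = gpp = 0.0
        for lam in eigenvalues:
            wt = omega / (sigma * lam)  # omega * tau_i
            gp += wt * wt / (1.0 + wt * wt)
            gpp += wt / (1.0 + wt * wt)
        return gp / n, gpp / n
    ```

    At high frequency G' saturates at (N-1)/N in these reduced units, while the intermediate-frequency behavior reflects the eigenvalue spectrum, which is where the scaling discussed above would appear for a fractal spectrum.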

  15. Direct Numerical Simulation of Liquid Nozzle Spray with Comparison to Shadowgraphy and X-Ray Computed Tomography Experimental Results

    NASA Astrophysics Data System (ADS)

    van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis

    2014-11-01

    In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al. (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second-order accurate, un-split, conservative, three-dimensional VOF scheme providing second-order density fluxes and capable of robust and accurate high-density-ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.

  16. On the Spatial and Temporal Accuracy of Overset Grid Methods for Moving Body Problems

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    A study of numerical attributes peculiar to an overset grid approach to unsteady aerodynamics prediction is presented. Attention is focused on the effect of spatial error associated with interpolation of intergrid boundary conditions and temporal error associated with explicit update of intergrid boundary points on overall solution accuracy. A set of numerical experiments is used to verify whether or not the use of simple interpolation for intergrid boundary conditions degrades the formal accuracy of a conventional second-order flow solver, and to quantify the error associated with explicit updating of intergrid boundary points. Test conditions correspond to the transonic regime. The validity of the numerical results presented here is established by comparison with existing numerical results of documented accuracy, and by direct comparison with experimental results.

  17. What Is Numerical Control?

    ERIC Educational Resources Information Center

    Goold, Vernell C.

    1977-01-01

    Numerical control (a technique involving coded, numerical instructions for the automatic control and performance of a machine tool) does not replace fundamental machine tool training. It should be added to the training program to give the student an additional tool to accomplish production rates and accuracy that were not possible before. (HD)

  18. Numerical evaluation of cavitation shedding structure around 3D Hydrofoil: Comparison of PANS, LES and RANS results with experiments

    NASA Astrophysics Data System (ADS)

    Ji, B.; Peng, X. X.; Long, X. P.; Luo, X. W.; Wu, Y. L.

    2015-12-01

    Results of cavitating turbulent flow simulation around a twisted hydrofoil are presented in this paper using the Partially-Averaged Navier-Stokes (PANS) method (Ji et al. 2013a), Large-Eddy Simulation (LES) (Ji et al. 2013b) and Reynolds-Averaged Navier-Stokes (RANS). The results are compared with available experimental data (Foeth 2008). The PANS and LES reasonably reproduce the cavitation shedding patterns around the twisted hydrofoil with primary and secondary shedding, while the RANS model fails to simulate the unsteady cavitation shedding phenomenon and yields an almost steady flow with a constant cavity shape and vapor volume. In addition, it is noted that the shedding vapor cavity predicted by PANS is more turbulent and the shedding vortex is stronger than that by LES, which is more consistent with experimental photos.

  19. Ion velocity distribution functions in argon and helium discharges: detailed comparison of numerical simulation results and experimental data

    NASA Astrophysics Data System (ADS)

    Wang, Huihui; Sukhomlinov, Vladimir S.; Kaganovich, Igor D.; Mustafaev, Alexander S.

    2017-02-01

    Using the Monte Carlo collision method, we have performed simulations of ion velocity distribution functions (IVDF) taking into account both elastic collisions and charge exchange collisions of ions with atoms in uniform electric fields for argon and helium background gases. The simulation results are verified by comparison with experimental data on the ion mobilities and the ion transverse diffusion coefficients in argon and helium. The recently published experimental data for the first seven coefficients of the Legendre polynomial expansion of the ion energy and angular distribution functions are used to validate the simulation results for IVDF. Good agreement between measured and simulated IVDFs shows that the developed simulation model can be used for accurate calculations of IVDFs.
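    A minimal one-dimensional sketch of the Monte Carlo collision idea, assuming only a constant charge-exchange frequency and a cold background gas (so the ion is reset to rest at each collision), recovers the textbook drift velocity a/nu. It omits elastic collisions, energy-dependent cross sections, and everything else in the model above:

    ```python
    import math
    import random

    def drift_velocity_mc(accel, nu, n_collisions=20000, seed=1):
        """Time-averaged ion velocity along a uniform field, with
        charge-exchange collisions at constant frequency nu that reset the
        ion to rest (cold background gas)."""
        rng = random.Random(seed)
        total_vt = 0.0  # integral of v dt over all free flights
        total_t = 0.0
        for _ in range(n_collisions):
            # Exponentially distributed free-flight time
            dt = -math.log(1.0 - rng.random()) / nu
            # v(t) = accel * t during the flight, so integral v dt = accel*dt^2/2
            total_vt += 0.5 * accel * dt * dt
            total_t += dt
        return total_vt / total_t
    ```

    The time-averaged velocity converges to accel/nu, since E[t^2]/(2 E[t]) = 1/nu for exponential flight times; realistic IVDF calculations replace the constant frequency with energy-dependent cross sections, typically via null-collision sampling.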

  20. Influence of the quantum well models on the numerical simulation of planar InGaN/GaN LED results

    NASA Astrophysics Data System (ADS)

    Podgórski, J.; Woźny, J.; Lisik, Z.

    2016-04-01

    Within this paper, we present an electrical model of a light-emitting diode (LED) made of gallium nitride (GaN), followed by examples of simulation results obtained by means of the Sentaurus software, which is part of the TCAD package. The aim of this work is to answer the question of whether the physical models of quantum wells used in commercial software are suitable for a correct analysis of lateral LEDs made of GaN.

  1. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  2. Initial results of efficacy and safety of Sofosbuvir among Pakistani Population: A real life trial - Hepatitis Eradication Accuracy Trial of Sofosbuvir (HEATS)

    PubMed Central

    Azam, Zahid; Shoaib, Muhammad; Javed, Masood; Sarwar, Muhammad Adnan; Shaikh, Hafeezullah; Khokhar, Nasir

    2017-01-01

    Objective: The uridine nucleotide analogue sofosbuvir is a selective inhibitor of hepatitis C virus (HCV) NS5B polymerase approved for the treatment of chronic HCV infection with genotypes 1-4. The objective of the study was to evaluate the interim results of efficacy and safety of regimens containing Sofosbuvir (Zoval), as measured by rapid virologic response (RVR, 2/4 weeks), among the Pakistani population with HCV infection. Methods: This is a multicenter, open-label, prospective observational study. Patients suffering from chronic hepatitis C infection received Sofosbuvir (Zoval) 400 mg plus ribavirin (with or without peg-interferon) for 12/24 weeks. The interim endpoint of this study was rapid virological response at week 4. Data were analyzed using SPSS version 21 for descriptive statistics. Results: A total of 573 patients with HCV infection were included in the study. The mean age of patients was 46.07 ± 11.41 years. Of the 573 patients, 535 (93.3%) were treatment-naive, 26 (4.5%) were relapsers, 7 (1.2%) were non-responders and 5 (1.0%) were partial responders. A rapid virologic response was reported in 563 (98.2%) of patients with HCV infection after four weeks of treatment. The treatment was generally well tolerated. Conclusion: Sofosbuvir (Zoval) is effective and well tolerated in combination with ribavirin in HCV-infected patients. PMID:28367171

  3. On the role of numerical simulations in studies of reduced gravity-induced physiological effects in humans. Results from NELME.

    NASA Astrophysics Data System (ADS)

    Perez-Poch, Antoni

    Computer simulations are becoming a promising line of research as physiological models become more and more sophisticated and reliable. Technological advances in state-of-the-art hardware and software nowadays allow better and more accurate simulations of complex phenomena, such as the response of the human cardiovascular system to long-term exposure to microgravity. Experimental data for long-term missions are difficult to obtain and reproduce; therefore the predictions of computer simulations are of major importance in this field. Our approach is based on a model previously developed and implemented in our laboratory (NELME: Numerical Evaluation of Long-term Microgravity Effects). The software simulates the behaviour of the cardiovascular system and different human organs, has a modular architecture, and allows perturbations such as physical exercise or countermeasures to be introduced. The implementation is based on a complex electrical-like model of this control system, uses inexpensive development frameworks, and has been tested and validated with the available experimental data. The objective of this work is to analyse and simulate long-term effects and gender differences when individuals are exposed to long-term microgravity. The probability of a health impairment that could jeopardize a long-term mission is also evaluated. Gender differences have been implemented for this specific work as adjustments of a number of parameters included in the model. Differences between female and male physiology have therefore been taken into account, based upon estimates from the physiology literature. A number of simulations have been carried out for long-term exposure to microgravity. Gravity varying continuously from Earth-based to zero, and time of exposure, are the two main variables involved in the construction of results, including responses to patterns of aerobic physical exercise and thermal stress simulating an extra

  4. Reaction Matrix Calculations in Neutron Matter with Alternating-Layer-Spin Structure under π0 Condensation. II ---Numerical Results---

    NASA Astrophysics Data System (ADS)

    Tamiya, K.; Tamagaki, R.

    1981-10-01

    Results obtained by applying a formulation based on the reaction matrix theory developed in I are given. Calculations making use of a modified realistic potential, the Reid soft-core potential with the OPEP part enhanced due to isobar (Δ) mixing, show that the transition to the [ALS] phase of quasi-neutrons corresponding to a typical π0 condensation occurs in the region of (2 ˜ 3) times the nuclear density. The most important ingredients responsible for this transition are the growth of the attractive 3P2 + 3F2 contribution, mainly from the spin-parallel pairs in the same layers, and the reduction of the repulsive 3P1 contribution, mainly from the spin-antiparallel pairs in the nearest layers; these manifest themselves as the [ALS]-type localization develops. Properties of the matter in the new phase thus obtained, such as the shape of the Fermi surface and the effective mass, are discussed.

  5. Viscous effects in rapidly rotating stars with application to white-dwarf models. III - Further numerical results

    NASA Technical Reports Server (NTRS)

    Durisen, R. H.

    1975-01-01

    Improved viscous evolutionary sequences of differentially rotating, axisymmetric, nonmagnetic, zero-temperature white-dwarf models are constructed using the relativistically corrected degenerate electron viscosity. The results support the earlier conclusion that angular momentum transport due to viscosity does not lead to overall uniform rotation in many interesting cases. Qualitatively different behaviors are obtained, depending on how the total mass M and angular momentum J compare with the M and J values for which uniformly rotating models exist. Evolutions roughly determine the region in M and J for which models with a particular initial angular momentum distribution can reach carbon-ignition densities in 10 b.y. Such models may represent Type I supernova precursors.

  6. Application of the dynamic model of Saeman to an industrial rotary kiln incinerator: numerical and experimental results.

    PubMed

    Ndiaye, L G; Caillat, S; Chinnayya, A; Gambier, D; Baudoin, B

    2010-07-01

    In order to simulate the structure of granular materials in a rotary kiln under the steady-state regime, a mathematical model was developed by Saeman (1951). This model enables the calculation of the bed profile, the axial velocity and the solids flow rate along the kiln, and can be coupled with a thermochemical model in the case of a reacting moving bed. This dynamic model was used to calculate the bed profile for an industrial-size kiln, and the model projections were validated by measurements in a 4 m diameter by 16 m long industrial rotary kiln. The effects of rotation speed on the solids bed profile and of the feed rate on the filling degree were established. On the basis of the calculations and the experimental results, a phenomenological relation for estimating the residence time was proposed for the rotary kiln.
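    A commonly quoted form of Saeman's bed-depth equation can be integrated from the discharge end with a simple Euler scheme. Both the exact prefactors of the equation as written here and all parameter values below are assumptions for illustration, not the calibrated industrial model of the paper:

    ```python
    import math

    def bed_profile(q, n, radius, theta, alpha, h0, length, steps=2000):
        """Integrate a commonly quoted form of Saeman's bed-depth equation,

            dh/dx = 3*tan(theta)*q / (4*pi*n*(2*R*h - h**2)**1.5)
                    - tan(alpha) / cos(theta),

        from the discharge end (x = 0, h = h0) toward the feed end.
        q: volumetric flow (m^3/s), n: rotation speed (rev/s),
        theta: dynamic angle of repose (rad), alpha: kiln slope (rad)."""
        dx = length / steps
        h = h0
        profile = [h]
        for _ in range(steps):
            flow = (3.0 * math.tan(theta) * q
                    / (4.0 * math.pi * n * (2.0 * radius * h - h * h) ** 1.5))
            h += dx * (flow - math.tan(alpha) / math.cos(theta))
            h = min(max(h, 1e-6), 2.0 * radius - 1e-6)  # keep h physical
            profile.append(h)
        return profile
    ```

    With illustrative values loosely inspired by the kiln dimensions above (radius 2 m, length 16 m), the bed depth rises from the discharge dam toward an equilibrium depth where the flow term balances the slope term.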

  7. Full-dimensional quantum calculations of vibrational spectra of six-atom molecules. I. Theory and numerical results

    NASA Astrophysics Data System (ADS)

    Yu, Hua-Gen

    2004-02-01

    Two quantum mechanical Hamiltonians have been derived in orthogonal polyspherical coordinates, which can be formed by Jacobi and/or Radau vectors etc., for the study of the vibrational spectra of six-atom molecules. The Hamiltonians are expressed in an explicit Hermitian form in the spatial representation. Their matrix representations are described in both full discrete variable representation (DVR) and mixed DVR/nondirect product finite basis representation (FBR) bases. The two-layer Lanczos iteration algorithm [H.-G. Yu, J. Chem. Phys. 117, 8190 (2002)] is employed to solve the eigenvalue problem of the system. A strategy regarding how to carry out the Hamiltonian-vector products for a high-dimensional problem is discussed. By exploiting the inversion symmetry of molecules, a unitary sequential 1D matrix-vector multiplication algorithm is proposed to perform the action of the Hamiltonian on the wavefunction in a symmetrically adapted DVR or FBR basis in the azimuthal angular variables. An application to the vibrational energy levels of the molecular hydrogen trimer (H2)3 in full dimension (12D) is presented. Results show that the rigid-H2 approximation can underestimate the binding energy of the trimer by 27%. Finally, it is demonstrated that the two-layer Lanczos algorithm is also capable of computing the eigenvectors of the system with minor effort.

  8. Initialization of a Numerical Mesoscale Model with ALEXI-derived Volumetric Soil Moisture: Case Results and Validation

    NASA Astrophysics Data System (ADS)

    Mecikalski, J. R.; Hain, C. R.; Anderson, M. C.

    2006-05-01

    behavior of the four layers used in the land-surface models. This presentation will first overview how volumetric soil moisture estimates from ALEXI, NAM (EDAS), and LDAS will be validated against observations taken over the continental United States during the years of 2003/2004. Results will be quantified through statistical techniques. Second, upon successful validation of ALEXI-derived volumetric soil moisture, these estimates will be used to initialize mesoscale simulations using both the Weather and Forecasting Model (WRF) and MM5. The process used is a unique implementation of GOES satellite-estimated soil moisture, a process which has not yet been attempted. The focus of this presentation is to examine the effects of ALEXI-derived volumetric soil moisture on the simulations during several case study days in 2003 and 2004. The soil moisture estimates from ALEXI will initially be used to initialize these models at a spatial resolution of 10 km. Initialization of high-resolution land-surface characteristic datasets within our model simulations such as fraction of vegetative cover and leaf area index (LAI) will also be examined as these datasets are native to the calculation of parameters used in the derivation of ALEXI soil moisture. This process will help to quantify the sensitivity and importance of a higher-resolution soil moisture dataset, and one that does not rely on the assimilation of antecedent precipitation. Results will be quantified through statistical verification techniques.

  9. SLAC E155 and E155x Numeric Data Results and Data Plots: Nucleon Spin Structure Functions

    DOE Data Explorer

    The nucleon spin structure functions g1 and g2 are important tools for testing models of nucleon structure and QCD. Experiments at CERN, DESY, and SLAC have measured g1 and g2 using deep inelastic scattering of polarized leptons on polarized nucleon targets. The results of these experiments have established that the quark component of the nucleon helicity is much smaller than naive quark-parton model predictions. The Bjorken sum rule has been confirmed within the uncertainties of experiment and theory. The experiment E155 at SLAC collected data in March and April of 1997. Approximately 170 million scattered electron events were recorded to tape. (Along with several billion inclusive hadron events.) The data were collected using three independent fixed-angle magnetic spectrometers, at approximately 2.75, 5.5, and 10.5 degrees. The momentum acceptance of the 2.75 and 5.5 degree spectrometers ranged from 10 to 40 GeV, with momentum resolution of 2-4%. The 10.5 degree spectrometer, new for E155, accepted events of 7 GeV to 20 GeV. Each spectrometer used threshold gas Cerenkov counters (for particle ID), a segmented lead-glass calorimeter (for energy measurement and particle ID), and plastic scintillator hodoscopes (for tracking and momentum measurement). The polarized targets used for E155 were 15NH3 and 6LiD, as targets for measuring the proton and deuteron spin structure functions respectively. Experiment E155x recently concluded a successful two-month run at SLAC. The experiment was designed to measure the transverse spin structure functions of the proton and deuteron. The E155 target was also recently in use at TJNAF's Hall C (E93-026) and was returned to SLAC for E155x. E155x hopes to reduce the world data set errors on g2 by a factor of three. [Copied from http://www.slac.stanford.edu/exp/e155/e155_nickeltour.html, an information summary linked off the E155 home page at http://www.slac.stanford.edu/exp/e155/e155_home.html. The extension run, E155x, also makes

  10. Computer code for scattering from impedance bodies of revolution. Part 3: Surface impedance with s and phi variation. Analytical and numerical results

    NASA Technical Reports Server (NTRS)

    Uslenghi, Piergiorgio L. E.; Laxpati, Sharad R.; Kawalko, Stephen F.

    1993-01-01

    The third phase of the development of the computer codes for scattering by coated bodies, part of an ongoing effort in the Electromagnetics Laboratory of the Electrical Engineering and Computer Science Department at the University of Illinois at Chicago, is described. The work reported here discusses the analytical and numerical results for the scattering of an obliquely incident plane wave by impedance bodies of revolution with phi variation of the surface impedance. An integral equation formulation of the problem is considered. All three types of integral equations, electric field, magnetic field, and combined field, are considered. These equations are solved numerically via the method of moments with parametric elements. Both TE and TM polarizations of the incident plane wave are considered. The surface impedance is allowed to vary both along the profile of the scatterer and in the phi direction. The computer code developed for this purpose determines the electric surface current as well as the bistatic radar cross section. The results obtained with this code were validated by comparison with available results for specific scatterers such as the perfectly conducting sphere. Results for the cone-sphere and cone-cylinder-sphere for the case of an axially incident plane wave were validated by comparison with those obtained in the first phase of this project. Results for body-of-revolution scatterers with an abrupt change in the surface impedance along both the profile of the scatterer and the phi direction are presented.

  11. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
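    Formal-accuracy checks of the kind discussed above typically rely on the observed order of accuracy computed from errors on two meshes. A minimal sketch, using a generic second-order central difference as the scheme under test (an assumption for illustration, not the paper's shock-capturing schemes), is:

    ```python
    import math

    def central_diff(f, x, h):
        """Second-order central difference approximation of f'(x)."""
        return (f(x + h) - f(x - h)) / (2.0 * h)

    def observed_order(err_coarse, err_fine, refinement=2.0):
        """Observed order of accuracy from errors on two meshes whose
        spacing differs by the given refinement ratio:
        p = log(e_coarse / e_fine) / log(r)."""
        return math.log(err_coarse / err_fine) / math.log(refinement)

    if __name__ == "__main__":
        f, df = math.sin, math.cos
        x = 1.0
        e1 = abs(central_diff(f, x, 0.1) - df(x))
        e2 = abs(central_diff(f, x, 0.05) - df(x))
        print("observed order:", observed_order(e1, e2))  # close to 2
    ```

    Applied to an adapted grid, the same diagnostic exposes the effect described above: error components that do not shrink at the design rate pull the observed order below the formal order of the scheme.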

  12. Improved numerical methods for turbulent viscous recirculating flows

    NASA Technical Reports Server (NTRS)

    Vandoormaal, J. P.; Turan, A.; Raithby, G. D.

    1986-01-01

    The objective of the present study is to improve both the accuracy and computational efficiency of existing numerical techniques used to predict viscous recirculating flows in combustors. A review of the status of the study is presented along with some illustrative results. The effort to improve the numerical techniques consists of the following technical tasks: (1) selection of numerical techniques to be evaluated; (2) two dimensional evaluation of selected techniques; and (3) three dimensional evaluation of technique(s) recommended in Task 2.

  13. On the energy dependence of the radial diffusion coefficient and spectra of inner radiation belt particles - Analytic solutions and comparison with numerical results

    NASA Technical Reports Server (NTRS)

    Westphalen, H.; Spjeldvik, W. N.

    1982-01-01

    A theoretical method by which the energy dependence of the radial diffusion coefficient may be deduced from spectral observations of the particle population at the inner edge of the earth's radiation belts is presented. This region has previously been analyzed with numerical techniques; in this report an analytical treatment that illustrates characteristic limiting cases in the L shell range where the time scale of Coulomb losses is substantially shorter than that of radial diffusion (L approximately 1-2) is given. It is demonstrated both analytically and numerically that the particle spectra there are shaped by the energy dependence of the radial diffusion coefficient regardless of the spectral shapes of the particle populations diffusing inward from the outer radiation zone, so that from observed spectra the energy dependence of the diffusion coefficient can be determined. To ensure realistic simulations, inner zone data obtained from experiments on the DIAL, AZUR, and ESRO 2 spacecraft have been used as boundary conditions. Excellent agreement between analytic and numerical results is reported.

  14. 10,000-fold concentration increase in proteins in a cascade microchip using anionic ITP by a 3-D numerical simulation with experimental results.

    PubMed

    Bottenus, Danny; Jubery, Talukder Zaki; Dutta, Prashanta; Ivory, Cornelius F

    2011-02-01

    This paper describes both the experimental application and 3-D numerical simulation of isotachophoresis (ITP) in a 3.2 cm long "cascade" poly(methyl methacrylate) (PMMA) microfluidic chip. The microchip includes 10 × reductions in both the width and depth of the microchannel, which decreases the overall cross-sectional area by a factor of 100 between the inlet (cathode) and outlet (anode). A 3-D numerical simulation of ITP is outlined and is a first example of an ITP simulation in three dimensions. The 3-D numerical simulation uses COMSOL Multiphysics v4.0a to concentrate two generic proteins and monitor protein migration through the microchannel. In performing an ITP simulation on this microchip platform, we observe an increase in concentration by a factor of more than 10,000 due to the combination of ITP stacking and the reduction in cross-sectional area. Two fluorescent proteins, green fluorescent protein and R-phycoerythrin, were used to experimentally visualize ITP through the fabricated microfluidic chip. The initial concentration of each protein in the sample was 1.995 μg/mL and, after preconcentration by ITP, the final concentrations of the two fluorescent proteins were 32.57 ± 3.63 and 22.81 ± 4.61 mg/mL, respectively. Thus, experimentally the two fluorescent proteins were concentrated by a factor of more than 10,000, in good qualitative agreement with our simulation results.
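
The reported gain can be sanity-checked with a line of arithmetic: the overall fold increase is the product of the ITP stacking gain and the 100× cross-section reduction. Using the GFP figures from the abstract:

```python
# Back-of-the-envelope check of the reported >10,000-fold increase.
# Numbers are taken from the abstract; the split into "stacking gain"
# times "area reduction" is the decomposition the authors describe.
area_reduction = 10 * 10            # 10x width reduction x 10x depth reduction
initial_ug_per_ml = 1.995           # starting GFP concentration (µg/mL)
final_mg_per_ml = 32.57             # final GFP concentration (mg/mL)

fold = (final_mg_per_ml * 1000.0) / initial_ug_per_ml   # overall increase
itp_stacking_gain = fold / area_reduction               # gain from ITP alone
```

The overall factor comes out slightly above 16,000, consistent with the paper's ">10,000" claim, with roughly 160× attributable to ITP stacking itself.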

  15. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed for the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
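
The gap between second- and fourth-order finite difference approximations noted above is easy to see on a model problem: halving the step should cut the error by about 4× for the second-order stencil and about 16× for the fourth-order one. A generic sketch (unrelated to the field solver itself):

```python
import numpy as np

def d1_second(f, x, h):
    # standard second-order central difference
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_fourth(f, x, h):
    # fourth-order central difference on a 5-point stencil
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, h = 0.7, 1e-2
e2 = [abs(d1_second(np.sin, x, s) - np.cos(x)) for s in (h, h / 2)]
e4 = [abs(d1_fourth(np.sin, x, s) - np.cos(x)) for s in (h, h / 2)]
ratio2 = e2[0] / e2[1]   # ~4:  the error is O(h^2)
ratio4 = e4[0] / e4[1]   # ~16: the error is O(h^4)
```

For a fixed accuracy target, the fourth-order stencil therefore needs far fewer grid points, which is the "dramatic advantage" the abstract refers to.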

  16. The influence of variability of calculation grids on the results of numerical modeling of geothermal doublets - an example from the Choszczno area, north-western Poland

    NASA Astrophysics Data System (ADS)

    Wachowicz-Pyzik, A.; Sowiżdżał, A.; Pająk, L.

    2016-09-01

    Numerical modeling enables us to reduce the risk related to selecting the best locations for wells. Moreover, at the production stage, modeling is a suitable tool for optimizing well operational parameters, which guarantees the long life of doublets. Thorough selection of software, together with a relevant methodology for generating numerical models, significantly improves the quality of the results. In the following paper, we discuss the impact of the density of the calculation grid on the results of geothermal doublet simulation with the TOUGH2 code, which applies the finite-difference method. The study area is located between the Szczecin Trough and the Fore-Sudetic Monocline, where the Choszczno IG-1 well has been completed. Our research was divided into two stages. In the first stage, we examined the changes of density of the polygon calculation grids used in computations of the operational parameters of geothermal doublets. In the second stage, we analyzed the influence of the distance between the production and injection wells on the variability in time of the operational parameters. The results demonstrated that in both studied cases the largest differences occurred in pressures measured in the production and injection wells, whereas the differences in temperatures were less pronounced.

  17. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  18. Stochastic FDTD accuracy improvement through correlation coefficient estimation

    NASA Astrophysics Data System (ADS)

    Masumnia Bisheh, Khadijeh; Zakeri Gatabi, Bijan; Andargoli, Seyed Mehdi Hosseini

    2015-04-01

    This paper introduces a new scheme to improve the accuracy of the stochastic finite difference time domain (S-FDTD) method. S-FDTD, reported recently by Smith and Furse, calculates the variations in the electromagnetic fields caused by variability or uncertainty in the electrical properties of the materials in the model. The accuracy of the S-FDTD method is controlled by the approximations for the correlation coefficients between the electrical properties of the materials in the model and the fields propagating in them. In this paper, new approximations for these correlation coefficients are obtained using the Monte Carlo method with a small number of runs; they are termed Monte Carlo correlation coefficients (MC-CC). Numerical results for two bioelectromagnetic simulation examples demonstrate that MC-CC can improve the accuracy of the S-FDTD method and yield more accurate results than previous approximations.
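
The core quantity in the MC-CC idea is a sample correlation coefficient between an uncertain material property and a field value, estimated from a small batch of Monte Carlo runs. A sketch of that estimate, where the model linking conductivity to the field is entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs = 50   # deliberately small, as in the MC-CC approach

# hypothetical model: conductivity sigma is uncertain, and the field at an
# observation point is a nonlinear function of it plus measurement noise
sigma = rng.normal(1.0, 0.1, n_runs)
field = np.exp(-2.0 * sigma) + rng.normal(0.0, 0.005, n_runs)

# sample correlation coefficient between the parameter and the field;
# this is the kind of estimate that would replace an analytic approximation
rho = np.corrcoef(sigma, field)[0, 1]
```

Because the field decreases monotonically with conductivity in this toy model, the estimated correlation is strongly negative even with only 50 runs.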

  19. Simultaneous Laser Raman-rayleigh-lif Measurements and Numerical Modeling Results of a Lifted Turbulent H2/N2 Jet Flame in a Vitiated Coflow

    NASA Technical Reports Server (NTRS)

    Cabra, R.; Chen, J. Y.; Dibble, R. W.; Myhrvold, T.; Karpetis, A. N.; Barlow, R. S.

    2002-01-01

    An experimental and numerical investigation is presented of a lifted turbulent H2/N2 jet flame in a coflow of hot, vitiated gases. The vitiated coflow burner emulates the coupling of turbulent mixing and chemical kinetics exemplary of the reacting flow in the recirculation region of advanced combustors. It also simplifies numerical investigation of this coupled problem by removing the complexity of recirculating flow. Scalar measurements are reported for a lifted turbulent jet flame of H2/N2 (Re = 23,600, H/d = 10) in a coflow of hot combustion products from a lean H2/Air flame (φ = 0.25, T = 1,045 K). The combination of Rayleigh scattering, Raman scattering, and laser-induced fluorescence is used to obtain simultaneous measurements of temperature and concentrations of the major species, OH, and NO. The data attest to the success of the experimental design in providing a uniform vitiated coflow throughout the entire test region. Two combustion models (PDF: joint scalar Probability Density Function and EDC: Eddy Dissipation Concept) are used in conjunction with various turbulence models to predict the lift-off height (H(sub PDF)/d = 7, H(sub EDC)/d = 8.5). Kalghatgi's classic phenomenological theory, which is based on scaling arguments, yields a reasonably accurate prediction (H(sub K)/d = 11.4) of the lift-off height for the present flame. The vitiated coflow admits the possibility of auto-ignition of mixed fluid, and the success of the present parabolic implementation of the PDF model in predicting a stable lifted flame is attributable to such ignition. The measurements indicate a thickened turbulent reaction zone at the flame base. Experimental results and numerical investigations support the plausibility of turbulent premixed flame propagation by small scale (on the order of the flame thickness) recirculation and mixing of hot products into reactants and subsequent rapid ignition of the mixture.

  20. Response of major Greenland outlet glaciers to oceanic and atmospheric forcing: Results from numerical modeling on Petermann, Jakobshavn and Helheim Glacier.

    NASA Astrophysics Data System (ADS)

    Nick, F. M.; Vieli, A.; Pattyn, F.; Van de Wal, R.

    2011-12-01

    Oceanic forcing has been suggested as a major trigger for dynamic changes of Greenland outlet glaciers. Significant melting near the calving front or beneath the floating tongue, together with reduced support from sea ice or ice melange in front of the calving front, can result in retreat of the terminus or the grounding line and an increase in calving activity. Depending on the geometry and basal topography of the glacier, these oceanic forcings can affect glacier dynamics differently. Here, we carry out a comparison study between three major outlet glaciers in Greenland and investigate the impact of a warmer ocean on glacier dynamics and ice discharge. We present results from a numerical ice-flow model applied to Petermann Glacier in the north, Jakobshavn Glacier in the west, and Helheim Glacier in the southeast of Greenland.

  1. The measurement of enhancement in mathematical abilities as a result of joint cognitive trainings in numerical and visual- spatial skills: A preliminary study

    NASA Astrophysics Data System (ADS)

    Agus, M.; Mascia, M. L.; Fastame, M. C.; Melis, V.; Pilloni, M. C.; Penna, M. P.

    2015-02-01

    A body of literature shows the significant role played by visual-spatial skills in the improvement of mathematical skills in the primary school. The main goal of the current study was to investigate the impact of a combined visuo-spatial and mathematical training on the improvement of mathematical skills in 146 second graders of several schools located in Italy. Participants were presented with single pencil-and-paper visuo-spatial or mathematical trainings, computerised versions of the above-mentioned treatments, or a combined version of computer-assisted and pencil-and-paper visuo-spatial and mathematical trainings. Experimental groups received training once a week for 3 months. All children were treated collectively, in both computer-assisted and pencil-and-paper modalities. At pre- and post-test, all participants were presented with a battery of objective tests assessing numerical and visuo-spatial abilities. Our results suggest the positive effect of different types of training for the empowerment of visuo-spatial and numerical abilities. Specifically, the combination of computerised and pencil-and-paper versions of visuo-spatial and mathematical trainings is more effective than the single execution of the software or of the pencil-and-paper treatment.

  2. Comparison of the Structurally Controlled Landslides Numerical Model Results to the M 7.2 2013 Bohol Earthquake Co-seismic Landslides

    NASA Astrophysics Data System (ADS)

    Macario Galang, Jan Albert; Narod Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo

    2015-04-01

    The M 7.2 October 15, 2013 Bohol earthquake is the most destructive earthquake to hit the Philippines since 2012. The epicenter was located in Sagbayan municipality, central Bohol, and the event was generated by a previously unmapped reverse fault called the "Inabanga Fault", named after the barangay (village) where the fault is best exposed and was first seen. The earthquake resulted in 209 fatalities and over 57 billion USD worth of damages. The earthquake generated co-seismic landslides, most of which were related to fault structures. Unlike rainfall-induced landslides, co-seismic landslides are triggered without warning. Preparedness against this type of landslide therefore relies heavily on the identification of fracture-related unstable slopes. To mitigate the impacts of co-seismic landslide hazards, morpho-structural orientations or discontinuity sets were mapped in the field with the aid of a 2012 IFSAR Digital Terrain Model (DTM) with 5-meter pixel resolution and < 0.5 meter vertical accuracy. Coltop 3D software was then used to identify similar structures, including measurement of their dips and dip directions. The chosen discontinuity sets were then keyed into Matterocking software to identify potential rock slide zones due to planar or wedged discontinuities. After identifying the structurally controlled unstable slopes, the rock mass propagation extent of the possible rock slides was simulated using Conefall. The results were compared to a post-earthquake landslide inventory of 456 landslides. Of the total number of landslides identified from post-earthquake high-resolution imagery, 366 or 80% intersect the structurally controlled hazard areas of Bohol. The results show the potential of this method to identify co-seismic landslide hazard areas for disaster mitigation. Along with computer methods that simulate shallow landslides and debris-flow paths, the located structurally controlled unstable zones can be used to mark unsafe areas for settlement.

  3. Development of a system for the numerical simulation of Euler flows, with results of preliminary 3-D propeller-slipstream/exhaust-jet calculations

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-01-01

    The current status of a computer program system for the numerical simulation of Euler flows is presented, together with preliminary test calculation results for the three-dimensional flow around a wing-nacelle-propeller-outlet configuration. The system is constructed to execute four major tasks: block decomposition of the flow domain around given, possibly complex, three-dimensional aerodynamic surfaces; grid generation on the blocked flow domain; Euler-flow simulation on the blocked grid; and graphical visualization and postprocessing of the computed flow on the blocked grid. The system consists of about 20 codes interfaced by files. Most of the required tasks can be executed, and the geometry of complex aerodynamic surfaces in three-dimensional space can be handled. The validation test showed that the system must be improved to increase the speed of the grid generation process.

  4. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
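
The U-D factorization at the heart of the Bierman-Thornton filter writes the covariance as P = U D Uᵀ with U unit upper triangular, so the filter never forms P directly. A minimal sketch of the factorization step alone (not the full measurement update) might be:

```python
import numpy as np

def ud_factorize(P):
    """Factor symmetric positive-definite P as P = U @ diag(d) @ U.T,
    with U unit upper triangular (the form used by U-D Kalman filters).
    Columns are processed from last to first, as in Bierman's algorithm."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
        for i in range(j):
            U[i, j] = (P[i, j]
                       - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
    return U, d

# verify on a random covariance-like matrix
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
P = A @ A.T + 4 * np.eye(4)          # symmetric positive definite
U, d = ud_factorize(P)
err = np.max(np.abs(U @ np.diag(d) @ U.T - P))
```

Because updates propagate U and d instead of P, symmetry and positive definiteness are preserved by construction, which is the source of the single-precision robustness reported above.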

  5. Active compensation of aperture discontinuities for WFIRST-AFTA: analytical and numerical comparison of propagation methods and preliminary results with a WFIRST-AFTA-like pupil

    NASA Astrophysics Data System (ADS)

    Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi

    2016-03-01

    The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first of such instruments in space with a two-DM wavefront control system. Regardless of the control algorithm for these multiple DMs, they will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like pupil. Our numerical simulation shows that we can obtain contrast better than 2×10⁻⁹ in
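
The Fresnel propagation discussed above is commonly implemented in its transfer-function (angular-spectrum) form with a pair of FFTs. A generic textbook sketch, not the authors' code, with arbitrary sampling values:

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Fresnel propagation of a sampled complex field over distance z,
    using the transfer-function form: multiply the angular spectrum by a
    quadratic phase and transform back. Constant piston phase is dropped."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# propagate a circular aperture a short distance; since |H| = 1 the
# transform is unitary and total energy must be conserved exactly
n, dx = 256, 10e-6                            # samples, 10 µm pixel pitch
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= (0.5e-3) ** 2).astype(complex)
out = fresnel_propagate(aperture, 0.6e-6, dx, 0.05)
energy_in = np.sum(np.abs(aperture) ** 2)
energy_out = np.sum(np.abs(out) ** 2)
```

Energy conservation is a useful built-in check when validating such propagators against the analytic results discussed in the paper's appendix.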

  6. High-order numerical solutions using cubic splines

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to Burgers' equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.

  7. Use of borehole radar reflection logging to monitor steam-enhanced remediation in fractured limestone-results of numerical modelling and a field experiment

    USGS Publications Warehouse

    Gregoire, C.; Joesten, P.K.; Lane, J.W.

    2006-01-01

    Ground penetrating radar is an efficient geophysical method for the detection and location of fractures and fracture zones in electrically resistive rocks. In this study, the use of down-hole (borehole) radar reflection logs to monitor the injection of steam in fractured rocks was tested as part of a field-scale, steam-enhanced remediation pilot study conducted at a fractured limestone quarry contaminated with chlorinated hydrocarbons at the former Loring Air Force Base, Limestone, Maine, USA. In support of the pilot study, borehole radar reflection logs were collected three times (before, during, and near the end of steam injection) using broadband 100 MHz electric dipole antennas. Numerical modelling was performed to predict the effect of heating on radar-frequency electromagnetic (EM) wave velocity, attenuation, and fracture reflectivity. The modelling results indicate that EM wave velocity and attenuation change substantially if heating increases the electrical conductivity of the limestone matrix. Furthermore, the net effect of heat-induced variations in fracture-fluid dielectric properties on average medium velocity is insignificant because the expected total fracture porosity is low. In contrast, changes in fracture fluid electrical conductivity can have a significant effect on EM wave attenuation and fracture reflectivity. Total replacement of water by steam in a fracture decreases fracture reflectivity by a factor of 10 and induces a change in reflected wave polarity. Based on the numerical modelling results, a reflection amplitude analysis method was developed to delineate fractures where steam has displaced water. Radar reflection logs collected during the three acquisition periods were analysed in the frequency domain to determine if steam had replaced water in the fractures (after normalizing the logs to compensate for differences in antenna performance between logging runs). 
Analysis of the radar reflection logs from a borehole where the temperature

  8. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed Central

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B.; Hartz, Sarah M.; Johnson, Eric O.; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L.

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen’s kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation were conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
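
The abstract's point about concordance inflating accuracy for rare variants is easy to reproduce: an imputation that simply calls every genotype homozygous reference scores high concordance but zero chance-corrected agreement. A sketch with toy numbers (not the study's data):

```python
import numpy as np

def concordance(true_g, imp_g):
    """Fraction of genotypes imputed identically to truth."""
    return np.mean(true_g == imp_g)

def cohens_kappa(true_g, imp_g):
    """Chance-corrected agreement, the statistic underlying IQS."""
    po = np.mean(true_g == imp_g)                     # observed agreement
    cats = np.union1d(true_g, imp_g)
    pe = sum(np.mean(true_g == c) * np.mean(imp_g == c) for c in cats)
    return (po - pe) / (1.0 - pe)

# toy rare variant: 2% of 1000 subjects carry one alternate allele, and a
# naive "imputation" calls everyone homozygous reference
true_g = np.zeros(1000, dtype=int)
true_g[:20] = 1
imp_g = np.zeros(1000, dtype=int)

conc = concordance(true_g, imp_g)     # 0.98: looks excellent
kappa = cohens_kappa(true_g, imp_g)   # 0.0: no information beyond chance
```

This is precisely the liberal-versus-conservative split the paper reports for rare and low frequency variants.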

  9. Flow Matching Results of an MHD Energy Bypass System on a Supersonic Turbojet Engine Using the Numerical Propulsion System Simulation (NPSS) Environment

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2011-01-01

    Flow matching has been successfully achieved for an MHD energy bypass system on a supersonic turbojet engine. The Numerical Propulsion System Simulation (NPSS) environment helped perform a thermodynamic cycle analysis to properly match the flows from an inlet employing a MHD energy bypass system (consisting of an MHD generator and MHD accelerator) on a supersonic turbojet engine. Working with various operating conditions (such as the applied magnetic field, MHD generator length and flow conductivity), interfacing studies were conducted between the MHD generator, the turbojet engine, and the MHD accelerator. This paper briefly describes the NPSS environment used in this analysis. This paper further describes the analysis of a supersonic turbojet engine with an MHD generator/accelerator energy bypass system. Results from this study have shown that using MHD energy bypass in the flow path of a supersonic turbojet engine increases the useful Mach number operating range from 0 to 3.0 Mach (not using MHD) to a range of 0 to 7.0 Mach with specific net thrust range of 740 N-s/kg (at ambient Mach = 3.25) to 70 N-s/kg (at ambient Mach = 7). These results were achieved with an applied magnetic field of 2.5 Tesla and conductivity levels in a range from 2 mhos/m (ambient Mach = 7) to 5.5 mhos/m (ambient Mach = 3.5) for an MHD generator length of 3 m.

  10. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  11. Methodology for high accuracy contact angle measurement.

    PubMed

    Kalantarian, A; David, R; Neumann, A W

    2009-12-15

    A new version of axisymmetric drop shape analysis (ADSA) called ADSA-NA (ADSA-no apex) was developed for measuring interfacial properties for drop configurations without an apex. ADSA-NA facilitates contact angle measurements on drops with a capillary protruding into the drop. Thus a much simpler experimental setup, not involving formation of a complete drop from below through a hole in the test surface, may be used. The contact angles of long-chained alkanes on a commercial fluoropolymer, Teflon AF 1600, were measured using the new method. A new numerical scheme was incorporated into the image processing to improve the location of the contact points of the liquid meniscus with the solid substrate to subpixel resolution. The images acquired in the experiments were also analyzed by a different drop shape technique called theoretical image fitting analysis-axisymmetric interfaces (TIFA-AI). The results were compared with literature values obtained by means of the standard ADSA for sessile drops with the apex. Comparison of the results from ADSA-NA with those from TIFA-AI and ADSA reveals that, with different numerical strategies and experimental setups, contact angles can be measured with an accuracy of less than 0.2 degrees. Contact angles and surface tensions measured from drops with no apex, i.e., by means of ADSA-NA and TIFA-AI, were considerably less scattered than those from complete drops with apex. ADSA-NA was also used to explore sources of improvement in contact angle resolution. It was found that using an accurate value of surface tension as an input enhances the accuracy of contact angle measurements.

  12. How a GNSS Receiver Is Held May Affect Static Horizontal Position Accuracy.

    PubMed

    Weaver, Steven A; Ucar, Zennure; Bettinger, Pete; Merry, Krista

    2015-01-01

    The static horizontal position accuracy of a mapping-grade GNSS receiver was tested in two forest types over two seasons, and subsequently was tested in one forest type against open sky conditions in the winter season. The main objective was to determine whether the holding position during data collection would result in significantly different static horizontal position accuracy. Additionally, we wanted to determine whether the time of year (season), forest type, or environmental variables had an influence on accuracy. In general, the F4Devices Flint GNSS receiver was found to have mean static horizontal position accuracy levels within the ranges typically expected for this general type of receiver (3 to 5 m) when differential correction was not employed. When used under forest cover, in some cases the GNSS receiver provided a higher level of static horizontal position accuracy when held vertically, as opposed to held at an angle or horizontally (the more natural positions), perhaps due to the orientation of the antenna within the receiver, or in part due to multipath or the inability to use certain satellite signals. Therefore, because numerous variables may affect static horizontal position accuracy, we only conclude that there is weak to moderate evidence that the effect of holding position is significant. Statistical test results also suggest that the season of data collection had no significant effect on static horizontal position accuracy, and results suggest that atmospheric variables had weak correlation with horizontal position accuracy. Forest type was found to have a significant effect on static horizontal position accuracy in one aspect of one test, yet otherwise there was little evidence that forest type affected horizontal position accuracy. Since the holding position was found in some cases to be significant with regard to the static horizontal position accuracy of positions collected in forests, it may be beneficial to have an
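
For context, horizontal accuracy figures like the 3 to 5 m range quoted above are typically computed from repeated fixes against a surveyed control point. A sketch with simulated fixes (all coordinates and the 2 m per-axis scatter are hypothetical, chosen only to mimic a mapping-grade receiver):

```python
import numpy as np

# surveyed "truth" point in a local projected coordinate system (metres)
truth = np.array([0.0, 0.0])

# simulated repeated static fixes with ~2 m per-axis scatter (assumed)
rng = np.random.default_rng(7)
fixes = truth + rng.normal(0.0, 2.0, size=(60, 2))

# horizontal error of each fix, then the two usual summary statistics
err = np.linalg.norm(fixes - truth, axis=1)
mean_err = err.mean()                 # mean horizontal position error
rms_err = np.sqrt((err ** 2).mean())  # RMS horizontal position error
```

The RMS figure is always at least as large as the mean error, so it matters which statistic a reported "accuracy" refers to when comparing studies.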

  13. Initial Flow Matching Results of MHD Energy Bypass on a Supersonic Turbojet Engine Using the Numerical Propulsion System Simulation (NPSS) Environment

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2010-01-01

    Preliminary flow matching has been demonstrated for a MHD energy bypass system on a supersonic turbojet engine. The Numerical Propulsion System Simulation (NPSS) environment was used to perform a thermodynamic cycle analysis to properly match the flows from an inlet to a MHD generator and from the exit of a supersonic turbojet to a MHD accelerator. Working with various operating conditions such as the enthalpy extraction ratio and isentropic efficiency of the MHD generator and MHD accelerator, interfacing studies were conducted between the pre-ionizers, the MHD generator, the turbojet engine, and the MHD accelerator. This paper briefly describes the NPSS environment used in this analysis and describes the NPSS analysis of a supersonic turbojet engine with a MHD generator/accelerator energy bypass system. Results from this study have shown that using MHD energy bypass in the flow path of a supersonic turbojet engine increases the useful Mach number operating range from 0 to 3.0 Mach (not using MHD) to an explored and desired range of 0 to 7.0 Mach.

  14. Distribution of Groundwater Ages at Public-Supply Wells: Comparison of Results from Lumped Parameter and Numerical Inverse Models with Multiple Environmental Tracers

    NASA Astrophysics Data System (ADS)

    Eberts, S.; Bohlke, J. K.

    2009-12-01

    Estimates of groundwater age distributions at public-supply wells can provide insight into the vulnerability of these wells to contamination. Such estimates can be used to explore past and future water-quality trends and contaminant peak concentrations when combined with information on contaminant input at the water table. Information on groundwater age distributions, however, is not routinely applied to water-quality issues at public-supply wells. This may be due, in part, to the difficulty of obtaining such estimates for poorly characterized aquifers with limited environmental tracer data. To address this, we compared distributions of groundwater ages in discharge from public-supply wells estimated from age tracer data (SF6, CFCs, 3H, 3He) using two different inverse modeling approaches: relatively simple lumped parameter models and more complex distributed-parameter numerical flow models with particle tracking. These comparisons were made in four contrasting hydrogeologic settings across the United States: unconsolidated alluvial fan sediments, layered confined unconsolidated sediments, unconsolidated valley-fill sediments, and carbonate rocks. In all instances, multiple age tracer measurements for the public-supply well of interest were available. We compared the following quantities, which were derived from simulated breakthrough curves that were generated using the various estimated age distributions for the selected wells and assuming the same hypothetical contaminant input: time lag to peak concentration, dilution at peak concentration, and contaminant arrival and flush times. Apparent tracer-based ages and mean and median simulated ages also were compared. For each setting, both types of models yielded similar age distributions and concentration trends when based on similar conceptual models of local hydrogeology and calibrated to the same tracer measurements. Results indicate carefully chosen and calibrated simple lumped parameter age distribution models
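
    A lumped parameter model of the kind compared above convolves the contaminant input history at the water table with an assumed age distribution to get the breakthrough curve at the well. A sketch using an exponential (fully mixed) age model and a hypothetical input pulse (the 30-year mean age and triangular pulse are illustrative, not values from the study):

```python
import numpy as np

dt = 0.5                                    # years
t = np.arange(0.0, 200.0, dt)

def exponential_age_pdf(age, tau):
    """Exponential age distribution (fully mixed aquifer) with mean age tau."""
    return np.exp(-age / tau) / tau

# Hypothetical triangular contaminant pulse at the water table, peaking at year 20
c_in = np.interp(t, [0.0, 20.0, 40.0], [0.0, 1.0, 0.0])

g = exponential_age_pdf(t, tau=30.0)
c_well = np.convolve(c_in, g)[:t.size] * dt    # breakthrough at the well

# Two of the quantities compared in the study:
lag_to_peak = (np.argmax(c_well) - np.argmax(c_in)) * dt
dilution_at_peak = c_well.max() / c_in.max()
```

Mixing over the age distribution both delays and flattens the peak, which is why the time lag to peak and the dilution at peak are diagnostic of the assumed age model.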

  15. A three-dimensional boundary layer scheme: stability and accuracy analyses

    NASA Astrophysics Data System (ADS)

    Horri-Naceur, Jalil; Buisine, Daniel

    2002-03-01

    We present a numerical scheme for the calculation of incompressible three-dimensional boundary layers (3DBL), designed to take advantage of the 3DBL model's overall hyperbolic nature, which is linked to the existence of wedge-shaped dependence and influence zones. The proposed scheme, explicit along the wall and implicit in the normal direction, allows large time steps and thus fast convergence. To preserve this partly implicit character, the control volumes for the mass and momentum balances are not staggered along the wall. This results in a lack of numerical viscosity, making the scheme unstable. The implementation of a numerical diffusion, suited to the local zone of influence, restores the stability of the boundary layer scheme while preserving second-order space accuracy. The purpose of this article is to present the analytical and numerical studies carried out to establish the scheme's accuracy and stability properties.
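
    The stabilizing role of a tuned numerical diffusion can be illustrated with a standard von Neumann analysis on a 1D model advection equation (a generic textbook illustration, not the paper's 3DBL scheme): the central-in-space explicit scheme is unstable without diffusion, and adding a second-difference diffusion of strength d = c^2/2 (the Lax-Wendroff choice) makes the amplification factor bounded by one.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 721)   # wavenumber * dx
c = 0.5                                      # Courant number

# FTCS (central advection, no numerical viscosity): G = 1 - i*c*sin(theta)
G_ftcs = 1.0 - 1j * c * np.sin(theta)

# Same scheme plus second-difference numerical diffusion d = c^2/2
d = 0.5 * c**2
G_diff = 1.0 - 1j * c * np.sin(theta) - 2.0 * d * (1.0 - np.cos(theta))

max_ftcs = float(np.abs(G_ftcs).max())   # > 1 for any c: unstable
max_diff = float(np.abs(G_diff).max())   # <= 1 for c <= 1: stable
```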

  16. A numerical method for phase-change problems

    NASA Technical Reports Server (NTRS)

    Kim, Charn-Jung; Kaviany, Massoud

    1990-01-01

    A highly accurate and efficient finite-difference method for phase-change problems with multiple moving boundaries of irregular shape is developed by employing a coordinate transformation that immobilizes moving boundaries and preserves the conservative forms of the original governing equations. The numerical method is first presented for one-dimensional phase-change problems (involving large density variation between phases, heat generation, and multiple moving boundaries) and then extended to solve two-dimensional problems (without change of densities between phases). Numerical solutions are obtained non-iteratively using an explicit treatment of the interfacial mass and energy balances and an implicit treatment of the temperature field equations. The accuracy and flexibility of the present numerical method are verified by solving some phase-change problems and comparing the results with existing analytical, semi-analytical and numerical solutions. Results indicate that one- and two-dimensional phase-change problems can be handled easily with excellent accuracies.
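
    The front-immobilizing coordinate transformation can be illustrated on the classical one-phase Stefan problem, where the change of variable xi = x/s(t) maps the moving domain onto the fixed interval [0, 1]. This simplified sketch uses a fully explicit scheme throughout (unlike the paper's implicit temperature treatment) and checks the computed interface position against the exact Neumann similarity solution:

```python
import math
import numpy as np

# One-phase Stefan problem (alpha = 1, Stefan number St = 1):
#   theta_t = theta_xx on 0 < x < s(t), theta(0) = 1, theta(s) = 0,
#   ds/dt = -St * theta_x(s).  Exact: s(t) = 2*lam*sqrt(t), where
#   lam * exp(lam**2) * erf(lam) = St / sqrt(pi).
St = 1.0

def neumann_lambda(st):
    lo, hi = 1e-6, 5.0
    for _ in range(200):                     # bisection on the transcendental eq.
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid**2) * math.erf(mid) < st / math.sqrt(math.pi):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = neumann_lambda(St)

# Front-fixed variable xi = x / s(t) turns the PDE into
#   theta_t = theta_xixi / s**2 + xi * (sdot / s) * theta_xi   on fixed [0, 1]
N = 50
xi = np.linspace(0.0, 1.0, N + 1)
dxi = xi[1] - xi[0]

t0, t_end = 0.25, 1.0
s = 2.0 * lam * math.sqrt(t0)
# start from the exact similarity profile at t0 to avoid the s -> 0 singularity
theta = 1.0 - np.array([math.erf(v * lam) for v in xi]) / math.erf(lam)
theta[-1] = 0.0

t = t0
while t < t_end - 1e-9:
    dt = min(0.2 * (s * dxi) ** 2, t_end - t)        # explicit diffusion limit
    # one-sided second-order derivative at the interface (theta[-1] = 0)
    dtheta_xi = (theta[-3] - 4.0 * theta[-2]) / (2.0 * dxi)
    sdot = -St * dtheta_xi / s                        # Stefan condition
    lap = (theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dxi**2
    adv = xi[1:-1] * (sdot / s) * (theta[2:] - theta[:-2]) / (2.0 * dxi)
    theta[1:-1] += dt * (lap / s**2 + adv)
    s += dt * sdot
    t += dt

s_exact = 2.0 * lam * math.sqrt(t_end)
rel_err = abs(s - s_exact) / s_exact
```

The transformed equation picks up a mesh-motion advection term, but the boundary conditions now live at fixed grid points, which is the practical payoff of the method.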

  17. Numerical comparison of Kalman filter algorithms - Orbit determination case study

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Thornton, C. L.

    1977-01-01

    Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
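
    The U-D factorization referred to above carries the covariance as P = U D U^T with U unit upper triangular and D diagonal, which keeps P symmetric positive semidefinite by construction. A sketch of Bierman's scalar measurement update in its textbook form (not the flight code of the study), checked against the conventional covariance update:

```python
import numpy as np

def ud_measurement_update(U, d, h, r):
    """Bierman's U-D update for a scalar observation z = h @ x + v, v ~ N(0, r).

    U is unit upper triangular and P = U @ diag(d) @ U.T.  Updates U and d
    in place and returns them along with the Kalman gain K."""
    n = d.size
    f = U.T @ h              # f = U' h
    g = d * f                # g = D f
    alpha = r
    b = np.zeros(n)
    for j in range(n):
        alpha_prev = alpha
        alpha = alpha_prev + f[j] * g[j]
        d[j] *= alpha_prev / alpha
        p = -f[j] / alpha_prev
        for i in range(j):
            u_old = U[i, j]
            U[i, j] = u_old + b[i] * p
            b[i] += u_old * g[j]
        b[j] = g[j]
    return U, d, b / alpha   # K = b / alpha

# Consistency check against the conventional covariance update
rng = np.random.default_rng(1)
n = 4
U = np.triu(rng.normal(size=(n, n)), 1) + np.eye(n)
d = rng.uniform(0.5, 2.0, n)
h = rng.normal(size=n)
r = 0.3
P = U @ np.diag(d) @ U.T
K_conv = P @ h / (h @ P @ h + r)
P_conv = P - np.outer(K_conv, h @ P)

U, d, K = ud_measurement_update(U, d, h, r)
P_ud = U @ np.diag(d) @ U.T
```

The update touches only triangular factors and scalars, which is why it matches double-precision conventional results while running in single precision at essentially conventional-Kalman cost.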

  18. Numerical Simulation of Heliospheric Transients Approaching Geospace

    DTIC Science & Technology

    2009-12-01

    12/15/08 - 12/14/09. Numerical Simulation of Heliospheric Transients Approaching Geospace. Report by Dusan Odstrcil, University of Colorado...simulations of heliospheric transients approaching geospace. The project was supervised by Dr. Dusan Odstrcil at the University of Colorado (CU...plays a key role in the prediction accuracy of heliospheric transients approaching geospace. This report presents main results achieved within the

  19. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  20. Landsat wildland mapping accuracy

    USGS Publications Warehouse

    Todd, William J.; Gehring, Dale G.; Haman, J. F.

    1980-01-01

    A Landsat-aided classification of ten wildland resource classes was developed for the Shivwits Plateau region of the Lake Mead National Recreation Area. Single stage cluster sampling (without replacement) was used to verify the accuracy of each class.

  1. Dust trajectory sensor: accuracy and data analysis.

    PubMed

    Xie, J; Sternovsky, Z; Grün, E; Auer, S; Duncan, N; Drake, K; Le, H; Horanyi, M; Srama, R

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V. Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction.
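
    The quoted ~1% velocity accuracy ultimately comes down to timing the induced-charge signals as the particle crosses successive electrode planes. A toy axial-velocity reconstruction with made-up geometry and timing jitter (not the DTS flight parameters):

```python
import numpy as np

# Hypothetical electrode-plane positions along the flight axis (m)
z_planes = np.array([0.00, 0.02, 0.04, 0.06])

# Simulate crossing times of a 4 km/s particle with 5 ns timing jitter
rng = np.random.default_rng(7)
v_true = 4000.0                                    # m/s
t_cross = z_planes / v_true + rng.normal(0.0, 5e-9, z_planes.size)

# Least-squares line fit z = v*t + z0 gives the axial velocity estimate
A = np.column_stack([t_cross, np.ones_like(t_cross)])
(v_est, z0), *_ = np.linalg.lstsq(A, z_planes, rcond=None)
vel_err_pct = 100.0 * abs(v_est - v_true) / v_true
```

With nanosecond-level timing over microsecond-scale plane-to-plane flight times, sub-percent velocity accuracy follows directly from the fit.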

  2. Dust trajectory sensor: Accuracy and data analysis

    SciTech Connect

    Xie, J.; Horanyi, M.; Sternovsky, Z.; Gruen, E.; Duncan, N.; Drake, K.; Le, H.; Auer, S.; Srama, R.

    2011-10-15

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Gruen, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V. Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Gruen, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1 deg. in direction.

  3. Numerical simulation of precipitation formation in the case of an orographically induced convective cloud: Comparison of the results of bin and bulk microphysical schemes

    NASA Astrophysics Data System (ADS)

    Sarkadi, N.; Geresdi, I.; Thompson, G.

    2016-11-01

    In this study, results of bulk and bin microphysical schemes are compared for idealized simulations of pre-frontal orographic clouds with enhanced embedded convection. The description of graupel formation by intensive riming of snowflakes was improved compared to prior versions of each scheme. Two treatments of graupel melting coincident with collisions with water drops were considered: (1) all simulated melting and all collected water drops increase the amount of melted water on the surface of the graupel particles, with no shedding permitted; (2) melting alone still causes no shedding, but collisions with water drops can induce shedding from the surface of the graupel particles. The results of the numerical experiments show: (i) The bin schemes generate graupel particles more efficiently by riming than the bulk scheme does; intense riming of snowflakes was the dominant process for graupel formation. (ii) Collision-induced shedding significantly affects the evolution of the size distributions of graupel particles and water drops below the melting level. (iii) The three microphysical schemes gave similar values for the domain-integrated surface precipitation, but the patterns reveal meaningful differences. (iv) Sensitivity tests using the bulk scheme show that the depth of the melting layer is sensitive to the description of the terminal velocity of melting snow. (v) Comparisons against Convair-580 flight measurements suggest that the bin schemes simulate well the evolution of pristine ice particles and liquid drops, while some inaccuracy can occur in the description of snowflake riming. (vi) The bin scheme with collision-induced shedding reproduced well the quantitative characteristics of the observed bright band.

  4. Topography and tectonics of the central New Madrid seismic zone: Results of numerical experiments using a three-dimensional boundary element program

    NASA Technical Reports Server (NTRS)

    Gomberg, Joan; Ellis, Michael

    1994-01-01

    We present results of a series of numerical experiments designed to test hypothetical mechanisms that drive deformation in the New Madrid seismic zone. The experiments are constrained by subtle topography and the distribution of seismicity in the region. We use a new boundary element algorithm that permits calculation of the three-dimensional deformation field. Surface displacement fields are calculated for the New Madrid zone under both far-field (plate tectonic scale) and locally derived driving strains. Results demonstrate that surface displacement fields cannot distinguish between either a far-field simple or pure shear strain field or one that involves a deep shear zone beneath the upper crustal faults. Thus, neither geomorphic nor geodetic studies alone are expected to reveal the ultimate driving mechanism behind the present-day deformation. We have also tested hypotheses about strain accommodation within the New Madrid contractional step-over by including linking faults, two southwest-dipping and one vertical, recently inferred from microearthquake data. Only those models with step-over faults are able to predict the observed topography. Surface displacement fields for long-term, relaxed deformation predict the distribution of uplift and subsidence in the contractional step-over remarkably well. Generation of these displacement fields appears to require slip on both the two northeast-trending vertical faults and the two dipping faults in the step-over region, with very minor displacements occurring during the interseismic period when the northeast-trending vertical faults are locked. These models suggest that the gently dipping central step-over fault is a reverse fault and that the steeper fault, extending to the southeast of the step-over, acts as a normal fault over the long term.

  5. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
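
    The "local surface smoothness on planar surfaces" metric used above can be computed as the RMS of perpendicular residuals to a total-least-squares plane fit. A sketch on synthetic points (the tilt and the 2 cm noise level are illustrative, not the paper's data):

```python
import numpy as np

def plane_rms(points):
    """RMS perpendicular distance of 3D points to their best-fit plane.

    The plane is fit by total least squares: the normal is the right singular
    vector of the centered points with the smallest singular value."""
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    return float(np.sqrt(np.mean((centered @ normal) ** 2)))

# Synthetic tilted plane patch with 2 cm Gaussian range noise
rng = np.random.default_rng(3)
xy = rng.uniform(-5.0, 5.0, size=(2000, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 1.0 + rng.normal(0.0, 0.02, 2000)
smoothness = plane_rms(np.column_stack([xy, z]))
```

On real data the residual RMS folds together lidar range noise and any remaining navigation error, which is why it serves as a compact point-cloud quality figure.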

  6. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions miss an important point: reticence is a matter not only of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  7. Dynamics of plume-triple junction interaction: Results from a series of three-dimensional numerical models and implications for the formation of oceanic plateaus

    NASA Astrophysics Data System (ADS)

    Dordevic, Mladen; Georgen, Jennifer

    2016-03-01

    Mantle plumes rising in the vicinity of mid-ocean ridges often generate anomalies in melt production and seafloor depth. This study investigates the dynamical interactions between a mantle plume and a ridge-ridge-ridge triple junction, using a parameter space approach and a suite of steady-state, three-dimensional finite element numerical models. The top domain boundary is composed of three diverging plates, each assigned a half-spreading rate with respect to a fixed triple junction point. The bottom boundary is kept at a constant temperature of 1350°C except where a two-dimensional, Gaussian-shaped thermal anomaly simulating a plume is imposed. The models vary plume diameter, plume location, the viscosity contrast between plume and ambient mantle material, and the use of dehydration rheology in calculating viscosity. Importantly, the model results quantify how plume-related anomalies in mantle temperature pattern, seafloor depth, and crustal thickness depend on the specific set of parameters. As an example, one way of assessing the effect of conduit position is to calculate the normalized area, defined as the spatial dispersion of a given plume at a specific depth (here, 50 km) divided by the area occupied by the same plume when it is located under the triple junction. For one particular case modeled, where the plume is centered in an intraplate position 100 km from the triple junction, the normalized area is just 55%. Overall, these models provide a framework for better understanding plateau formation at triple junctions in the natural setting and a tool for constraining subsurface geodynamical processes and plume properties.

  8. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-14

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short'--prior-24-hour recall obtained in the afternoon and 'long'--previous-day recall obtained in the morning) with four prompts ('forward'--distant to recent, 'meal name'--breakfast, etc., 'open'--no instructions, and 'reverse'--recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure--report rate--and reporting-error-sensitive measures--correspondence rate and inflation ratio--were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio--but not report rate--showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended.
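
    The contrast between the conventional and reporting-error-sensitive measures can be sketched as follows. The formulas are a plausible reading of the measures named above (matched = energy both observed and reported, intrusions = energy reported but not observed), labeled as an assumption rather than the authors' exact definitions:

```python
def recall_measures(matched_kcal, intruded_kcal, observed_kcal):
    """Conventional vs. reporting-error-sensitive recall measures for one child.

    NOTE: hypothetical formulas illustrating the distinction, not the
    validated definitions from the study."""
    report_rate = (matched_kcal + intruded_kcal) / observed_kcal   # conventional
    correspondence_rate = matched_kcal / observed_kcal             # error-sensitive
    inflation_ratio = intruded_kcal / observed_kcal                # error-sensitive
    return report_rate, correspondence_rate, inflation_ratio

# A child who reports the right total energy but for largely wrong foods looks
# perfect by report rate yet poor by the error-sensitive measures:
rr, cr, ir = recall_measures(matched_kcal=200.0, intruded_kcal=300.0,
                             observed_kcal=500.0)
```

This is the sense in which report rates can distort validation-study conclusions: intrusions inflate the conventional measure while degrading true accuracy.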

  9. Fast prediction of pulsed nonlinear acoustic fields from clinically relevant sources using time-averaged wave envelope approach: comparison of numerical simulations and experimental results.

    PubMed

    Wójcik, J; Kujawska, T; Nowicki, A; Lewin, P A

    2008-12-01

    The primary goal of this work was to verify experimentally the applicability of the recently introduced time-averaged wave envelope (TAWE) method [J. Wójcik, A. Nowicki, P.A. Lewin, P.E. Bloomfield, T. Kujawska, L. Filipczyński, Wave envelopes method for description of nonlinear acoustic wave propagation, Ultrasonics 44 (2006) 310-329.] as a tool for fast prediction of four dimensional (4D) pulsed nonlinear pressure fields from arbitrarily shaped acoustic sources in attenuating media. The experiments were performed in water at the fundamental frequency of 2.8 MHz for spherically focused (focal length F=80 mm) square (20 x 20 mm) and rectangular (10 x 25 mm) sources similar to those used in the design of 1D linear arrays operating with ultrasonic imaging systems. The experimental results obtained with 10-cycle tone bursts at three different excitation levels corresponding to linear, moderately nonlinear and highly nonlinear propagation conditions (0.045, 0.225 and 0.45 MPa on-source pressure amplitude, respectively) were compared with those yielded using the TAWE approach [J. Wójcik, A. Nowicki, P.A. Lewin, P.E. Bloomfield, T. Kujawska, L. Filipczyński, Wave envelopes method for description of nonlinear acoustic wave propagation, Ultrasonics 44 (2006) 310-329.]. The comparison of the experimental results and numerical simulations has shown that the TAWE approach is well suited to predict (to within ±1 dB) both the spatial-temporal and spatial-spectral pressure variations in the pulsed nonlinear acoustic beams. The obtained results indicated that implementation of the TAWE approach enabled shortening of computation time in comparison with the time needed for prediction of the full 4D pulsed nonlinear acoustic fields using a conventional (Fourier-series) approach [P.T. Christopher, K.J. Parker, New approaches to nonlinear diffractive field propagation, J. Acoust. Soc. Am. 90 (1) (1991) 488-499.]. The reduction in computation time depends on several parameters

  10. Accuracy of non-Newtonian Lattice Boltzmann simulations

    NASA Astrophysics Data System (ADS)

    Conrad, Daniel; Schneider, Andreas; Böhle, Martin

    2015-11-01

    This work deals with the accuracy of non-Newtonian Lattice Boltzmann simulations. Previous work for Newtonian fluids indicates that, depending on the numerical value of the dimensionless collision frequency Ω, additional artificial viscosity is introduced, which negatively influences the accuracy. Since the non-Newtonian fluid behavior is incorporated through appropriate modeling of the dimensionless collision frequency, an Ω-dependent error EΩ is introduced and its influence on the overall error is investigated. Here, simulations with the SRT and the MRT model are carried out for power-law fluids in order to numerically investigate the accuracy of non-Newtonian Lattice Boltzmann simulations. A goal of this accuracy analysis is to derive a recommendation for an optimal choice of the time step size and the simulation Mach number, respectively. For the non-Newtonian case, an error estimate for EΩ in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. With the help of the error functional, the prediction of the global error minimum of the velocity field is excellent in regions where the EΩ error is the dominant source of error. With an optimal simulation Mach number, the simulation is about one order of magnitude more accurate. Additionally, for both collision models a detailed study of the convergence behavior of the method in the non-Newtonian case is conducted. The results show that the simulation Mach number has a major impact on the convergence rate and that second-order accuracy is not preserved for every choice of the simulation Mach number.
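
    In a power-law Lattice Boltzmann simulation, the local collision frequency follows from the apparent viscosity nu = m * (shear rate)^(n-1). A sketch in lattice units using the standard BGK relation tau = nu/cs^2 + 1/2 (a generic textbook mapping, not this paper's specific SRT/MRT models):

```python
def omega_power_law(shear_rate, m, n, cs2=1.0 / 3.0):
    """Dimensionless collision frequency for a power-law fluid in lattice units.

    Apparent kinematic viscosity nu = m * shear_rate**(n - 1) is mapped to the
    BGK relaxation time tau = nu / cs2 + 0.5, and Omega = 1 / tau."""
    nu = m * shear_rate ** (n - 1.0)
    return 1.0 / (nu / cs2 + 0.5)

omega_newtonian = omega_power_law(shear_rate=0.1, m=0.01, n=1.0)      # n = 1 limit
omega_shear_thinning = omega_power_law(shear_rate=0.1, m=0.01, n=0.5)
```

Because Ω now varies with the local shear rate, the Ω-dependent artificial-viscosity error varies across the flow field, which is the root of the EΩ analysis described above.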

  11. Spacecraft attitude determination accuracy from mission experience

    NASA Astrophysics Data System (ADS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-10-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  12. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  13. Numerical solutions of nonlinear wave equations

    SciTech Connect

    Kouri, D.J.; Zhang, D.S.; Wei, G.W.; Konshak, T.; Hoffman, D.K.

    1999-01-01

    Accurate, stable numerical solutions of the (nonlinear) sine-Gordon equation are obtained with particular consideration of initial conditions that are exponentially close to the phase space homoclinic manifolds. Earlier local, grid-based numerical studies have encountered difficulties, including numerically induced chaos for such initial conditions. The present results are obtained using the recently reported distributed approximating functional method for calculating spatial derivatives to high accuracy and a simple, explicit method for the time evolution. The numerical solutions are chaos-free for the same conditions employed in previous work that encountered chaos. Moreover, stable results that are free of homoclinic-orbit crossing are obtained even when initial conditions are within 10^-7 of the phase space separatrix value π. It also is found that the present approach yields extremely accurate solutions for the Korteweg-de Vries and nonlinear Schrödinger equations. Our results support Ablowitz and co-workers' conjecture that ensuring high accuracy of spatial derivatives is more important than the use of symplectic time integration schemes for solving solitary wave equations. © 1999 The American Physical Society
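
    The recipe of high-accuracy spatial derivatives plus a simple explicit time integrator can be reproduced in miniature with an FFT-based spectral derivative (standing in for the distributed approximating functionals) and leapfrog time stepping, checked against the exact sine-Gordon breather:

```python
import numpy as np

# Solve u_tt = u_xx - sin(u) on a periodic grid; breather parameters below
N, L = 256, 40.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
omega = 0.5
a = np.sqrt(1.0 - omega**2)

def breather(x, t):
    """Exact sine-Gordon breather with internal frequency omega < 1."""
    return 4.0 * np.arctan((a / omega) * np.sin(omega * t) / np.cosh(a * x))

def u_xx(u):
    """Spectrally accurate second derivative (FFT-based)."""
    return np.fft.ifft(-k**2 * np.fft.fft(u)).real

dt, steps = 0.01, 500                      # integrate to t = 5
u_prev = np.zeros(N)                       # u(x, 0) = 0 for the breather
v0 = 4.0 * a / np.cosh(a * x)              # u_t(x, 0)
u = u_prev + dt * v0                       # Taylor start (u_xx, sin(u) vanish at t=0)
for _ in range(steps - 1):                 # explicit leapfrog
    u, u_prev = 2.0 * u - u_prev + dt**2 * (u_xx(u) - np.sin(u)), u

max_err = float(np.abs(u - breather(x, steps * dt)).max())
```

With spectral spatial accuracy, the remaining error is the O(dt^2) leapfrog truncation, consistent with the paper's point that spatial-derivative accuracy is the controlling factor.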

  14. Higher-order numerical solutions using cubic splines. [for partial differential equations

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Khosla, P. K.

    1975-01-01

    A cubic spline collocation procedure has recently been developed for the numerical solution of partial differential equations. In the present paper, this spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower-derivative terms. The final result is a numerical procedure having overall third-order accuracy for a non-uniform mesh and overall fourth-order accuracy for a uniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
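
    The fourth-order behavior on a uniform mesh can be checked numerically with a clamped cubic spline interpolant, built here in the standard moment form (a textbook construction, not the paper's collocation procedure): halving h should cut the max error by about 16x.

```python
import numpy as np

def clamped_spline_max_error(n):
    """Max interpolation error of a clamped cubic spline for sin(x) on [0, pi],
    built in moment form (M_i = S''(x_i)) on a uniform mesh with n intervals."""
    xk = np.linspace(0.0, np.pi, n + 1)
    yk = np.sin(xk)
    h = xk[1] - xk[0]
    # Tridiagonal system for the moments, clamped with exact end slopes cos(x)
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0], A[0, 1] = h / 3.0, h / 6.0
    rhs[0] = (yk[1] - yk[0]) / h - np.cos(xk[0])
    for i in range(1, n):
        A[i, i - 1:i + 2] = h / 6.0, 2.0 * h / 3.0, h / 6.0
        rhs[i] = (yk[i + 1] - 2.0 * yk[i] + yk[i - 1]) / h
    A[n, n - 1], A[n, n] = h / 6.0, h / 3.0
    rhs[n] = np.cos(xk[n]) - (yk[n] - yk[n - 1]) / h
    M = np.linalg.solve(A, rhs)

    # Evaluate the piecewise cubic on a dense grid
    xs = np.linspace(0.0, np.pi, 2001)
    i = np.clip(np.searchsorted(xk, xs) - 1, 0, n - 1)
    t1, t0 = xk[i + 1] - xs, xs - xk[i]
    S = ((M[i] * t1**3 + M[i + 1] * t0**3) / (6.0 * h)
         + (yk[i] - M[i] * h**2 / 6.0) * t1 / h
         + (yk[i + 1] - M[i + 1] * h**2 / 6.0) * t0 / h)
    return float(np.abs(S - np.sin(xs)).max())

e_coarse = clamped_spline_max_error(8)
e_fine = clamped_spline_max_error(16)
ratio = e_coarse / e_fine          # ~16 for a fourth-order method
```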

  15. Glioma residual or recurrence versus radiation necrosis: accuracy of pentavalent technetium-99m-dimercaptosuccinic acid [Tc-99m (V) DMSA] brain SPECT compared to proton magnetic resonance spectroscopy (1H-MRS): initial results.

    PubMed

    Amin, Amr; Moustafa, Hosna; Ahmed, Ebaa; El-Toukhy, Mohamed

    2012-02-01

    We compared pentavalent technetium-99m dimercaptosuccinic acid (Tc-99m (V) DMSA) brain single photon emission computed tomography (SPECT) and proton magnetic resonance spectroscopy ((1)H-MRS) for the detection of residual or recurrent gliomas after surgery and radiotherapy. A total of 24 glioma patients, previously operated upon and treated with radiotherapy, were studied. SPECT was acquired 2-3 h post-administration of 555-740 MBq of Tc-99m (V) DMSA. Lesion to normal (L/N) delayed uptake ratio was calculated as: mean counts of tumor ROI (L)/mean counts of normal mirror symmetric ROI (N). (1)H-MRS was performed using a 1.5-T scanner equipped with a spectroscopy package. SPECT and (1)H-MRS results were compared with pathology or follow-up neuroimaging studies. SPECT and (1)H-MRS showed concordant residue or recurrence in 9/24 (37.5%) patients. Both were true negative in 6/24 (25%) patients. SPECT and (1)H-MRS disagreed in 9 recurrences [7/9 (77.8%) and 2/9 (22.2%) were true positive by SPECT and (1)H-MRS, respectively]. Sensitivity of SPECT and (1)H-MRS in detecting recurrence was 88.8 and 61.1% with accuracies of 91.6 and 70.8%, respectively. A positive association between the delayed L/N ratio and tumor grade was found; the higher the grade, the higher is the L/N ratio (r = 0.62, P = 0.001). Tc-99m (V) DMSA brain SPECT is more accurate compared to (1)H-MRS for the detection of residual tumor tissue or recurrence in glioma patients with previous radiotherapy. It allows early and non-invasive differentiation of residual tumor or recurrence from irradiation necrosis.
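
    The reported sensitivities and accuracies follow from standard confusion-matrix arithmetic. The counts below are reconstructed from the abstract's concordant/discordant cases (24 patients, 18 with recurrence, no false positives reported, quoted percentages apparently truncated rather than rounded) and reproduce the quoted figures:

```python
def sensitivity_and_accuracy(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); accuracy = (TP + TN) / total."""
    total = tp + fn + tn + fp
    return tp / (tp + fn), (tp + tn) / total

# SPECT: 9 concordant TP + 7 discordant TP = 16 of 18 recurrences; 6 TN, 2 FN
# (assuming no false positives, consistent with the counts in the abstract)
spect_sens, spect_acc = sensitivity_and_accuracy(tp=16, fn=2, tn=6, fp=0)

# 1H-MRS: 9 concordant TP + 2 discordant TP = 11 of 18 recurrences; 6 TN, 7 FN
mrs_sens, mrs_acc = sensitivity_and_accuracy(tp=11, fn=7, tn=6, fp=0)
```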

  16. Monitoring and forecasting of hazardous hydrometeorological phenomena on the basis of conjuctive use of remote sensing data and the results of numerical modeling

    NASA Astrophysics Data System (ADS)

    Voronov, Nikolai; Dikinis, Alexandr

    2015-04-01

    Modern remote sensing (RS) technologies open wide opportunities for monitoring hazardous hydrometeorological phenomena and for increasing the accuracy and lead time of their forecasts. RS data do not supersede ground-based observations, but they make it possible to solve new problems in hydrological and meteorological monitoring and forecasting. In particular, satellite, airborne, or radar observations may be used to increase the spatial and temporal resolution of hydrometeorological observations. Especially promising is the conjunctive use of remote sensing data, ground-based observations, and the output of hydrodynamic weather models, which can significantly increase both the accuracy and the lead time of forecasts of hazardous hydrometeorological phenomena. Modern technologies for monitoring and forecasting such phenomena on the basis of conjunctive use of satellite, airborne, and ground-based observations, together with the output of hydrodynamic weather models, are considered. It is noted that an important and promising monitoring method is bioindication: observing the response of the biota to external influences and the behavior of animals that can sense impending natural disasters. Implementing the described approaches can significantly reduce both the damage caused by particular hazardous hydrological and meteorological phenomena and the overall hydrometeorological vulnerability of various facilities and of the economy of the Russian Federation as a whole.

  17. Large deflection of clamped circular plate and accuracy of its approximate analytical solutions

    NASA Astrophysics Data System (ADS)

    Zhang, Yin

    2016-02-01

    A different set of governing equations for the large deflection of plates is derived by the principle of virtual work (PVW), which also leads to a different set of boundary conditions. Boundary conditions play an important role in determining the computational accuracy of the large deflection of plates. Our boundary conditions are shown to be more appropriate by analyzing how they differ from the previous ones. The accuracy of approximate analytical solutions is important to bulge/blister tests and to the application of various sensors with plate structures. Different approximate analytical solutions are presented, and their accuracies are evaluated by comparing them with the numerical results. The error sources are also analyzed. A new approximate analytical solution is proposed and shown to be a better approximation. The approximate analytical solution offers a much simpler and more direct framework for studying the plate-membrane transition behavior of deflection than the previous approaches of complex numerical integration.

  18. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000-digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
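
    Although the paper's parallel, arbitrary-precision implementation is far more elaborate, quadrature rules with the stated endpoint-singularity tolerance belong to the double-exponential (tanh-sinh) family, which can be sketched in ordinary double precision; the step size and truncation point below are illustrative choices, not the authors' parameters:

```python
import math

def tanh_sinh(f, a, b, h=0.05, t_max=4.0):
    """Tanh-sinh ('double-exponential') quadrature of f over (a, b).
    The substitution x = tanh((pi/2) * sinh(t)) pushes the endpoints to
    t = +/- infinity so fast that integrable endpoint singularities do
    not spoil convergence."""
    c, d = (b - a) / 2.0, (b + a) / 2.0      # affine map (-1, 1) -> (a, b)
    total = 0.0
    n = int(t_max / h)
    for j in range(-n, n + 1):
        t = j * h
        u = (math.pi / 2.0) * math.sinh(t)
        x = math.tanh(u)
        if abs(x) == 1.0:                    # node rounded onto an endpoint;
            continue                         # its weight is negligible anyway
        w = (math.pi / 2.0) * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(c * x + d)
    return c * h * total

# 1/sqrt(x) has an integrable singularity at x = 0; the exact value is 2.
approx = tanh_sinh(lambda x: 1.0 / math.sqrt(x), 0.0, 1.0)
print(abs(approx - 2.0) < 1e-6)  # True
```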

  19. CHARMS: The Cryogenic, High-Accuracy Refraction Measuring System

    NASA Technical Reports Server (NTRS)

    Frey, Bradley; Leviton, Douglas

    2004-01-01

    The success of numerous upcoming NASA infrared (IR) missions will rely critically on accurate knowledge of the IR refractive indices of their constituent optical components at design operating temperatures. To satisfy the demand for such data, we have built a Cryogenic, High-Accuracy Refraction Measuring System (CHARMS), which, for typical IR materials, can measure the index of refraction to an accuracy of ±5 x 10^-3. This versatile, one-of-a-kind facility can also measure refractive index over a wide range of wavelengths, from 0.105 μm in the far-ultraviolet to 6 μm in the IR, and over a wide range of temperatures, from 10 K to 100 degrees C, all with comparable accuracies. We first summarize the technical challenges we faced and the engineering solutions we developed during the construction of CHARMS. Next we present our "first light" index of refraction data for fused silica and compare our data to previously published results.

  20. The effects of boundary conditions on the steady-state response of three hypothetical ground-water systems; results and implications of numerical experiments

    USGS Publications Warehouse

    Franke, O. Lehn; Reilly, Thomas E.

    1987-01-01

    The most critical and difficult aspect of defining a groundwater system or problem for conceptual analysis or numerical simulation is the selection of boundary conditions. This report demonstrates the effects of different boundary conditions on the steady-state response of otherwise similar ground-water systems to a pumping stress. Three series of numerical experiments illustrate the behavior of three hypothetical groundwater systems that are rectangular sand prisms with the same dimensions but with different combinations of constant-head, specified-head, no-flow, and constant-flux boundary conditions. In the first series of numerical experiments, the heads and flows in all three systems are identical, as are the hydraulic conductivity and system geometry. However, when the systems are subjected to an equal stress by a pumping well in the third series, each differs significantly in its response. The highest heads (smallest drawdowns) and flows occur in the systems most constrained by constant- or specified-head boundaries. These and other observations described herein are important in steady-state calibration, which is an integral part of simulating many ground-water systems. Because the effects of boundary conditions on model response often become evident only when the system is stressed, a close match between the potential distribution in the model and that in the unstressed natural system does not guarantee that the model boundary conditions correctly represent those in the natural system. In conclusion, the boundary conditions that are selected for simulation of a ground-water system are fundamentally important to groundwater systems analysis and warrant continual reevaluation and modification as the investigation proceeds and new information and understanding are acquired.
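
    The qualitative finding, that drawdown at the well is larger when a boundary cannot supply water, can be reproduced with a toy one-dimensional analogue; this sketch is not the report's three-dimensional sand-prism experiment, just a minimal illustration (grid size, sink strength, and sweep count are arbitrary):

```python
def steady_heads(n, left, right, well, q, sweeps=30000):
    """Gauss-Seidel solution of steady 1-D groundwater flow on n nodes with a
    point sink of strength q (already scaled by dx**2 / T) at node `well`.
    A boundary is fixed-head (Dirichlet) when a number is given, or
    no-flow (Neumann, zero gradient) when None."""
    h = [0.0] * n
    for _ in range(sweeps):
        h[0] = left if left is not None else h[1]
        for i in range(1, n - 1):
            src = q if i == well else 0.0
            h[i] = 0.5 * (h[i - 1] + h[i + 1] + src)
        h[n - 1] = right if right is not None else h[n - 2]
    return h

# Same prism, same pumping; only the right-hand boundary differs.
both_fixed = steady_heads(51, 0.0, 0.0, 25, -0.1)    # constant head both ends
one_no_flow = steady_heads(51, 0.0, None, 25, -0.1)  # right end impermeable
# Drawdown at the well is larger when one boundary cannot supply water:
print(one_no_flow[25] < both_fixed[25] < 0.0)  # True
```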

  1. A Simple Strategy to Mitigate the Aliasing Effect in X-band Marine Radar Data: Numerical Results for a 2D Case

    PubMed Central

    Serafino, Francesco; Lugni, Claudio; Nieto Borge, Josè Carlos; Soldovieri, Francesco

    2011-01-01

    For moderate and high values of the sea surface current speed, an aliasing phenomenon, due to under-sampling in the time domain, can strongly affect the reconstruction of the sea surface elevation derived from X-band radar images. Here, we propose a de-aliasing strategy that exploits the physical information provided by the dispersion law for gravity waves. In particular, we adopt simplifying hypotheses, and numerical tests with synthetic data are presented to demonstrate the effectiveness of the proposed method. PMID:22346616
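
    A rough sketch of the under-sampling mechanism, assuming deep-water gravity-wave dispersion Doppler-shifted by the surface current (the wavenumber, current speed, and antenna period below are hypothetical, not values from the paper):

```python
import math

g = 9.81  # gravitational acceleration [m/s^2]

def deep_water_omega(k, u=0.0):
    """Angular frequency [rad/s] of a deep-water gravity wave of wavenumber
    k [rad/m], Doppler-shifted by a surface current u [m/s] along the wave
    direction: omega = sqrt(g * k) + k * u."""
    return math.sqrt(g * k) + k * u

def aliased_omega(omega, dt):
    """Fold a true angular frequency into the Nyquist band (-pi/dt, pi/dt],
    i.e. the frequency an under-sampled time series would actually show."""
    omega_s = 2.0 * math.pi / dt           # sampling angular frequency
    folded = math.fmod(omega, omega_s)
    if folded > omega_s / 2.0:
        folded -= omega_s
    return folded

# With an antenna period of ~2.5 s, a short wave riding a strong current can
# exceed the Nyquist frequency and therefore appears at an aliased frequency:
omega_true = deep_water_omega(k=0.5, u=3.0)   # hypothetical k and current
print(aliased_omega(omega_true, dt=2.5) != omega_true)  # True
```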

  2. A numerical method for interface problems in elastodynamics

    NASA Technical Reports Server (NTRS)

    Mcghee, D. S.

    1984-01-01

    The numerical implementation of a formulation for a class of interface problems in elastodynamics is discussed. This formulation combines the use of the finite element and boundary integral methods to represent the interior and the exterior regions, respectively. In particular, the response of a semicylindrical alluvial valley in a homogeneous halfspace to incident antiplane SH waves is considered to determine the accuracy and convergence of the numerical procedure. Numerical results are obtained for several combinations of the incidence angle, frequency of excitation, and relative stiffness between the inclusion and the surrounding halfspace. The results tend to confirm the theoretical estimates that the convergence is of order H^2 for the piecewise linear elements used. It was also observed that the accuracy decreases as the frequency of excitation increases or as the relative stiffness of the inclusion decreases.

  3. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  4. A novel method to decrease electric field and SAR using an external high dielectric sleeve at 3 T head MRI: numerical and experimental results.

    PubMed

    Park, Bu S; Rajan, Sunder S; Guag, Joshua W; Angelone, Leonardo M

    2015-04-01

    Materials with high dielectric constant (HDC) have been used in high field MRI to decrease specific absorption rate (SAR), increase magnetic field intensity, and increase signal-to-noise ratio. In previous studies, the HDC materials were placed inside the RF coil decreasing the space available. This study describes an alternative approach that considers an HDC-based sleeve placed outside the RF coil. The effects of an HDC on the electromagnetic (EM) field were studied using numerical simulations with a coil unloaded and loaded with a human head model. In addition, experimental EM measurements at 128 MHz were performed inside a custom-made head coil, fitted with a distilled water sleeve. The numerical simulations showed up to 40% decrease in maximum 10 g-avg. SAR on the surface of the head model with an HDC material of barium titanate. Experimental measurements also showed up to 20% decrease of maximum electric field using an HDC material of distilled water. The proposed method can be incorporated in the design of high field transmit RF coils.

  5. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  6. Hindi Numerals.

    ERIC Educational Resources Information Center

    Bright, William

    In most languages encountered by linguists, the numerals, considered as a paradigmatic set, constitute a morpho-syntactic problem of only moderate complexity. The Indo-Aryan language family of North India, however, presents a curious contrast. The relatively regular numeral system of Sanskrit, as it has developed historically into the modern…

  7. Numerical experiments in homogeneous turbulence

    NASA Technical Reports Server (NTRS)

    Rogallo, R. S.

    1981-01-01

    The direct simulation methods developed by Orszag and Patterson (1972) for isotropic turbulence were extended to homogeneous turbulence in an incompressible fluid subjected to uniform deformation or rotation. The results of simulations for irrotational strain (plane and axisymmetric), shear, rotation, and relaxation toward isotropy following axisymmetric strain are compared with linear theory and experimental data. Emphasis is placed on the shear flow because of its importance and because of the availability of accurate and detailed experimental data. The computed results are used to assess the accuracy of two popular models used in the closure of the Reynolds-stress equations. Data from a variety of the computed fields and the details of the numerical methods used in the simulation are also presented.

  8. Numerical simulation of conservation laws

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; To, Wai-Ming

    1992-01-01

    A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.

  9. Measurement accuracies in band-limited extrapolation

    NASA Technical Reports Server (NTRS)

    Kritikos, H. N.

    1982-01-01

    The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation, the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval L/b <= |x| <= L, b > 1, the signal must be known within an error e_N given by e_N^2 ≈ (1/4)(2kL')^3 (eL/(8bL'))^(2kL'), where L is the physical aperture, L' is the extrapolated aperture, and k = 2π/λ.

  10. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  11. Fragmentation functions beyond fixed order accuracy

    NASA Astrophysics Data System (ADS)

    Anderle, Daniele P.; Kaufmann, Tom; Stratmann, Marco; Ringer, Felix

    2017-03-01

    We give a detailed account of the phenomenology of all-order resummations of logarithmically enhanced contributions at small momentum fraction of the observed hadron in semi-inclusive electron-positron annihilation and the timelike scale evolution of parton-to-hadron fragmentation functions. The formalism to perform resummations in Mellin moment space is briefly reviewed, and all relevant expressions up to next-to-next-to-leading logarithmic order are derived, including their explicit dependence on the factorization and renormalization scales. We discuss the details pertinent to a proper numerical implementation of the resummed results comprising an iterative solution to the timelike evolution equations, the matching to known fixed-order expressions, and the choice of the contour in the Mellin inverse transformation. First extractions of parton-to-pion fragmentation functions from semi-inclusive annihilation data are performed at different logarithmic orders of the resummations in order to estimate their phenomenological relevance. To this end, we compare our results to corresponding fits up to fixed, next-to-next-to-leading order accuracy and study the residual dependence on the factorization scale in each case.

  12. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
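
    A minimal sketch of the two descriptive accuracy metrics named above, root-mean-square error and a Theil inequality coefficient (the U1 variant is assumed here), applied to hypothetical consumption series:

```python
import math

def rmse(actual, forecast):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def theil_u1(actual, forecast):
    """Theil's U1 inequality coefficient: 0 is a perfect forecast, 1 the worst."""
    den = (math.sqrt(sum(a * a for a in actual) / len(actual))
           + math.sqrt(sum(f * f for f in forecast) / len(forecast)))
    return rmse(actual, forecast) / den

# Hypothetical monthly consumption (actual) vs. a model and a random walk:
actual = [100, 104, 103, 108, 110]
model  = [101, 103, 105, 107, 111]
walk   = [98, 100, 104, 103, 108]   # naive benchmark: last observed value
print(theil_u1(actual, model) < theil_u1(actual, walk))  # True: model wins here
```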

  13. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer-aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one a synthesized prostate image with low intensity gradients, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters and its deformation was solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single-resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged over the lung only). For the low-gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, the best performance on the low-gradient prostate (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) was achieved using five grid nodes in each direction; adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registration depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms.
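
    The error-quantification step, comparing DIR-recovered displacement vectors against the finite-element benchmark field, amounts to a mean Euclidean distance; a toy sketch with made-up 3-D displacements in millimetres:

```python
import math

def mean_displacement_error(benchmark, registration):
    """Mean Euclidean distance [mm] between a benchmark displacement field
    (e.g. from a finite element model) and a DIR-recovered field; both are
    sequences of (x, y, z) displacement vectors at matching voxels."""
    total = 0.0
    for (bx, by, bz), (rx, ry, rz) in zip(benchmark, registration):
        total += math.sqrt((bx - rx) ** 2 + (by - ry) ** 2 + (bz - rz) ** 2)
    return total / len(benchmark)

# Toy fields: the registration recovers each displacement up to a 1 mm
# offset in x, so the mean error should be exactly 1.0 mm.
bench = [(2.0, 0.0, 0.0), (0.0, 3.0, 0.0), (0.0, 0.0, 4.0)]
reg   = [(3.0, 0.0, 0.0), (1.0, 3.0, 0.0), (1.0, 0.0, 4.0)]
print(mean_displacement_error(bench, reg))  # 1.0
```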

  14. Numerical processing efficiency improved in children using mental abacus: ERP evidence utilizing a numerical Stroop task

    PubMed Central

    Yao, Yuan; Du, Fenglei; Wang, Chunjie; Liu, Yuqiu; Weng, Jian; Chen, Feiyan

    2015-01-01

    This study examined whether long-term abacus-based mental calculation (AMC) training improved numerical processing efficiency and at what stage of information processing the effect appeared. Thirty-three children participated in the study and were randomly assigned to two groups at primary school entry, matched for age, gender and IQ. All children went through the same curriculum except that the abacus group received 2 h/week of AMC training, while the control group did traditional numerical practice for a similar amount of time. After 2 years of training, they were tested with a numerical Stroop task. Electroencephalographic (EEG) and event-related potential (ERP) recording techniques were used to monitor the temporal dynamics during the task. Children were required to determine the numerical magnitude (NC task) or the physical size (PC task) of two numbers presented simultaneously. In the NC task, the AMC group showed faster response times but similar accuracy compared to the control group. In the PC task, the two groups exhibited the same speed and accuracy. The saliency of numerical information relative to physical information was greater in the AMC group. With regard to the ERP results, the AMC group displayed congruity effects in both the earlier (N1) and later (N2 and LPC (late positive component)) time windows, while the control group only displayed congruity effects for the LPC. In the left parietal region, LPC amplitudes were larger for the AMC than the control group. Individual differences in LPC amplitudes over the left parietal area showed a positive correlation with RTs in the NC task in both the congruent and neutral conditions. After controlling for the N2 amplitude, this correlation also became significant in the incongruent condition. Our results suggest that AMC training can strengthen the relationship between symbolic representation and numerical magnitude so that numerical information processing becomes quicker and automatic in AMC children. PMID:26042012

  15. Numerical discrimination is mediated by neural coding variation.

    PubMed

    Prather, Richard W

    2014-12-01

    One foundation of numerical cognition is that discrimination accuracy depends on the proportional difference between compared values, closely following the Weber-Fechner discrimination law. Performance in non-symbolic numerical discrimination is used to calculate an individual's Weber fraction, a measure of the relative acuity of the approximate number system (ANS). The individual Weber fraction is linked to symbolic arithmetic skills and to long-term educational and economic outcomes. The present findings suggest that numerical discrimination performance depends on both the proportional difference and the absolute value, deviating from the Weber-Fechner law. The effect of absolute value is predicted via a computational model based on the neural correlates of numerical perception, specifically on the assumption that neural coding "noise" varies across the corresponding numerosities. A computational model using firing-rate variation based on neural data demonstrates a significant interaction between ratio difference and absolute value in predicting numerical discriminability. We find that both behavioral and computational data show an interaction between ratio difference and absolute value on numerical discrimination accuracy. These results suggest a reexamination of the mechanisms involved in non-symbolic numerical discrimination, of how researchers may measure individual performance, and of what outcomes performance may predict.
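
    The Weber-Fechner baseline the paper argues against can be sketched with a standard linear-coding ANS model (a common Halberda-style formulation, assumed here rather than taken from the paper), in which a single Weber fraction w predicts accuracy from the two numerosities:

```python
import math

def p_correct(n1, n2, w):
    """Predicted proportion correct for discriminating numerosities n1 and n2
    under a standard linear ANS model with Weber fraction w:
    P = Phi(|n1 - n2| / (w * sqrt(n1**2 + n2**2))),
    where Phi is the standard normal CDF."""
    z = abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Same 3:4 ratio at two absolute sizes: a pure Weber-Fechner account predicts
# identical accuracy; the paper's point is that real data deviate from this.
print(p_correct(6, 8, 0.25) == p_correct(12, 16, 0.25))  # True
```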

  16. Numerical impact simulation of gradually increased kinetic energy transfer has the potential to break up folded protein structures resulting in cytotoxic brain tissue edema.

    PubMed

    von Holst, Hans; Li, Xiaogai

    2013-07-01

    Although the treatment and outcomes of traumatic brain injury (TBI) have improved, there is still a substantial lack of understanding of its mechanisms. Numerical simulation of the impact can shed further light on the site and mechanism of action. A finite element model of the human head and brain tissue was used to simulate TBI. The consequences of gradually increased kinetic energy transfer were analyzed by evaluating the impact intracranial pressure (ICP), strain level, and their potential influence on binding forces in folded protein structures. The gradually increased kinetic energy was found to have the potential to break van der Waals bonds in all impacts, and hydrogen bonds at simulated impacts of 6 m/s and higher, thereby superseding the energy in folded protein structures. Further, impacts below 6 m/s showed no or only a very slight increase in impact ICP and strain levels, whereas impacts of 6 m/s or higher showed a gradual increase of the impact ICP and strain levels, reaching over 1000 kPa and over 30%, respectively. The present simulation study shows that the free kinetic energy transfer, impact ICP, and strain levels all have the potential to initiate cytotoxic brain tissue edema by unfolding protein structures. The definitions of mild, moderate, and severe TBI should thus be looked upon as the same condition, separated only by the gradual severity of impact.

  17. Simultaneous Raman-Rayleigh-LIF Measurements and Numerical Modeling Results of a Lifted H2/N2 Turbulent Jet Flame in a Vitiated Coflow

    NASA Technical Reports Server (NTRS)

    Cabra, R.; Chen, J. Y.; Dibble, R. W.; Hamano, Y.; Karpetis, A. N.; Barlow, R. S.

    2002-01-01

    An experimental and numerical investigation is presented of a H2/N2 turbulent jet flame burner that has a novel vitiated coflow. The vitiated coflow emulates the recirculation region of most combustors, such as gas turbines or furnaces. Additionally, since the vitiated gases are coflowing, the burner allows for exploration of recirculation chemistry without the corresponding fluid mechanics of recirculation. Thus the vitiated coflow burner design facilitates the development of chemical kinetic combustion models without the added complexity of recirculation fluid mechanics. Scalar measurements are reported for a turbulent jet flame of H2/N2 in a coflow of combustion products from a lean (φ = 0.25) H2/air flame. The combination of laser-induced fluorescence, Rayleigh scattering, and Raman scattering is used to obtain simultaneous measurements of temperature and major species, as well as OH and NO. Laminar flame calculations with equal diffusivities do agree when the premixing and preheating that occur prior to flame stabilization are accounted for in the boundary conditions. Also presented is an exploratory PDF model that predicts the flame's axial profiles fairly well, but does not accurately predict the lift-off height.

  18. Numerical simulation for fan broadband noise prediction

    NASA Astrophysics Data System (ADS)

    Hase, Takaaki; Yamasaki, Nobuhiko; Ooishi, Tsutomu

    2011-03-01

    In order to elucidate fan broadband noise, numerical simulations of a fan operating at two different rotational speeds are carried out using the three-dimensional unsteady Reynolds-averaged Navier-Stokes (URANS) equations. The computed results are compared to experiment to estimate their accuracy and are found to show good agreement. A method is proposed to evaluate the turbulent kinetic energy in the framework of the Spalart-Allmaras one-equation turbulence model. From the calculation results, the turbulent kinetic energy is visualized as the turbulence of the flow that generates the broadband noise, and its noise sources are identified.

  19. Numerical Analysis Using WRF-SBM for the Cloud Microphysical Structures in the C3VP Field Campaign: Impacts of Supercooled Droplets and Resultant Riming on Snow Microphysics

    NASA Technical Reports Server (NTRS)

    Iguchi, Takamichi; Matsui, Toshihisa; Shi, Jainn J.; Tao, Wei-Kuo; Khain, Alexander P.; Hao, Arthur; Cifelli, Robert; Heymsfield, Andrew; Tokay, Ali

    2012-01-01

    Two distinct snowfall events are observed over the region near the Great Lakes during 19-23 January 2007 under the intensive measurement campaign of the Canadian CloudSat/CALIPSO validation project (C3VP). These events are numerically investigated using the Weather Research and Forecasting model coupled with a spectral bin microphysics (WRF-SBM) scheme that allows a smooth calculation of riming process by predicting the rimed mass fraction on snow aggregates. The fundamental structures of the observed two snowfall systems are distinctly characterized by a localized intense lake-effect snowstorm in one case and a widely distributed moderate snowfall by the synoptic-scale system in another case. Furthermore, the observed microphysical structures are distinguished by differences in bulk density of solid-phase particles, which are probably linked to the presence or absence of supercooled droplets. The WRF-SBM coupled with Goddard Satellite Data Simulator Unit (G-SDSU) has successfully simulated these distinctive structures in the three-dimensional weather prediction run with a horizontal resolution of 1 km. In particular, riming on snow aggregates by supercooled droplets is considered to be of importance in reproducing the specialized microphysical structures in the case studies. Additional sensitivity tests for the lake-effect snowstorm case are conducted utilizing different planetary boundary layer (PBL) models or the same SBM but without the riming process. The PBL process has a large impact on determining the cloud microphysical structure of the lake-effect snowstorm as well as the surface precipitation pattern, whereas the riming process has little influence on the surface precipitation because of the small height of the system.

  20. Radiocarbon dating accuracy improved

    NASA Astrophysics Data System (ADS)

    Scientists have extended the accuracy of carbon-14 (14C) dating by correlating dates older than 8,000 years with uranium-thorium dates that span from 8,000 to 30,000 years before present (ybp, present = 1950). Edouard Bard, Bruno Hamelin, Richard Fairbanks and Alan Zindler, working at Columbia University's Lamont-Doherty Geological Observatory, dated corals from reefs off Barbados using both 14C and uranium-234/thorium-230 by thermal ionization mass spectrometry techniques. They found that the two age data sets deviated in a regular way, allowing the scientists to correlate the two sets of ages. The 14C dates were consistently younger than those determined by uranium-thorium, and the discrepancy increased to about 3,500 years at 20,000 ybp.

  1. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies of the infrared polarization properties of materials covering a broad spectral range have been scarce due to a lack of available instrumentation. This dissertation aims to fill the gap through the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to previous-generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter, and numerical simulations are conducted to study its effectiveness. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on penicillin and pine pollen are also presented.

  2. Accuracy of a new patch pump based on a microelectromechanical system (MEMS) compared to other commercially available insulin pumps: results of the first in vitro and in vivo studies.

    PubMed

    Borot, Sophie; Franc, Sylvia; Cristante, Justine; Penfornis, Alfred; Benhamou, Pierre-Yves; Guerci, Bruno; Hanaire, Hélène; Renard, Eric; Reznik, Yves; Simon, Chantal; Charpentier, Guillaume

    2014-11-01

    The JewelPUMP™ (JP) is a new patch pump based on a microelectromechanical system that operates without any plunger. The study aimed to evaluate the infusion accuracy of the JP in vitro and in vivo. For the in vitro studies, commercially available pumps meeting the ISO standard were compared to the JP: the MiniMed® Paradigm® 712 (MP), Accu-Chek® Combo (AC), OmniPod® (OP), and Animas® Vibe™ (AN). Pump accuracy was measured over 24 hours using a continuous microweighing method at 0.1 and 1 IU/h basal rates. The occlusion alarm threshold was measured after a catheter occlusion. The JP, filled with physiological saline, was then tested in 13 patients with type 1 diabetes simultaneously with their own pump for 2 days. The weight difference was used to calculate the infused insulin volume. The JP showed a reduced absolute median error rate in vitro over a 15-minute observation window compared to the other pumps (1 IU/h): ±1.02% (JP) vs ±1.60% (AN), ±1.66% (AC), ±2.22% (MP), and ±4.63% (OP), P < .0001; however, there was no difference over 24 hours. At 0.5 IU/h, the JP detected an occlusion earlier than the other pumps: 21 (19; 25) minutes vs 90 (85; 95), 58 (42; 74), and 143 (132; 218) minutes (AN, AC, MP), P < .05 vs AN and MP. In patients, the 24-hour flow error was not significantly different between the JP and the usual pumps (-2.2 ± 5.6% vs -0.37 ± 4.0%, P = .25). The JP was found to be easier to wear than conventional pumps. The JP is more precise over a short time period, more sensitive to catheter occlusion, well accepted by patients, and consequently of potential interest for a closed-loop insulin delivery system.
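    The continuous-microweighing protocol above reduces to simple arithmetic: convert the weight gained by the collection apparatus into delivered insulin units, then compare each observation window against the nominal basal rate. A minimal sketch of that bookkeeping (the density, insulin concentration, noise level, and window length are illustrative assumptions, not values from the study):

    ```python
    import numpy as np

    # Illustrative constants, not taken from the study.
    DENSITY_G_PER_ML = 1.0   # assume ~1 g/mL for the infused fluid
    IU_PER_ML = 100.0        # U-100 insulin: 100 IU per mL

    def window_flow_errors(times_min, mass_g, basal_iu_per_h, window_min=15):
        """Percent error of delivered volume per window vs. the nominal rate."""
        expected_iu = basal_iu_per_h / 60.0 * window_min
        errors = []
        t = times_min[0]
        while t + window_min <= times_min[-1]:
            m0 = np.interp(t, times_min, mass_g)
            m1 = np.interp(t + window_min, times_min, mass_g)
            delivered_iu = (m1 - m0) / DENSITY_G_PER_ML * IU_PER_ML
            errors.append(100.0 * (delivered_iu - expected_iu) / expected_iu)
            t += window_min
        return np.array(errors)

    # Simulated 2-hour weight record of a pump nominally delivering 1 IU/h,
    # with small random delivery noise.
    rng = np.random.default_rng(1)
    t = np.arange(121.0)                              # minutes
    rate_g_per_min = (1.0 / 60.0) / IU_PER_ML * DENSITY_G_PER_ML
    inc = rate_g_per_min * (1 + 0.02 * rng.standard_normal(120))
    mass = np.concatenate([[0.0], np.cumsum(inc)])
    errs = window_flow_errors(t, mass, basal_iu_per_h=1.0)
    print(np.median(np.abs(errs)))
    ```

    Averaging over longer windows is what drives the 24-hour errors of all pumps toward agreement, as the study reports.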

  3. Parametric Characterization of SGP4 Theory and TLE Positional Accuracy

    NASA Astrophysics Data System (ADS)

    Oltrogge, D.; Ramrath, J.

    2014-09-01

    Two-Line Elements, or TLEs, contain mean element state vectors compatible with General Perturbations (GP) singly-averaged semi-analytic orbit theory. This theory, embodied in the SGP4 orbit propagator, provides sufficient accuracy for some (but perhaps not all) orbit operations and SSA tasks. For more demanding tasks, higher accuracy orbit and force model approaches (i.e. Special Perturbations numerical integration or SP) may be required. In recent times, the suitability of TLEs or GP theory for any SSA analysis has been increasingly questioned. Meanwhile, SP is touted as being of high quality and well-suited for most, if not all, SSA applications. Yet the lack of truth or well-known reference orbits that haven't already been adopted for radar and optical sensor network calibration has typically prevented a truly unbiased assessment of such assertions. To gain better insight into the practical limits of applicability for TLEs, SGP4 and the underlying GP theory, the native SGP4 accuracy is parametrically examined for the statistically-significant range of RSO orbit inclinations experienced as a function of all orbit altitudes from LEO through GEO disposal altitude. For each orbit altitude, reference or truth orbits were generated using full force modeling, time-varying space weather, and AGI's HPOP numerical integration orbit propagator. Then, TLEs were optimally fit to these truth orbits. The resulting TLEs were then propagated and positionally differenced with the truth orbits to determine how well the GP theory was able to fit the truth orbits. Resultant statistics characterizing these empirically-derived accuracies are provided. This TLE fit process of truth orbits was intentionally designed to be similar to the JSpOC process operationally used to generate Enhanced GP TLEs for debris objects. This allows us to draw additional conclusions of the expected accuracies of EGP TLEs.
In the real world, Orbit Determination (OD) programs aren't provided with dense optical

  4. Numerical estimation of densities

    NASA Astrophysics Data System (ADS)

    Ascasibar, Y.; Binney, J.

    2005-01-01

    We present a novel technique, dubbed FIESTAS, to estimate the underlying density field from a discrete set of sample points in an arbitrary multidimensional space. FIESTAS assigns a volume to each point by means of a binary tree. Density is then computed by integrating over an adaptive kernel. As a first test, we construct several Monte Carlo realizations of a Hernquist profile and recover the particle density in both real and phase space. At a given point, Poisson noise causes the unsmoothed estimates to fluctuate by a factor of ~2 regardless of the number of particles. This spread can be reduced to about 0.1 dex (~26 per cent) by our smoothing procedure. The density range over which the estimates are unbiased widens as the particle number increases. Our tests show that real-space densities obtained with an SPH kernel are significantly more biased than those yielded by FIESTAS. In phase space, about 10 times more particles are required in order to achieve a similar accuracy. As a second application we have estimated phase-space densities in a dark matter halo from a cosmological simulation. We confirm the results of Arad, Dekel & Klypin that the highest values of f are all associated with substructure rather than the main halo, and that the volume function v(f) ∝ f^(-2.5) over about four orders of magnitude in f. We show that a modified version of the toy model proposed by Arad et al. explains this result and suggests that the departures of v(f) from power-law form are not mere numerical artefacts. We conclude that our algorithm accurately measures the phase-space density up to the limit where discreteness effects render the simulation itself unreliable. Computationally, FIESTAS is orders of magnitude faster than the method based on Delaunay tessellation that Arad et al. employed, making it practicable to recover smoothed density estimates for sets of 10^9 points in six dimensions.
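    FIESTAS itself is not reproduced here, but the underlying idea of estimating density from the space occupied by each sample's neighbours can be illustrated with a plain k-nearest-neighbour estimator. The function below is a generic sketch (the estimator, sample size, and k are my choices, not the authors'):

    ```python
    import numpy as np
    from math import gamma, pi

    def knn_density(points, query, k=10):
        """Estimate density at each query point as rho ~ k / (N * V_k),
        where V_k is the volume of the d-ball reaching the k-th nearest
        sample. Brute-force distances, for illustration only."""
        n, d = points.shape
        unit_ball = pi ** (d / 2) / gamma(d / 2 + 1)   # volume of the unit d-ball
        dens = np.empty(len(query))
        for i, q in enumerate(query):
            r = np.sort(np.linalg.norm(points - q, axis=1))[k - 1]
            dens[i] = k / (n * unit_ball * r ** d)
        return dens

    # 4000 points drawn uniformly on the unit square: true density is 1.
    rng = np.random.default_rng(0)
    sample = rng.uniform(0.0, 1.0, size=(4000, 2))
    est = knn_density(sample, np.array([[0.5, 0.5]]), k=50)
    ```

    Just as the abstract describes, the raw estimate scatters with Poisson noise (relative spread roughly k^(-1/2)); increasing k, or smoothing over an adaptive kernel as FIESTAS does, trades that noise against bias.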

  5. Analysis of observations and results of numerical modeling of meteorological parameters and atmospheric air pollution under weak wind conditions in the city of Tomsk

    NASA Astrophysics Data System (ADS)

    Starchenko, Alexander V.; Bart, Andrey A.; Kizhner, Lyubov I.; Barashkova, Nadezhda K.; Volkova, Marina A.; Zhuravlev, Georgi G.; Kuzhevskaya, Irina V.; Terenteva, Maria V.

    2015-11-01

    The results of calculation of meteorological parameters using a meteorological model, TSU-NM3, as well as prediction of some indices of atmospheric air pollution in the city of Tomsk obtained from a mesoscale photochemical model are presented. The calculation results are compared with observational data on the atmosphere and pollutants.

  6. On estimating gravity anomalies from gradiometer data. [by numerical analysis

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Garza-Robles, R.

    1976-01-01

    The Gravsat-gradiometer mission involves flying a gradiometer on a gravity satellite (Gravsat) in a low, polar, circular orbit. Results are presented of a numerical simulation of the mission which demonstrates that, if the satellite is in a 250-km orbit, 3- and 5-degree gravity anomalies may be estimated with accuracies of 0.03 and 0.01 mm/square second (3 and 1 mgal), respectively. At an altitude of 350 km, the results are 0.07 and 0.025 mm/square second (7 and 2.5 mgal), respectively. These results assume a rotating-type gradiometer with an accuracy of 0.1 Eötvös unit. The results can readily be scaled to reflect another accuracy level.

  7. Evaluation of registration accuracy between Sentinel-2 and Landsat 8

    NASA Astrophysics Data System (ADS)

    Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia

    2016-08-01

    Since June 2015, Sentinel-2A has been delivering high resolution optical images (ground resolution up to 10 meters) to provide global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high resolution multi-temporal information is required; these include agriculture, water bodies, and natural hazards, to name a few. However, the combined use of multi-temporal images requires accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed all around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as the geometric model. Results demonstrate that sub-pixel accuracy was achieved between the 10 m resolution Sentinel-2 band 3 and the 15 m resolution panchromatic Landsat band 8.
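    A common digital correlation technique for measuring the offset between two nominally co-registered images is phase correlation. The sketch below recovers an integer pixel shift from the cross-power spectrum; the paper's pipeline additionally fits an affine model and reaches sub-pixel accuracy, which this sketch does not attempt:

    ```python
    import numpy as np

    def phase_correlation_shift(a, b):
        """Estimate the integer pixel shift mapping image b onto image a
        from the peak of the normalized cross-power spectrum."""
        F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        F /= np.abs(F) + 1e-12            # keep only the phase
        corr = np.fft.ifft2(F).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # map peaks in the far half of the array back to negative shifts
        if dy > a.shape[0] // 2:
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return int(dy), int(dx)

    # Synthetic check: a random image shifted by (3, -5) pixels.
    rng = np.random.default_rng(2)
    img = rng.standard_normal((128, 128))
    shifted = np.roll(img, (3, -5), axis=(0, 1))
    print(phase_correlation_shift(shifted, img))
    ```

    In practice the correlation peak is interpolated to sub-pixel precision and many such local measurements feed the affine model fit.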

  8. Groves model accuracy study

    NASA Astrophysics Data System (ADS)

    Peterson, Matthew C.

    1991-08-01

    The United States Air Force Environmental Technical Applications Center (USAFETAC) was tasked to review the scientific literature for studies of the Groves Neutral Density Climatology Model and compare the Groves Model with others in the 30-60 km range. The tasking included a request to investigate the merits of comparing accuracy of the Groves Model to rocketsonde data. USAFETAC analysts found the Groves Model to be state of the art for middle-atmospheric climatological models. In reviewing previous comparisons with other models and with space shuttle-derived atmospheric densities, good density vs altitude agreement was found in almost all cases. A simple technique involving comparison of the model with range reference atmospheres was found to be the most economical way to compare the Groves Model with rocketsonde data; an example of this type is provided. The Groves 85 Model is used routinely in USAFETAC's Improved Point Analysis Model (IPAM). To create this model, Dr. Gerald Vann Groves produced tabulations of atmospheric density based on data derived from satellite observations and modified by rocketsonde observations. Neutral Density as presented here refers to the monthly mean density in 10-degree latitude bands as a function of altitude. The Groves 85 Model zonal mean density tabulations are given in their entirety.

  9. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
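    For reference, the three rules named above can be stated compactly in code; this is the standard textbook formulation, not material from the article:

    ```python
    import numpy as np

    def midpoint(f, a, b, n):
        """Midpoint rule on n equal subintervals."""
        x = a + (np.arange(n) + 0.5) * (b - a) / n
        return np.sum(f(x)) * (b - a) / n

    def trapezoid(f, a, b, n):
        """Trapezoidal rule on n equal subintervals."""
        y = f(np.linspace(a, b, n + 1))
        return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

    def simpson(f, a, b, n):
        """Simpson's rule; n must be even."""
        y = f(np.linspace(a, b, n + 1))
        h = (b - a) / n
        return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

    # Test integral: int_0^1 e^x dx = e - 1.
    exact = np.e - 1
    for rule in (midpoint, trapezoid, simpson):
        print(rule.__name__, abs(rule(np.exp, 0, 1, 16) - exact))
    ```

    With the same 16 subintervals, the error drops from O(h²) for the midpoint and trapezoidal rules to O(h⁴) for Simpson's rule, which is the point such derivations make precise.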

  10. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
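    KODAMA's core idea, iteratively accepting label changes that raise cross-validated predictive accuracy, can be sketched in a toy form. The code below is a deliberately simplified illustration of the principle (single-label flips, a 1-NN classifier, leave-one-out cross-validation), not the published algorithm:

    ```python
    import numpy as np

    def loo_1nn_accuracy(X, labels):
        """Leave-one-out cross-validated accuracy of a 1-NN classifier."""
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return float((labels[d.argmin(axis=1)] == labels).mean())

    def accuracy_maximization(X, n_iter=300, seed=0):
        """Start from random labels and accept single-label flips whenever
        they do not lower the cross-validated accuracy (greedy Monte Carlo)."""
        rng = np.random.default_rng(seed)
        labels = rng.integers(0, 2, len(X))
        best = loo_1nn_accuracy(X, labels)
        for _ in range(n_iter):
            cand = labels.copy()
            cand[rng.integers(len(X))] ^= 1
            acc = loo_1nn_accuracy(X, cand)
            if acc >= best:
                labels, best = cand, acc
        return labels, best

    # Two well-separated Gaussian blobs: maximizing CV accuracy tends to
    # make the labels constant within each blob, revealing the structure.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (15, 2)), rng.normal(5, 0.3, (15, 2))])
    labels, acc = accuracy_maximization(X)
    ```

    The accepted accuracy is non-decreasing by construction; the published method wraps this kind of loop in repeated cross-validation rounds and uses the resulting proximities for feature extraction.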

  11. Overview of existing algorithms for emotion classification. Uncertainties in evaluations of accuracies.

    NASA Astrophysics Data System (ADS)

    Avetisyan, H.; Bruna, O.; Holub, J.

    2016-11-01

    Numerous techniques and algorithms are dedicated to extracting emotions from input data. Our investigation found that emotion-detection approaches can be classified into three types: keyword-based/lexical-based, learning-based, and hybrid. The most commonly used techniques, such as the keyword-spotting method, Support Vector Machines, the Naïve Bayes Classifier, Hidden Markov Models, and hybrid algorithms, have achieved impressive results in this sphere and can reach more than 90% classification accuracy.
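    The keyword-spotting method mentioned above is the simplest of these techniques; a minimal sketch with a toy lexicon (the word lists are invented for illustration):

    ```python
    # Toy emotion lexicon, invented for illustration only.
    LEXICON = {
        "joy":     {"happy", "delighted", "great", "love", "wonderful"},
        "anger":   {"angry", "furious", "hate", "annoyed", "outraged"},
        "sadness": {"sad", "unhappy", "miserable", "crying", "grief"},
    }

    def spot_emotion(text):
        """Return the emotion whose keywords match the text most often."""
        words = set(text.lower().split())
        scores = {emo: len(words & kws) for emo, kws in LEXICON.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "neutral"

    print(spot_emotion("I am so happy and delighted today"))
    ```

    Its weaknesses (negation, ambiguity, coverage of the lexicon) are exactly what learning-based and hybrid approaches address, at the cost of labelled training data.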

  12. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
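    The parameterized runup model described here takes offshore wave height, wave period, and beach slope as inputs, in the empirical form published by Stockdon et al. (2006). The sketch below uses the coefficients from that paper, but should be read as an illustration rather than the exact model configuration evaluated in this study:

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def stockdon_runup(H0, T0, beta):
        """2% exceedance runup (m) from deep-water wave height H0 (m), peak
        period T0 (s), and foreshore beach slope beta, following the
        empirical parameterization of Stockdon et al. (2006)."""
        L0 = G * T0**2 / (2 * np.pi)                      # deep-water wavelength
        setup = 0.35 * beta * np.sqrt(H0 * L0)            # wave-induced setup
        swash = np.sqrt(H0 * L0 * (0.563 * beta**2 + 0.004))  # incident + infragravity swash
        return 1.1 * (setup + swash / 2)

    # Example: 2 m waves, 10 s period, beach slope 0.05.
    print(stockdon_runup(2.0, 10.0, 0.05))
    ```

    Because the coefficients were fit to observed storms, extrapolating to unobserved extremes is exactly where the numerical simulations in the study are used to test and extend the parameterization's validity.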

  13. Effect of the thickness variation and initial imperfection on buckling of composite cylindrical shells: Asymptotic analysis and numerical results by BOSOR4 and PANDA2

    NASA Technical Reports Server (NTRS)

    Li, Yi-Wei; Elishakoff, Isaac; Starnes, James H., Jr.; Bushnell, David

    1998-01-01

    This study is an extension of a previous investigation of the combined effect of axisymmetric thickness variation and axisymmetric initial geometric imperfection on buckling of isotropic shells under uniform axial compression. Here the anisotropic cylindrical shells are investigated by means of Koiter's energy criterion. An asymptotic formula is derived which can be used to determine the critical buckling load for composite shells with combined initial geometric imperfection and thickness variation. Results are compared with those obtained by the software packages BOSOR4 and PANDA2.

  14. Experimental and numerical investigations of internal heat transfer in an innovative trailing edge blade cooling system: stationary and rotation effects, part 1—experimental results

    NASA Astrophysics Data System (ADS)

    Beniaiche, Ahmed; Ghenaiet, Adel; Facchini, Bruno

    2017-02-01

    The aero-thermal behavior of the flow field inside a 30:1 scaled model reproducing an innovative smooth trailing edge, a shaped wedge discharge duct with one row of enlarged pedestals, has been investigated to determine the effects of rotation, inlet velocity, and blowing conditions, for Re = 20,000 and 40,000 and Ro = 0-0.23. Two configurations are presented: with a closed tip and with an open tip. A thermochromic liquid crystal technique is used to obtain local measurements of the heat transfer coefficient on the blade suction side under stationary and rotating conditions. Results are reported as detailed 2D HTC maps on the suction side surface as well as the averaged Nusselt number inside the pedestal ducts. Two correlations are proposed, for both the closed and open tip configurations, based on Re, Pr, Ro, and a new non-dimensional parameter for the position along the radial distance, to provide a reliable estimate of the averaged Nusselt number in the inter-pedestal region. Good agreement is found between predictions and experimental data, with about ±10 to ±12 % uncertainty for the simple-form correlation and about ±16 % for the complex form. The obtained results help to visualize the flow field and to evaluate the aero-thermal performance of the studied blade cooling system during the design stage.

  15. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.
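    The Sagnac correction mentioned above follows a standard formula: for a signal path expressed in the Earth-fixed rotating frame, the correction is 2ωA_z/c², where A_z is the equatorial projection of the area swept by the position vector along the path. A sketch with illustrative coordinates (the station and satellite positions below are my assumptions, not the campaign's actual geometry):

    ```python
    import numpy as np

    OMEGA_E = 7.2921151e-5    # Earth rotation rate, rad/s
    C = 299_792_458.0         # speed of light, m/s
    R_E = 6.371e6             # mean Earth radius, m (illustrative)
    R_GEO = 4.2164e7          # geostationary orbit radius, m

    def ecef(radius, lat_deg, lon_deg):
        """Spherical-Earth ECEF position vector (m)."""
        lat, lon = np.radians([lat_deg, lon_deg])
        return radius * np.array([np.cos(lat) * np.cos(lon),
                                  np.cos(lat) * np.sin(lon),
                                  np.sin(lat)])

    def sagnac_correction(path):
        """One-way Sagnac correction (s) for a signal following the ECEF
        positions in `path`: dt = (omega/c^2) * sum(x_i*y_{i+1} - y_i*x_{i+1}),
        i.e. 2*omega*A_z/c^2 with A_z the equatorial projection of the swept area."""
        dt = 0.0
        for p, q in zip(path[:-1], path[1:]):
            dt += p[0] * q[1] - p[1] * q[0]
        return OMEGA_E / C**2 * dt

    # Illustrative westward link: Washington, DC -> GEO satellite at 100 W -> Los Angeles.
    washington = ecef(R_E, 38.9, -77.0)
    satellite = ecef(R_GEO, 0.0, -100.0)
    los_angeles = ecef(R_E, 34.1, -118.2)
    dt = sagnac_correction([washington, satellite, los_angeles])
    print(dt * 1e9, "ns")
    ```

    For a transcontinental link via a geostationary satellite the correction comes out on the order of 100 ns, with its sign set by the east-west direction of the path, which is why the mobile station's GPS coordinates were logged throughout the trip.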

  16. Accuracy of perturbative master equations.

    PubMed

    Fleming, C H; Cummings, N I

    2011-03-01

    We consider open quantum systems with dynamics described by master equations that have perturbative expansions in the system-environment interaction. We show that, contrary to intuition, full-time solutions of order-2n accuracy require an order-(2n+2) master equation. We give two examples of such inaccuracies in the solutions to an order-2n master equation: order-2n inaccuracies in the steady state of the system and order-2n positivity violations. We show how these arise in a specific example for which exact solutions are available. This result has a wide-ranging impact on the validity of coupling (or friction) sensitive results derived from second-order convolutionless, Nakajima-Zwanzig, Redfield, and Born-Markov master equations.

  17. Numerical Optimization

    DTIC Science & Technology

    1992-12-01

    ABSTRACT - We consider a new method for the numerical solution both of nonlinear systems of equations and of complementarity problems. Related reports include "An Inexact Continuous Method for the Solution of Large Systems of Equations and Complementarity Problems," ... Matematica, Serie VII, Volume 9, Roma (1989), 521-543, and "A Quadratically Convergent Method for Linear Programming," Stefano Herzel, Dipartimento di Matematica "G. Castelnuovo," 00185 Roma, Italy.

  18. Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Leimkuhler, Benedict; Shang, Xiaocheng

    2016-11-01

    We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.

  19. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods that form the basis for various digital computer methods are presented, along with the numerical methods themselves. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.
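    One of the standard numerical schemes for view factors is Monte Carlo ray casting. The sketch below estimates the view factor between coaxial parallel disks and checks it against the classical closed-form solution (the geometry and sample count are chosen for illustration):

    ```python
    import numpy as np

    def mc_view_factor_disks(r1, r2, h, n=200_000, seed=0):
        """Monte Carlo view factor from disk 1 (radius r1, z=0) to a coaxial
        parallel disk 2 (radius r2, z=h): emit cosine-distributed rays from
        uniform points on disk 1 and count the fraction hitting disk 2."""
        rng = np.random.default_rng(seed)
        # uniform emission points on disk 1
        rr = r1 * np.sqrt(rng.random(n))
        aa = 2 * np.pi * rng.random(n)
        x0, y0 = rr * np.cos(aa), rr * np.sin(aa)
        # cosine-weighted directions over the upward hemisphere
        phi = 2 * np.pi * rng.random(n)
        sin_t = np.sqrt(rng.random(n))
        cos_t = np.sqrt(1.0 - sin_t**2)
        t = h / cos_t                      # ray parameter where z = h
        xh = x0 + t * sin_t * np.cos(phi)
        yh = y0 + t * sin_t * np.sin(phi)
        return np.mean(xh**2 + yh**2 <= r2**2)

    def analytic_view_factor_disks(r1, r2, h):
        """Classical closed form for coaxial parallel disks."""
        R1, R2 = r1 / h, r2 / h
        S = 1 + (1 + R2**2) / R1**2
        return (S - np.sqrt(S**2 - 4 * (R2 / R1)**2)) / 2

    mc = mc_view_factor_disks(1.0, 1.0, 1.0)
    exact = analytic_view_factor_disks(1.0, 1.0, 1.0)
    print(mc, exact)
    ```

    The trade-off the abstract evaluates is visible here: the Monte Carlo error shrinks only as n^(-1/2), so accuracy requirements translate directly into computation time.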

  20. The role of indigenous traditional counting systems in children's development of numerical cognition: results from a study in Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Matang, Rex A. S.; Owens, Kay

    2014-09-01

    The Government of Papua New Guinea undertook a significant step in developing curriculum reform policy that promoted the use of Indigenous knowledge systems in teaching formal school subjects in any of the country's 800-plus Indigenous languages. The implementation of the Elementary Cultural Mathematics Syllabus is in line with the above curriculum emphasis. Given the aims of the reform, the research reported here investigated the influence of children's own mother tongue (Tok Ples) and traditional counting systems on their development of early number knowledge formally taught in schools. The study involved 272 school children from 22 elementary schools in four provinces. Each child participated in a task-based assessment interview focusing on eight task groups relating to early number knowledge. The results obtained indicate that, on average, children learning their traditional counting systems in their own language spent shorter time and made fewer mistakes in solving each task compared to those taught without Tok Ples (using English and/or the lingua franca, Tok Pisin). Possible reasons accounting for these differences are also discussed.

  1. Statistical methods applied to the study of opinion formation models: a brief overview and results of a numerical study of a model based on the social impact theory

    NASA Astrophysics Data System (ADS)

    Bordogna, Clelia María; Albano, Ezequiel V.

    2007-02-01

    The aim of this paper is twofold. On the one hand, we present a brief overview of the application of statistical physics methods to the modelling of social phenomena, focusing our attention on models of opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latané. The model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, each supporting a different state of opinion. Extensive simulations of the model are presented, revealing a rich scenario of complex behaviour that includes, among other features, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for understanding social phenomena involving a finite number of individuals, in contrast to physical phase transitions, which take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work.

  2. Numerical comparison of discrete Kalman filter algorithms - Orbit determination case study

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Thornton, C. L.

    1976-01-01

    Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
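    The full U-D factorization update is too long to reproduce here, but the numerical issue it addresses can be illustrated by comparing the conventional covariance measurement update with the Joseph-stabilized form, which is algebraically equivalent yet preserves symmetry and positive semidefiniteness under roundoff. This is a sketch of the motivation, not Bierman's algorithm itself:

    ```python
    import numpy as np

    def kalman_update(P, H, R, joseph=True):
        """Kalman covariance measurement update. The Joseph form
        P+ = (I-KH) P (I-KH)^T + K R K^T is algebraically equivalent to the
        conventional P+ = (I-KH) P, but stays symmetric and positive
        semidefinite under roundoff, for the same reasons that motivate
        factorized (U-D, square-root) filters."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        I_KH = np.eye(len(P)) - K @ H
        if joseph:
            return I_KH @ P @ I_KH.T + K @ R @ K.T
        return I_KH @ P

    # Badly scaled prior covariance, the regime where the conventional
    # update loses symmetry and definiteness in limited precision.
    P = np.diag([1e6, 1e-2])
    H = np.array([[1.0, 1.0]])
    R = np.array([[1e-3]])
    Pj = kalman_update(P, H, R, joseph=True)
    Pc = kalman_update(P, H, R, joseph=False)
    ```

    The U-D filter goes a step further by propagating the factors of P = U D Uᵀ directly, so the covariance remains positive semidefinite by construction even in single precision, which is what the study's results demonstrate.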

  3. Numerical simulation of wall-bounded turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Moin, P.

    1982-01-01

    Developments in three dimensional, time dependent numerical simulation of turbulent flows bounded by a wall are reviewed. Both direct and large eddy simulation techniques are considered within the same computational framework. The computational spatial grid requirements as dictated by the known structure of turbulent boundary layers are presented. The numerical methods currently in use are reviewed and some of the features of these algorithms, including spatial differencing and accuracy, time advancement, and data management are discussed. A selection of the results of the recent calculations of turbulent channel flow, including the effects of system rotation and transpiration on the flow are included. Previously announced in STAR as N82-28577

  5. Numerical Integral of Resistance Coefficients in Diffusion

    NASA Astrophysics Data System (ADS)

    Zhang, Q. S.

    2017-01-01

    The resistance coefficients in the screened Coulomb potential of stellar plasma are evaluated to high accuracy. I have analyzed the possible singularities in the integral of the scattering angle; singularities can occur in the case of an attractive potential, which may cause problems for the numerical integration. To avoid them, I use a suitable scheme, e.g., splitting the range into many subintervals whose widths are determined by the variation of the integrand, to calculate the scattering angle. The collision integrals are calculated using Romberg's method, so the accuracy is high (~10^-12). The results of the collision integrals and their derivatives for -7 ≤ ψ ≤ 5 are listed. Using Hermite polynomial interpolation of those data, the collision integrals can be obtained with an accuracy of 10^-10. For very weakly coupled plasma (ψ ≥ 4.5), analytical fits to the collision integrals are available with an accuracy of 10^-11. I have compared the final resistance coefficients with other works and found that, for a repulsive potential, the results are basically the same as others'; for an attractive potential, the results in cases of intermediate and strong coupling show significant differences. The resulting resistance coefficients are tested in the solar model. Compared with the widely used models of Cox et al. and Thoul et al., the resistance coefficients in the screened Coulomb potential lead to a slightly weaker effect in the solar model, which is contrary to the expectation of attempts to solve the solar abundance problem.
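    Romberg's method, used above for the collision integrals, combines trapezoid estimates on successively halved intervals with Richardson extrapolation. A compact, generic implementation (not the author's code):

    ```python
    import numpy as np

    def romberg(f, a, b, max_k=10, tol=1e-12):
        """Romberg integration: trapezoid estimates with interval halving,
        accelerated by Richardson extrapolation in the triangular table R."""
        R = np.zeros((max_k, max_k))
        h = b - a
        R[0, 0] = h * (f(a) + f(b)) / 2
        for k in range(1, max_k):
            h /= 2
            # refine the trapezoid estimate by adding the new midpoints
            x = a + h * np.arange(1, 2**k, 2)
            R[k, 0] = R[k - 1, 0] / 2 + h * np.sum(f(x))
            for j in range(1, k + 1):
                R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4**j - 1)
            if abs(R[k, k] - R[k - 1, k - 1]) < tol:
                return R[k, k]
        return R[max_k - 1, max_k - 1]

    # Test integral: int_0^pi sin(x) dx = 2.
    print(abs(romberg(np.sin, 0.0, np.pi) - 2.0))
    ```

    Each extrapolation column cancels another even power of h in the error, which is why the method reaches near machine precision (~10^-12 here) with comparatively few function evaluations, provided the integrand is smooth, hence the care taken above to split off the singular subintervals first.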

  6. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in rising temperatures. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  7. Entropy Splitting and Numerical Dissipation

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Vinokur, M.; Djomehri, M. J.

    1999-01-01

    A rigorous stability estimate for arbitrary order of accuracy of spatial central difference schemes for initial-boundary value problems of nonlinear symmetrizable systems of hyperbolic conservation laws was established recently by Olsson and Oliger (1994) and Olsson (1995) and was applied to the two-dimensional compressible Euler equations for a perfect gas by Gerritsen and Olsson (1996) and Gerritsen (1996). The basic building block in developing the stability estimate is a generalized energy approach based on a special splitting of the flux derivative via a convex entropy function and certain homogeneous properties. Due to some of the unique properties of the compressible Euler equations for a perfect gas, the splitting resulted in the sum of a conservative portion and a non-conservative portion of the flux derivative, hereafter referred to as the "Entropy Splitting." There are several potentially desirable attributes and side benefits of the entropy splitting for the compressible Euler equations that were not fully explored in Gerritsen and Olsson. The paper has several objectives. The first is to investigate the choice of the arbitrary parameter that determines the amount of splitting and its dependence on the type of physics of current interest to computational fluid dynamics. The second is to investigate in what manner the splitting affects the nonlinear stability of the central schemes for long time integrations of unsteady flows such as in nonlinear aeroacoustics and turbulence dynamics. If numerical dissipation is indeed needed to stabilize the central scheme, can the splitting help minimize the numerical dissipation compared to its un-split cousin? An extensive numerical study of the vortex preservation capability of the splitting in conjunction with central schemes for long time integrations will be presented.
The third is to study the effect of the non-conservative proportion of splitting in obtaining the correct shock location for high speed complex shock

  8. Numerical Integration: One Step at a Time

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2016-01-01

    This article looks at the effects that adding a single extra subdivision has on the level of accuracy of some common numerical integration routines. Instead of automatically doubling the number of subdivisions for a numerical integration rule, we investigate what happens with a systematic method of judiciously selecting one extra subdivision for…
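    The trade-off the article studies can be illustrated with the composite trapezoidal rule (a generic sketch; the article's rules and worked examples may differ):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subdivisions."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

exact = 2.0  # integral of sin over [0, pi]
for n in (8, 9, 16):
    err = abs(trapezoid(math.sin, 0.0, math.pi, n) - exact)
    print(n, err)
# Going from n=8 to n=9 buys only a modest error reduction, while
# doubling to n=16 cuts the O(1/n^2) error by about a factor of 4.
```

    The comparison makes concrete why a single judicious extra subdivision must be placed carefully to compete with wholesale doubling.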

  9. The Accuracy of Shock Capturing in Two Spatial Dimensions

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Casper, Jay H.

    1997-01-01

    An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.
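    The observed order discussed here is typically measured by comparing errors under refinement, p = log(e_h / e_{h/2}) / log 2; a sketch using finite-difference derivatives as stand-ins for the flow schemes (illustrative, not the paper's solvers):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Estimate convergence order p from errors at spacings h and h/r."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

def fd_error(h, scheme):
    """Error of a derivative approximation of sin at x=1 (exact: cos(1))."""
    x = 1.0
    if scheme == "forward":   # first-order one-sided difference
        approx = (math.sin(x + h) - math.sin(x)) / h
    else:                     # second-order central difference
        approx = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
    return abs(approx - math.cos(x))

for scheme in ("forward", "central"):
    p = observed_order(fd_error(1e-2, scheme), fd_error(5e-3, scheme))
    print(scheme, round(p, 2))  # ~1 for forward, ~2 for central
```

    Applied to a captured shock, the same measurement is what reveals the asymptotic first-order behavior regardless of the scheme's design order.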

  10. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo numerical integration scheme for high-dimensional integrals over tera-scale data points. The implemented algorithm uses Sobol quasi-sequences to generate the sample points; the Sobol sequence was chosen to avoid clustering effects in the generated samples and to produce low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested, and the obtained results demonstrate the scalability and accuracy of the implementation. The algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithm; if the mixed model is used, attention should be paid to scalability and accuracy.
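    As a dependency-free illustration (not the report's code): generating Sobol points requires direction-number tables, so this sketch uses a Halton sequence, another standard low-discrepancy family, to show why such points cover the integration domain evenly:

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse of integer n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom /= base
        inv += digit * denom
    return inv

def halton(n_points, bases=(2, 3, 5, 7)):
    """Low-discrepancy Halton points in the unit hypercube [0,1)^d."""
    return [[radical_inverse(i, b) for b in bases]
            for i in range(1, n_points + 1)]

# Integrate f(x) = x1*x2*x3*x4 over [0,1]^4; the exact value is (1/2)^4.
pts = halton(4096)
estimate = sum(x[0] * x[1] * x[2] * x[3] for x in pts) / len(pts)
print(abs(estimate - 0.0625))  # small: the points fill the cube without clustering
```

    A pseudo-random sample of the same size typically shows a noticeably larger error, which is the motivation for quasi-sequences in the record.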

  11. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  12. Towards Experimental Accuracy from the First Principles

    NASA Astrophysics Data System (ADS)

    Polyansky, O. L.; Lodi, L.; Tennyson, J.; Zobov, N. F.

    2013-06-01

    Producing ab initio ro-vibrational energy levels of small, gas-phase molecules with an accuracy of 0.10 cm^{-1} would constitute a significant step forward in theoretical spectroscopy and would place calculated line positions considerably closer to typical experimental accuracy. Such an accuracy has recently been achieved for the H_3^+ molecular ion for line positions up to 17 000 cm^{-1}. However, since H_3^+ is a two-electron system, the electronic structure methods used in this study are not applicable to larger molecules. A major breakthrough was reported in ref., where an accuracy of 0.10 cm^{-1} was achieved ab initio for seven water isotopologues. Calculated vibrational and rotational energy levels up to 15 000 cm^{-1} and J=25 resulted in a standard deviation of 0.08 cm^{-1} with respect to accurate reference data. As far as line intensities are concerned, we have already achieved for water a typical accuracy of 1%, which surpasses average experimental accuracy. Our results are being actively extended along two major directions. First, there are clear indications that our results for water can be improved to an accuracy of the order of 0.01 cm^{-1} by further, detailed ab initio studies. Such a level of accuracy would already be competitive with experimental results in some situations. A second, major direction of study is the extension of such 0.1 cm^{-1} accuracy to molecules containing more electrons, or more than one non-hydrogen atom, or both. As examples of such developments we will present new results for CO, HCN and H_2S, as well as preliminary results for NH_3 and CH_4. O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky and A.G. Csaszar, Phil. Trans. Royal Soc. London A, {370}, 5014-5027 (2012). O.L. Polyansky, R.I. Ovsyannikov, A.A. Kyuberis, L. Lodi, J. Tennyson and N.F. Zobov, J. Phys. Chem. A, (in press). L. Lodi, J. Tennyson and O.L. Polyansky, J. Chem. Phys. {135}, 034113 (2011).

  13. Accuracy assessment/validation methodology and results of 2010–11 land-cover/land-use data for Pools 13, 26, La Grange, and Open River South, Upper Mississippi River System

    USGS Publications Warehouse

    Jakusz, J.W.; Dieck, J.J.; Langrehr, H.A.; Ruhser, J.J.; Lubinski, S.J.

    2016-01-11

    Similar to an AA, validation involves generating random points based on the total area for each map class. However, instead of collecting field data, two or three individuals not involved with the photo-interpretative mapping separately review each of the points onscreen and record a best-fit vegetation type(s) for each site. Once the individual analyses are complete, results are joined together and a comparative analysis is performed. The objective of this initial analysis is to identify areas where the validation results were in agreement (matches) and areas where validation results were in disagreement (mismatches). The two or three individuals then perform an analysis, looking at each mismatched site, and agree upon a final validation class. (If two vegetation types at a specific site appear to be equally prevalent, the validation team is permitted to assign the site two best-fit vegetation types.) Following the validation team’s comparative analysis of vegetation assignments, the data are entered into a database and compared to the mappers’ vegetation assignments. Agreements and disagreements between the map and validation classes are identified, and a contingency table is produced. This document presents the AA processes/results for Pools 13 and La Grange, as well as the validation process/results for Pools 13 and 26 and Open River South.

  14. Equivalent beam modeling using numerical reduction techniques

    NASA Technical Reports Server (NTRS)

    Chapman, J. M.; Shaw, F. H.

    1987-01-01

    Numerical procedures that accomplish model reductions for space trusses were developed and can be implemented using current capabilities within NASTRAN. The procedures accomplish their model reductions numerically through NASTRAN structural analyses, and as such are termed numerical, in contrast to previously developed analytical techniques. They permit reductions of large truss models containing full modeling detail of the truss and its joints. Three techniques that accomplish these reductions with various levels of structural accuracy are presented and discussed in detail: the equivalent beam, truss element reduction, and post-assembly reduction methods.
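    The reductions described can be illustrated by static (Guyan) condensation, their textbook counterpart (a scalar two-DOF sketch, not the NASTRAN procedure in the record):

```python
def guyan_2dof(k_mm, k_ms, k_sm, k_ss):
    """Condense out one slave DOF: K_red = K_mm - K_ms * K_ss^-1 * K_sm."""
    return k_mm - k_ms * k_sm / k_ss

# Spring chain ground--k1--(slave u2)--k2--(master u1), with stiffness
# matrix K = [[k2, -k2], [-k2, k1 + k2]] in the (u1, u2) ordering.
k1, k2 = 3.0, 6.0
k_red = guyan_2dof(k2, -k2, -k2, k1 + k2)
print(k_red)  # equals the series stiffness k1*k2/(k1 + k2) = 2.0
```

    Static condensation is exact for static loads; the accuracy levels mentioned in the record reflect how well dynamic behavior survives such reductions.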

  15. Accuracy of an estuarine hydrodynamic model using smooth elements

    USGS Publications Warehouse

    Walters, Roy A.; Cheng, Ralph T.

    1980-01-01

    A finite element model which uses triangular, isoparametric elements with quadratic basis functions for the two velocity components and linear basis functions for water surface elevation is used in the computation of shallow water wave motions. Specifically addressed are two common uncertainties in this class of two-dimensional hydrodynamic models: the treatment of the boundary conditions at open boundaries and the treatment of lateral boundary conditions. The accuracy of the models is tested with a set of numerical experiments in rectangular and curvilinear channels with constant and variable depth. The results indicate that errors in velocity at the open boundary can be significant when boundary conditions for water surface elevation are specified. Methods are suggested for minimizing these errors. The results also show that continuity is better maintained within the spatial domain of interest when ‘smooth’ curve-sided elements are used at shoreline boundaries than when piecewise linear boundaries are used. Finally, a method for network development is described which is based upon a continuity criterion to gauge accuracy. A finite element network for San Francisco Bay, California, is used as an example.

  16. A numerical study of nonstationary plasma and projectile motion in a rail gun

    NASA Astrophysics Data System (ADS)

    Zvezdin, A. M.; Kovalev, V. L.

    1992-10-01

    Changes in plasma parameters and projectile velocity and acceleration in a rail gun during the launch are investigated numerically. The method involves determining the velocity and magnetic induction using a difference scheme and an explicit nonlinear method with flow correction for calculating plasma density. The accuracy of the method proposed here is demonstrated by comparing the results with data in the literature.

  17. Numerical relativity and spectral methods

    NASA Astrophysics Data System (ADS)

    Grandclement, P.

    2016-12-01

    The term numerical relativity denotes the various techniques that aim at solving Einstein's equations using computers. Those computations can be divided into two families: temporal evolutions on the one hand and stationary or periodic solutions on the other. After a brief presentation of those two classes of problems, I will introduce a numerical tool designed to solve Einstein's equations: the KADATH library. It is based on the use of spectral methods that can reach high accuracy with moderate computational resources. I will present some applications to quasicircular orbits of black holes and boson star configurations.
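    As an illustration of why spectral methods reach high accuracy with moderate resources (not part of the record): for a smooth periodic function, the equispaced trapezoidal rule, the simplest Fourier-spectral quadrature, converges geometrically rather than algebraically:

```python
import math

def periodic_trapezoid(f, n):
    """Equispaced quadrature of f over one period [0, 2*pi]."""
    h = 2.0 * math.pi / n
    return h * sum(f(i * h) for i in range(n))

f = lambda x: math.exp(math.cos(x))  # smooth and 2*pi-periodic
reference = periodic_trapezoid(f, 256)  # effectively exact
for n in (4, 8, 16):
    print(n, abs(periodic_trapezoid(f, n) - reference))
# The error drops geometrically with n (spectral accuracy), unlike the
# algebraic O(1/n^2) rate seen for non-periodic integrands.
```

    Sixteen points already reach near machine precision here, which is the economy spectral codes such as KADATH exploit.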

  18. EOS mapping accuracy study

    NASA Technical Reports Server (NTRS)

    Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.

    1973-01-01

    Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.

  19. Direct numerical simulation of scalar transport using unstructured finite-volume schemes

    NASA Astrophysics Data System (ADS)

    Rossi, Riccardo

    2009-03-01

    An unstructured finite-volume method for direct and large-eddy simulations of scalar transport in complex geometries is presented and investigated. The numerical technique is based on a three-level fully implicit time advancement scheme and central spatial interpolation operators. The scalar variable at cell faces is obtained by a symmetric central interpolation scheme, which is formally first-order accurate, or by further employing a high-order correction term which leads to formal second-order accuracy irrespective of the underlying grid. In this framework, deferred-correction and slope-limiter techniques are introduced in order to avoid numerical instabilities in the resulting algebraic transport equation. The accuracy and robustness of the code are initially evaluated by means of basic numerical experiments where the flow field is assigned a priori. A direct numerical simulation of turbulent scalar transport in a channel flow is finally performed to validate the numerical technique against a numerical dataset established by a spectral method. In spite of the linear character of the scalar transport equation, the computed statistics and spectra of the scalar field are found to be significantly affected by the spectral properties of the interpolation schemes. Although the results show improved spectral resolution and greater spatial accuracy for the high-order operator in the analysis of basic scalar transport problems, the low-order central scheme is found superior for high-fidelity simulations of turbulent scalar transport.
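    The slope-limiter safeguard mentioned in the record can be sketched with the classic minmod limiter, which zeroes the reconstruction slope at extrema to prevent the oscillations an unlimited scheme produces (a generic sketch, not this solver's implementation):

```python
def minmod(a, b):
    """Return the argument of smaller magnitude when signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0          # local extremum: fall back to first order
    return a if abs(a) < abs(b) else b

def limited_face_value(u_left, u_center, u_right):
    """Second-order face reconstruction with a minmod-limited slope."""
    slope = minmod(u_center - u_left, u_right - u_center)
    return u_center + 0.5 * slope

# Smooth data keeps the second-order slope...
print(limited_face_value(1.0, 2.0, 3.0))   # 2.5
# ...while at a spike the slope is dropped, avoiding over/undershoot.
print(limited_face_value(1.0, 5.0, 1.0))   # 5.0
```

    A deferred-correction scheme treats the difference between such a high-order face value and the plain upwind value as an explicit source term, keeping the implicit matrix diagonally dominant.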

  20. Numerical flow analysis for axial flow turbine

    NASA Astrophysics Data System (ADS)

    Sato, T.; Aoki, S.

    Some numerical flow analysis methods adopted in the gas turbine interactive design system, TDSYS, are described. In the TDSYS, a streamline curvature program for axisymmetric flows, quasi 3-D and fully 3-D time marching programs are used respectively for blade to blade flows and annular cascade flows. The streamline curvature method has some advantages in that it can include the effect of coolant mixing and choking flow conditions. Comparison of the experimental results with calculated results shows that the overall accuracy is determined more by the empirical correlations used for loss and deviation than by the numerical scheme. The time marching methods are the best choice for the analysis of turbine cascade flows because they can handle mixed subsonic-supersonic flows with automatic inclusion of shock waves in a single calculation. Some experimental results show that a time marching method can predict the airfoil surface Mach number distribution more accurately than a finite difference method. One weak point of the time marching methods is their long computation time; they usually require several times as much CPU time as other methods. But reductions in computer costs and improvements in numerical methods have made the quasi 3-D and fully 3-D time marching methods usable as design tools, and they are now used in TDSYS.

  1. Numerical Lamb shift calculations for low-Z systems

    NASA Astrophysics Data System (ADS)

    Jentschura, U. D.; Mohr, P. J.; Soff, G.

    1999-01-01

    For bound systems with a small atomic number Z, numerical evaluations of self-energy corrections, which are non-perturbative in the binding field, entail severe numerical cancellations at intermediate stages of the calculation. This paper reports a result for the non-perturbative self-energy remainder function G_SE in atomic hydrogen with a relative accuracy of 10⁻⁵. We discuss consistency checks on the results of numerical Lamb shift calculations in systems with a small atomic number. The precise determination of radiative corrections in low-Z bound systems is of crucial importance for the interpretation of precision measurements in atoms, for tests of quantum electrodynamics, and for the determination of fundamental constants.

  2. Numerical Study of Laminar Flow over Acoustic Cavities

    NASA Astrophysics Data System (ADS)

    Owen, Matthew; Cheng, Gary

    2016-11-01

    Fluid flow over an open cavity often emits acoustic waves at natural frequencies that depend on the geometry of the cavity and on the properties and flow conditions of the fluid. Numerical studies of this kind, known as computational aeroacoustics (CAA), pose a severe challenge to the accuracy and efficiency of numerical methods. This project examines the Space-Time Conservation Element Solution Element (CESE) method developed by Dr. S.C. Chang at NASA GRC and compares numerical results for two-dimensional flow to previous experimental data found in the literature. The project concluded that the test data agree well with one of the predicted modal frequencies, and that further testing is needed to match the experimental results. Funding from NSF REU site Grant EEC 1358991 is greatly appreciated.

  3. Accuracy enhancement of wideband complex permittivity measured by an open-ended coaxial probe

    NASA Astrophysics Data System (ADS)

    Jung, Ji-Hyun; Cho, Jae-Hyoung; Kim, Se-Yun

    2016-01-01

    When the wideband complex permittivity of a liquid solution was measured by an open-ended coaxial probe, its accuracy was inherently degraded due to an inexact conversion model compared with the uncertainty of the associated vector network analyzer. In this paper, the accuracy of the converted wideband complex permittivity is evaluated indirectly and then enhanced significantly. Firstly, the measured wideband complex permittivity is fitted by a dispersive permittivity profile in the Debye formula. Secondly, the new reflection coefficients are calculated numerically by applying the dispersive permittivity profile to a 2D finite-difference time-domain model of our probe. Thirdly, the inaccuracy of the fitted complex permittivity profile is quantified indirectly by the root mean square (RMS) error of the numerically calculated reflection coefficients in comparison with the originally measured reflection coefficients. Finally, four parameters of the Debye formula involved in the initially converted complex permittivity profile were calibrated iteratively until the RMS error of the repeatedly calculated reflection coefficients could be minimized. The validity of the above calibration procedure is assured numerically for a given wideband complex permittivity profile. In the case of our actual measurement of the fabricated cancer-equivalent solution, an unknown complex permittivity profile, the RMS error of the numerically calculated reflection coefficients could be reduced from 0.0416 to 0.0093. This calibration results in both relative dielectric constant and conductivity profiles being gradually enhanced as the frequency increases up to 5000 MHz.
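    The Debye formula used in the fitting step has the standard single-pole form; a sketch with illustrative parameters (not those of the fabricated cancer-equivalent solution):

```python
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def debye_permittivity(freq_hz, eps_inf, eps_static, tau, sigma):
    """Single-pole Debye relative permittivity with a DC-conductivity term."""
    omega = 2.0 * math.pi * freq_hz
    eps = eps_inf + (eps_static - eps_inf) / (1.0 + 1j * omega * tau)
    if freq_hz > 0.0:
        eps -= 1j * sigma / (omega * EPS0)  # loss from static conductivity
    return eps

def rms_error(a, b):
    """RMS mismatch between two complex-valued sample sets."""
    return math.sqrt(sum(abs(x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Well below the relaxation frequency the real part approaches eps_static.
eps = debye_permittivity(1e6, eps_inf=4.0, eps_static=78.0, tau=8.3e-12, sigma=0.0)
print(eps.real)  # close to 78
```

    The calibration loop in the record adjusts the four Debye parameters until an RMS mismatch of this kind, between simulated and measured reflection coefficients, is minimized.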

  4. High Accuracy Time Transfer Synchronization

    DTIC Science & Technology

    1994-12-01

    HIGH ACCURACY TIME TRANSFER SYNCHRONIZATION. Paul Wheeler, Paul Koppang, David Chalmers, Angela Davis, Anthony Kubik and William Powell, U.S. Naval Observatory, Washington, DC 20392. Abstract: In July 1994, the US Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance…

  5. Process Analysis Via Accuracy Control

    DTIC Science & Technology

    1982-02-01

    Process Analysis Via Accuracy Control. The National Shipbuilding Research Program, U.S. Department of Transportation, Maritime Administration, February 1982. Examples are contained in Appendix C, including examples of how "A/C" process analysis leads to design improvement and how a change in sequence can…

  6. Size-Dependent Accuracy of Nanoscale Thermometers.

    PubMed

    Alicki, Robert; Leitner, David M

    2015-07-23

    The accuracy of two classes of nanoscale thermometers is estimated in terms of size and system-dependent properties using the spin-boson model. We consider solid state thermometers, where the energy splitting is tuned by thermal properties of the material, and fluorescent organic thermometers, in which the fluorescence intensity depends on the thermal population of conformational states of the thermometer. The results of the theoretical model compare well with the accuracy reported for several nanothermometers that have been used to measure local temperature inside living cells.

  7. The measurement accuracy of passive radon instruments.

    PubMed

    Beck, T R; Foerster, E; Buchröder, H; Schmidt, V; Döring, J

    2014-01-01

    This paper analyses data gathered from interlaboratory comparisons of passive radon instruments over 10 y with respect to measurement accuracy. The measurement accuracy is discussed in terms of the systematic and the random measurement error. The analysis shows that the systematic measurement error of most instruments issued by professional laboratory services can be within a range of ±10 % from the true value. A single radon measurement has an additional random measurement error, which is in the range of up to ±15 % for high exposures to radon (>2000 kBq h m(-3)); the random measurement error increases for lower exposures. The analysis applies especially to instruments with solid-state nuclear track detectors and results in proposed criteria for testing measurement accuracy. Instruments with electrets and charcoal have also been considered, but the limited data permit only a qualitative discussion.

  8. Classification accuracy of actuarial risk assessment instruments.

    PubMed

    Neller, Daniel J; Frederick, Richard I

    2013-01-01

    Users of commonly employed actuarial risk assessment instruments (ARAIs) hope to generate numerical probability statements about risk; however, ARAI manuals often do not explicitly report data that are essential for understanding the classification accuracy of the instruments. In addition, ARAI manuals often contain data that have the potential for misinterpretation. The authors of the present article address the accurate generation of probability statements. First, they illustrate how the reporting of numerical probability statements based on proportions rather than predictive values can mislead users of ARAIs. Next, they report essential test characteristics that, to date, have gone largely unreported in ARAI manuals. Then they discuss a graphing method that can enhance the practice of clinicians who communicate risk via numerical probability statements. After the authors review several strategies for selecting optimal cut-off scores, they show how the graphing method can be used to estimate positive predictive values for each cut-off score of commonly used ARAIs, across all possible base rates. They also show how the graphing method can be used to estimate base rates of violent recidivism in local samples.
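    The predictive values discussed above follow from Bayes' rule given sensitivity, specificity, and the local base rate; a minimal sketch (numbers are illustrative, not drawn from any ARAI manual):

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(outcome | positive test) via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1.0 - specificity) * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

# The same cut-off score yields very different PPVs as the base rate
# shifts, which is why proportion-based statements can mislead.
for p in (0.05, 0.20, 0.50):
    print(p, round(positive_predictive_value(0.75, 0.80, p), 3))
```

    Sweeping the base rate like this reproduces, in miniature, the graphing method the authors propose for estimating PPV at each cut-off score across all possible base rates.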

  9. Numerical investigation of multi-element airfoils

    NASA Technical Reports Server (NTRS)

    Cummings, Russell M.

    1993-01-01

    The flow over multi-element airfoils with flat-plate lift-enhancing tabs was numerically investigated. Tabs ranging in height from 0.25 percent to 1.25 percent of the reference airfoil chord were studied near the trailing edge of the main-element. This two-dimensional numerical simulation employed an incompressible Navier-Stokes solver on a structured, embedded grid topology. New grid refinements were used to improve the accuracy of the solution near the overlapping grid boundaries. The effects of various tabs were studied at a constant Reynolds number on a two-element airfoil with a slotted flap. Both computed and measured results indicated that a tab in the main-element cove improved the maximum lift and lift-to-drag ratio relative to the baseline airfoil without a tab. Computed streamlines revealed that the additional turning caused by the tab may reduce the amount of separated flow on the flap. A three-element airfoil was also studied over a range of Reynolds numbers. For the optimized flap rigging, the computed and measured Reynolds number effects were similar. When the flap was moved from the optimum position, numerical results indicated that a tab may help to reoptimize the airfoil to within 1 percent of the optimum flap case.

  10. Statistical Parameters for Describing Model Accuracy

    DTIC Science & Technology

    1989-03-20

    …the mean and the standard deviation approximately characterize the accuracy of the model, since the width of the confidence interval whose center is at… Using a modified version of Chebyshev's inequality, a similar result is obtained for the upper bound of the confidence interval width for any…
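    The Chebyshev-inequality bound referred to in this record holds for any distribution; a minimal sketch with illustrative numbers:

```python
import math

def chebyshev_halfwidth(std, coverage):
    """Half-width k*std such that P(|X - mean| >= k*std) <= 1 - coverage,
    valid for any distribution by Chebyshev's inequality."""
    k = 1.0 / math.sqrt(1.0 - coverage)
    return k * std

# A distribution-free 95% interval needs k = sqrt(20) ~ 4.47 standard
# deviations, far wider than the Gaussian 1.96 -- the price of generality.
print(chebyshev_halfwidth(std=1.0, coverage=0.95))
```

    This is why the record speaks of an upper bound on the confidence-interval width: Chebyshev gives a guaranteed but conservative envelope.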

  11. Two Different Methods for Numerical Solution of the Modified Burgers' Equation

    PubMed Central

    Karakoç, Seydi Battal Gazi; Başhan, Ali; Geyikli, Turabi

    2014-01-01

    A numerical solution of the modified Burgers' equation (MBE) is obtained by using quartic B-spline subdomain finite element method (SFEM) over which the nonlinear term is locally linearized and using quartic B-spline differential quadrature (QBDQM) method. The accuracy and efficiency of the methods are discussed by computing L 2 and L ∞ error norms. Comparisons are made with those of some earlier papers. The obtained numerical results show that the methods are effective numerical schemes to solve the MBE. A linear stability analysis, based on the von Neumann scheme, shows the SFEM is unconditionally stable. A rate of convergence analysis is also given for the DQM. PMID:25162064
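    The L2 and L∞ error norms used to report accuracy in this record can be computed as follows (a generic sketch, not the paper's code; the sample arrays are illustrative):

```python
import math

def l2_error(numerical, exact, h=1.0):
    """Discrete L2 norm of the pointwise error, scaled by mesh spacing h."""
    return math.sqrt(h * sum((u - v) ** 2 for u, v in zip(numerical, exact)))

def linf_error(numerical, exact):
    """Maximum pointwise error (the L-infinity norm)."""
    return max(abs(u - v) for u, v in zip(numerical, exact))

num = [0.0, 0.48, 0.84, 1.02]   # hypothetical scheme output
ref = [0.0, 0.50, 0.80, 1.00]   # exact solution at the same nodes
print(l2_error(num, ref, h=0.25), linf_error(num, ref))
```

    Tracking how these norms shrink as the mesh is refined is also what underlies the rate-of-convergence analysis the abstract mentions.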

  12. Aerothermal modeling program. Phase 2, element A: Improved numerical methods for turbulent viscous recirculating flows

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Mongia, H. C.; Patankar, Suhas V.; Runchal, A. K.

    1987-01-01

    The objective of this effort is to develop improved numerical schemes for predicting combustor flow fields. Various candidate numerical schemes were evaluated, and promising schemes were selected for detailed assessment. The criteria for evaluation included accuracy, computational efficiency, stability, and ease of extension to multidimensions. The candidate schemes were assessed against a variety of simple one- and two-dimensional problems. These results led to the selection of the following schemes for further evaluation: flux spline schemes (linear and cubic) and controlled numerical diffusion with internal feedback (CONDIF). The incorporation of the flux spline scheme and direct solution strategy in a computer program for three-dimensional flows is in progress.

  13. Two different methods for numerical solution of the modified Burgers' equation.

    PubMed

    Karakoç, Seydi Battal Gazi; Başhan, Ali; Geyikli, Turabi

    2014-01-01

    A numerical solution of the modified Burgers' equation (MBE) is obtained by using quartic B-spline subdomain finite element method (SFEM) over which the nonlinear term is locally linearized and using quartic B-spline differential quadrature (QBDQM) method. The accuracy and efficiency of the methods are discussed by computing L 2 and L ∞ error norms. Comparisons are made with those of some earlier papers. The obtained numerical results show that the methods are effective numerical schemes to solve the MBE. A linear stability analysis, based on the von Neumann scheme, shows the SFEM is unconditionally stable. A rate of convergence analysis is also given for the DQM.

  14. Numerical and analytic results showing the suppression of secondary electron emission from velvet and foam, and a geometric view factor model to guide the development of a surface to suppress SEE

    NASA Astrophysics Data System (ADS)

    Swanson, Charles; Kaganovich, I. D.

    2016-09-01

    The technique of suppressing secondary electron emission (SEE) from a surface by texturing it has developed rapidly in recent years. We have specific and general results in support of this technique. We have performed numerical and analytic calculations of the effective secondary electron yield (SEY) from velvet, which is an array of long cylinders on the micro-scale, and found velvet suitable for suppressing SEY from a normally incident primary distribution. We have performed numerical and analytic calculations also for metallic foams, which are an isotropic lattice of fibers on the micro-scale, and found foams suitable for suppressing SEY from an isotropic primary distribution. More generally, we have created a geometric weighted view factor model for determining the SEY suppression of a given surface geometry, which has optimization of SEY as a natural application. The optimal surface for suppressing SEY does not have finite area and has no smallest feature size, making it fractal in nature. The model gives simple criteria for a physical, non-fractal surface to suppress SEY, and we found families of optimal surfaces that suppress SEY given a finite surface area. The research is supported by the Air Force Office of Scientific Research (AFOSR).

  15. Effect of uniaxial stress on electroluminescence, valence band modification, optical gain, and polarization modes in tensile strained p-AlGaAs/GaAsP/n-AlGaAs laser diode structures: Numerical calculations and experimental results

    NASA Astrophysics Data System (ADS)

    Bogdanov, E. V.; Minina, N. Ya.; Tomm, J. W.; Kissel, H.

    2012-11-01

    The effects of uniaxial compression in the [110] direction on the energy-band structure, heavy and light hole mixing, optical matrix elements, and gain in laser diodes with a "light hole up" configuration of valence band levels in GaAsP quantum wells of different widths and phosphorus contents are numerically calculated. The development of light and heavy hole mixing, caused by symmetry lowering and the converging behavior of light and heavy hole levels in such quantum wells under uniaxial compression, is shown. The light or heavy hole nature of each level is established for all considered values of uniaxial stress. Optical gain calculations for the TM and TE polarization modes show that uniaxial compression leads to a significant increase of the TE mode and a minor decrease of the TM mode. Electroluminescence experiments were performed under uniaxial compression up to 5 kbar at 77 K on a model laser diode structure (p-AlxGa1-xAs/GaAs1-yPy/n-AlxGa1-xAs) with y = 0.16 and a quantum well width of 14 nm. They reveal a maximum blue shift of 27 meV of the electroluminescence spectra, which is well described by the calculated change of the optical gap, and an increase of the intensity that is attributed to TE mode enhancement. The numerical calculations and electroluminescence data indicate that uniaxial compression may be used for moderate tuning of the wavelength and the TM/TE intensity ratio.

  16. Numerical Speed of Sound and its Application to Schemes for all Speeds

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Edwards, Jack R.

    1999-01-01

    The concept of a "numerical speed of sound" is proposed for the construction of the numerical flux. It is shown that this variable is responsible for the accurate resolution of discontinuities such as contacts and shocks. Moreover, the concept can be readily extended to deal with low-speed and multiphase flows. As a result, the numerical dissipation for low-speed flows is scaled with the local fluid speed rather than the sound speed. Hence, the accuracy is enhanced, the correct solution is recovered, and the convergence rate is improved. We also emphasize the role of the mass flux and analyze its behavior. Study of the mass flux is important because the numerical diffusivity introduced in it can be identified; in addition, it is the term common to all conservation equations. We show calculated results for a wide variety of flows to validate the effectiveness of using the numerical speed of sound concept in constructing the numerical flux. We aim especially at two goals: (1) improving accuracy and (2) gaining convergence rates over all speed ranges. We find that while the performance at high speeds is maintained, the flux now also performs well for low-speed flows. Thanks to the new numerical speed of sound, convergence is enhanced even for flows outside the low-speed range. To demonstrate the usefulness of the proposed method for engineering problems, we have also performed calculations for complex 3D turbulent flows; the results are in excellent agreement with data.

  17. Numerical Investigation of Boiling

    NASA Astrophysics Data System (ADS)

    Sagan, Michael; Tanguy, Sebastien; Colin, Catherine

    2012-11-01

    In this work, boiling is numerically investigated using two-phase flow direct numerical simulation based on a level set / Ghost Fluid method. Nucleate boiling involves both thermal and multiphase dynamics issues at different scales and at different stages of bubble growth. As a result, the different phenomena are investigated separately, considering their nature and the scale at which they occur. First, the boiling of a static bubble immersed in an overheated liquid is analysed. Numerical simulations have been performed at different Jakob numbers in the case of a strong density discontinuity across the interface. The results show good agreement between the theoretical and simulated evolution of the bubble radius. After validation of the code on the Scriven test case, the interaction of a bubble with a wall is studied. A numerical method taking the contact angle into account is evaluated by comparing simulations of the spreading of a liquid droplet impacting a plate with experimental data. Then the heat transfer near the contact line is investigated, and simulations of nucleate boiling are performed considering different contact angle values. Finally, the relevance of including a model for the evaporation of the micro-layer is discussed.
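The Scriven test case mentioned above has a closed-form growth law for a heat-diffusion-controlled bubble, which is why it is a standard validation target. A minimal sketch, with illustrative placeholder values for the thermal diffusivity and growth constant (the latter depends on the Jakob number):

```python
import math

# Hedged sketch of the heat-diffusion-controlled bubble growth law,
# R(t) = 2*beta*sqrt(alpha*t). The numbers below are placeholders,
# not values from the paper.
alpha = 1.7e-7   # liquid thermal diffusivity [m^2/s] (placeholder)
beta = 5.0       # growth constant (placeholder, Jakob-number dependent)

def bubble_radius(t):
    return 2.0 * beta * math.sqrt(alpha * t)

# The law implies R grows like sqrt(t): quadrupling t doubles the radius.
print(bubble_radius(4.0) / bubble_radius(1.0))  # -> 2.0
```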

  18. A study of the effects of numerical dissipation on the calculation of supersonic separated flows

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Anderson, J. D., Jr.

    1985-01-01

    An extensive investigation of the effect of numerical dissipation on the calculation of supersonic, separated flow over a rearward-facing step is carried out. The complete two-dimensional Navier-Stokes equations are solved by means of MacCormack's standard explicit, unsplit, time-dependent, finite difference method. A fourth-order numerical dissipation term is added explicitly. The magnitude of this term is progressively varied, and its consequences for the flowfield calculations are identified and studied. For a cold-wall, heat-transfer case, numerical dissipation had a major effect on the results, particularly in the separated region. In dramatic contrast, for an adiabatic-wall case numerical dissipation had virtually no effect on the results. The role of grid size, both in the influence of numerical dissipation and in the overall accuracy of the separated-flow solutions, is discussed.
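A fourth-order dissipation term of the kind varied in the study above is conventionally built from a fourth difference of the solution. The sketch below shows the standard stencil; the coefficient `eps` is the illustrative knob, and the actual coupling into MacCormack's scheme in the paper may differ in detail.

```python
import numpy as np

def fourth_order_dissipation(u, eps):
    """Explicit fourth-difference dissipation; zero in the 2-point
    boundary halo where the stencil does not fit."""
    d = np.zeros_like(u)
    d[2:-2] = -eps * (u[:-4] - 4*u[1:-3] + 6*u[2:-2] - 4*u[3:-1] + u[4:])
    return d

# A smooth (linear) field is untouched: the term damps only
# high-frequency content, which is why it matters most near shocks
# and in separated regions.
u_lin = np.linspace(0.0, 1.0, 11)
d_lin = fourth_order_dissipation(u_lin, eps=0.01)
print(np.allclose(d_lin, 0.0))  # -> True
```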

  19. Improving classification accuracy and causal knowledge for better credit decisions.

    PubMed

    Wu, Wei-Wen

    2011-08-01

    Numerous studies have contributed to efforts to boost the accuracy of the credit scoring model. Especially interesting are recent studies which have successfully developed the hybrid approach, which advances classification accuracy by combining different machine learning techniques. However, to achieve better credit decisions, it is not enough merely to increase the accuracy of the credit scoring model. It is necessary to conduct meaningful supplementary analyses in order to obtain knowledge of causal relations, particularly in terms of significant conceptual patterns or structures involving attributes used in the credit scoring model. This paper proposes a solution that integrates data preprocessing strategies and the Bayesian network classifier with the tree-augmented Naïve Bayes search algorithm, in order to improve classification accuracy and to obtain improved knowledge of causal patterns, thus enhancing the validity of credit decisions.

  20. Frontiers in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Evans, Charles R.; Finn, Lee S.; Hobill, David W.

    2011-06-01

    Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics

  1. Recent advances to NEC (Numerical Electromagnetics Code): Applications and validation

    SciTech Connect

    Burke, G.J.

    1989-03-03

    Capabilities of the antenna modeling code NEC are reviewed and results are presented to illustrate typical applications. Recent developments are discussed that will improve accuracy in modeling electrically small antennas, stepped-radius wires and junctions of tightly coupled wires, and also a new capability for modeling insulated wires in air or earth is described. These advances will be included in a future release of NEC, while for now the results serve to illustrate limitations of the present code. NEC results are compared with independent analytical and numerical solutions and measurements to validate the model for wires near ground and for insulated wires. 41 refs., 26 figs., 1 tab.

  2. Numerical modeling of thermal behavior of fluid conduit flow with transport delay

    SciTech Connect

    Chow, T.T.; Ip, F.; Dunn, A.; Tse, W.L.

    1996-12-31

    Fluid mass and energy flows in air-conditioning systems vary with the changing output demand. In lengthy or complex ductwork and pipework, the accuracy in simulating the dynamic network behavior is greatly affected by the accuracy in modeling the radial energy losses and the axial transport lag. Transport delay consideration is also vital in the study of heat exchanger dynamics. This paper reviews the development of transport delay models in fluid conduit flow. A new numerical model is recommended in which the thermal behavior of fluid elements can be traced per physical distance traveled in unit time step. Justifications by sensitivity and frequency response analyses were performed. The results of analytical, experimental, as well as intermodel comparisons demonstrate the promising accuracy of the numerical model introduced.
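A transport-delay model of the kind described above can be illustrated as a plug-flow queue: fluid elements advance one position per time step, so an inlet change reaches the outlet after the transport time L/v. This is a minimal hedged sketch (radial losses ignored, all names and numbers illustrative), not the paper's model.

```python
from collections import deque

def simulate_outlet(inlet_series, n_elements, t_initial=20.0):
    """Plug-flow transport delay: the conduit holds n_elements fluid
    elements; each step one enters at the inlet and one leaves at the
    outlet, giving a pure delay of n_elements time steps."""
    pipe = deque([t_initial] * n_elements)
    outlet = []
    for t_in in inlet_series:
        pipe.appendleft(t_in)      # new element enters the conduit
        outlet.append(pipe.pop())  # oldest element reaches the outlet
    return outlet

# A step change at the inlet appears at the outlet n_elements steps later.
out = simulate_outlet([50.0] * 10, n_elements=4)
print(out)  # -> [20.0, 20.0, 20.0, 20.0, 50.0, 50.0, ...]
```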

  3. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF, and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval of the specific measurement.

  4. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR imaging was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then used in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE), calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
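The RMSE metric used above is the root mean square of the difference between the real and predicted signals. A minimal sketch with synthetic placeholder signals (not the study's data):

```python
import numpy as np

# Hedged sketch of the RMSE prediction-error metric: root mean square of
# the difference between real and predicted respiratory traces.
def rmse(real, predicted):
    real, predicted = np.asarray(real), np.asarray(predicted)
    return np.sqrt(np.mean((real - predicted) ** 2))

t = np.linspace(0.0, 10.0, 300)
real = np.sin(2 * np.pi * 0.25 * t)   # idealized breathing trace
predicted = real + 0.05               # prediction with a constant offset
error = rmse(real, predicted)
print(error)  # a constant 0.05 offset gives an RMSE of 0.05
```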

  5. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
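The error-matrix arithmetic described above (diagonal = correct, off-diagonal row entries = commission errors, off-diagonal column entries = omission errors) can be sketched directly; the matrix values below are made up for illustration.

```python
import numpy as np

# Classification error matrix: rows = interpretation, columns = verification.
cm = np.array([[50,  3,  2],
               [ 4, 40,  6],
               [ 1,  5, 39]])

# Overall accuracy: correct classifications over all samples.
overall_accuracy = np.trace(cm) / cm.sum()
# Errors of commission: off-diagonal counts in each row.
commission = cm.sum(axis=1) - np.diag(cm)
# Errors of omission: off-diagonal counts in each column.
omission = cm.sum(axis=0) - np.diag(cm)
print(overall_accuracy, commission, omission)
```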

  6. 100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox

    PubMed Central

    Valverde-Albacete, Francisco J.; Peláez-Moreno, Carmen

    2014-01-01

    The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis in which every possible contingency matrix of 2-, 3-, and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficient the transmission of information from the input to the output set of classes is. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier rather than the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to “cheat” using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282

  7. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    PubMed

    Valverde-Albacete, Francisco J; Peláez-Moreno, Carmen

    2014-01-01

    The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis in which every possible contingency matrix of 2-, 3-, and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficient the transmission of information from the input to the output set of classes is. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier rather than the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.
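Both the EMA and the NIT factor are built on the information transferred through the classifier. The sketch below computes only the raw ingredient of such measures, the mutual information I(X;Y) in bits from a contingency matrix; it is NOT the authors' exact EMA or NIT definition, and the matrix is illustrative.

```python
import numpy as np

def mutual_information_bits(cm):
    """Mutual information (bits) between true and predicted classes,
    estimated from a contingency matrix of counts."""
    p = np.asarray(cm, dtype=float) / np.sum(cm)
    px = p.sum(axis=1, keepdims=True)   # true-class marginal
    py = p.sum(axis=0, keepdims=True)   # predicted-class marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    return terms.sum()

# A perfect 4-class classifier on a uniform input transfers log2(4) bits.
perfect = np.eye(4, dtype=int) * 25
print(mutual_information_bits(perfect))  # -> 2.0
```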

  8. Real time hybrid simulation with online model updating: An analysis of accuracy

    NASA Astrophysics Data System (ADS)

    Ou, Ge; Dyke, Shirley J.; Prakash, Arun

    2017-02-01

    In conventional hybrid simulation (HS) and real time hybrid simulation (RTHS) applications, the information exchanged between the experimental substructure and numerical substructure is typically restricted to the interface boundary conditions (force, displacement, acceleration, etc.). With additional demands being placed on RTHS and recent advances in recursive system identification techniques, an opportunity arises to improve the fidelity by extracting information from the experimental substructure. Online model updating algorithms enable the numerical model of components similar to the physical specimen (herein named the target model) to be modified accordingly. This manuscript demonstrates the power of integrating a model updating algorithm into RTHS (RTHSMU) and explores the possible challenges of this approach through a practical simulation. Two Bouc-Wen models with varying levels of complexity are used as target models to validate the concept and evaluate the performance of this approach. The constrained unscented Kalman filter (CUKF) is selected for use in the model updating algorithm. The accuracy of RTHSMU is evaluated through an estimation output error indicator, a model updating output error indicator, and a system identification error indicator. The results illustrate that, under applicable constraints, by integrating model updating into RTHS, the global response accuracy can be improved when the target model is unknown. A discussion on model updating parameter sensitivity to updating accuracy is also presented to provide guidance for potential users.

  9. Astronomic Position Accuracy Capability Study.

    DTIC Science & Technology

    1979-10-01

    portion of F. E. Warren AFB, Wyoming. The three points were called THEODORE ECC, TRACY, and JIM and consisted of metal tribrachs plastered to cinder...sets were computed as a deviation from the standard. Accuracy figures were determined from these residuals. Homogeneity of variances was tested using

  10. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  11. Inventory accuracy in 60 days!

    PubMed

    Miller, G J

    1997-08-01

    Despite great advances in manufacturing technology and management science, thousands of organizations still don't have a handle on basic inventory accuracy. Many companies don't even measure it properly, or at all, and lack corrective action programs to improve it. This article offers an approach that has proven successful a number of times, when companies were quite serious about making improvements. Not only can it be implemented, but also it can likely be implemented within 60 days per area, if properly managed. The hardest part is selling people on the need to improve and then keeping them motivated. The net cost of such a program? Probably less than nothing, since the benefits gained usually far exceed the costs. Improved inventory accuracy can aid in enhancing customer service, determining purchasing and manufacturing priorities, reducing operating costs, and increasing the accuracy of financial records. This article also addresses the gap in contemporary literature regarding accuracy program features for repetitive, JIT, cellular, and process- and project-oriented environments.

  12. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h data are requested for high precision results; however, real life situations do not always let us collect 24 h data. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently, some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of the NASA/JPL GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, incorporating two years in succession, are used in the analysis. Our major conclusion is that a reformulation of the GPS positioning accuracy is necessary when taking into account the seasonal effects, and the typical one-term accuracy formulation is expanded to a two-term one.

  13. Navigation Accuracy Guidelines for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2004-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
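The link between a semi-major axis difference and along-track drift can be sketched with two-body relations, matching the paper's stated neglect of perturbations: da produces a mean-motion difference dn = -1.5*(n/a)*da, which accumulates as an along-track displacement of about 3*pi*da per orbit for a near-circular orbit. The orbit below is an illustrative LEO example, not a case from the paper.

```python
import math

mu = 3.986004418e14   # Earth's GM [m^3/s^2]
a = 7.0e6             # semi-major axis [m] (illustrative LEO value)
da = 10.0             # relative semi-major axis error [m]

n = math.sqrt(mu / a**3)            # mean motion [rad/s]
dn = -1.5 * (n / a) * da            # mean-motion difference [rad/s]
T = 2.0 * math.pi / n               # orbital period [s]
drift_per_orbit = abs(dn) * T * a   # along-track drift per orbit [m]
print(drift_per_orbit)              # ~94.2 m, i.e. 3*pi*da
```

Note that the semi-major axis itself cancels: the drift per orbit depends only on da, which is what makes the guideline usable for any orbit.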

  14. Medial Patellofemoral Ligament Reconstruction Femoral Tunnel Accuracy

    PubMed Central

    Hiemstra, Laurie A.; Kerslake, Sarah; Lafave, Mark

    2017-01-01

    Background: Medial patellofemoral ligament (MPFL) reconstruction is a procedure aimed at reestablishing the checkrein to lateral patellar translation in patients with symptomatic patellofemoral instability. Correct femoral tunnel position is thought to be crucial to successful MPFL reconstruction, but the accuracy of this statement in terms of patient outcomes has not been tested. Purpose: To assess the accuracy of femoral tunnel placement in an MPFL reconstruction cohort and to determine the correlation between tunnel accuracy and a validated disease-specific, patient-reported quality-of-life outcome measure. Study Design: Case series; Level of evidence, 4. Methods: Between June 2008 and February 2014, a total of 206 subjects underwent an MPFL reconstruction. Lateral radiographs were measured to determine the accuracy of the femoral tunnel by measuring the distance from the center of the femoral tunnel to the Schöttle point. Banff Patella Instability Instrument (BPII) scores were collected a mean of 24 months postoperatively. Results: A total of 155 (79.5%) subjects had adequate postoperative lateral radiographs and complete BPII scores. The mean duration of follow-up (±SD) was 24.4 ± 8.2 months (range, 12-74 months). Measurement from the center of the femoral tunnel to the Schöttle point resulted in 143 (92.3%) tunnels being categorized as “good” or “ideal.” There were 8 failures in the cohort, none of which occurred in malpositioned tunnels. The mean distance from the center of the MPFL tunnel to the center of the Schöttle point was 5.9 ± 4.2 mm (range, 0.5-25.9 mm). The mean postoperative BPII score was 65.2 ± 22.5 (range, 9.2-100). Pearson r correlation demonstrated no statistically significant relationship between accuracy of femoral tunnel position and BPII score (r = –0.08; 95% CI, –0.24 to 0.08). Conclusion: There was no evidence of a correlation between the accuracy of MPFL reconstruction femoral tunnel in relation to the Schöttle point and

  15. A three-dimensional model and numerical simulation regarding thermoseed mediated magnetic induction therapy conformal hyperthermia.

    PubMed

    Wang, Heng; Wu, Jianan; Zhuo, Zihan; Tang, Jintian

    2016-04-29

    In order to ensure the safety and effectiveness of magnetic induction hyperthermia in clinical applications, numerical simulations of the temperature distribution and extent of thermal damage in the targeted regions must be conducted in the preoperative treatment planning system. In this paper, three models (a thermoseed thermogenesis model, a tissue heat transfer model, and a tissue thermal damage model) were established based on the four-dimensional energy field, temperature field, and thermal damage field distributions exhibited during hyperthermia. In addition, a numerical simulation study was conducted using the Finite Volume Method (FVM), and the accuracy and reliability of the magnetic induction hyperthermia model and its numerical calculations were verified against computer simulations and experimental results. This study thus promotes the application of computing methods to magnetic induction therapy and conformal hyperthermia, and improves the accuracy of temperature field and tissue thermal damage distribution predictions.
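Tissue thermal damage models of this kind are commonly based on an Arrhenius damage integral, Omega = integral of A*exp(-Ea/(R*T(t))) dt over the temperature history. A hedged sketch follows; the kinetic parameters A and Ea are illustrative placeholders, not values from the paper.

```python
import math

A = 3.1e98    # frequency factor [1/s] (placeholder)
Ea = 6.28e5   # activation energy [J/mol] (placeholder)
R = 8.314     # universal gas constant [J/(mol*K)]

def damage(temps_kelvin, dt):
    """Accumulate the Arrhenius damage integral over a sampled
    temperature history with time step dt."""
    return sum(A * math.exp(-Ea / (R * T)) * dt for T in temps_kelvin)

# Tissue held at 43 C accumulates damage far faster than at 37 C,
# reflecting the strong temperature sensitivity of the exponential.
omega_43 = damage([316.15] * 600, dt=1.0)   # 10 min at 43 C
omega_37 = damage([310.15] * 600, dt=1.0)   # 10 min at 37 C
print(omega_43 > omega_37)  # -> True
```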

  16. The influence of various graphical and numeric trend display formats on the detection of simulated changes.

    PubMed

    Kennedy, R R; Merry, A F; Warman, G R; Webster, C S

    2009-11-01

    Integration of a large amount of information is important in anaesthesia, but there is little research to guide the development of data displays. Anaesthetists from two hospitals participated in five related screen-based simulation studies comparing various formats for the display of historical or 'trend' data. Participants were asked to indicate when they first noticed a change in each displayed variable. Accuracy and latency (i.e. delay) in the detection of changes were recorded. Latency was shorter with a graphical display of historical data than with a numeric display. Increasing the number of variables or reducing the y-axis height increased the latency of detection. If the same number of data points were included, there was no difference between graphical and numerical displays of historical data. There was no difference in accuracy between graphical and numerical displays. These results suggest that the way trend data are presented can influence the speed of detection of changes.

  17. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.

  18. Nationwide forestry applications program. Analysis of forest classification accuracy

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)

    1981-01-01

    The development of LANDSAT classification accuracy assessment techniques, and of a computerized system for assessing wildlife habitat from land cover maps are considered. A literature review on accuracy assessment techniques and an explanation for the techniques development under both projects are included along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers which were published on the results of this project are appended.

  19. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: Horizontal Accuracy 74 cm, Horizontal Precision 14 cm, Vertical Accuracy 6.6 cm, Vertical Precision 3 cm.

  20. Spatial and numerical processing in children with high and low visuospatial abilities.

    PubMed

    Crollen, Virginie; Noël, Marie-Pascale

    2015-04-01

    In the literature on numerical cognition, a strong association between numbers and space has been repeatedly demonstrated. However, only a few recent studies have been devoted to examining the consequences of low visuospatial abilities on calculation processing. In this study, we wanted to investigate whether visuospatial weakness may affect pure spatial processing as well as basic numerical reasoning. To do so, the performances of children with high and low visuospatial abilities were directly compared on different spatial tasks (the line bisection and Simon tasks) and numerical tasks (the number bisection, number-to-position, and numerical comparison tasks). Children from the low visuospatial group presented the classic Simon and SNARC (spatial numerical association of response codes) effects but showed larger deviation errors as compared with the high visuospatial group. Our results, therefore, demonstrated that low visuospatial abilities did not change the nature of the mental number line but rather led to a decrease in its accuracy.

  1. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  2. Modified Numerical Simulation Model of Blood Flow in Bend

    PubMed Central

    Liu, X; Zhou, X; Hao, X; Sang, X

    2015-01-01

    The numerical simulation model of blood flow in bends is studied in this paper. A curvature modification is applied to the blood flow model to obtain a modified model for flow in bends. The modified model is verified with a U tube. By comparing the simulation results with the experimental results obtained by measuring the flow data in the U tube, it was found that the modified model can effectively improve the prediction accuracy of blood flow data affected by the curvature effect. PMID:27398727

  3. Fast and High Accuracy Multigrid Solution of the Three Dimensional Poisson Equation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun

    1998-07-01

    We employ a fourth-order compact finite difference scheme (FOS) with the multigrid algorithm to solve the three dimensional Poisson equation. We test the influence of different orderings of the grid space and different grid-transfer operators on the convergence and efficiency of our high accuracy algorithm. Fourier smoothing analysis is conducted to show that FOS has a smaller smoothing factor than the traditional second-order central difference scheme (CDS). A new method of Fourier smoothing analysis is proposed for the partially decoupled red-black Gauss-Seidel relaxation with FOS. Numerical results are given to compare the computed accuracy and the computational efficiency of FOS with multigrid against CDS with multigrid.
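In one dimension the fourth-order compact discretization of the Poisson equation reduces to the classical Numerov scheme, and its accuracy is easy to check directly. The sketch below is an illustrative simplification (plain NumPy with a dense solve instead of the paper's 3D multigrid), verifying fourth-order convergence on a manufactured solution:

```python
import numpy as np

def solve_poisson_1d_compact(f, u_left, u_right, n):
    """Solve u'' = f on [0, 1] with Dirichlet boundary values using the
    fourth-order compact (Numerov) scheme:
        (u[i-1] - 2u[i] + u[i+1]) / h^2 = (f[i-1] + 10 f[i] + f[i+1]) / 12
    """
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    fx = f(x)
    # Tridiagonal system for the interior unknowns u[1..n-1]
    A = (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1)) / h**2
    rhs = (fx[:-2] + 10.0 * fx[1:-1] + fx[2:]) / 12.0
    rhs[0] -= u_left / h**2       # fold boundary values into the RHS
    rhs[-1] -= u_right / h**2
    u = np.empty(n + 1)
    u[0], u[-1] = u_left, u_right
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# Manufactured solution u = sin(pi x), so f = -pi^2 sin(pi x)
errs = []
for n in (16, 32):
    x, u = solve_poisson_1d_compact(
        lambda x: -np.pi**2 * np.sin(np.pi * x), 0.0, 0.0, n)
    errs.append(float(np.max(np.abs(u - np.sin(np.pi * x)))))
order = float(np.log2(errs[0] / errs[1]))
```

Halving the mesh spacing reduces the maximum error by roughly a factor of 16, the signature of fourth-order accuracy that the multigrid study exploits.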

  4. MLC positional accuracy evaluation through the Picket Fence test on EBT2 films and a 3D volumetric phantom.

    PubMed

    Antypas, Christos; Floros, Ioannis; Rouchota, Maritina; Armpilia, Christina; Lyra, Maria

    2015-03-08

    The accuracy of MLC positions during radiotherapy is important as even small positional deviations can translate into considerable dose delivery errors. This becomes crucial when radiosensitive organs are located near the treated volume and especially during IMRT, where dose gradients are steep. A test commonly conducted to measure the positional accuracy of the MLCs is the Picket Fence test. In this study two alterations of the Picket Fence test were performed and evaluated, the first one using radiochromic EBT2 films and the second one the Delta4PT diode array phantom and its software. Our results showed that EBT2 films provide a relatively fast, qualitative visual inspection of the significant leaf dispositions. When slight inaccuracies need to be revealed or precise numerical results for each leaf position are needed, Delta4PT provides the desired accuracy of 1 mm. In treatment modalities where a higher accuracy is required in the delivered dose distribution, such as in IMRT, precise numerical values of the measurements for the MLC positional inspection are required.

  5. A novel ZePoC encoder for sinusoidal signals with a predictable accuracy for an AC power standard

    NASA Astrophysics Data System (ADS)

    Vennemann, T.; Frye, T.; Liu, Z.; Kahmann, M.; Mathis, W.

    2015-11-01

    In this paper we present an analytical formulation of a Zero Position Coding (ZePoC) encoder for an AC power standard based on class-D topologies. For controlling a class-D power stage a binary signal with special spectral characteristics will be generated by this ZePoC encoder for sinusoidal signals. These spectral characteristics have a predictable accuracy within a separated baseband to keep the noise floor below a specified level. Simulation results will validate the accuracy of this novel ZePoC encoder. For a real-time implementation of the encoder on a DSP/FPGA hardware architecture a trade-off between accuracy and speed of the ZePoC algorithm has to be made. Therefore the numerical effects of different floating point formats will be analyzed.

  6. The numerical analysis of a turbulent compressible jet

    NASA Astrophysics Data System (ADS)

    Debonis, James Raymond

    2000-10-01

    A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed, including a total energy form of the energy equation. Sub-grid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems, and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions, and sub-grid scale modeling on the solution, and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that the LES accurately captures the physics of the turbulent flow. The agreement with experimental data is relatively good and much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic, and this information could lend itself to the development of improved sub-grid scale models for LES and turbulence models for RANS simulations. A two-point correlation technique was used to quantify the turbulent structures. Two-point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately ½Dj. Two-point space-time correlations were used to obtain the convection velocity of the turbulent structures, which ranged from 0.57 to 0.71 Uj.

  7. Numerical tsunami modeling and the bottom relief

    NASA Astrophysics Data System (ADS)

    Kulikov, E. A.; Gusiakov, V. K.; Ivanova, A. A.; Baranov, B. V.

    2016-11-01

    The effect of the quality of bathymetric data on the accuracy of tsunami-wave field calculations is considered. A review of the history of numerical tsunami modeling is presented. Particular emphasis is placed on models of the World Ocean bottom. It is shown that modern digital bathymetry maps, for example GEBCO, do not represent the sea bottom adequately for numerical models of wave propagation, leading to considerable errors in estimating the maximum tsunami run-ups on the coast.

  8. Accuracy of distance measurements in biplane angiography

    NASA Astrophysics Data System (ADS)

    Toennies, Klaus D.; Oishi, Satoru; Koster, David; Schroth, Gerhard

    1997-05-01

    Distance measurements of the vascular system of the brain can be derived from biplanar digital subtraction angiography (2p-DSA). The measurements are used for planning of minimally invasive surgical procedures. Our 90-degree fixed-angle G-ring angiography system has the potential of acquiring pairs of such images with high geometric accuracy. The sizes of vessels and aneurysms are estimated by applying a fast and accurate extraction method in order to select an appropriate surgical strategy. Distance computation from 2p-DSA is carried out in three steps. First, the boundary of the structure to be measured is detected based on zero-crossings and closeness to user-specified end points. Subsequently, the 3D location of the center of the structure is computed from the centers of gravity of its two projections. This location is used to reverse the magnification factor caused by the cone-shaped projection of the x-rays. Since exact measurements of possibly very small structures are crucial to the usefulness in surgical planning, we identified mechanical and computational influences on the geometry which may have an impact on the measurement accuracy. A study with phantoms is presented distinguishing between the different effects and enabling the computation of an optimal overall exactness. Comparing this optimum with results of distance measurements on phantoms whose exact size and shape are known, we found that the measurement error for structures of size 20 mm was less than 0.05 mm on average and 0.50 mm at maximum. The maximum achievable accuracy of 0.15 mm was in most cases exceeded by less than 0.15 mm. This accuracy surpasses by far the requirements for the above-mentioned surgical application. The mechanical accuracy of the fixed-angle biplanar system meets the requirements for computing a 3D reconstruction of the small vessels of the brain. It also indicates that simple measurements will be possible on less accurate systems.
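The magnification-reversal step in the measurement pipeline is plain cone-beam geometry and can be sketched in a few lines. The distances below are illustrative assumptions, not values from the paper:

```python
# Reversing the cone-beam magnification after the 3D localization step.
# An object at source-to-object distance SOD projects onto the detector at
# source-to-image distance SID with magnification M = SID / SOD, so the
# true size is the projected size divided by M.
sid = 1200.0       # mm, source-to-image (detector) distance -- illustrative
sod = 800.0        # mm, source-to-object distance from 3D center localization
projected = 30.0   # mm, size measured on the projection image
magnification = sid / sod
true_size = projected / magnification
```

This is why the 3D center localization matters: the source-to-object distance it yields fixes the magnification factor that converts measured projection sizes to physical sizes.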

  9. A new class of high accuracy TVD schemes for hyperbolic conservation laws. [Total Variation Diminishing

    NASA Technical Reports Server (NTRS)

    Chakravarthy, S. R.; Osher, S.

    1985-01-01

    A new family of high accuracy Total Variation Diminishing (TVD) schemes has been developed. Members of the family include the conventional second-order TVD upwind scheme, various other second-order accurate TVD schemes with lower truncation error, and even a third-order accurate TVD approximation. All the schemes are defined with a five-point grid bandwidth. In this paper, the new algorithms are described for scalar equations, systems, and arbitrary coordinates. Selected numerical results are provided to illustrate the new algorithms and their properties.
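The TVD property itself is easy to demonstrate on the simplest hyperbolic conservation law. The sketch below, a generic second-order MUSCL-type scheme with a minmod limiter for linear advection on a periodic grid (an illustrative textbook scheme, not the paper's specific five-point family), checks that the total variation does not grow:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: picks the smaller slope, or zero at extrema."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """One step of a second-order MUSCL scheme with minmod limiting for
    u_t + a u_x = 0 (a > 0) on a periodic grid; c = a*dt/dx is the CFL number."""
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slopes
    u_face = u + 0.5 * (1.0 - c) * s                   # upwind face values
    flux = c * u_face
    return u - (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)          # square pulse
tv0 = np.abs(np.diff(np.append(u, u[0]))).sum()        # total variation
mass0 = u.sum()
for _ in range(50):
    u = tvd_step(u, 0.5)
tv1 = np.abs(np.diff(np.append(u, u[0]))).sum()
```

The scheme is conservative (the discrete mass is unchanged) and the total variation of the square pulse never increases, whereas an unlimited second-order scheme would oscillate at the discontinuities.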

  10. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
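As a concrete instance of the class of methods discussed, the sketch below implements Kutta's classical three-stage, third-order Runge-Kutta method (a generic textbook choice, not necessarily one of the report's five examples) and confirms its order of accuracy numerically:

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's classical three-stage, third-order RK method."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h * k1 / 2.0)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + h * (k1 + 4.0 * k2 + k3) / 6.0

def integrate(f, y0, t_end, n_steps):
    t, y = 0.0, y0
    h = t_end / n_steps
    for _ in range(n_steps):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# Convergence check on y' = y, y(0) = 1: the exact value at t = 1 is e
e_coarse = abs(integrate(lambda t, y: y, 1.0, 1.0, 40) - math.e)
e_fine = abs(integrate(lambda t, y: y, 1.0, 1.0, 80) - math.e)
order = math.log2(e_coarse / e_fine)
```

Halving the step size reduces the global error by about a factor of 8, as expected for a third-order method.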

  11. Empirical Accuracies of U.S. Space Surveillance Network Reentry Predictions

    NASA Technical Reports Server (NTRS)

    Johnson, Nicholas L.

    2008-01-01

    The U.S. Space Surveillance Network (SSN) issues formal satellite reentry predictions for objects which have the potential for generating debris which could pose a hazard to people or property on Earth. These prognostications, known as Tracking and Impact Prediction (TIP) messages, are nominally distributed at daily intervals beginning four days prior to the anticipated reentry and several times during the final 24 hours in orbit. The accuracy of these messages depends on the nature of the satellite's orbit, the characteristics of the space vehicle, solar activity, and many other factors. Despite the many influences on the time and the location of reentry, a useful assessment of the accuracies of TIP messages can be derived and compared with the official accuracies included with each TIP message. This paper summarizes the results of a study of numerous uncontrolled reentries of spacecraft and rocket bodies from nearly circular orbits over a span of several years. Insights are provided into the empirical accuracies and utility of SSN TIP messages.

  12. Parameter optimization of a dual-comb ranging system by using a numerical simulation method.

    PubMed

    Wu, Guanhao; Xiong, Shilin; Ni, Kai; Zhu, Zebin; Zhou, Qian

    2015-12-14

    Dual-comb system parameters have significant impacts on the ranging accuracy. We present a theoretical model and a numerical simulation method for the parameter optimization of a dual-comb ranging system. With this method we investigate the impacts of repetition rate difference, repetition rate, and carrier-envelope-offset frequency on the ranging accuracy. Firstly, the simulation results suggest a series of discrete zones of repetition rate difference in an optimal range, which are consistent with the experimental results. Secondly, the simulation results of the repetition rate indicate that a higher repetition rate is very favorable to improve the ranging accuracy. Finally, the simulation results suggest a series of discrete optimal ranges of the carrier-envelope-offset frequency for the dual-comb system. The simulated results were verified by our experiments.

  13. Numerical Studies and Equipment Development for Single Point Incremental Forming

    NASA Astrophysics Data System (ADS)

    Marabuto, S. R.; Sena, J. I. V.; Afonso, D.; Martins, M. A. B. E.; Coelho, R. M.; Ferreira, J. A. F.; Valente, R. A. F.; de Sousa, R. J. Alves

    2011-05-01

    This paper summarizes the achievements obtained so far in the context of a research project carried out at the University of Aveiro, Portugal, from both the numerical and experimental viewpoints, concerning Single Point Incremental Forming (SPIF). On the experimental side, the general guidelines on the development of a new SPIF machine are detailed. The innovative features relate to the choice of a six-degrees-of-freedom, parallel kinematics machine with a high payload, to broaden the range of materials that can be tested and to allow higher flexibility in tool-path generation. On the numerical side, preliminary results on the simulation of SPIF processes resorting to an innovative solid-shell finite element are presented. The final target is an accurate and fast simulation of SPIF processes by means of numerical methods. Accuracy is obtained through the use of a finite element accounting for three-dimensional stress and strain fields. The developed formulation allows for an unlimited number of integration points through its thickness direction, which promotes accuracy without loss of CPU efficiency. Preliminary results and designs are shown, and discussions of the obtained solutions are provided in order to further improve the research framework.

  14. COMPARING NUMERICAL METHODS FOR ISOTHERMAL MAGNETIZED SUPERSONIC TURBULENCE

    SciTech Connect

    Kritsuk, Alexei G.; Collins, David; Norman, Michael L.; Xu Hao E-mail: dccollins@lanl.gov

    2011-08-10

    Many astrophysical applications involve magnetized turbulent flows with shock waves. Ab initio star formation simulations require a robust representation of supersonic turbulence in molecular clouds on a wide range of scales imposing stringent demands on the quality of numerical algorithms. We employ simulations of supersonic super-Alfvenic turbulence decay as a benchmark test problem to assess and compare the performance of nine popular astrophysical MHD methods actively used to model star formation. The set of nine codes includes: ENZO, FLASH, KT-MHD, LL-MHD, PLUTO, PPML, RAMSES, STAGGER, and ZEUS. These applications employ a variety of numerical approaches, including both split and unsplit, finite difference and finite volume, divergence preserving and divergence cleaning, a variety of Riemann solvers, and a range of spatial reconstruction and time integration techniques. We present a comprehensive set of statistical measures designed to quantify the effects of numerical dissipation in these MHD solvers. We compare power spectra for basic fields to determine the effective spectral bandwidth of the methods and rank them based on their relative effective Reynolds numbers. We also compare numerical dissipation for solenoidal and dilatational velocity components to check for possible impacts of the numerics on small-scale density statistics. Finally, we discuss the convergence of various characteristics for the turbulence decay test and the impact of various components of numerical schemes on the accuracy of solutions. The nine codes gave qualitatively the same results, implying that they are all performing reasonably well and are useful for scientific applications. We show that the best performing codes employ a consistently high order of accuracy for spatial reconstruction of the evolved fields, transverse gradient interpolation, conservation law update step, and Lorentz force computation. The best results are achieved with divergence-free evolution of the

  15. Thermocouple Calibration and Accuracy in a Materials Testing Laboratory

    NASA Technical Reports Server (NTRS)

    Lerch, B. A.; Nathal, M. V.; Keller, D. J.

    2002-01-01

    A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturers' tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, accuracy can be improved by as much as 6 °C, a fivefold gain over relying on manufacturers' tolerances. The results emphasize strict adherence to the defined testing protocol and the need to establish recalibration frequencies in order to maintain these levels of accuracy.

  16. Accuracy and precision of the three-dimensional assessment of the facial surface using a 3-D laser scanner.

    PubMed

    Kovacs, L; Zimmermann, A; Brockmann, G; Baurecht, H; Schwenzer-Zimmerer, K; Papadopulos, N A; Papadopoulos, M A; Sader, R; Biemer, E; Zeilhofer, H F

    2006-06-01

    Three-dimensional (3-D) recording of the surface of the human body or anatomical areas has gained importance in many medical specialties. Thus, it is important to determine scanner precision and accuracy in defined medical applications and to establish standards for the recording procedure. Here we evaluated the precision and accuracy of 3-D assessment of the facial area with the Minolta Vivid 910 3D Laser Scanner. We also investigated the influence of factors related to the recording procedure and the processing of scanner data on final results. These factors include lighting, alignment of scanner and object, the examiner, and the software used to convert measurements into virtual images. To assess scanner accuracy, we compared scanner data to those obtained by manual measurements on a dummy. Less than 7% of all results with the scanner method were outside a range of error of 2 mm when compared to corresponding reference measurements. Accuracy, thus, proved to be good enough to satisfy requirements for numerous clinical applications. Moreover, the experiments completed with the dummy yielded valuable information for optimizing recording parameters for best results. Thus, under defined conditions, precision and accuracy of surface models of the human face recorded with the Minolta Vivid 910 3D Scanner presumably can also be enhanced. Future studies will involve verification of our findings using test persons. The current findings indicate that the Minolta Vivid 910 3D Scanner might be used with benefit in medicine when recording the 3-D surface structures of the face.

  17. Accuracy of implant impression techniques.

    PubMed

    Assif, D; Marshak, B; Schmidt, A

    1996-01-01

    Three impression techniques were assessed for accuracy in a laboratory cast that simulated clinical practice. The first technique used autopolymerizing acrylic resin to splint the transfer copings. The second involved splinting of the transfer copings directly to an acrylic resin custom tray. In the third, only impression material was used to orient the transfer copings. The accuracy of stone casts with implant analogs was measured against a master framework. The fit of the framework on the casts was tested using strain gauges. The technique using acrylic resin to splint transfer copings in the impression material was significantly more accurate than the two other techniques. Stresses observed in the framework are described and discussed with suggestions to improve clinical and laboratory techniques.

  18. A high accuracy sun sensor

    NASA Astrophysics Data System (ADS)

    Bokhove, H.

    The High Accuracy Sun Sensor (HASS) is described, concentrating on the measurement principle, the CCD detector used, the construction of the sensor head, and the operation of the sensor electronics. Tests on a development model show that the main aim, a 0.01-arcsec rms stability over a 10-minute period, is closely approached. Remaining problem areas concern the sensor's sensitivity to illumination-level variations, the shielding of the detector, and the test and calibration equipment.

  19. Enhancing and evaluating diagnostic accuracy.

    PubMed

    Swets, J A; Getty, D J; Pickett, R M; D'Orsi, C J; Seltzer, S E; McNeil, B J

    1991-01-01

    Techniques that may enhance diagnostic accuracy in clinical settings were tested in the context of mammography. Statistical information about the relevant features among those visible in a mammogram and about their relative importances in the diagnosis of breast cancer was the basis of two decision aids for radiologists: a checklist that guides the radiologist in assigning a scale value to each significant feature of the images of a particular case, and a computer program that merges those scale values optimally to estimate a probability of malignancy. A test set of approximately 150 proven cases (including normals and benign and malignant lesions) was interpreted by six radiologists, first in their usual manner and later with the decision aids. The enhancing effect of these feature-analytic techniques was analyzed across subsets of cases that were restricted progressively to more and more difficult cases, where difficulty was defined in terms of the radiologists' judgements in the standard reading condition. Accuracy in both standard and enhanced conditions decreased regularly and substantially as case difficulty increased, but differentially, such that the enhancement effect grew regularly and substantially. For the most difficult case sets, the observed increases in accuracy translated into an increase of about 0.15 in sensitivity (true-positive proportion) for a selected specificity (true-negative proportion) of 0.85 or a similar increase in specificity for a selected sensitivity of 0.85. That measured accuracy can depend on case-set difficulty to different degrees for two diagnostic approaches has general implications for evaluation in clinical medicine. Comparative, as well as absolute, assessments of diagnostic performances--for example, of alternative imaging techniques--may be distorted by inadequate treatments of this experimental variable. Subset analysis, as defined and illustrated here, can be useful in alleviating the problem.

  20. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on prediction of breeding values of selection candidates by means of a high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configuration of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
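The prediction step described, ridge regression of phenotypes on markers followed by correlating predicted with true breeding values, can be sketched on simulated data. All sizes, the heritability, and the shrinkage parameter below are illustrative assumptions, not the paper's ryegrass data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, m = 400, 100, 200

# Simulated biallelic marker genotypes (coded -1/+1) and additive effects
X = rng.choice([-1.0, 1.0], size=(n_train + n_test, m))
beta = rng.normal(0.0, 1.0, m)
bv = X @ beta                                    # true breeding values
# Phenotypes of training individuals with noise giving h^2 of about 0.5
y = bv[:n_train] + rng.normal(0.0, bv[:n_train].std(), n_train)

# Ridge (GBLUP-style) estimate of marker effects on the training set;
# the shrinkage lam ~ sigma_e^2 / sigma_beta^2 is roughly m here
Xt = X[:n_train]
lam = float(m)
beta_hat = np.linalg.solve(Xt.T @ Xt + lam * np.eye(m), Xt.T @ y)

# Accuracy: correlation of predicted and true breeding values on test set
pred = X[n_train:] @ beta_hat
accuracy = float(np.corrcoef(pred, bv[n_train:])[0, 1])
```

The correlation computed on the held-out test individuals is the empirical counterpart of the accuracy that the paper's theoretical expression and proxies aim to predict.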

  1. Improving the accuracy of the discrete gradient method in the one-dimensional case.

    PubMed

    Cieśliński, Jan L; Ratkiewicz, Bogusław

    2010-01-01

    We present two numerical schemes of high accuracy for one-dimensional dynamical systems. They are modifications of the discrete gradient method and keep its advantages, including stability and conservation of the energy integral. However, their accuracy is higher by several orders of magnitude.
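The baseline discrete gradient method being modified can be sketched for a one-dimensional Hamiltonian system H = p²/2 + V(q). The form below (an illustrative textbook variant, not the authors' higher-accuracy modifications) conserves the energy integral up to the nonlinear-solver tolerance:

```python
def dg_step(q, p, dt, V, tol=1e-14, max_iter=50):
    """One step of a discrete gradient integrator for H = p^2/2 + V(q):
        q1 - q = dt * (p1 + p) / 2
        p1 - p = -dt * (V(q1) - V(q)) / (q1 - q)
    The difference quotient is the discrete gradient of V, which makes the
    scheme conserve H exactly; the implicit equations are solved here by
    fixed-point iteration."""
    q1, p1 = q + dt * p, p  # predictor
    for _ in range(max_iter):
        dq = q1 - q
        if abs(dq) > 1e-12:
            dV = (V(q1) - V(q)) / dq
        else:  # near a turning point, fall back to a central difference
            dV = (V(q + 1e-7) - V(q - 1e-7)) / 2e-7
        p1_new = p - dt * dV
        q1_new = q + dt * (p1_new + p) / 2.0
        if abs(q1_new - q1) + abs(p1_new - p1) < tol:
            return q1_new, p1_new
        q1, p1 = q1_new, p1_new
    return q1, p1

V = lambda q: 0.5 * q**2                 # harmonic oscillator potential
H = lambda q, p: 0.5 * p**2 + V(q)
q, p = 1.0, 0.0
H0 = H(q, p)
for _ in range(1000):
    q, p = dg_step(q, p, 0.1, V)
drift = abs(H(q, p) - H0)                # stays at round-off level
```

Exact energy conservation follows by telescoping: the change in p²/2 per step equals minus the change in V, so the drift over many steps is bounded by the solver tolerance rather than growing with time.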

  2. Numerical Studies of Sub-Grid Scale Processes with Special Emphasis on Discrete Sources.

    NASA Astrophysics Data System (ADS)

    Kasibhatla, Prasad Shanyi

    1988-12-01

    State-of-the-art numerical software packages used to solve differential equations utilize adaptive grid techniques. In these methods, the mesh is refined based on a posteriori error estimates. Due to the large computational time requirements for real time air pollution simulations, it is impractical to use adaptive grids. The aim of this study is to identify parameters which affect the accuracy of numerical solutions of the transport equations involving discrete sources. Based on this a grid can be chosen a priori and used for the entire simulation. Numerical results are presented for a one-dimensional problem in which the discontinuity is in the form of a Dirac delta initial condition. This initial condition is approximated using two approaches, namely, T_2 mollification and L_2 projection. These methods are compared in terms of solution accuracy and rate of convergence. Various time-integration schemes are also tested concurrently. Extensions to two- and three-dimensional problems, with sources occurring as forcing functions, are also presented. A time-split scheme and a finite element scheme are used to solve these problems. Numerical results for inert plumes emanating from discrete sources show that the ratio of the advection time scale to the turbulent diffusion time scale plays a key role in determining solution accuracy. In addition, comparisons between the volume-averaged representation of a point source and the use of an irregular grid for point source representation demonstrate that, near the source, improved results can be obtained by placing a node at the source location. Numerical results also reveal that the cross-derivative term, which appears in the governing differential equation when the wind velocity vector is not aligned along grid lines, can be ignored without significant loss of accuracy.

  3. On the accuracy of RANS simulations with DNS data

    NASA Astrophysics Data System (ADS)

    Poroseva, Svetlana V.; Colmenares F., Juan D.; Murman, Scott M.

    2016-11-01

Results of simulations conducted for incompressible planar wall-bounded turbulent flows with the Reynolds-Averaged Navier-Stokes (RANS) equations with no modeling involved are presented. Instead, all terms but the molecular diffusion are represented by data from direct numerical simulation (DNS). In the simulations, the transport equations for velocity moments through the second order (and the fourth order where data are available) are solved in a zero-pressure-gradient boundary layer over a flat plate and in a fully developed channel flow over a wide range of Reynolds numbers, using DNS data from Sillero et al., Lee and Moser, and Jeyapaul et al. The results obtained demonstrate that DNS data are the dominant source of uncertainty in such simulations (hereafter, RANS-DNS simulations). Effects of the Reynolds number, flow geometry, velocity moment order, and the uncertainty quantification technique used to collect the DNS data on the results of RANS-DNS simulations are analyzed. New criteria for uncertainty quantification in statistical data collected from DNS are proposed to guarantee data accuracy sufficient for their use in RANS equations and for turbulence model validation.

  4. Accuracy of Surgery Clerkship Performance Raters.

    ERIC Educational Resources Information Center

    Littlefield, John H.; And Others

    1991-01-01

    Interrater reliability in numerical ratings of clerkship performance (n=1,482 students) in five surgery programs was studied. Raters were classified as accurate or moderately or significantly stringent or lenient. Results indicate that increasing the proportion of accurate raters would substantially improve the precision of class rankings. (MSE)

  5. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

The general accuracy laws that govern the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft are presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is then applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
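The accuracy behaviour of such ray-tracing estimates can be sketched on a configuration factor with a known closed form: a differential element facing a coaxial disk of radius R at height h, for which F = R^2/(R^2 + h^2). The geometry and sample count below are illustrative, not taken from the paper; the binomial standard error shows the 1/sqrt(N) law.

```python
import math
import random

def mc_view_factor(R, h, n, rng):
    """Estimate the configuration factor from a differential surface
    element to a coaxial disk of radius R at height h by firing
    cosine-weighted diffuse rays and counting disk hits."""
    hits = 0
    for _ in range(n):
        u = rng.random()
        sin_t = math.sqrt(u)          # cosine-weighted polar angle
        cos_t = math.sqrt(1.0 - u)
        r_hit = h * sin_t / cos_t     # radius where the ray crosses z = h
        if r_hit <= R:
            hits += 1
    p = hits / n
    std_err = math.sqrt(p * (1.0 - p) / n)   # binomial standard error ~ 1/sqrt(n)
    return p, std_err

rng = random.Random(42)
R = h = 1.0
exact = R * R / (R * R + h * h)              # closed form: 0.5
est, err = mc_view_factor(R, h, 100_000, rng)
print(est, err, exact)
```

Quadrupling the ray count halves the standard error, which is the usual cost/accuracy trade-off of these algorithms.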

  6. Accuracy of forecasts in strategic intelligence.

    PubMed

    Mandel, David R; Barnes, Alan

    2014-07-29

The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of the forecasts were very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence, such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4-0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement.
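Calibration and discrimination can be made concrete through Murphy's decomposition of the Brier score, Brier = reliability - resolution + uncertainty, where reliability measures miscalibration and resolution measures discrimination. The toy forecast set below is invented, not the study's 1,514 forecasts.

```python
from collections import defaultdict

def brier_decomposition(probs, outcomes):
    """Murphy decomposition of the Brier score, grouping forecasts by
    their stated probability. Reliability (miscalibration) is low for
    well-calibrated forecasts; resolution (discrimination) is high when
    forecasts separate events from non-events."""
    n = len(probs)
    brier = sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / n
    base = sum(outcomes) / n
    bins = defaultdict(list)
    for p, o in zip(probs, outcomes):
        bins[p].append(o)
    reliability = sum(len(v) * (p - sum(v) / len(v)) ** 2 for p, v in bins.items()) / n
    resolution = sum(len(v) * (sum(v) / len(v) - base) ** 2 for v in bins.values()) / n
    uncertainty = base * (1.0 - base)
    return brier, reliability, resolution, uncertainty

# Toy forecasts (probability of event) and observed outcomes (1 = occurred).
probs    = [0.9, 0.9, 0.9, 0.9, 0.7, 0.7, 0.3, 0.3, 0.1, 0.1]
outcomes = [1,   1,   1,   0,   1,   0,   0,   1,   0,   0]
b, rel, res, unc = brier_decomposition(probs, outcomes)
print(b, rel, res, unc)
```

Underconfidence of the kind reported here shows up as a nonzero reliability term even when resolution is high.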

  7. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

Google Earth is a virtual globe, map, and geographical information program operated by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography, together with GIS data, onto a 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, the capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates respectively.
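A horizontal RMSE such as the reported 2.18 m is conventionally computed from checkpoint residuals as sqrt(mean(dE^2 + dN^2)), with the vertical RMSE computed from height residuals alone. The residuals below are made up for illustration; they are not the study's checkpoint data.

```python
import math

def horizontal_rmse(residuals):
    """Horizontal RMSE from per-checkpoint (dE, dN) residuals in metres:
    sqrt(mean(dE^2 + dN^2))."""
    n = len(residuals)
    return math.sqrt(sum(de * de + dn * dn for de, dn in residuals) / n)

def vertical_rmse(dh):
    """Vertical RMSE from per-checkpoint height residuals in metres."""
    return math.sqrt(sum(d * d for d in dh) / len(dh))

# Hypothetical imagery-minus-survey residuals at five checkpoints (metres).
horiz = [(1.2, -0.8), (-2.1, 0.4), (0.9, 1.6), (-1.4, -1.1), (2.0, 0.3)]
vert = [1.1, -1.8, 0.6, -1.2, 2.0]
print(horizontal_rmse(horiz), vertical_rmse(vert))
```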

  8. GPU accelerated numerical simulations of viscoelastic phase separation model.

    PubMed

    Yang, Keda; Su, Jiaye; Guo, Hongxia

    2012-07-05

We introduce a complete implementation of a viscoelastic model for numerical simulations of phase separation kinetics in dynamically asymmetric systems, such as polymer blends and polymer solutions, on a graphics processing unit (GPU) using the CUDA language, and discuss the algorithms and optimizations in detail. From studies of a polymer solution, we show that the GPU-based implementation correctly reproduces the accepted results and provides about a 190-fold speedup over a single central processing unit (CPU). Further accuracy analysis demonstrates that both single- and double-precision calculations on the GPU are sufficient to produce high-quality results in numerical simulations of the viscoelastic model. Therefore, the GPU-based viscoelastic model is very promising for studying many phase separation processes of experimental and theoretical interest that often take place on large length and time scales and are not easily addressed by a conventional implementation running on a single CPU.

  9. Numerical Relativistic Quantum Optics

    DTIC Science & Technology

    2013-11-08

Front-matter excerpt (table of contents): I. Introduction; II. Relativistic Wave Equations; III. Stationary States (A. Analytical Solutions for Coulomb Potentials; B. Numerical Solutions; ...); C. Relativistic Ionization Example; V. Computational Performance; VI. Conclusions; VII. Acknowledgements; References. Body fragment: "... peculiar result that B0 = 1 TG is a weak field. At present, such fields are observed only in connection with astrophysical phenomena [14]. ..."

  10. Numerical simulation of Bootstrap Current

    SciTech Connect

    Wu, Yanlin; White, R.B.

    1993-05-01

The neoclassical bootstrap current in toroidal systems is calculated in magnetic flux coordinates and confirmed by numerical simulation. The effects of magnetic ripple, loop voltage, and magnetic and electrostatic perturbations on the bootstrap current are studied for the cases of zero and finite plasma pressure. The numerical results are in reasonable agreement with analytical estimates.

  11. Numerical simulation of dusty plasmas

    SciTech Connect

    Winske, D.

    1995-09-01

    The numerical simulation of physical processes in dusty plasmas is reviewed, with emphasis on recent results and unresolved issues. Three areas of research are discussed: grain charging, weak dust-plasma interactions, and strong dust-plasma interactions. For each area, we review the basic concepts that are tested by simulations, present some appropriate examples, and examine numerical issues associated with extending present work.

  12. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

Some results are presented comparing the accuracy of the reduced thin-wire kernel with that of an extended kernel employing exact integration of the 1/R term of the Green's function; results are shown for simple wire structures.

  13. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI

    2016-10-01

In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is often deformed during transportation and installation, or by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was sufficient for the RCS accuracy assessment, the 3-D structure of the corner reflector would be obtained by the 3-D measuring instrument, and the RCSs of the measured 3-D structure and the corresponding ideal structure would be calculated with the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can easily be applied outdoors, avoiding the correlation among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using a distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.

  14. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
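The accuracy measures used in such assessments are computed from an error (confusion) matrix. A minimal sketch follows; the two-class matrix is invented for illustration, not the study's six-class data.

```python
def accuracy_measures(cm):
    """Thematic accuracy from an error (confusion) matrix.
    Rows = classified map labels, columns = reference labels.
    Returns overall accuracy, user's accuracy per class (row-wise,
    complement of commission error), producer's accuracy per class
    (column-wise, complement of omission error), and Cohen's kappa."""
    n = sum(sum(row) for row in cm)
    k = len(cm)
    diag = sum(cm[i][i] for i in range(k))
    row_tot = [sum(row) for row in cm]
    col_tot = [sum(col) for col in zip(*cm)]
    overall = diag / n
    users = [cm[i][i] / row_tot[i] for i in range(k)]
    producers = [cm[j][j] / col_tot[j] for j in range(k)]
    expected = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)
    kappa = (overall - expected) / (1.0 - expected)
    return overall, users, producers, kappa

# Illustrative two-class error matrix (map rows x reference columns).
cm = [[50, 10],
      [5, 35]]
overall, users, producers, kappa = accuracy_measures(cm)
print(overall, users, producers, kappa)
```

Confidence intervals like those reported (e.g. 95% CI: 95%-100% for buildings) are then attached to these point estimates from the per-class sample sizes.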

  15. Improving image accuracy of region-of-interest in cone-beam CT using prior image.

    PubMed

    Lee, Jiseoc; Kim, Jin Sung; Cho, Seungryong

    2014-03-06

In diagnostic follow-ups of diseases, such as calcium scoring in the kidney or fat content assessment in the liver using repeated CT scans, quantitatively accurate and consistent CT values are desirable at a low radiation dose to the patient. The region-of-interest (ROI) imaging technique is considered a reasonable dose reduction method in CT scans because of its shielding geometry outside the ROI. However, image artifacts in the reconstructed images caused by missing data outside the ROI may degrade overall image quality and, more importantly, can substantially decrease image accuracy within the ROI. In this study, we propose a method to increase image accuracy of the ROI and to reduce imaging radiation dose by utilizing outside-ROI data from prior scans in repeated CT applications. We performed both numerical and experimental studies to validate the proposed method. In the numerical study, we used an XCAT phantom whose liver and stomach change size from one scan to another. Image accuracy of the liver was improved by the proposed method, with the error decreasing from 44.4 HU to -0.1 HU, compared to an existing method that extrapolates data to compensate for the missing data outside the ROI. Repeated cone-beam CT (CBCT) images of a patient who underwent daily CBCT scans for radiation therapy were also used to demonstrate the performance of the proposed method experimentally. The results showed improved image accuracy inside the ROI: the magnitude of the error decreased from -73.2 HU to 18 HU, and image artifacts were effectively reduced throughout the entire image.

  16. Accuracy of gestalt perception of acute chest pain in predicting coronary artery disease

    PubMed Central

    das Virgens, Cláudio Marcelo Bittencourt; Lemos Jr, Laudenor; Noya-Rabelo, Márcia; Carvalhal, Manuela Campelo; Cerqueira Junior, Antônio Maurício dos Santos; Lopes, Fernanda Oliveira de Andrade; de Sá, Nicole Cruz; Suerdieck, Jéssica Gonzalez; de Souza, Thiago Menezes Barbosa; Correia, Vitor Calixto de Almeida; Sodré, Gabriella Sant'Ana; da Silva, André Barcelos; Alexandre, Felipe Kalil Beirão; Ferreira, Felipe Rodrigues Marques; Correia, Luís Cláudio Lemos

    2017-01-01

AIM To test the accuracy and reproducibility of gestalt in predicting obstructive coronary artery disease (CAD) in patients with acute chest pain. METHODS We studied individuals consecutively admitted to our Chest Pain Unit. At admission, investigators performed a standardized interview and recorded 14 chest pain features. Based on these features, a cardiologist blinded to other clinical characteristics made an unstructured judgment of CAD probability, both numerically and categorically. As the reference standard for testing the accuracy of gestalt, angiography was required to rule in CAD, while either angiography or a non-invasive test could be used to rule it out. To assess reproducibility, a second cardiologist performed the same procedure. RESULTS In a sample of 330 patients, the prevalence of obstructive CAD was 48%. Gestalt's numerical probability was associated with CAD, but the area under the curve of 0.61 (95%CI: 0.55-0.67) indicated a low level of accuracy. Accordingly, the categorical definition of typical chest pain had a sensitivity of 48% (95%CI: 40%-55%) and specificity of 66% (95%CI: 59%-73%), yielding a negligible positive likelihood ratio of 1.4 (95%CI: 0.65-2.0) and a negative likelihood ratio of 0.79 (95%CI: 0.62-1.02). Agreement between the two cardiologists was poor for the numerical classification (95% limits of agreement = -71% to 51%) and the categorical definition of typical pain (Kappa = 0.29; 95%CI: 0.21-0.37). CONCLUSION Clinical judgment based on a combination of chest pain features is neither accurate nor reproducible in predicting obstructive CAD in the acute setting.

  17. Numerical computation of gravitational field for general axisymmetric objects

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2016-10-01

    We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (i) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (ii) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (i) finite uniform objects covering rhombic spindles and circular toroids, (ii) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (iii) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
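Ridder's differentiation algorithm, used above to obtain the acceleration vector, extrapolates central differences of shrinking step size and keeps the tableau entry with the smallest internal error estimate. Below is a compact sketch in the style of the classic dfridr routine, exercised on sin(x) rather than an actual ring potential.

```python
import math

def ridders_derivative(f, x, h=0.5, ntab=10, con=1.4, safe=2.0):
    """Ridder's method: build a tableau of central differences with
    step sizes shrinking by `con`, Richardson-extrapolate across the
    tableau, and return the entry with the smallest error estimate."""
    con2 = con * con
    a = [[0.0] * ntab for _ in range(ntab)]
    hh = h
    a[0][0] = (f(x + hh) - f(x - hh)) / (2.0 * hh)
    ans, err = a[0][0], float("inf")
    for i in range(1, ntab):
        hh /= con
        a[0][i] = (f(x + hh) - f(x - hh)) / (2.0 * hh)
        fac = con2
        for j in range(1, i + 1):
            # eliminate successive even powers of hh
            a[j][i] = (a[j - 1][i] * fac - a[j - 1][i - 1]) / (fac - 1.0)
            fac *= con2
            errt = max(abs(a[j][i] - a[j - 1][i]), abs(a[j][i] - a[j - 1][i - 1]))
            if errt <= err:
                err, ans = errt, a[j][i]
        # stop when higher orders start amplifying round-off
        if abs(a[i][i] - a[i - 1][i - 1]) >= safe * err:
            break
    return ans, err

d, err_est = ridders_derivative(math.sin, 1.0)
print(d, err_est)   # d should be close to cos(1) = 0.5403023058681398
```

The returned error estimate is what makes the scheme attractive for differentiating a numerically integrated potential, where the noise floor is set by the quadrature accuracy.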

  18. Designing a Multi-Objective Multi-Support Accuracy Assessment of the 2001 National Land Cover Data (NLCD 2001) of the Conterminous United States

    EPA Science Inventory

    The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. ...

  19. Accurate numerical simulation of short fiber optical parametric amplifiers.

    PubMed

    Marhic, M E; Rieznik, A A; Kalogerakis, G; Braimiotis, C; Fragnito, H L; Kazovsky, L G

    2008-03-17

    We improve the accuracy of numerical simulations for short fiber optical parametric amplifiers (OPAs). Instead of using the usual coarse-step method, we adopt a model for birefringence and dispersion which uses fine-step variations of the parameters. We also improve the split-step Fourier method by exactly treating the nonlinear ellipse rotation terms. We find that results obtained this way for two-pump OPAs can be significantly different from those obtained by using the usual coarse-step fiber model, and/or neglecting ellipse rotation terms.
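The split-step Fourier method mentioned here alternates a dispersive phase applied in the frequency domain with a nonlinear phase applied in the time domain. A minimal first-order sketch follows, using a naive O(N^2) DFT so it stays self-contained; the grid and fiber parameters are illustrative, not those of the paper's OPA model, and the ellipse-rotation terms are omitted.

```python
import cmath
import math

def dft(x, sign):
    """Naive O(N^2) DFT; sign=-1 forward, +1 inverse (unnormalised)."""
    n = len(x)
    return [sum(x[m] * cmath.exp(sign * 2j * math.pi * k * m / n) for m in range(n))
            for k in range(n)]

def split_step(u, dz, steps, beta2, gamma, dt):
    """First-order split-step propagation of the scalar NLSE:
    dispersion (beta2) in the frequency domain, then Kerr nonlinearity
    (gamma) as a pointwise phase rotation in the time domain."""
    n = len(u)
    # angular frequencies on the DFT grid
    w = [2 * math.pi * (k if k < n // 2 else k - n) / (n * dt) for k in range(n)]
    disp = [cmath.exp(0.5j * beta2 * wk * wk * dz) for wk in w]
    for _ in range(steps):
        U = dft(u, -1)
        U = [Uk * dk for Uk, dk in zip(U, disp)]
        u = [uk / n for uk in dft(U, +1)]   # inverse DFT with 1/N
        u = [uk * cmath.exp(1j * gamma * abs(uk) ** 2 * dz) for uk in u]
    return u

n, dt = 32, 0.25
u0 = [cmath.exp(-((k - n / 2) * dt) ** 2) for k in range(n)]
u1 = split_step(u0, dz=0.01, steps=20, beta2=-1.0, gamma=1.0, dt=dt)
p0 = sum(abs(v) ** 2 for v in u0)
p1 = sum(abs(v) ** 2 for v in u1)
print(p0, p1)   # both sub-steps are unitary, so power is conserved
```

The paper's refinement replaces the constant `beta2` (and birefringence) of the usual coarse-step model with fine-step variation along the fiber, which for short OPAs changes the predicted gain noticeably.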

  20. Projected discrete ordinates methods for numerical transport problems

    SciTech Connect

    Larsen, E.W.

    1985-01-01

    A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.

  1. Calculation of free-fall trajectories using numerical optimization methods.

    NASA Technical Reports Server (NTRS)

    Hull, D. G.; Fowler, W. T.; Gottlieb, R. G.

    1972-01-01

    An important problem in space flight is the calculation of trajectories for nonthrusting vehicles between fixed points in a given time. A new procedure based on Hamilton's principle for solving such two-point boundary-value problems is presented. It employs numerical optimization methods to perform the extremization required by Hamilton's principle. This procedure is applied to the calculation of an Earth-Moon trajectory. The results show that the initial guesses required to obtain an iteration procedure which converges are not critical and that convergence can be obtained to any predetermined degree of accuracy.
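The shooting idea behind such two-point boundary-value problems can be sketched on a one-dimensional stand-in: bisect on the unknown initial speed of radial motion in an inverse-square field until the trajectory reaches a prescribed radius at a prescribed time. The problem constants are invented and the bisection stands in for the paper's Hamilton's-principle optimization.

```python
def rk4_final_radius(v0, t_end, dt=0.005):
    """Integrate radial motion x'' = -1/x^2 (normalised inverse-square
    gravity) from x(0) = 1 with outward speed v0; return x(t_end)."""
    def acc(y):
        return -1.0 / (y * y)
    x, v = 1.0, v0
    for _ in range(int(round(t_end / dt))):
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
        x += dt * (k1x + 2.0 * k2x + 2.0 * k3x + k4x) / 6.0
        v += dt * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
    return x

def shoot(target, t_end, lo=0.5, hi=2.0, iters=60):
    """Two-point boundary-value problem by shooting: bisect on the
    unknown initial speed until x(t_end) matches the target radius
    (x(t_end) grows monotonically with v0 on this bracket)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if rk4_final_radius(mid, t_end) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v0 = shoot(target=2.0, t_end=3.0)
residual = rk4_final_radius(v0, 3.0) - 2.0
print(v0, residual)
```

As in the paper's finding, the bracket endpoints here are far from critical: any interval whose endpoints undershoot and overshoot the target converges to the same initial speed.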

  2. Numerical simulation in alternating current field measurement inducer design

    NASA Astrophysics Data System (ADS)

    Zhou, Zhixiong; Zheng, Wenpei

    2017-02-01

    The present work develops a numerical simulation model to evaluate the magnetic field perturbation of a twin coil alternating current field measurement (ACFM) inducer passing above a surface-breaking crack for the purpose of enhanced crack detection. Model predictions show good agreement with experimental data, verifying the accuracy of the model. The model includes the influence of various parameters, such as core dimensions and core positions on the perturbed magnetic field above a crack. Optimized design parameters for a twin coil inducer are given according to the analysis results, which provide for a greatly improved detection effect.

  3. A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.

    PubMed

    Ling, Hong; Luo, Ercang; Dai, Wei

    2006-12-22

Thermoacoustic prime movers can generate pressure oscillations without any moving parts through the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in the paper. First, a four-port network method is used to build a transcendental equation in the complex frequency, which serves as a criterion for judging whether the temperature distribution of the whole thermoacoustic system is correct for a given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. It is shown that the numerical simulation code runs robustly and outputs the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE), showing that the numerical simulation agrees with the experimental results with acceptable accuracy.
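The four-port criterion amounts to locating a complex frequency at which a transcendental characteristic function vanishes. A secant iteration in the complex plane is one generic way to do this; the sketch below uses a toy function with a known root as a stand-in for the engine's network determinant.

```python
def complex_secant(f, z0, z1, tol=1e-12, max_iter=50):
    """Secant iteration in the complex plane: finds z with f(z) ~ 0,
    e.g. the complex eigenfrequency zeroing a network determinant.
    No derivative of f is required."""
    f0, f1 = f(z0), f(z1)
    for _ in range(max_iter):
        if abs(f1 - f0) == 0.0:
            break
        z2 = z1 - f1 * (z1 - z0) / (f1 - f0)
        z0, f0, z1, f1 = z1, f1, z2, f(z2)
        if abs(f1) < tol:
            break
    return z1

# Toy "characteristic equation" with a known complex root at z = i.
root = complex_secant(lambda z: z * z + 1.0, 0.9j, 1.1j)
print(root)
```

In an engine model the real part of the converged frequency gives the oscillation frequency and the imaginary part the growth or damping rate, which is what makes the root a self-excitation criterion.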

  4. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  5. Modeling individual differences in response time and accuracy in numeracy.

    PubMed

    Ratcliff, Roger; Thompson, Clarissa A; McKoon, Gail

    2015-04-01

In the study of numeracy, some hypotheses have been based on response time (RT) as a dependent variable and some on accuracy, and considerable controversy has arisen about the presence or absence of correlations between RT and accuracy, between RT or accuracy and individual differences like IQ and math ability, and between various numeracy tasks. In this article, we show that an integration of the two dependent variables is required, which we accomplish with a theory-based model of decision making. We report data from four tasks: numerosity discrimination, number discrimination, memory for two-digit numbers, and memory for three-digit numbers. Accuracy correlated across tasks, as did RTs. However, the negative correlations that might be expected between RT and accuracy were not obtained; if a subject was accurate, it did not mean that they were fast (and vice versa). When the diffusion decision-making model was applied to the data (Ratcliff, 1978), we found significant correlations across the tasks between the quality of the numeracy information (drift rate) driving the decision process and between the speed/accuracy criterion settings, suggesting that similar numeracy skills and similar speed-accuracy settings are involved in the four tasks. In the model, accuracy is related to drift rate and RT is related to speed-accuracy criteria, but drift rate and criteria are not related to each other across subjects. This provides a theoretical basis for understanding why negative correlations were not obtained between accuracy and RT. We also manipulated criteria by instructing subjects to maximize either speed or accuracy, but still found correlations between the criteria settings between and within tasks, suggesting that the settings may represent an individual trait that can be modulated but not equated across subjects. Our results demonstrate that a decision-making model may provide a way to reconcile inconsistent and sometimes contradictory results in numeracy research.
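The diffusion model's separation of evidence quality (drift rate, which drives accuracy) from response caution (boundary setting, which drives RT) can be sketched with a seeded random-walk simulation. The parameters below are illustrative, not fitted values from the study.

```python
import math
import random

def simulate_ddm(drift, bound, n_trials, rng, sigma=1.0, dt=0.01, max_steps=100_000):
    """Simulate a diffusion decision model: evidence random-walks from 0
    with the given drift until it crosses +bound (correct) or -bound
    (error). Returns (accuracy, mean decision time in model units)."""
    correct = 0
    total_time = 0.0
    noise = sigma * math.sqrt(dt)
    for _ in range(n_trials):
        x, steps = 0.0, 0
        while abs(x) < bound and steps < max_steps:
            x += drift * dt + noise * rng.gauss(0.0, 1.0)
            steps += 1
        correct += x >= bound
        total_time += steps * dt
    return correct / n_trials, total_time / n_trials

rng = random.Random(7)
acc_lo, rt_lo = simulate_ddm(drift=0.5, bound=1.0, n_trials=2000, rng=rng)
acc_hi, rt_hi = simulate_ddm(drift=1.5, bound=1.0, n_trials=2000, rng=rng)
print(acc_lo, acc_hi, rt_lo, rt_hi)
```

Raising the drift raises accuracy at a fixed boundary, while widening the boundary trades speed for accuracy; this is why, across subjects who differ on both parameters, accuracy and RT need not correlate negatively.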

  6. Improving metacomprehension accuracy in an undergraduate course context.

    PubMed

    Wiley, Jennifer; Griffin, Thomas D; Jaeger, Allison J; Jarosz, Andrew F; Cushen, Patrick J; Thiede, Keith W

    2016-12-01

Students tend to have poor metacomprehension when learning from text, meaning they are not able to distinguish between what they have understood well and what they have not. Although there are a good number of studies that have explored comprehension monitoring accuracy in laboratory experiments, fewer studies have explored this in authentic course contexts. This study investigated the effect of an instructional condition that encouraged comprehension-test-expectancy and self-explanation during study on metacomprehension accuracy in the context of an undergraduate course in research methods. Results indicated that when students received this instructional condition, relative metacomprehension accuracy was better than in a comparison condition. In addition, differences were also seen in absolute metacomprehension accuracy measures, strategic study behaviors, and learning outcomes. The results of the current study demonstrate that a condition that has improved relative metacomprehension accuracy in laboratory contexts may have value in real classroom contexts as well.
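Relative metacomprehension accuracy is commonly indexed by a Goodman-Kruskal gamma correlation between a student's predicted and actual test performance across texts (the assumption that gamma is the measure used here is ours; the abstract does not say). A small sketch:

```python
def goodman_kruskal_gamma(judgments, scores):
    """Gamma correlation: (concordant - discordant) / (concordant +
    discordant) over all pairs, ignoring ties. +1 means predictions
    perfectly order actual performance; -1 means perfectly reversed."""
    concordant = discordant = 0
    n = len(judgments)
    for i in range(n):
        for j in range(i + 1, n):
            prod = (judgments[i] - judgments[j]) * (scores[i] - scores[j])
            if prod > 0:
                concordant += 1
            elif prod < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0
    return (concordant - discordant) / (concordant + discordant)

# Perfectly ordered predictions give gamma = 1; reversed give -1.
print(goodman_kruskal_gamma([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))
print(goodman_kruskal_gamma([5, 4, 3, 2, 1], [10, 20, 30, 40, 50]))
```

Absolute accuracy, by contrast, compares the magnitudes of predicted and obtained scores (e.g. mean signed or absolute deviation), which is why the two measures can move independently.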

  7. Accuracy of Stokes integration for geoid computation

    NASA Astrophysics Data System (ADS)

    Ismail, Zahra; Jamet, Olivier; Altamimi, Zuheir

    2014-05-01

Geoid determination by the remove-compute-restore (RCR) technique involves the application of Stokes's integral to reduced gravity anomalies. Reduced gravity anomalies are obtained through interpolation after removing the low-degree gravity signal given by a spherical harmonic model and the high frequencies from topographical effects, and they cover the spectrum above degrees 150-200. Stokes's integral is truncated to a limited region around the computation point, producing an error that can be reduced by a modification of Stokes's kernel. We study the accuracy of Stokes integration on synthetic signals of various frequency ranges, produced with EGM2008 spherical harmonic coefficients up to degree 2000. We analyse the integration error according to the frequency range of the signal, the resolution of the gravity anomaly grid, and the radius of Stokes integration. The study shows that the behaviour of the relative errors is frequency independent. The standard Stokes kernel is, however, insufficient to produce 1 cm geoid accuracy without removal of the major part of the gravity signal up to degree 600. Integration over an area of radius greater than 3 degrees does not improve the accuracy further. The results are compared to a similar experiment using the modified Stokes kernel formula (Ellmann 2004, Sjöberg 2003). References: Ellmann, A. (2004). The geoid for the Baltic countries determined by the least-squares modification of Stokes's formula. Sjöberg, L. E. (2003). A general model of modifying Stokes's formula and its least-squares solution. Journal of Geodesy, 77, 459-464.

  8. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.

  9. Influence of Hydraulic Design on Stability and on Pressure Pulsations in Francis Turbines at Overload, Part Load and Deep Part Load based on Numerical Simulations and Experimental Model Test Results

    NASA Astrophysics Data System (ADS)

    Magnoli, M. V.; Maiwald, M.

    2014-03-01

    Francis turbines have been running more and more frequently in part load conditions, in order to satisfy the new market requirements for more dynamic and flexible energy generation, ancillary services and grid regulation. The turbines should be able to be operated for longer durations with flows below the optimum point, going from part load to deep part load and even speed-no-load. These operating conditions are characterised by important unsteady flow phenomena taking place at the draft tube cone and in the runner channels, in the respective cases of part load and deep part load. The current expectations are that new Francis turbines present appropriate hydraulic stability and moderate pressure pulsations at overload, part load, deep part load and speed-no-load with high efficiency levels at normal operating range. This study presents series of investigations performed by Voith Hydro with the objective to improve the hydraulic stability of Francis turbines at overload, part load and deep part load, reduce pressure pulsations and enlarge the know-how about the transient fluid flow through the turbine at these challenging conditions. Model test measurements showed that distinct runner designs were able to influence the pressure pulsation level in the machine. Extensive experimental investigations focused on the runner deflector geometry, on runner features and how they could reduce the pressure oscillation level. The impact of design variants and machine configurations on the vortex rope at the draft tube cone at overload and part load and on the runner channel vortex at deep part load were experimentally observed and evaluated based on the measured pressure pulsation amplitudes. Numerical investigations were employed for improving the understanding of such dynamic fluid flow effects. 
As an example of the design and experimental investigations, model test observations and pressure pulsation curves for Francis machines in the mid specific speed range, around nqopt = 50 min

  10. An analysis of the accuracy of a parameter optimization. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Baram, Y.

    1974-01-01

    The numerical operations involved in a currently used optimization technique are discussed and analyzed with special attention to the numerical accuracy. Alternative methods for deriving linear system transfer functions, finding the relationships between the transfer function coefficients and the design parameters, and solving a matrix equation are presented for more accurate and cost effective solutions.

  11. Numerical recipes for mold filling simulation

    SciTech Connect

    Kothe, D.; Juric, D.; Lam, K.; Lally, B.

    1998-07-01

    Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue to transpire for the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques, must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper, hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous, so is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.

  12. Recent advances in numerical PDEs

    NASA Astrophysics Data System (ADS)

    Zuev, Julia Michelle

    In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions s_ij(u, v) = Σ_mn h_mn H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this NxM system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the
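The edge oscillations described for constant-shape-parameter RBF interpolation are easy to reproduce in a few lines. The sketch below is not code from the thesis; it is a minimal NumPy illustration of global Gaussian RBF interpolation of Runge's function, with a helper that accepts either a constant or a spatially variable shape parameter (the node count and eps value are illustrative assumptions):

```python
import numpy as np

def rbf_interpolate(x_nodes, y_nodes, x_eval, eps):
    """Gaussian RBF interpolation; eps may be a scalar (constant shape
    parameter) or an array with one value per center (spatially variable)."""
    eps = np.broadcast_to(eps, x_nodes.shape)
    # Interpolation matrix A_ij = exp(-(eps_j * |x_i - x_j|)^2)
    A = np.exp(-(eps[None, :] * (x_nodes[:, None] - x_nodes[None, :])) ** 2)
    lam = np.linalg.solve(A, y_nodes)          # expansion coefficients
    B = np.exp(-(eps[None, :] * (x_eval[:, None] - x_nodes[None, :])) ** 2)
    return B @ lam

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)        # Runge's function
x_n = np.linspace(-1.0, 1.0, 21)               # equispaced nodes
x_e = np.linspace(-1.0, 1.0, 401)
u = rbf_interpolate(x_n, f(x_n), x_e, eps=5.0)
print(np.max(np.abs(u - f(x_e))))              # max interpolation error
```

Passing an array for `eps` (e.g. larger values near the boundaries) is the hook for experimenting with the variable-shape-parameter idea the thesis studies.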

  13. Direct Numerical Simulation of Automobile Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kurbatskii, Konstantin; Tam, Christopher K. W.

    2000-01-01

    The Navier-Stokes equations are solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R_δ* < 3400, the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.

  14. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient in the calculation of higher order radiative effects, and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated with good accuracy in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)

  15. Direct numerical simulation of incompressible axisymmetric flows

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick

    1994-01-01

    In the present work, we propose to conduct direct numerical simulations (DNS) of incompressible turbulent axisymmetric jets and wakes. The objectives of the study are to understand the fundamental behavior of axisymmetric jets and wakes, which are perhaps the most technologically relevant free shear flows (e.g. combustor injectors, propulsion jets). Among the data to be generated are various statistical quantities of importance in turbulence modeling, like the mean velocity, turbulent stresses, and all the terms in the Reynolds-stress balance equations. In addition, we will be interested in the evolution of large-scale structures that are common in free shear flow. The axisymmetric jet or wake is also a good problem in which to try the newly developed b-spline numerical method. Using b-splines as interpolating functions in the non-periodic direction offers many advantages. B-splines have local support, which leads to sparse matrices that can be efficiently stored and solved. Also, they offer spectral-like accuracy and are C^(O-1) continuous, where O is the order of the spline used; this means that derivatives of the velocity such as the vorticity are smoothly and accurately represented. For purposes of validation against existing results, the present code will also be able to simulate internal flows (ones that require a no-slip boundary condition). Implementation of the no-slip boundary condition is trivial in the context of b-splines.
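The smooth-derivative property of splines mentioned above can be demonstrated generically (this is not the solver described in the abstract; it is a SciPy illustration with an arbitrary node count): the derivative of a B-spline interpolant is itself a B-spline, so quantities like the vorticity come out smooth rather than piecewise-jumpy.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0.0, 2.0 * np.pi, 30)
spl = make_interp_spline(x, np.sin(x), k=3)   # cubic B-spline interpolant
dspl = spl.derivative()                       # derivative is again a B-spline

xx = np.linspace(0.0, 2.0 * np.pi, 300)
err = np.max(np.abs(dspl(xx) - np.cos(xx)))   # smooth, accurate derivative
print(err)
```

With 30 nodes the derivative of the cubic interpolant already tracks cos(x) to a few parts in a thousand, everywhere between the nodes.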

  16. A benchmark study of numerical schemes for one-dimensional arterial blood flow modelling.

    PubMed

    Boileau, Etienne; Nithiarasu, Perumal; Blanco, Pablo J; Müller, Lucas O; Fossan, Fredrik Eikeland; Hellevik, Leif Rune; Donders, Wouter P; Huberts, Wouter; Willemet, Marie; Alastruey, Jordi

    2015-10-01

    Haemodynamical simulations using one-dimensional (1D) computational models exhibit many of the features of the systemic circulation under normal and diseased conditions. Recent interest in verifying 1D numerical schemes has led to the development of alternative experimental setups and the use of three-dimensional numerical models to acquire data not easily measured in vivo. In most studies to date, only one particular 1D scheme is tested. In this paper, we present a systematic comparison of six commonly used numerical schemes for 1D blood flow modelling: discontinuous Galerkin, locally conservative Galerkin, Galerkin least-squares finite element method, finite volume method, finite difference MacCormack method and a simplified trapezium rule method. Comparisons are made in a series of six benchmark test cases with an increasing degree of complexity. The accuracy of the numerical schemes is assessed by comparison with theoretical results, three-dimensional numerical data in compatible domains with distensible walls or experimental data in a network of silicone tubes. Results show a good agreement among all numerical schemes and their ability to capture the main features of pressure, flow and area waveforms in large arteries. All the information used in this study, including the input data for all benchmark cases, experimental data where available and numerical solutions for each scheme, is made publicly available online, providing a comprehensive reference data set to support the development of 1D models and numerical schemes.

  17. Accuracy of the correlation method of the thermal neutron absorption cross-section determination for rocks

    NASA Astrophysics Data System (ADS)

    Krynicka, Ewa

    1995-08-01

    The influence of various random errors on the accuracy of thermal neutron absorption cross-sections determined by a correlation method is discussed. The accuracy is considered either as an absolute accuracy, when all experimental errors are taken into account, or as an experimental assay accuracy, when the reference moderator parameters are assumed as invariant data fixed for all experiments. The estimated accuracy is compared with the accuracy of results obtained for the same rock sample by Czubek's measurement method.

  18. Deterministic numerical solutions of the Boltzmann equation using the fast spectral method

    NASA Astrophysics Data System (ADS)

    Wu, Lei; White, Craig; Scanlon, Thomas J.; Reese, Jason M.; Zhang, Yonghao

    2013-10-01

    The Boltzmann equation describes the dynamics of rarefied gas flows, but the multidimensional nature of its collision operator poses a real challenge for its numerical solution. In this paper, the fast spectral method [36], originally developed by Mouhot and Pareschi for the numerical approximation of the collision operator, is extended to deal with other collision kernels, such as those corresponding to the soft, Lennard-Jones, and rigid attracting potentials. The accuracy of the fast spectral method is checked by comparing our numerical solutions of the space-homogeneous Boltzmann equation with the exact Bobylev-Krook-Wu solutions for a gas of Maxwell molecules. It is found that the accuracy is improved by replacing the trapezoidal rule with Gauss-Legendre quadrature in the calculation of the kernel mode, and the conservation of momentum and energy is ensured by the Lagrangian multiplier method without loss of spectral accuracy. The relaxation-to-equilibrium processes of different collision kernels with the same value of shear viscosity are then compared; the numerical results indicate that different forms of the collision kernels can be used as long as the shear viscosity (not only its value, but also its temperature dependence) is recovered. An iteration scheme is employed to obtain stationary solutions of the space-inhomogeneous Boltzmann equation, where the numerical errors decay exponentially. Four classical benchmarking problems are investigated: the normal shock wave, and the planar Fourier/Couette/force-driven Poiseuille flows. For normal shock waves, our numerical results are compared with a finite difference solution of the Boltzmann equation for hard sphere molecules, experimental data, and molecular dynamics simulation of argon using the realistic Lennard-Jones potential. For planar Fourier/Couette/force-driven Poiseuille flows, our results are compared with the direct simulation Monte Carlo method. Excellent agreement is observed in all test cases.
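The quadrature swap the abstract reports (trapezoidal rule → Gauss-Legendre) pays off on any smooth integrand. The sketch below uses an arbitrary stand-in integrand, not an actual kernel mode, to compare the two rules at equal cost (16 function evaluations each):

```python
import math
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equispaced points."""
    x = np.linspace(a, b, n)
    y = f(x)
    h = (b - a) / (n - 1)
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre rule mapped from [-1, 1] to [a, b]."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * f(x))

f = lambda x: np.exp(-x**2)                       # smooth stand-in integrand
exact = 0.5 * math.sqrt(math.pi) * math.erf(2.0)  # integral of f on [0, 2]
err_trap = abs(trapezoid(f, 0.0, 2.0, 16) - exact)
err_gl = abs(gauss_legendre(f, 0.0, 2.0, 16) - exact)
print(err_trap, err_gl)   # Gauss-Legendre reaches near machine precision
```

For smooth integrands Gauss-Legendre converges spectrally while the trapezoidal rule is only second order, which is exactly the kind of gain reported for the kernel-mode calculation.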

  19. Convergence and accuracy of kernel-based continuum surface tension models

    SciTech Connect

    Williams, M.W.; Kothe, D.B.; Puckett, E.G.

    1998-12-01

    Numerical models for flows of immiscible fluids bounded by topologically complex interfaces possessing surface tension inevitably start with an Eulerian formulation. Here the interface is represented as a color function that abruptly varies from one constant value to another through the interface. This transition region, where the color function varies, is a thin O(h) band along the interface where surface tension forces are applied in continuum surface tension models. Although these models have been widely used since the introduction of the popular CSF method [BKZ92], properties such as absolute accuracy and uniform convergence are often not exhibited in interfacial flow simulations. These properties are necessary if surface tension-driven flows are to be reliably modeled, especially in three dimensions. Accuracy and convergence remain elusive because of difficulties in estimating first and second order spatial derivatives of color functions with abrupt transition regions. These derivatives are needed to approximate interface topology such as the unit normal and mean curvature. Modeling challenges are also presented when formulating the actual surface tension force and its local variation using numerical delta functions. In the following they introduce and incorporate kernels and convolution theory into continuum surface tension models. Here they convolve the discontinuous color function into a mollified function that can support accurate first and second order spatial derivatives. Design requirements for the convolution kernel and a new hybrid mix of convolution and discretization are discussed. The resulting improved estimates for interface topology, numerical delta functions, and surface force distribution are evidenced in an equilibrium static drop simulation where numerically-induced artificial parasitic currents are greatly mitigated.
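The convolution idea can be sketched in one dimension: a discontinuous color function is mollified by a compactly supported kernel so that first derivatives (the numerical delta function) become well defined. The kernel shape and radius below are illustrative assumptions, not the paper's kernel:

```python
import numpy as np

h = 0.01
x = np.arange(-1.0, 1.0 + h / 2, h)
c = (x > 0).astype(float)                   # discontinuous color function

r = 4 * h                                   # kernel radius (assumed choice)
s = np.arange(-r, r + h / 2, h)
K = np.maximum(0.0, 1.0 - (s / r) ** 2) ** 2
K /= K.sum()                                # discrete normalisation

c_smooth = np.convolve(c, K, mode="same")   # mollified color function
delta = np.gradient(c_smooth, h)            # smooth numerical delta function

interior = np.abs(x) < 0.5                  # stay away from the domain edges
print(np.sum(delta[interior]) * h)          # integrates to ~1 (the jump height)
```

The mollified function supports accurate first (and, with a smoother kernel, second) differences, which is what the curvature and surface-force estimates in a continuum surface tension model need.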

  20. Accuracy of the post-Newtonian approximation for extreme mass ratio inspirals from a black-hole perturbation approach

    NASA Astrophysics Data System (ADS)

    Sago, Norichika; Fujita, Ryuichi; Nakano, Hiroyuki

    2016-05-01

    We revisit the accuracy of the post-Newtonian (PN) approximation and its region of validity for quasicircular orbits of a point particle in the Kerr spacetime, by using an analytically known highest post-Newtonian order gravitational energy flux and accurate numerical results in the black hole perturbation approach. It is found that the regions of validity become larger for higher PN order results, although there are several local maxima in the regions of validity for relatively low PN order results. This might imply that higher PN order calculations are also encouraged for comparable-mass binaries.

  1. Numerical solution of the Rosenau-KdV-RLW equation by using RBFs collocation method

    NASA Astrophysics Data System (ADS)

    Korkmaz, Bahar; Dereli, Yilmaz

    2016-04-01

    In this study, a meshfree method based on collocation with radial basis functions (RBFs) is proposed to solve numerically an initial-boundary value problem for the Rosenau-KdV-regularized long-wave (RLW) equation. Numerical values of the invariants of the motion are computed to examine the fundamental conservative properties of the equation. Computational experiments for the simulation of solitary waves examine the accuracy of the scheme in terms of the error norms L2 and L∞. A linear stability analysis is carried out to determine whether the present method is stable. The scheme is unconditionally stable and second-order convergent. The obtained results are compared with the analytical solution and some other earlier works in the literature. The presented results indicate the accuracy and efficiency of the method.

  2. A Comparison of Metamodeling Techniques via Numerical Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2016-01-01

    This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
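As a flavour of technique (i), the least-squares prediction interval, here is a minimal single input-single output sketch. The linear data-generating mechanism, noise level, and the normal-approximation quantile 1.96 are all illustrative assumptions, not choices from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Data Generating Mechanism: linear trend plus Gaussian noise
x = np.linspace(0.0, 1.0, 40)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.2, x.size)

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (x.size - 2)              # residual variance estimate

x0 = np.array([1.0, 0.5])                      # predict at x = 0.5
var_pred = s2 * (1.0 + x0 @ np.linalg.solve(X.T @ X, x0))
half = 1.96 * np.sqrt(var_pred)                # ~95% interval, normal approx.
lo, hi = x0 @ beta - half, x0 @ beta + half
print(lo, hi)
```

Reliability in the paper's sense is then the frequency with which fresh observations from the same mechanism land inside [lo, hi], which can be checked by resampling.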

  3. The numerical calculation of laminar boundary-layer separation

    NASA Technical Reports Server (NTRS)

    Klineberg, J. M.; Steger, J. L.

    1974-01-01

    Iterative finite-difference techniques are developed for integrating the boundary-layer equations, without approximation, through a region of reversed flow. The numerical procedures are used to calculate incompressible laminar separated flows and to investigate the conditions for regular behavior at the point of separation. Regular flows are shown to be characterized by an integrable saddle-type singularity that makes it difficult to obtain numerical solutions which pass continuously into the separated region. The singularity is removed and continuous solutions ensured by specifying the wall shear distribution and computing the pressure gradient as part of the solution. Calculated results are presented for several separated flows and the accuracy of the method is verified. A computer program listing and complete solution case are included.

  4. On Some Numerical Dissipation Schemes

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Radespiel, R.; Turkel, E.

    1998-01-01

    Several schemes for introducing an artificial dissipation into a central difference approximation to the Euler and Navier Stokes equations are considered. The focus of the paper is on the convective upwind and split pressure (CUSP) scheme, which is designed to support single interior point discrete shock waves. This scheme is analyzed and compared in detail with scalar dissipation and matrix dissipation (MATD) schemes. Resolution capability is determined by solving subsonic, transonic, and hypersonic flow problems. A finite-volume discretization and a multistage time-stepping scheme with multigrid are used to compute solutions to the flow equations. Numerical solutions are also compared with either theoretical solutions or experimental data. For transonic airfoil flows the best accuracy on coarse meshes for aerodynamic coefficients is obtained with a simple MATD scheme. The coarse-grid accuracy for the original CUSP scheme is improved by modifying the limiter function used with the scheme, giving comparable accuracy to that obtained with the MATD scheme. The modifications reduce the background dissipation and provide control over the regions where the scheme can become first order.

  5. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  6. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  7. A method for improving time-stepping numerics

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-04-01

    In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
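The RAW modification really is close to a one-line change on top of the RA filter. The sketch below is not model code from any of the listed GCMs; it is a toy oscillation equation with illustrative parameter choices (ν = 0.2, α = 0.53), showing the amplitude damping of the RA filter (the α = 1 special case) disappearing under RAW:

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog with the Robert-Asselin-Williams (RAW) filter.
    alpha = 1.0 recovers the classical RA filter."""
    xm = x0                         # filtered x^{n-1}
    xc = x0 + dt * f(x0)            # x^n from a forward-Euler start
    for _ in range(nsteps - 1):
        xp = xm + 2.0 * dt * f(xc)              # leapfrog step
        d = 0.5 * nu * (xm - 2.0 * xc + xp)     # filter displacement
        xm = xc + alpha * d                     # filtered current value
        xc = xp + (alpha - 1.0) * d             # RAW also nudges x^{n+1}
    return xc

omega = 1.0
f = lambda x: 1j * omega * x        # test oscillation dx/dt = i*omega*x
dt, n = 0.1, 500
amp_ra = abs(leapfrog_raw(f, 1.0 + 0j, dt, n, alpha=1.0))
amp_raw = abs(leapfrog_raw(f, 1.0 + 0j, dt, n, alpha=0.53))
print(amp_ra, amp_raw)              # RA damps the amplitude; RAW preserves it
```

With α = 1 the `(alpha - 1.0) * d` correction vanishes and the scheme reduces to leapfrog plus RA, so the modification is literally the extra nudge applied to x^{n+1}.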

  8. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and the luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insights into the physical origins of various elemental error sources and allows an estimate of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and for the upper bounds of the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flows is given to illustrate uncertainty estimates in PSP measurements.

  9. Camera Calibration Accuracy at Different UAV Flying Heights

    NASA Astrophysics Data System (ADS)

    Yusoff, A. R.; Ariff, M. F. M.; Idris, K. M.; Majid, Z.; Chong, A. K.

    2017-02-01

    Unmanned Aerial Vehicles (UAVs) can be used to acquire highly accurate data in deformation surveys, whereby low-cost digital cameras are commonly used in UAV mapping. Thus, camera calibration is considered important for obtaining high-accuracy UAV mapping with low-cost digital cameras. The main focus of this study was to calibrate the UAV camera at different camera distances and check the measurement accuracy. The scope of this study included camera calibration in the laboratory and on the field, and a UAV image mapping accuracy assessment using the calibration parameters obtained at different camera distances. The camera distances used for the calibration image acquisition and mapping accuracy assessment were 1.5 metres in the laboratory, and 15 and 25 metres on the field, using a Sony NEX6 digital camera. A large calibration field and a portable calibration frame were used as the tools for the camera calibration and for checking the accuracy of the measurement at different camera distances. The bundle adjustment concept was applied in the Australis software to perform the camera calibration and accuracy assessment. The results showed that a camera distance of 25 metres is the optimum object distance, as it yielded the best accuracy in both the laboratory and outdoor mapping. In conclusion, camera calibration at several camera distances should be applied to achieve better mapping accuracy, and the best camera parameters for UAV image mapping should be selected for highly accurate mapping measurement.

  10. Modeling versus accuracy in EEG and MEG data

    SciTech Connect

    Mosher, J.C.; Huang, M.; Leahy, R.M.; Spencer, M.E.

    1997-07-30

    The widespread availability of high-resolution anatomical information has placed a greater emphasis on accurate electroencephalography and magnetoencephalography (collectively, E/MEG) modeling. A more accurate representation of the cortex, inner skull surface, outer skull surface, and scalp should lead to a more accurate forward model and hence improve inverse modeling efforts. The authors examine a few topics in this paper that highlight some of the problems of forward modeling, then discuss the impacts these results have on the inverse problem. The authors begin by assuming a perfect head model, that of the sphere, then show the lower bounds on localization accuracy of dipoles within this perfect forward model. For more realistic anatomy, the boundary element method (BEM) is a common numerical technique for solving the boundary integral equations. For a three-layer BEM, the computational requirements can be too intensive for many inverse techniques, so they examine a few simplifications. They quantify errors in generating this forward model by defining a regularized percentage error metric. The authors then apply this metric to a single layer boundary element solution, a multiple sphere approach, and the common single sphere model. They conclude with an MEG localization demonstration on a novel experimental human phantom, using both BEM and multiple spheres.

  11. [History, accuracy and precision of SMBG devices].

    PubMed

    Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V

    2003-04-01

    Self-monitoring of blood glucose started only fifty years ago. Until then, metabolic control was evaluated by means of qualitative urinary glucose measurements, often of poor reliability. Reagent strips were the first semi-quantitative tests to monitor blood glucose, and in the late seventies meters were launched on the market. Initially the use of such devices was intended for medical staff, but thanks to improvements in handiness they became more and more suitable for patients and are now a necessary tool for self-blood glucose monitoring. Advanced technologies first allowed the development of photometric measurements and, more recently, electrochemical ones. In the nineties, improvements were made mainly in meter miniaturisation, reduction of reaction and reading times, and simplification of blood sampling and capillary blood application. Although concern for accuracy and precision was at the heart of considerations from the beginning of self-blood glucose monitoring, the recommendations of diabetology societies only appeared in the late eighties. Now, the French drug agency AFSSAPS requires meters to be evaluated before any market launch. According to recent publications, very few meters meet the reliability criteria set up by diabetology societies in the late nineties. Finally, because devices may be handled by numerous persons in hospitals, the use of meters as a possible source of nosocomial infections has recently been questioned and is subject to very strict guidelines published by AFSSAPS.

  12. Quantitative analysis of numerical solvers for oscillatory biomolecular system models

    PubMed Central

    Quo, Chang F; Wang, May D

    2008-01-01

    Background: This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges, from 10^-15 to 10^10, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e., in terms of function evaluations, partial derivatives, LU decompositions, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results: The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines for selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions: For any given biomolecular model, by building a library of numerical solvers with a quantitative performance assessment metric, we show that it is possible
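
    The stiff-versus-non-stiff trade-off discussed in this abstract can be sketched with SciPy's `solve_ivp` on the same Oregonator model. The article itself used MATLAB's solvers; the parameter values below are the standard ones from the stiff-ODE literature, an assumption rather than the article's exact setup:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Oregonator model of the Belousov-Zhabotinskii reaction: a classic stiff
# limit-cycle test problem. Parameters are the standard literature values,
# not necessarily those used in the article.
def oregonator(t, y):
    y1, y2, y3 = y
    return [77.27 * (y2 + y1 * (1.0 - 8.375e-6 * y1 - y2)),
            (y3 - (1.0 + y1) * y2) / 77.27,
            0.161 * (y1 - y3)]

y0 = [1.0, 2.0, 3.0]
t_span = (0.0, 360.0)  # roughly one period of the limit cycle

# Two stiff-capable solvers (SciPy's BDF and LSODA play the role of
# MATLAB's ode15s); an explicit ode45-style method would need orders of
# magnitude more function evaluations on this problem.
sol_bdf = solve_ivp(oregonator, t_span, y0, method="BDF", rtol=1e-6, atol=1e-8)
sol_lsoda = solve_ivp(oregonator, t_span, y0, method="LSODA", rtol=1e-6, atol=1e-8)

print("BDF   function evaluations:", sol_bdf.nfev)
print("LSODA function evaluations:", sol_lsoda.nfev)
```

    Comparing `nfev` (and, for implicit methods, Jacobian evaluations and LU decompositions) across solvers is exactly the kind of computational-cost accounting the article performs.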

  13. A stable numerical solution method for in-plane loading of nonlinear viscoelastic laminated orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1989-01-01

    In response to the tremendous growth in the development of advanced materials, such as fiber-reinforced plastic (FRP) composite materials, a new numerical method is developed to analyze and predict the time-dependent properties of these materials. Basic concepts in viscoelasticity, laminated composites, and previous viscoelastic numerical methods are presented. A stable numerical method, called the nonlinear differential equation method (NDEM), is developed to calculate the in-plane stresses and strains over any time period for a general laminate constructed from nonlinear viscoelastic orthotropic plies. The method is implemented in an in-plane stress analysis computer program, called VCAP, to demonstrate its usefulness and to verify its accuracy. Predictions calculated from the numerical method are compared with experimental results for tests performed on Kevlar/epoxy composite laminates.

  14. Adaptive numerical competency in a food-hoarding songbird.

    PubMed

    Hunt, Simon; Low, Jason; Burns, K C

    2008-10-22

    Most animals can distinguish between small quantities (less than four) innately. Many animals can also distinguish between larger quantities after extensive training. However, the adaptive significance of numerical discriminations in wild animals is almost completely unknown. We conducted a series of experiments to test whether a food-hoarding songbird, the New Zealand robin Petroica australis, uses numerical judgements when retrieving and pilfering cached food. Different numbers of mealworms were presented sequentially to wild birds in a pair of artificial cache sites, which were then obscured from view. Robins frequently chose the site containing more prey, and the accuracy of their number discriminations declined linearly with the total number of prey concealed, while remaining above chance in trials containing up to 12 prey items. A series of complementary experiments showed that these results could not be explained by time, volume, orientation, order or sensory confounds. Lastly, a violation-of-expectancy experiment, in which birds were allowed to retrieve a fraction of the prey they were originally offered, showed that birds searched for longer when they expected to retrieve more prey. Overall results indicate that New Zealand robins use a sophisticated numerical sense to retrieve and pilfer stored food, thus providing a critical link in understanding the evolution of numerical competency.

  15. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
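
    The order-verification step described above, deducing the truncation-error order from Taylor expansions and confirming it numerically, can be illustrated with a generic grid-refinement test. The sketch below checks the fourth-order convergence of a standard five-point central difference on a smooth function; it is a simplified stand-in, not the partial-implicitization scheme itself:

```python
import numpy as np

def fourth_order_dudx(u, dx):
    # Five-point, fourth-order central difference on a periodic grid:
    # u'(x) ~ (-u(x+2h) + 8u(x+h) - 8u(x-h) + u(x-2h)) / (12h)
    return (-np.roll(u, -2) + 8*np.roll(u, -1)
            - 8*np.roll(u, 1) + np.roll(u, 2)) / (12.0*dx)

# Grid-refinement study on u = sin(x), whose exact derivative is cos(x).
errors = []
for n in (32, 64, 128):
    x = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    err = np.max(np.abs(fourth_order_dudx(np.sin(x), dx) - np.cos(x)))
    errors.append(err)

# Observed order: error should drop by ~2^4 = 16 each time dx is halved.
orders = [np.log2(errors[i] / errors[i+1]) for i in range(2)]
print(orders)  # both ~ 4.0
```

    The same halving-the-mesh procedure, applied to the full difference scheme on Burgers' equation, is how the deduced truncation-error order is confirmed in practice.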

  16. Fast algorithms for numerical, conservative, and entropy approximations of the Fokker-Planck-Landau equation

    SciTech Connect

    Buet, C.; Cordier; Degond, P.; Lemou, M.

    1997-05-15

    We present fast numerical algorithms to solve the nonlinear Fokker-Planck-Landau equation in 3D velocity space. The discretization of the collision operator preserves the properties required by the physical nature of the Fokker-Planck-Landau equation, such as the conservation of mass, momentum, and energy, the decay of the entropy, and the fact that the steady states are Maxwellians. At the end of this paper, we give numerical results illustrating the efficiency of these fast algorithms in terms of accuracy and CPU time. 20 refs., 7 figs.

  17. Numerical simulations with a First order BSSN formulation of Einstein's field equations

    NASA Astrophysics Data System (ADS)

    Brown, David; Diener, Peter; Field, Scott; Hesthaven, Jan; Herrmann, Frank; Mroue, Abdul; Sarbach, Olivier; Schnetter, Erik; Tiglio, Manuel; Wagman, Michael

    2012-03-01

    We present a new fully first-order strongly hyperbolic representation of the BSSN formulation of Einstein's equations with optional constraint damping terms. In particular, we describe the characteristic fields of the system, discuss its hyperbolicity properties, and present two numerical implementations and simulations: one using finite differences and adaptive mesh refinement, applied in particular to binary black holes, and another using the discontinuous Galerkin method in spherical symmetry. These results constitute a first step in an effort to combine the robustness of BSSN evolutions with very high-accuracy numerical techniques, such as spectral collocation multi-domain or discontinuous Galerkin methods.

  18. Numerical simulations with a first-order BSSN formulation of Einstein's field equations

    NASA Astrophysics Data System (ADS)

    Brown, J. David; Diener, Peter; Field, Scott E.; Hesthaven, Jan S.; Herrmann, Frank; Mroué, Abdul H.; Sarbach, Olivier; Schnetter, Erik; Tiglio, Manuel; Wagman, Michael

    2012-04-01

    We present a new fully first-order strongly hyperbolic representation of the Baumgarte-Shapiro-Shibata-Nakamura formulation of Einstein’s equations with optional constraint damping terms. We describe the characteristic fields of the system, discuss its hyperbolicity properties, and present two numerical implementations and simulations: one using finite differences, adaptive mesh refinement, and, in particular, binary black holes, and another one using the discontinuous Galerkin method in spherical symmetry. The results of this paper constitute a first step in an effort to combine the robustness of Baumgarte-Shapiro-Shibata-Nakamura evolutions with very high accuracy numerical techniques, such as spectral collocation multidomain or discontinuous Galerkin methods.

  19. Polariton condensation threshold investigation through the numerical resolution of the generalized Gross-Pitaevskii equation

    NASA Astrophysics Data System (ADS)

    Gargoubi, Hamis; Guillet, Thierry; Jaziri, Sihem; Balti, Jalloul; Guizal, Brahim

    2016-10-01

    We present a numerical approach for the solution of the dissipative Gross-Pitaevskii equation coupled to the reservoir equation governing exciton-polariton Bose-Einstein condensation. It is based on the finite difference method applied to the space variables and on the fourth-order Runge-Kutta algorithm applied to the time variable. Numerical tests illustrate the stability and accuracy of the proposed scheme. Results on the behavior of the condensate under large Gaussian pumping and around the threshold are then presented. We determine the threshold through the particular behavior of the self-energy and characterize it by tracking the establishment time of the steady state.
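
    The fourth-order Runge-Kutta time discretization named in the abstract can be illustrated with a minimal convergence check. This is a generic RK4 sketch on a scalar test equation, not the authors' coupled Gross-Pitaevskii/reservoir solver:

```python
import numpy as np

def rk4_step(f, t, y, dt):
    # Classic fourth-order Runge-Kutta stage evaluations.
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt/2 * k1)
    k3 = f(t + dt/2, y + dt/2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

def integrate(f, y0, t0, t1, n):
    t, y = t0, y0
    dt = (t1 - t0) / n
    for _ in range(n):
        y = rk4_step(f, t, y, dt)
        t += dt
    return y

# Convergence check on y' = -y with exact solution e^{-t}: halving dt
# should reduce the error by ~2^4 = 16, confirming fourth order.
f = lambda t, y: -y
errs = [abs(integrate(f, 1.0, 0.0, 1.0, n) - np.exp(-1.0)) for n in (10, 20, 40)]
orders = [np.log2(errs[i] / errs[i+1]) for i in range(2)]
print(orders)  # both ~ 4.0
```

    In the paper's scheme, `f` would instead be the right-hand side produced by the finite-difference discretization of the dissipative Gross-Pitaevskii equation coupled to the reservoir equation.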

  20. Polariton condensation threshold investigation through the numerical resolution of the generalized Gross-Pitaevskii equation.

    PubMed

    Gargoubi, Hamis; Guillet, Thierry; Jaziri, Sihem; Balti, Jalloul; Guizal, Brahim

    2016-10-01

    We present a numerical approach for the solution of the dissipative Gross-Pitaevskii equation coupled to the reservoir equation governing exciton-polariton Bose-Einstein condensation. It is based on the finite difference method applied to the space variables and on the fourth-order Runge-Kutta algorithm applied to the time variable. Numerical tests illustrate the stability and accuracy of the proposed scheme. Results on the behavior of the condensate under large Gaussian pumping and around the threshold are then presented. We determine the threshold through the particular behavior of the self-energy and characterize it by tracking the establishment time of the steady state.