Sample records for precise numerical simulations

  1. rpe v5: an emulator for reduced floating-point precision in large numerical simulations

    NASA Astrophysics Data System (ADS)

    Dawson, Andrew; Düben, Peter D.

    2017-06-01

This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
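
The core idea of such an emulator, rounding each result to a reduced number of significand bits while still storing it as a double, can be sketched in a few lines. The snippet below is an illustrative Python analogue (the rpe library itself is Fortran; the function name and rounding details here are assumptions for illustration, not the library's API):

```python
import math
import struct

def reduce_precision(x: float, significand_bits: int) -> float:
    """Round a float64 to a reduced number of significand bits,
    emulating arithmetic in a lower-precision type."""
    if x == 0.0 or not math.isfinite(x) or significand_bits >= 52:
        return x
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    drop = 52 - significand_bits            # trailing bits to discard
    half = 1 << (drop - 1)
    bits = ((bits + half) >> drop) << drop  # round to nearest, then truncate
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

# pi carries more than 10 significand bits, so rounding changes it,
# but only within the reduced precision's rounding error
approx = reduce_precision(math.pi, 10)
assert approx != math.pi
assert abs(approx - math.pi) <= 2**-9
```

Wrapping every assignment in such a rounding step is what lets an emulator reveal which parts of a model tolerate fewer bits.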

  2. Numerical simulation of deformation and figure quality of precise mirror

    NASA Astrophysics Data System (ADS)

    Vit, Tomáš; Melich, Radek; Sandri, Paolo

    2015-01-01

    The presented paper shows results and a comparison of FEM numerical simulations and optical tests of the assembly of a precise Zerodur mirror with a mounting structure for space applications. It also shows how the curing of adhesive film can impact the optical surface, especially as regards deformations. Finally, the paper shows the results of the figure quality analysis, which are based on data from FEM simulation of optical surface deformations.

  3. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
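
The underlying problem and the integer-tally remedy are easy to demonstrate. The sketch below is illustrative Python, not the paper's code: fixed-point integer tallies make the total independent of summation order, at the cost of rounding each term onto the fixed-point grid:

```python
import random

def ordered_float_sum(xs):
    """Plain left-to-right double-precision summation; because floating-
    point addition is not associative, the result can depend on the
    order of the terms."""
    s = 0.0
    for x in xs:
        s += x
    return s

def integer_tally_sum(xs, scale=2**40):
    """Round each term to a fixed-point integer before summing; integer
    addition is associative, so any summation order gives the identical
    total."""
    return sum(round(x * scale) for x in xs) / scale

random.seed(1)
xs = [random.uniform(-1.0, 1.0) for _ in range(10000)]
# reproducibility: the tally is bit-identical under reordering
assert integer_tally_sum(xs) == integer_tally_sum(list(reversed(xs)))
# accuracy cost: bounded by the per-term rounding to the grid
assert abs(integer_tally_sum(xs) - ordered_float_sum(xs)) < 1e-6
```

The `scale` parameter plays the role of the "fewer significant digits" trade-off the abstract describes: a coarser grid is cheaper but loses more accuracy.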

  4. Numerical Simulation Analysis of High-precision Dispensing Needles for Solid-liquid Two-phase Grinding

    NASA Astrophysics Data System (ADS)

    Li, Junye; Hu, Jinglei; Wang, Binyu; Sheng, Liang; Zhang, Xinming

    2018-03-01

In order to investigate the effect of abrasive flow polishing on variable-diameter pipe parts, high-precision dispensing needles were taken as the research object and the polishing process was simulated numerically. The distribution of the dynamic pressure and the turbulent viscosity of the abrasive flow field in the high-precision dispensing needle was analyzed under different volume fraction conditions. The comparative analysis confirms the effectiveness of abrasive-grain polishing for high-precision dispensing needles: controlling the volume fraction of silicon carbide changes the viscosity characteristics of the abrasive flow during the polishing process, so that the polishing quality of the abrasive grains can be controlled.

  5. Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born

    PubMed Central

    2012-01-01

We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
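
The difference between the precision models can be illustrated without a GPU. The sketch below is illustrative Python (emulating float32 rounding via `struct`; the function names are assumptions, not AMBER's API), contrasting SPSP-style and SPDP-style accumulation:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (double) to single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def spsp_sum(xs):
    """SPSP-style: terms and the running sum both in single precision."""
    s = 0.0
    for x in xs:
        s = to_f32(s + to_f32(x))
    return s

def spdp_sum(xs):
    """SPDP-style: single-precision terms, double-precision accumulator."""
    s = 0.0
    for x in xs:
        s += to_f32(x)
    return s

xs = [0.1] * 1_000_000
# the double accumulator keeps the summation error tiny...
assert abs(spdp_sum(xs) - 100_000.0) < 0.01
# ...while the single-precision accumulator drifts by a large margin
assert abs(spsp_sum(xs) - 100_000.0) > 1.0
```

The classic failure mode is that once the single-precision accumulator grows large, each small addend loses most of its bits to rounding, which is exactly the error accumulation the abstract attributes to SPSP.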

  6. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed Central

    Palmer, T. N.

    2014-01-01

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only. PMID:24842038

  7. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed

    Palmer, T N

    2014-06-28

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic-dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.

  8. Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters

    NASA Astrophysics Data System (ADS)

    Friedrich, Oliver; Eifler, Tim

    2018-01-01

Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
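
The expansion in question is a matrix Neumann series: for C = A + B with A invertible and B small, C^-1 = A^-1 - A^-1 B A^-1 + A^-1 B A^-1 B A^-1 - ... The toy Python sketch below (a hypothetical 2x2 example, not the authors' pipeline) checks the first-order truncation:

```python
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_sub(a, b):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def inv2(m):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# C = A + B: A is the analytically known part, B a small correction
A = [[2.0, 0.0], [0.0, 3.0]]
B = [[0.1, 0.05], [0.05, 0.1]]
C = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

Ainv = inv2(A)
# first-order expansion of the precision matrix:
#   C^-1 ~ A^-1 - A^-1 B A^-1
approx = mat_sub(Ainv, mat_mul(Ainv, mat_mul(B, Ainv)))
exact = inv2(C)
err = max(abs(approx[i][j] - exact[i][j]) for i in range(2) for j in range(2))
assert err < 5e-3
```

In the paper's setting only B is estimated from simulations, so the noise enters through a term that is already small, which is why far fewer realizations suffice than for the raw sample covariance.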

  9. Numerical simulations of the charged-particle flow dynamics for sources with a curved emission surface

    NASA Astrophysics Data System (ADS)

    Altsybeyev, V. V.

    2016-12-01

The implementation of numerical methods for studying the dynamics of particle flows produced by pulsed sources is discussed. A particle tracking method with so-called gun iteration is used for simulations of beam dynamics. For the space-charge-limited emission problem, we suggest a Gauss law emission model for precise current-density calculation in the case of a curvilinear emitter. The results of numerical simulations of particle-flow formation for a cylindrical bipolar diode and for a diode with an elliptical emitter are presented.
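
For context, the classical planar-diode limit that curved-emitter emission models generalize is the Child-Langmuir law. The snippet below transcribes that standard textbook formula (it is not the paper's Gauss-law model):

```python
import math

EPS0 = 8.8541878128e-12        # vacuum permittivity (F/m)
E_CHARGE = 1.602176634e-19     # elementary charge (C)
M_ELECTRON = 9.1093837015e-31  # electron mass (kg)

def child_langmuir_j(voltage: float, gap: float) -> float:
    """Space-charge-limited current density (A/m^2) of an ideal planar
    diode: J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_ELECTRON) \
        * voltage**1.5 / gap**2

# 1 kV across a 1 cm gap gives roughly 7.4e2 A/m^2
j = child_langmuir_j(1000.0, 0.01)
assert 730.0 < j < 745.0
```

A curvilinear emitter breaks the planar symmetry this formula assumes, which is what motivates computing the emitted current density from Gauss's law at the emitter surface instead.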

  10. Subpixel edge estimation with lens aberrations compensation based on the iterative image approximation for high-precision thermal expansion measurements of solids

    NASA Astrophysics Data System (ADS)

    Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.

    2017-06-01

A new method for precise subpixel edge estimation is presented. The principle of the method is iterative approximation of the image in 2D with subpixel accuracy until the simulated image matches the acquired image. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally effective procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method, using a digital micromirror device to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range of 1000°C to 2400°C are presented.
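
As a much simpler one-dimensional illustration of the subpixel idea (plain linear interpolation of a threshold crossing, not the paper's 2D iterative image-approximation method; the profile values below are hypothetical):

```python
def subpixel_edge(samples, threshold):
    """Locate a rising edge to subpixel accuracy by linearly
    interpolating the position where the profile crosses `threshold`."""
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        if a < threshold <= b:
            return i + (threshold - a) / (b - a)
    raise ValueError("no rising edge found")

# a hypothetical blurred step profile: the 0.5 crossing sits between
# samples 3 (value 0.25) and 4 (value 1.0), one third of the way across
profile = [0.0, 0.0, 0.0, 0.25, 1.0, 1.0]
assert abs(subpixel_edge(profile, 0.5) - (3 + 1/3)) < 1e-12
```

Model-fitting approaches like the paper's refine this basic idea by fitting a full image model (edge, brightness, aberrations) rather than interpolating raw samples, which is what buys robustness to blur and aberrations.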

  11. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.

    2016-01-01

    MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

  12. Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO

    PubMed Central

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. Firstly, this paper introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, the paper proposes a hybrid CORDIC algorithm based on phase rotation estimation applied in the NCO. By estimating the direction of part of the phase rotations, the algorithm eliminates some phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the paper simulates and implements the numerically controlled oscillator with the Quartus II and ModelSim software. Finally, simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining the precision. It is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
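
The traditional rotation-mode CORDIC that such designs start from can be sketched directly. The version below is illustrative Python using floats, whereas hardware implementations use fixed-point shifts and adds:

```python
import math

def cordic_sin_cos(theta: float, iterations: int = 32):
    """Rotation-mode CORDIC: rotate the vector (1, 0) toward angle
    `theta` using only shift-and-add style updates (emulated here with
    floats). Converges roughly one bit per iteration for angles within
    about +/-1.74 rad."""
    # precomputed arctan(2^-i) table and accumulated rotation gain
    angles = [math.atan(2.0**-i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0**(-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0   # rotate toward the residual angle
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * angles[i]
    return y / gain, x / gain           # (sin(theta), cos(theta))

s, c = cordic_sin_cos(0.5)
assert abs(s - math.sin(0.5)) < 1e-6
assert abs(c - math.cos(0.5)) < 1e-6
```

The hybrid scheme in the paper saves work by predicting the rotation directions `d` for some iterations instead of computing them sequentially, which shortens the dependency chain and hence the delay.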

  13. Results from Binary Black Hole Simulations in Astrophysics Applications

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2007-01-01

Present and planned gravitational wave observatories are opening a new astronomical window to the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin-precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments.

  14. Simulation of Thermal Behavior in High-Precision Measurement Instruments

    NASA Astrophysics Data System (ADS)

    Weis, Hanna Sophie; Augustin, Silke

    2008-06-01

    In this paper, a way to modularize complex finite-element models is described. The modularization is done with temperature fields that appear in high-precision measurement instruments. There, the temperature negatively impacts the achievable uncertainty of measurement. To correct for this uncertainty, the temperature must be known at every point. This cannot be achieved just by measuring temperatures at specific locations. Therefore, a numerical treatment is necessary. As the system of interest is very complex, modularization is unavoidable to obtain good numerical results.

  15. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    DOE PAGES

    Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; ...

    2016-01-01

We present MADNESS (multiresolution adaptive numerical environment for scientific simulation), a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

  16. Targeted numerical simulations of binary black holes for GW170104

    NASA Astrophysics Data System (ADS)

    Healy, J.; Lange, J.; O'Shaughnessy, R.; Lousto, C. O.; Campanelli, M.; Williamson, A. R.; Zlochower, Y.; Calderón Bustillo, J.; Clark, J. A.; Evans, C.; Ferguson, D.; Ghonge, S.; Jani, K.; Khamesra, B.; Laguna, P.; Shoemaker, D. M.; Boyle, M.; García, A.; Hemberger, D. A.; Kidder, L. E.; Kumar, P.; Lovelace, G.; Pfeiffer, H. P.; Scheel, M. A.; Teukolsky, S. A.

    2018-03-01

    In response to LIGO's observation of GW170104, we performed a series of full numerical simulations of binary black holes, each designed to replicate likely realizations of its dynamics and radiation. These simulations have been performed at multiple resolutions and with two independent techniques to solve Einstein's equations. For the nonprecessing and precessing simulations, we demonstrate the two techniques agree mode by mode, at a precision substantially in excess of statistical uncertainties in current LIGO's observations. Conversely, we demonstrate our full numerical solutions contain information which is not accurately captured with the approximate phenomenological models commonly used to infer compact binary parameters. To quantify the impact of these differences on parameter inference for GW170104 specifically, we compare the predictions of our simulations and these approximate models to LIGO's observations of GW170104.

  17. Testing and Validating Gadget2 for GPUs

    NASA Astrophysics Data System (ADS)

    Wibking, Benjamin; Holley-Bockelmann, K.; Berlind, A. A.

    2013-01-01

We are currently upgrading a version of Gadget2 (Springel et al., 2005) that is optimized for NVIDIA's CUDA GPU architecture (Frigaard, unpublished) to work with the latest libraries and graphics cards. Preliminary tests of its performance indicate a ~40x speedup in the particle force tree approximation calculation, with overall speedup of 5-10x for cosmological simulations run with GPUs compared to running on the same CPU cores without GPU acceleration. We believe this speedup can be reasonably increased by an additional factor of two with further optimization, including overlap of computation on CPU and GPU. Tests of single-precision GPU numerical fidelity currently indicate accuracy of the mass function and the power spectral density to within a few percent of extended-precision CPU results with the unmodified form of Gadget. Additionally, we plan to test and optimize the GPU code for Millennium-scale "grand challenge" simulations of >10^9 particles, a scale that has been previously untested with this code, with the aid of the NSF XSEDE flagship GPU-based supercomputing cluster codenamed "Keeneland." Current work involves additional validation of numerical results, extending the numerical precision of the GPU calculations to double precision, and evaluating performance/accuracy tradeoffs. We believe that this project, if successful, will yield substantial computational performance benefits to the N-body research community as the next generation of GPU supercomputing resources becomes available, both increasing the electrical power efficiency of ever-larger computations (making simulations possible a decade from now at scales and resolutions unavailable today) and accelerating the pace of research in the field.

  18. Shot Peening Numerical Simulation of Aircraft Aluminum Alloy Structure

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Lv, Sheng-Li; Zhang, Wei

    2018-03-01

After shot peening, the 7050 aluminum alloy has good anti-fatigue and anti-stress-corrosion properties. In the shot peening process, the pellets collide with the target material randomly and generate a residual stress distribution on the target material surface, which is of great significance for improving material properties. In this paper, a simplified numerical simulation model of shot peening was established. The influence of pellet collision velocity, pellet collision position, and pellet collision time interval on the residual stress of shot peening was studied by simulation with the ANSYS/LS-DYNA software. The analysis results show that different velocities, positions, and time intervals have a great influence on the residual stress after shot peening. Comparison with numerical simulation results based on the Kriging model verified the accuracy of the simulation results in this paper. This study provides a reference for the optimization of the shot peening process and makes an effective exploration toward precise shot peening numerical simulation.

  19. Simple Numerical Modelling for Gasdynamic Design of Wave Rotors

    NASA Astrophysics Data System (ADS)

    Okamoto, Koji; Nagashima, Toshio

    The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism —gradual passage opening, wall friction and leakage— for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which were previously validated by the experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.

  20. The convolutional differentiator method for numerical modelling of acoustic and elastic wavefields

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong-Jie; Teng, Ji-Wen; Yang, Ding-Hui

    1996-02-01

Based on the techniques of forward and inverse Fourier transformation, the authors discuss the design scheme of the convolutional differentiators used in the simulation of acoustic and elastic wavefields in isotropic media. To effectively suppress the Gibbs effect caused by truncation, a Hanning window is introduced. Model computations show that the convolutional differentiator method is fast, has low memory requirements, and achieves high precision, making it a promising method for numerical simulation.
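
The transform-multiply-inverse-transform differentiation that such schemes build on can be sketched as follows (illustrative Python with a naive O(n^2) DFT for clarity; no Hanning window is applied here, so a smooth periodic input is assumed):

```python
import cmath
import math

def spectral_derivative(f, length):
    """Differentiate periodic samples via the Fourier method: forward
    DFT, multiply mode k by i*2*pi*k/length, inverse DFT."""
    n = len(f)
    F = [sum(f[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
         for k in range(n)]
    def wavenumber(k):              # signed mode index
        return k if k <= n // 2 else k - n
    G = [1j * (2.0 * math.pi / length) * wavenumber(k) * F[k]
         for k in range(n)]
    if n % 2 == 0:
        G[n // 2] = 0.0             # zero the Nyquist mode for a real result
    return [sum(G[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in range(n)).real / n for j in range(n)]

n, L = 32, 2.0 * math.pi
xs = [L * j / n for j in range(n)]
df = spectral_derivative([math.sin(x) for x in xs], L)
# spectral differentiation is exact for band-limited input
assert max(abs(d - math.cos(x)) for d, x in zip(df, xs)) < 1e-9
```

In practice the multiplication in the frequency domain is equivalent to a convolution in space, and windowing the resulting operator (e.g. with a Hanning window) tames the Gibbs oscillations that a sharp truncation would cause.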

  1. Numerical simulation study on rolling-chemical milling process of aluminum-lithium alloy skin panel

    NASA Astrophysics Data System (ADS)

    Huang, Z. B.; Sun, Z. G.; Sun, X. F.; Li, X. Q.

    2017-09-01

Single-curvature parts such as aircraft fuselage skin panels are usually manufactured by a rolling-chemical milling process, which commonly faces geometric accuracy problems caused by springback. In most cases, manual adjustment and multiple roll bending are used to control or eliminate the springback. However, these methods increase product cost and cycle time, and lead to material performance degradation. Therefore, it is important to precisely control the springback in the rolling-chemical milling process. In this paper, combining experiments with numerical simulation, a simulation model for the rolling-chemical milling process of 2060-T8 aluminum-lithium alloy skin was established and validated by comparison between numerical simulation and experimental results. Then, based on the numerical simulation model, the technological parameters which influence the curvature of the skin panel were analyzed. Finally, springback prediction and compensation can be realized by controlling the process parameters.

  2. Direct numerical simulation of microcavitation processes in different bio environments

    NASA Astrophysics Data System (ADS)

    Ly, Kevin; Wen, Sy-Bor; Schmidt, Morgan S.; Thomas, Robert J.

    2017-02-01

Laser-induced microcavitation refers to the rapid formation and expansion of a vapor bubble inside bio-tissue when it is exposed to intense, pulsed laser energy. With the associated microscale dissection occurring within the tissue, laser-induced microcavitation is a common approach for high-precision bio-surgeries. For example, laser-induced microcavitation is used in laser in-situ keratomileusis (LASIK) to precisely reshape the midstromal corneal tissue with an excimer laser beam. Multiple efforts over the last several years have observed unique characteristics of microcavitation in bio-tissues. For example, it was found that the threshold energy for microcavitation can be significantly reduced when the size of the biostructure is increased. Also, it was found that the dynamics of microcavitation are significantly affected by the elastic moduli of the bio-tissue. However, these efforts have not focused on the early events during microcavitation development. In this study, a direct numerical simulation of the microcavitation process based on the equation of state of the bio-tissue was established. With the direct numerical simulation, we were able to reproduce the dynamics of microcavitation in water-rich bio-tissues. Additionally, an experimental setup was made to verify the simulation results for early microcavitation formation in 10% polyacrylamide (PAA) gel and in deionized water.

  3. Forecasting Nonlinear Chaotic Time Series with Function Expression Method Based on an Improved Genetic-Simulated Annealing Algorithm

    PubMed Central

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation which has the strong local search ability into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior. PMID:26000011
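
The simulated annealing component that IGSA embeds as a local search can be sketched on its own. The snippet below is a minimal illustrative Python version (the test function and all parameters are hypothetical, not taken from the paper):

```python
import math
import random

def simulated_anneal(f, x0, steps=5000, t0=2.0, seed=0):
    """Minimal simulated annealing: propose a Gaussian move, always
    accept improvements, accept worse moves with probability
    exp(-delta/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        t = t0 * 0.999**i
        cand = x + rng.gauss(0.0, 0.5)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc          # move accepted (possibly uphill)
            if fx < fbest:
                best, fbest = x, fx   # track the best point seen
    return best, fbest

# a hypothetical multimodal 1D test function
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
best, val = simulated_anneal(f, 4.0)
# the tracked best value can never be worse than the starting point
assert val <= f(4.0)
```

Embedding such uphill-accepting moves inside a genetic algorithm is what gives the hybrid its stronger local search: the GA explores globally while the annealing step refines individuals without getting trapped at the first local minimum.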

  4. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation which has the strong local search ability into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.

  5. Full-Scale Direct Numerical Simulation of Two- and Three-Dimensional Instabilities and Rivulet Formation in Heated Falling Films

    NASA Technical Reports Server (NTRS)

    Krishnamoorthy, S.; Ramaswamy, B.; Joo, S. W.

    1995-01-01

    A thin film draining on an inclined plate has been studied numerically using the finite element method. The three-dimensional governing equations of continuity, momentum and energy with a moving boundary are integrated in an arbitrary Lagrangian-Eulerian frame of reference. The kinematic equation is solved to precisely update the interface location. Rivulet formation based on an instability mechanism has been simulated using full-scale computation. Comparisons with long-wave theory are made to validate the numerical scheme. A detailed analysis of two- and three-dimensional nonlinear wave formation and spontaneous rupture forming rivulets under the influence of combined thermocapillary and surface-wave instabilities is performed.

  6. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

    We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to a spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10⁻⁴. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and error due to the Fourier transformation of the finite-length numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10⁻⁴. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
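    A noise-weighted mismatch of the kind quoted above can be sketched as follows, under the simplifying assumption of a flat (white) noise spectrum so the weighting drops out of the inner product; the real analysis uses a detector noise curve and optimizes over time and phase shifts:

```python
import numpy as np

def mismatch(h1, h2):
    """Mismatch 1 - <h1,h2>/sqrt(<h1,h1><h2,h2>) between two waveforms,
    assuming white noise so the noise-weighted inner product reduces to
    an ordinary dot product of the samples."""
    inner = lambda a, b: np.real(np.vdot(a, b))
    overlap = inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))
    return 1.0 - overlap

# A small constant phase error between two otherwise identical signals
# produces a correspondingly small mismatch (illustrative waveform).
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
h = np.sin(2 * np.pi * 30 * t)
h_pert = np.sin(2 * np.pi * 30 * t + 1e-2)
print(f"{mismatch(h, h_pert):.1e}")
```

    For a pure phase offset φ the overlap is cos(φ), so a 10⁻² radian error yields a mismatch of roughly 1 − cos(0.01) ≈ 5 × 10⁻⁵, the same order as the extraction errors quoted in the abstract.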

  7. Experimental and numerical simulation of a rotor/stator interaction event localized on a single blade within an industrial high-pressure compressor

    NASA Astrophysics Data System (ADS)

    Batailly, Alain; Agrapart, Quentin; Millecamps, Antoine; Brunel, Jean-François

    2016-08-01

    This contribution confronts the experimental simulation of a rotor/stator interaction case initiated by structural contacts with numerical predictions made using an in-house numerical strategy. Contrary to previous studies carried out within the low-pressure compressor of an aircraft engine, this interaction is found to be non-divergent: high amplitudes of vibration are experimentally observed and numerically predicted over a short period of time. An in-depth analysis of experimental data first allows for a precise characterization of the interaction as a rubbing event involving the first torsional mode of a single blade. Numerical results are in good agreement with experimental observations: the critical angular speed, the wear patterns on the casing and the blade dynamics are accurately predicted. Throughout the article, the in-house numerical strategy is also compared with another numerical strategy found in the literature for the simulation of rubbing events: key differences are underlined with respect to the prediction of non-linear interaction phenomena.

  8. A hypersonic aeroheating calculation method based on inviscid outer edge of boundary layer parameters

    NASA Astrophysics Data System (ADS)

    Meng, ZhuXuan; Fan, Hu; Peng, Ke; Zhang, WeiHua; Yang, HuiXin

    2016-12-01

    This article presents a rapid and accurate aeroheating calculation method for hypersonic vehicles. Its main innovation is combining the accuracy of a numerical method with the efficiency of an engineering method, which makes aeroheating simulation both more precise and faster. Based on the Prandtl boundary layer theory, the entire flow field is divided into inviscid and viscous flow at the outer edge of the boundary layer. The parameters at the outer edge of the boundary layer are numerically calculated by assuming inviscid flow. The thermodynamic parameters of constant-volume specific heat, constant-pressure specific heat and the specific heat ratio are calculated, the streamlines on the vehicle surface are derived and the heat flux is then obtained. The results for the double cone show that, at 0° and 10° angles of attack, the aeroheating calculation based on inviscid outer-edge boundary layer parameters reproduces the experimental data better than the engineering method. The simulation results for the flight vehicle also reproduce the viscous numerical results well. Hence, this method provides a promising way to overcome the high cost of numerical calculation while improving precision.

  9. Design of Measure and Control System for Precision Pesticide Deploying Dynamic Simulating Device

    NASA Astrophysics Data System (ADS)

    Liang, Yong; Liu, Pingzeng; Wang, Lu; Liu, Jiping; Wang, Lang; Han, Lei; Yang, Xinxin

    A measurement and control system for precision pesticide-deployment simulating equipment is designed in order to study pesticide deployment technology. The system can simulate every state of practical pesticide deployment and perform precise, simultaneous measurement of every factor affecting deployment effectiveness. The hardware and software follow a modular structural design: the system is divided into distinct hardware and software function modules, and corresponding modules are developed. The module interfaces are uniformly defined, which simplifies module connection, enhances the system's universality, development efficiency and reliability, and makes the program easy to extend and maintain. Several of the hardware and software modules can readily be adapted to other measurement and control systems. The paper introduces the design of the special numerical control system, the main module of the information acquisition system and the speed acquisition module in order to illustrate the module design process.

  10. On the use of programmable hardware and reduced numerical precision in earth-system modeling.

    PubMed

    Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N

    2015-09-01

    Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
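    The central idea, trading significand bits for performance, can be emulated in software. The following is a generic sketch in the spirit of reduced-precision emulators such as rpe, not the authors' FPGA implementation:

```python
import numpy as np

def reduce_precision(x, significand_bits):
    """Emulate a reduced floating-point significand by rounding x so
    that only `significand_bits` bits of the binary mantissa survive."""
    x = np.asarray(x, dtype=np.float64)
    mantissa, exponent = np.frexp(x)          # x = mantissa * 2**exponent
    scale = 2.0 ** significand_bits
    return np.ldexp(np.round(mantissa * scale) / scale, exponent)

# Pi at full double precision, single-like precision, and ~half precision.
for bits in (52, 23, 10):
    print(bits, float(reduce_precision(np.pi, bits)))
```

    Running a model variable through such a filter after every operation mimics arithmetic with a shortened significand, which is how software emulation lets one find the minimal acceptable precision before committing to reduced-precision hardware.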

  11. Three-dimensional transient numerical simulation for intake process in the engine intake port-valve-cylinder system.

    PubMed

    Luo, Ma-Ji; Chen, Guo-Hua; Ma, Yuan-Hao

    2003-01-01

    This paper presents a KIVA-3-code-based numerical model for three-dimensional transient intake flow in the intake port-valve-cylinder system of an internal combustion engine using a body-fitted technique; the model can be used in numerical studies of internal combustion engines with vertical and inclined valves and offers higher calculation precision. A numerical simulation (of the intake process of a two-valve engine with a semi-sphere combustion chamber and a radial intake port) is provided for analysis of the velocity and pressure fields on different planes at different crank angles. The results reveal the formation of the tumble motion, the evolution of flow field parameters and the variation of tumble ratios as important information for the design of engine intake systems.

  12. Hydrodynamic Simulations of Protoplanetary Disks with GIZMO

    NASA Astrophysics Data System (ADS)

    Rice, Malena; Laughlin, Greg

    2018-01-01

    Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers provides a compelling opportunity to reconsider previously obtained results in search of yet undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern computational resources. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising from advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics upon the evolution of protoplanetary disks.

  13. NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.

    PubMed

    Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C

    2011-09-14

    An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics

  14. Determining wave direction using curvature parameters.

    PubMed

    de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista

    2016-01-01

    The curvature of sea waves was tested as a parameter for estimating wave direction, in search of better direction estimates in shallow waters, where waves of different sizes, frequencies and directions intersect and are difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance by the curvature parameter for estimating wave direction. Accuracy in the estimates was improved by including wave slope parameters in addition to curvature. The results indicate that curvature is a promising technique for estimating wave direction.
    • In this study, the accuracy and precision of curvature parameters to measure wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.
    • The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope and curvature, which were used to analyze the variability of the estimated directions.
    • The simultaneous acquisition of slope and curvature parameters can contribute to wave direction estimates, thus increasing the accuracy and precision of the results.

  15. Reliable low precision simulations in land surface models

    NASA Astrophysics Data System (ADS)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
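    The failure mode and the remedy described above can be reproduced in a few lines: tiny increments added to a half-precision accumulator round to zero, while keeping only the accumulator in higher precision recovers the correct drift. The magnitudes below are illustrative, not taken from the land surface model:

```python
import numpy as np

steps, increment = 10_000, np.float64(1e-4)

# Low-precision accumulator: each per-step increment is far below the
# float16 spacing near 1000 (0.5), so it is systematically rounded away.
low = np.float16(1000.0)
for _ in range(steps):
    low = np.float16(low + np.float16(increment))

# Split scheme: only the slowly accumulating state is kept in higher
# precision, mirroring the paper's high/low precision decomposition.
acc = np.float64(1000.0)
for _ in range(steps):
    acc += increment

print(float(low), float(acc))  # the low-precision state never moves
```

    The slow process is lost entirely in pure half precision, yet only the accumulator, not the whole model, needed the extra bits, which is why the splitting strategy retains most of the computational benefit of reduced precision.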

  16. Nanophotonic particle simulation and inverse design using artificial neural networks.

    PubMed

    Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin

    2018-06-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.

  17. Numerical simulation of three-component multiphase flows at high density and viscosity ratios using lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Haghani Hassan Abadi, Reza; Fakhari, Abbas; Rahimian, Mohammad Hassan

    2018-03-01

    In this paper, we propose a multiphase lattice Boltzmann model for numerical simulation of ternary flows at high density and viscosity ratios free from spurious velocities. The proposed scheme, which is based on phase-field modeling, employs the Cahn-Hilliard theory to track the interfaces among three different fluid components. Several benchmarks, such as the spreading of a liquid lens, binary droplets, and head-on collision of two droplets in binary- and ternary-fluid systems, are conducted to assess the reliability and accuracy of the model. The proposed model can successfully simulate both partial and total spreading while reducing the parasitic currents to machine precision.

  18. Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures

    NASA Astrophysics Data System (ADS)

    Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.

    2017-12-01

    Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
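    The Hellinger distance used above to compare statistical behaviour has a compact closed form for discrete histograms. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions:
    H = (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||_2, which lies in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()   # normalise histograms to probabilities
    q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Identical histograms give 0; histograms with disjoint support give 1.
print(hellinger([1, 2, 3], [1, 2, 3]))
print(hellinger([1, 0], [0, 1]))
```

    Because it compares whole distributions rather than individual trajectories, the metric remains meaningful for chaotic systems whose equally valid simulations diverge pointwise.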

  19. A new potential for the numerical simulations of electrolyte solutions on a hypersphere

    NASA Astrophysics Data System (ADS)

    Caillol, Jean-Michel

    1993-12-01

    We propose a new way of performing numerical simulations of the restricted primitive model of electrolytes—and related models—on a hypersphere. In this new approach, the system is viewed as a single component fluid of charged bihard spheres constrained to move at the surface of a four-dimensional sphere. A charged bihard sphere is defined as the rigid association of two antipodal charged hard spheres of opposite signs. These objects interact via a simple analytical potential obtained by solving the Poisson-Laplace equation on the hypersphere. This new technique of simulation enables a precise determination of the chemical potential of the charged species in the canonical ensemble by a straightforward application of Widom's insertion method. Comparisons with previous simulations demonstrate the efficiency and the reliability of the method.

  20. Numerical model estimating the capabilities and limitations of the fast Fourier transform technique in absolute interferometry

    NASA Astrophysics Data System (ADS)

    Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.

    1996-05-01

    A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data and for data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan.
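    The role of the window functions examined above can be illustrated with a toy spectrum: a tone that falls between frequency bins leaks energy across the whole spectrum under a rectangular (no) window, while a Hanning window confines the leakage near the peak. The frequencies below are illustrative, not those of the interferometer:

```python
import numpy as np

fs, n = 1024.0, 1024                     # 1 Hz per FFT bin
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 100.3 * t)   # off-bin tone -> strong leakage

spec_rect = np.abs(np.fft.rfft(signal))               # rectangular window
spec_hann = np.abs(np.fft.rfft(signal * np.hanning(n)))

# Far from the peak (e.g. bin 150), rectangular-window leakage remains
# large, while the Hanning window suppresses it by orders of magnitude.
print(spec_rect[150], spec_hann[150])
```

    This suppression of distant sidelobes is what allows an isolated spectral peak to be located precisely, at the cost of a slightly broadened main lobe.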

  1. Numerical Roll Reversal Predictor Corrector Aerocapture and Precision Landing Guidance Algorithms for the Mars Surveyor Program 2001 Missions

    NASA Technical Reports Server (NTRS)

    Powell, Richard W.

    1998-01-01

    This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.

  2. Numerical Study of Underwater Explosions and Following Bubble Pulses

    NASA Astrophysics Data System (ADS)

    Abe, Atsushi; Katayama, Masahide; Murata, Kenji; Kato, Yukio; Tanaka, Katsumi

    2007-06-01

    Underwater explosions and the following bubble pulses were simulated using the hydrocode AUTODYN. A pressure gradient dependent on the water depth was applied to the water, and the effects of the atmospheric pressure and gravity on the bubble properties were investigated numerically. In the deep- and shallow-water cases, the numerically obtained bubble properties and pressure histories were compared with the empirical formula and the experimental data. Not only the pressure gradient in the water and the atmospheric pressure, but also the application of the JWL EOS to the slow energy release of the non-ideal explosive (Miller model), were found to be of great importance for simulating the generation of the bubble pulse precisely. Although the gravitational term can be neglected in numerical analyses of very short-time phenomena, it is indispensable for simulating the buoyancy of the bubble, because the time range of the bubble behavior is some hundred times longer than that of the explosion phenomena.

  3. Numerical and experimental approaches to simulate soil clogging in porous media

    NASA Astrophysics Data System (ADS)

    Kanarska, Yuliya; LLNL Team

    2012-11-01

    Failure of a dam by erosion ranks among the most serious accidents in civil engineering. The best way to prevent internal erosion is to use adequate granular filters in the transition areas where large hydraulic gradients can appear. In case of cracking and erosion, if the filter is capable of retaining the eroded particles, the crack will seal and the dam's safety will be ensured. A finite element numerical solution of the Navier-Stokes equations for fluid flow, together with a Lagrange multiplier technique for solid particles, was applied to the simulation of soil filtration. The numerical approach was validated through comparison of numerical simulations with the experimental results of base soil particle clogging in the filter layers performed at ERDC. The numerical simulation correctly predicted flow and pressure decay due to particle clogging. The base soil particle distribution was almost identical to that measured in the laboratory experiment. To gain a more precise understanding of soil transport in granular filters, we investigated the sensitivity of particle clogging mechanisms to various factors such as the particle size ratio, the amplitude of the hydraulic gradient, the particle concentration and the contact properties. By averaging the results derived from the grain-scale simulations, we investigated how those factors affect the semi-empirical multiphase model parameters in the large-scale simulation tool. The Department of Homeland Security Science and Technology Directorate provided funding for this research.

  4. Rigorous vector wave propagation for arbitrary flat media

    NASA Astrophysics Data System (ADS)

    Bos, Steven P.; Haffert, Sebastiaan Y.; Keller, Christoph U.

    2017-08-01

    Precise modelling of the (off-axis) point spread function (PSF) to identify geometrical and polarization aberrations is important for many optical systems. In order to characterise the PSF of the system in all Stokes parameters, an end-to-end simulation of the system has to be performed in which Maxwell's equations are rigorously solved. We present the first results of a Python code that we are developing to perform multiscale end-to-end wave propagation simulations that include all relevant physics. Currently we can handle plane-parallel near- and far-field vector diffraction effects of propagating waves in homogeneous isotropic and anisotropic materials, refraction and reflection of flat parallel surfaces, interference effects in thin films and unpolarized light. We show that the code has a numerical precision on the order of 10⁻¹⁶ for non-absorbing isotropic and anisotropic materials. For absorbing materials the precision is on the order of 10⁻⁸. The capabilities of the code are demonstrated by simulating a converging beam reflecting from a flat aluminium mirror at normal incidence.

  5. Numerical binary black hole mergers in dynamical Chern-Simons gravity: Scalar field

    NASA Astrophysics Data System (ADS)

    Okounkova, Maria; Stein, Leo C.; Scheel, Mark A.; Hemberger, Daniel A.

    2017-08-01

    Testing general relativity in the nonlinear, dynamical, strong-field regime of gravity is one of the major goals of gravitational wave astrophysics. Performing precision tests of general relativity (GR) requires numerical inspiral, merger, and ringdown waveforms for binary black hole (BBH) systems in theories beyond GR. Currently, GR and scalar-tensor gravity are the only theories amenable to numerical simulations. In this article, we present a well-posed perturbation scheme for numerically integrating beyond-GR theories that have a continuous limit to GR. We demonstrate this scheme by simulating BBH mergers in dynamical Chern-Simons gravity (dCS), to linear order in the perturbation parameter. We present mode waveforms and energy fluxes of the dCS pseudoscalar field from our numerical simulations. We find good agreement with analytic predictions at early times, including the absence of pseudoscalar dipole radiation. We discover new phenomenology only accessible through numerics: a burst of dipole radiation during merger. We also quantify the self-consistency of the perturbation scheme. Finally, we estimate bounds that GR-consistent LIGO detections could place on the new dCS length scale, approximately ℓ ≲ O(10) km.

  6. Nanophotonic particle simulation and inverse design using artificial neural networks

    PubMed Central

    Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max

    2018-01-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640

  7. Short-time self-diffusion coefficient of a particle in a colloidal suspension bounded by a microchannel: Virial expansions and simulation

    NASA Astrophysics Data System (ADS)

    Kȩdzierski, Marcin; Wajnryb, Eligiusz

    2011-10-01

    Self-diffusion of colloidal particles confined to a cylindrical microchannel is considered theoretically and numerically. A virial expansion of the self-diffusion coefficient is performed. Two-body and three-body hydrodynamic interactions are evaluated with high precision using the multipole method. The multipole expansion algorithm is also used to perform numerical simulations of the self-diffusion coefficient, valid for all possible particle packing fractions. Comparison with earlier results shows that the widely used method of reflections is insufficient for calculations of hydrodynamic interactions even for small packing fractions and small particle radii, contrary to the prevalent opinion.

  8. A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM

    NASA Astrophysics Data System (ADS)

    Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui

    2014-12-01

    Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze the LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. Numerical simulation results indicate that the source spacing strongly influences the investigation depth and detection precision of the resistivity LWD tool, and that changing the frequency can improve the resolution of low-resistivity and high-resistivity formations. The simulation results also indicate that the self-adaptive hp-FEM algorithm has good convergence speed and calculation accuracy for guiding geosteering drilling and is suitable for simulating the response of resistivity LWD tools.

  9. Development and simulation of microfluidic Wheatstone bridge for high-precision sensor

    NASA Astrophysics Data System (ADS)

    Shipulya, N. D.; Konakov, S. A.; Krzhizhanovskaya, V. V.

    2016-08-01

    In this work we present the results of analytical modeling and 3D computer simulation of microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method of a bridge balancing process by changing the microchannel geometry. This process is based on the “etching in microchannel” technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures a precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves and other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of microchannels was selected based on the analytical estimations. A detailed 3D numerical model was based on Navier-Stokes equations for a laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems.
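    The balance condition exploited above follows from the hydraulic-electric analogy: with Hagen-Poiseuille channel resistances, no flow crosses the bridge channel when R1/R2 = R3/R4. A sketch using textbook nodal analysis (the channel dimensions and the linear solve are illustrative assumptions, not the authors' geometry or method):

```python
import numpy as np

def poiseuille_resistance(mu, length, radius):
    """Hydraulic resistance of a circular microchannel (Hagen-Poiseuille):
    pressure drop = R * flow rate, with R = 8*mu*L / (pi * r**4)."""
    return 8.0 * mu * length / (np.pi * radius ** 4)

def bridge_flow(R1, R2, R3, R4, Rb, Q_in=1.0):
    """Flow through the bridge channel Rb of a Wheatstone network fed by
    a constant total flow Q_in; outlet pressure held at zero.
    Arms: inlet-R1-A-R2-outlet and inlet-R3-B-R4-outlet, bridge A-Rb-B."""
    # Mass conservation at the inlet node and at nodes A and B:
    A = np.array([
        [1/R1 + 1/R3, -1/R1,               -1/R3],
        [-1/R1,        1/R1 + 1/R2 + 1/Rb, -1/Rb],
        [-1/R3,       -1/Rb,                1/R3 + 1/R4 + 1/Rb],
    ])
    b = np.array([Q_in, 0.0, 0.0])
    p_in, p_a, p_b = np.linalg.solve(A, b)
    return (p_a - p_b) / Rb

mu = 1e-3  # water viscosity in Pa*s; dimensions below are illustrative
R = lambda L, r: poiseuille_resistance(mu, L, r)
balanced = bridge_flow(R(1e-2, 50e-6), R(1e-2, 50e-6),
                       R(2e-2, 50e-6), R(2e-2, 50e-6), R(5e-3, 50e-6))
print(abs(balanced))  # equal arm ratios: essentially no bridge flow
```

    Changing any arm's length or radius (as the authors do via in-channel etching) unbalances the ratios and drives a measurable flow through the bridge channel, which is what makes the network useful as a sensor.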

  10. Tracing the source of numerical climate model uncertainties in precipitation simulations using a feature-oriented statistical model

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Jones, A. D.; Rhoades, A.

    2017-12-01

    Precipitation is a key component in hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite the continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model had exactly replicated the true world. But a numerical model cannot exactly replicate the true world. Therefore, the other type of model uncertainty is related to errors in the model physics, such as the parameterization of sub-grid scale processes, i.e., given precise input conditions, how much error could be generated by the imprecise model. Here, we build two statistical models based on a neural network algorithm to predict long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as the predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. 
By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical model derived from true world information and modeled world information, we assess the errors lying in the physics of the numerical models. This work provides a unique insight to assess the performance of numerical climate models, and can be used to guide improvement of precipitation prediction.
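
    The perturbation-based attribution described above can be illustrated with a minimal sketch: shuffle one predictor at a time in a fitted model and measure how much the output changes. The `model` below is a hypothetical linear stand-in for the authors' trained neural network, and all names and weights are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained statistical model: any callable
# mapping a predictor matrix X (n_samples x n_features) to predictions.
def model(X):
    w = np.array([2.0, 0.5, 0.1])  # toy weights standing in for a fitted network
    return X @ w

def perturbation_importance(model, X, n_repeats=20, rng=rng):
    """Estimate each predictor's share of output variability by shuffling it."""
    base = model(X)
    importance = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # destroy predictor j's signal
            deltas.append(np.mean((model(Xp) - base) ** 2))
        importance[j] = np.mean(deltas)
    return importance / importance.sum()  # normalized contribution per predictor

X = rng.normal(size=(500, 3))
print(perturbation_importance(model, X))
```

    Predictors whose perturbation changes the output most receive the largest share, mirroring how the study attributes outcome uncertainty to individual predictors.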

  11. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.

  12. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature K_c = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
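
    Histogram reweighting, one of the analysis tools named above, extrapolates averages measured at one coupling K to a nearby K' by reweighting each sampled configuration with exp(-(K'-K)E_i). A minimal sketch on a toy two-level system (not the Ising data of the paper, where the exact answer is unknown) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

def reweight(E, A, K, K_new):
    """Estimate <A> at coupling K_new from samples (E_i, A_i) drawn at K.
    Weights follow exp(-(K_new - K) * E); energies are shifted for stability."""
    dE = E - E.min()
    logw = -(K_new - K) * dE
    w = np.exp(logw - logw.max())
    return np.sum(w * A) / np.sum(w)

# Toy two-level system with energies 0 and 1, where the exact Boltzmann
# average at any coupling is available for comparison.
K = 1.0
E_levels = np.array([0.0, 1.0])
p = np.exp(-K * E_levels); p /= p.sum()
E = rng.choice(E_levels, size=200_000, p=p)   # "Monte Carlo" samples at K

K_new = 1.2
est = reweight(E, E, K, K_new)                # reweighted <E> at K_new
exact = (E_levels * np.exp(-K_new * E_levels)).sum() / np.exp(-K_new * E_levels).sum()
print(est, exact)
```

    In the actual analysis the same reweighting is applied to full energy histograms from the Wolff simulations, allowing thermodynamic quantities to be traced continuously through the critical region.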

  13. Microfluidic proportional flow controller

    PubMed Central

    Prentice-Mott, Harrison; Toner, Mehmet; Irimia, Daniel

    2011-01-01

    Precise flow control in microfluidic chips is important for many biochemical assays and experiments at the microscale. While several technologies for controlling fluid flow have been implemented either on- or off-chip, these provide either high-speed or high-precision control, but seldom both at the same time. Here we describe a new on-chip, pneumatically activated flow controller that allows fast and precise control of the flow rate through a microfluidic channel. Experimental results show that the new proportional flow controllers exhibited a response time of approximately 250 ms, while our numerical simulations suggest that faster actuation, down to approximately 50 ms, could be achieved with alternative actuation schemes. PMID:21874096

  14. Numerical simulations of epitaxial growth process in MOVPE reactor as a tool for design of modern semiconductors for high power electronics

    NASA Astrophysics Data System (ADS)

    Skibinski, Jakub; Caban, Piotr; Wejrzanowski, Tomasz; Kurzydlowski, Krzysztof J.

    2014-10-01

    In the present study, numerical simulations of the epitaxial growth of gallium nitride in the AIX-200/4RF-S Metal Organic Vapor Phase Epitaxy reactor are addressed. Epitaxial growth means crystal growth that progresses while inheriting the laminar structure and the orientation of the substrate crystals. One of the technological problems is obtaining a homogeneous growth rate over the main deposit area. Since many factors influence the reaction over the crystal area, such as temperature, pressure, gas flow, and reactor geometry, it is difficult to design an optimal process. Because it is impossible to determine experimentally the exact distribution of heat and mass transfer inside the reactor during crystal growth, modeling is the only way to understand the process precisely. Numerical simulations make it possible to understand the epitaxial process through calculation of the heat and mass transfer distribution during the growth of gallium nitride. Including chemical reactions in the numerical model allows calculation of the growth rate on the substrate and estimation of the optimal process conditions for obtaining the most homogeneous product.

  15. Clinical value of hemodynamic numerical simulation applied in the treatment of cerebral aneurysm.

    PubMed

    Zhang, Hailin; Li, Li; Cheng, Chongjie; Sun, Xiaochuan

    2017-12-01

    Our objective was to evaluate the clinical value of numerical simulation in diagnosing cerebral aneurysm, based on the analysis of a numerical simulation of a hemodynamic model. The experimental method used a numerical model of cerebral aneurysm hemodynamics, and the blood-flow values at each point were analyzed. The results showed that the wall shear stress (WSS) value on the top of CA1 was significantly lower than that at the tumor neck (P<0.05), and the WSS value at each point on the CA2 tumor was significantly lower than that at the tumor neck (P<0.05); the pressure values at the tumor top and tumor neck did not differ significantly between CA1 and CA2 (P>0.05); the unsteady index of shear (UIS) value at the 20 measured points changed distinctly, with a range of 0.6-1.5; the unsteady index of pressure (UIP) value at every point was significantly lower than the UIS value, with a range of 0.25-0.40. In conclusion, hemodynamic research on cerebral aneurysms can help doctors diagnose cerebral aneurysm more precisely and grasp the opportunity for treatment when formulating treatment strategies.

  16. Numerical Simulations of the Digital Microfluidic Manipulation of Single Microparticles.

    PubMed

    Lan, Chuanjin; Pal, Souvik; Li, Zhen; Ma, Yanbao

    2015-09-08

    Single-cell analysis techniques have been developed as a valuable bioanalytical tool for elucidating cellular heterogeneity at genomic, proteomic, and cellular levels. Cell manipulation is an indispensable process for single-cell analysis. Digital microfluidics (DMF) is an important platform for conducting cell manipulation and single-cell analysis in a high-throughput fashion. However, the manipulation of single cells in DMF has not been quantitatively studied so far. In this article, we investigate the interaction of a single microparticle with a liquid droplet on a flat substrate using numerical simulations. The droplet is driven by capillary force generated from the wettability gradient of the substrate. Considering the Brownian motion of microparticles, we utilize many-body dissipative particle dynamics (MDPD), an off-lattice mesoscopic simulation technique, in this numerical study. The manipulation processes (including pickup, transport, and drop-off) of a single microparticle with a liquid droplet are simulated. Parametric studies are conducted to investigate the effects on the manipulation processes from the droplet size, wettability gradient, wetting properties of the microparticle, and particle-substrate friction coefficients. The numerical results show that the pickup, transport, and drop-off processes can be precisely controlled by these parameters. On the basis of the numerical results, a trap-free delivery of a hydrophobic microparticle to a destination on the substrate is demonstrated in the numerical simulations. The numerical results not only provide a fundamental understanding of interactions among the microparticle, the droplet, and the substrate but also demonstrate a new technique for the trap-free immobilization of single hydrophobic microparticles in the DMF design. Finally, our numerical method also provides a powerful design and optimization tool for the manipulation of microparticles in DMF systems.

  17. Turbulence statistics with quantified uncertainty in cold-wall supersonic channel flow

    NASA Astrophysics Data System (ADS)

    Ulerich, Rhys; Moser, Robert D.

    2012-11-01

    To investigate compressibility effects in wall-bounded turbulence, a series of direct numerical simulations of compressible channel flow with isothermal (cold) walls have been conducted. All combinations of Re = {3000, 5000} and Ma = {0.1, 0.5, 1.5, 3.0} have been simulated, where the Reynolds and Mach numbers are based on bulk velocity and sound speed at the wall temperature. Turbulence statistics with precisely quantified uncertainties computed from these simulations will be presented and are being made available in a public database at http://turbulence.ices.utexas.edu/. The simulations were performed using a new pseudo-spectral code called Suzerain, which was designed to efficiently produce high quality data on compressible, wall-bounded turbulent flows using a semi-implicit Fourier/B-spline numerical formulation. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

  18. Faster and exact implementation of the continuous cellular automaton for anisotropic etching simulations

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-02-01

    The current success of the continuous cellular automaton for the simulation of anisotropic wet chemical etching of silicon in microengineering applications is based on a relatively fast, approximate, constant time stepping (CTS) implementation, whose accuracy against the exact algorithm, a computationally slow, variable time stepping (VTS) implementation, has not previously been analyzed in detail. In this study we show that the CTS implementation can generate moderately wrong etch rates and overall etching fronts, thus justifying the presentation of a novel, exact reformulation of the VTS implementation based on a new state variable, referred to as the predicted removal time (PRT), and the use of a self-balanced binary search tree that enables storage of and efficient access to the PRT values in each time step in order to quickly remove the corresponding surface atom(s). The proposed PRT method reduces the simulation cost of the exact implementation from O(N^{5/3}) to O(N^{3/2} log N) without introducing any model simplifications. This enables more precise simulations (limited only by numerical precision errors) with affordable computational times that are similar to the less precise CTS implementation, and even faster for low-reactivity systems.
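
    The PRT idea can be sketched with a priority queue: store each surface atom's predicted removal time in a structure with O(log N) extract-min, and let the simulation clock jump directly to the earliest event. Python has no built-in self-balanced binary search tree, so this illustrative sketch uses a binary heap, which offers the same extract-min asymptotics; the real automaton would also recompute the PRTs of newly exposed neighbours, which is omitted here.

```python
import heapq

def etch(removal_times):
    """Minimal sketch of predicted-removal-time (PRT) event scheduling.

    removal_times: dict atom_id -> predicted removal time (assumed fixed
    here for simplicity). Returns atoms in removal order with the event time.
    """
    heap = [(t, atom) for atom, t in removal_times.items()]
    heapq.heapify(heap)                # O(N) build; each pop is O(log N)
    order = []
    while heap:
        t, atom = heapq.heappop(heap)  # earliest predicted removal
        order.append((t, atom))        # clock advances straight to the event
    return order

print(etch({"a": 2.5, "b": 0.7, "c": 1.9}))
# → [(0.7, 'b'), (1.9, 'c'), (2.5, 'a')]
```

    Replacing a scan over all N surface atoms per step with an ordered structure is exactly what drops the cost from O(N^{5/3}) toward O(N^{3/2} log N) in the paper's analysis.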

  19. Numerical Simulation of Measurements during the Reactor Physical Startup at Unit 3 of Rostov NPP

    NASA Astrophysics Data System (ADS)

    Tereshonok, V. A.; Kryakvin, L. V.; Pitilimov, V. A.; Karpov, S. A.; Kulikov, V. I.; Zhylmaganbetov, N. M.; Kavun, O. Yu.; Popykin, A. I.; Shevchenko, R. A.; Shevchenko, S. A.; Semenova, T. V.

    2017-12-01

    The results of numerical calculations and measurements of several reactor parameters during the physical startup tests at unit 3 of Rostov NPP are presented. The following parameters are considered: the critical boric acid concentration and the currents from ionization chambers (IC) during the scram system efficiency evaluation. The scram system efficiency was determined using the inverse point kinetics equation with the measured and simulated IC currents. The results of steady-state calculations of the relative power distribution and of the efficiency of the scram system and of separate groups of control rods of the control and protection system are also presented. The calculations are performed using several codes, including precision codes.
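
    The inverse point kinetics evaluation mentioned above recovers reactivity ρ(t) from a measured detector trace n(t) by integrating the delayed-neutron precursor balance and inverting the point kinetics equation, ρ = β + (Λ/n)(dn/dt − Σ λ_i C_i). A one-delayed-group sketch with illustrative constants (not plant data, and not the multi-group treatment a real evaluation would use):

```python
import numpy as np

# Illustrative one-group constants: delayed fraction, precursor decay
# constant [1/s], and neutron generation time [s].
beta, lam, Lam = 0.0065, 0.08, 2e-5

def inverse_kinetics(t, n):
    """Recover reactivity rho(t) from a measured flux/current trace n(t)."""
    C = beta * n[0] / (Lam * lam)                 # equilibrium precursor start
    rho = np.zeros_like(n)
    for k in range(1, len(n)):
        dt = t[k] - t[k - 1]
        C += dt * (beta / Lam * n[k - 1] - lam * C)  # precursor balance
        dndt = (n[k] - n[k - 1]) / dt
        rho[k] = beta + Lam / n[k] * (dndt - lam * C)
    return rho

t = np.linspace(0.0, 50.0, 5001)
n = np.ones_like(t)                 # constant detector signal: critical reactor
rho = inverse_kinetics(t, n)
print(rho[-1])                      # ~0 for a critical reactor
```

    Feeding in the measured or simulated IC currents in place of the constant trace is what yields the scram worth in the study's methodology.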

  20. Hardware Simulations of Spacecraft Attitude Synchronization Using Lyapunov-Based Controllers

    NASA Astrophysics Data System (ADS)

    Jung, Juno; Park, Sang-Young; Eun, Youngho; Kim, Sung-Woo; Park, Chandeok

    2018-04-01

    In the near future, space missions with multiple spacecraft are expected to replace traditional missions with a single large spacecraft. These spacecraft formation flying missions generally require precise knowledge of the relative position and attitude between neighboring agents. In this study, among the several challenging issues, we focus on techniques to control spacecraft attitude synchronization in formation. We develop a number of nonlinear control schemes based on the Lyapunov stability theorem for three situations: full-state feedback control, full-state feedback control with unknown inertia parameters, and output feedback control without angular velocity measurements. All the proposed controllers offer absolute and relative control using a reaction wheel assembly for both regulator and tracking problems. In addition to the numerical simulations, an air-bearing-based hardware-in-the-loop (HIL) system is used to verify the proposed control laws in real-time hardware environments. The pointing errors converge to 0.5° in the numerical simulations and to 2° using the HIL system. Consequently, both numerical and hardware simulations confirm the performance of the spacecraft attitude synchronization algorithms developed in this study.

  1. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of the low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We do so by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new generation of sensors that future satellite missions will carry. We have therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions.
    As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in assessing error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and in their consistent stochastic modeling within the adjustment process.

  2. Simulation and analysis of a geopotential research mission

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.

    1987-01-01

    Computer simulations were performed for a Geopotential Research Mission (GRM) to enable the study of the gravitational sensitivity of the range rate measurements between the two satellites and to provide a set of simulated measurements to assist in the evaluation of techniques developed for the determination of the gravity field. The simulations were conducted with two satellites in near circular, frozen orbits at 160 km altitude, separated by 300 km. High precision numerical integration of the polar orbits was used with a gravitational field complete to degree and order 360. The set of simulated data for a mission duration of about 32 days was generated on a Cray X-MP computer. The results presented cover the most recent simulation, S8703, and include a summary of the numerical integration of the simulated trajectories, a summary of the requirements to compute nominal reference trajectories to meet the initial orbit determination requirements for the recovery of the geopotential, an analysis of the nature of the one way integrated Doppler measurements associated with the simulation, and a discussion of the data set to be made available.

  3. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
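
    The deconvolution step at the heart of this kind of reconstruction, and its sensitivity to a mis-estimated transfer-function parameter, can be illustrated in one dimension. This is a generic Wiener-style sketch with an assumed Gaussian transfer function, not the on-axis speckle transfer function model of the paper:

```python
import numpy as np

# Toy 1-D illustration: blur a "scene" with a Gaussian transfer function,
# then deconvolve with a slightly mis-estimated model of that function to
# see the photometric error that results.
n = 256
freq = np.fft.fftfreq(n)
scene = 1.0 + 0.1 * np.sin(2 * np.pi * 8 * np.arange(n) / n)

def gaussian_tf(width):
    return np.exp(-0.5 * (freq / width) ** 2)

observed = np.fft.ifft(np.fft.fft(scene) * gaussian_tf(0.05)).real

def reconstruct(observed, model_width, eps=1e-6):
    """Wiener-style deconvolution with a modeled transfer function."""
    tf = gaussian_tf(model_width)
    return np.fft.ifft(np.fft.fft(observed) * tf / (tf ** 2 + eps)).real

good = reconstruct(observed, 0.05)   # well-estimated model parameter
bad = reconstruct(observed, 0.04)    # 20% error in the model parameter
print(np.max(np.abs(good - scene)), np.max(np.abs(bad - scene)))
```

    The parameter playing the role of `model_width` is the analogue of the atmospheric-distortion input the paper finds the reconstruction most sensitive to: a well-estimated value recovers the scene almost exactly, while a 20% error leaves a percent-level photometric residual.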

  4. Numerical simulation of fluid flow and heat transfer in enhanced copper tube

    NASA Astrophysics Data System (ADS)

    Rahman, M. M.; Zhen, T.; Kadir, A. K.

    2013-06-01

    An inner grooved tube is enhanced with grooves that increase the inner surface area. Due to its high efficiency of heat transfer, it is used widely in power generation, air conditioning and many other applications. A heat exchanger is one example that uses inner grooved tube to enhance the rate of heat transfer. Precision in the production of inner grooved copper tube is very important because various tube parameters affect the tube's performance. Therefore, it is necessary to carry out an analysis to optimize tube performance prior to production in order to avoid unnecessary loss. The analysis can be carried out either through experimentation or numerical simulation. However, an experimental study is costly and takes a long time to gather the necessary information, so numerical simulation was conducted instead. First, the model of the inner grooved tube was generated using SOLIDWORKS. It was then imported into GAMBIT for healing, followed by meshing and the setting of boundary types and zones. Next, the simulation was carried out in FLUENT, where all the boundary conditions were set. The simulation results were compared with published experimental results, showing heat transfer enhancement in the range of 649.66% to 917.22% for the inner grooved tube compared to a plain tube.

  5. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.

  6. Exact event-driven implementation for recurrent networks of stochastic perfect integrate-and-fire neurons.

    PubMed

    Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo

    2012-12-01

    In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we answer these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially indicated for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.

  7. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. 
Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
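
    Reduced precision can be emulated in a few lines by forcing every arithmetic step of a toy model into a low-precision type. The sketch below integrates the Lorenz '96 model mentioned above, dx_i/dt = (x_{i+1} − x_{i−2})x_{i−1} − x_i + F, in NumPy float16 versus float64. It is a crude stand-in for a dedicated emulation tool and contains no assimilation system; it only shows the rounding-induced divergence that such studies quantify.

```python
import numpy as np

def l96_tendency(x, F=8.0):
    """Lorenz '96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x0, steps, dt=0.01, dtype=np.float64):
    """Forward-Euler integration carried out entirely in `dtype`, rounding
    the state after every step to emulate reduced-precision arithmetic."""
    x = x0.astype(dtype)
    for _ in range(steps):
        x = (x + dtype(dt) * l96_tendency(x).astype(dtype)).astype(dtype)
    return x.astype(np.float64)

rng = np.random.default_rng(2)
x0 = 8.0 + 0.5 * rng.standard_normal(40)
ref = integrate(x0, 200)                        # double precision reference
half = integrate(x0, 200, dtype=np.float16)     # emulated half precision
print(np.max(np.abs(ref - half)))               # rounding-induced divergence
```

    In a chaotic system like Lorenz '96 the half-precision trajectory separates from the double-precision one, and the question studied above is whether that separation stays within the tolerance already imposed by observation noise and model error.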

  8. A Density Perturbation Method to Study the Eigenstructure of Two-Phase Flow Equation Systems

    NASA Astrophysics Data System (ADS)

    Cortes, J.; Debussche, A.; Toumi, I.

    1998-12-01

    Many interesting and challenging physical mechanisms are connected with the mathematical notion of eigenstructure. In two-fluid models, complex phasic interactions yield a complex eigenstructure, which may raise numerous problems in numerical simulations. In this paper, we develop a perturbation method to examine the eigenvalues and eigenvectors of two-fluid models. This original method, based on the stiffness of the density ratio, provides a convenient tool to study the relevance of pressure momentum interactions and allows us to obtain precise approximations of the whole flow eigendecomposition at minor computational cost. The Roe scheme is successfully implemented and some numerical tests are presented.

  9. Two dimensional PMMA nanofluidic device fabricated by hot embossing and oxygen plasma assisted thermal bonding methods

    NASA Astrophysics Data System (ADS)

    Yin, Zhifu; Sun, Lei; Zou, Helin; Cheng, E.

    2015-05-01

    A method for obtaining a low-cost, high-replication-precision two-dimensional (2D) nanofluidic device in a polymethyl methacrylate (PMMA) sheet is proposed. To improve the replication precision of the 2D PMMA nanochannels during the hot embossing process, the deformation of the PMMA sheet was analyzed by numerical simulation. The constants of the generalized Maxwell model used in the numerical simulation were calculated from experimental compressive creep curves using a previously established fitting formula. With optimized process parameters, 176 nm-wide and 180 nm-deep nanochannels were successfully replicated into the PMMA sheet with a replication precision of 98.2%. To thermally bond the 2D PMMA nanochannels with high bonding strength and low dimensional loss, the parameters of the oxygen plasma treatment and thermal bonding process were optimized. To measure the dimensional loss of the 2D nanochannels after thermal bonding, an evaluation method based on nanoindentation experiments was proposed. According to this method, the total dimensional loss of the 2D nanochannels was 6 nm in width and 21 nm in depth. The tensile bonding strength of the 2D PMMA nanofluidic device was 0.57 MPa. Fluorescence images demonstrate that there was no blocking or leakage over the entire microchannels and nanochannels.
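
    Fitting generalized Maxwell (Prony series) constants to measured curves, as done here from compressive creep data, reduces to linear least squares once the relaxation times are fixed. The following sketch fits synthetic relaxation data; the relaxation times, moduli, and noise level are all illustrative assumptions, not the paper's values or its specific fitting formula.

```python
import numpy as np

# Assumed relaxation times [s]; fixing them makes the fit linear.
taus = np.array([1.0, 10.0, 100.0])

def prony_design(t, taus):
    # Columns: constant (long-term modulus) and exp(-t/tau_i) per branch.
    return np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])

# Synthetic "experimental" relaxation data from known constants plus noise.
t = np.linspace(0.0, 300.0, 400)
true = np.array([0.8, 1.2, 0.9, 0.5])        # [E_inf, E_1, E_2, E_3], e.g. GPa
rng = np.random.default_rng(3)
E_data = prony_design(t, taus) @ true + 0.002 * rng.standard_normal(t.size)

# Linear least-squares recovery of the Prony coefficients.
coeffs, *_ = np.linalg.lstsq(prony_design(t, taus), E_data, rcond=None)
print(coeffs)  # recovered [E_inf, E_1, E_2, E_3]
```

    The recovered coefficients would then parameterize the generalized Maxwell material model inside the embossing simulation.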

  10. Self-position estimation using terrain shadows for precise planetary landing

    NASA Astrophysics Data System (ADS)

    Kuga, Tomoki; Kojima, Hirohisa

    2018-07-01

    In recent years, the investigation of moons and planets has attracted increasing attention in several countries. Furthermore, recently developed landing systems are now expected to reach more scientifically interesting areas close to hazardous terrain, requiring precise landing capabilities within a 100 m range of the target point. To achieve this, terrain-relative navigation (capable of estimating the position of a lander relative to the target point on the ground surface) is actively being studied as an effective method for achieving highly accurate landings. This paper proposes a self-position estimation method using shadows on the terrain, based on edge extraction from image processing algorithms. The effectiveness of the proposed method is validated through numerical simulations using images generated from a digital elevation model of simulated terrains.

  11. Fast Neural Solution Of A Nonlinear Wave Equation

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Toomarian, Nikzad

    1996-01-01

    Neural algorithm for simulation of class of nonlinear wave phenomena devised. Numerically solves special one-dimensional case of Korteweg-deVries equation. Intended to be executed rapidly by neural network implemented as charge-coupled-device/charge-injection device, very-large-scale integrated-circuit analog data processor of type described in "CCD/CID Processors Would Offer Greater Precision" (NPO-18972).

  12. Rotating black hole solutions in relativistic analogue gravity

    NASA Astrophysics Data System (ADS)

    Giacomelli, Luca; Liberati, Stefano

    2017-09-01

    Simulation and experimental realization of acoustic black holes in analogue gravity systems have led to a novel understanding of relevant phenomena such as Hawking radiation or superradiance. We explore here the possibility of using relativistic systems for simulating rotating black hole solutions and possibly obtaining an acoustic analogue of a Kerr black hole. In doing so, we demonstrate a precise relation between nonrelativistic and relativistic solutions and provide a new class of vortex solutions for relativistic systems. Such solutions might be used in the future as a test bed in numerical simulations as well as in concrete experiments.

  13. Cosmological neutrino simulations at extreme scale

    DOE PAGES

    Emberson, J. D.; Yu, Hao-Ran; Inman, Derek; ...

    2017-08-01

    Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world’s largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.

  14. LACIS-T - A moist air wind tunnel for investigating the interactions between cloud microphysics and turbulence

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Voigtländer, Jens; Siebert, Holger; Desai, Neel; Shaw, Raymond; Chang, Kelken; Krueger, Steven; Schumacher, Jörg; Stratmann, Frank

    2017-11-01

    Turbulence - cloud droplet interaction processes have been investigated primarily through numerical simulation and field measurements over the last ten years. However, only in the laboratory can we be confident in our knowledge of the initial and boundary conditions, and only there are we able to measure for extended times under statistically stationary and repeatable conditions. Therefore, the newly built turbulent wind tunnel LACIS-T (Turbulent Leipzig Aerosol Cloud Interaction Simulator) is an ideal facility for pursuing a mechanistic understanding of these processes. Within the tunnel we are able to adjust precisely controlled turbulent temperature and humidity fields so as to achieve supersaturation levels that allow for detailed investigations of the interactions between cloud microphysical processes (e.g., cloud droplet activation) and the turbulent flow, under well-defined and reproducible laboratory conditions. We will present the fundamental operating principle, first results from ongoing characterization efforts and numerical simulations, as well as first droplet activation experiments.

  15. The CFS-PML in numerical simulation of ATEM

    NASA Astrophysics Data System (ADS)

    Zhao, Xuejiao; Ji, Yanju; Qiu, Shuo; Guan, Shanshan; Wu, Yanqi

    2017-01-01

    In time-domain simulation of the airborne transient electromagnetic method (ATEM), reflections from the truncated boundary can introduce large errors into the results. The complex frequency shifted perfectly matched layer (CFS-PML) absorbing boundary condition has been shown to absorb low-frequency incident waves well and to greatly reduce late-time reflections. In this paper, we apply the CFS-PML to three-dimensional time-domain numerical simulation of ATEM to achieve high precision. The expression of the divergence equation in the CFS-PML is confirmed, and its explicit iteration format, based on the finite difference method and the recursive convolution technique, is deduced. Finally, we use a uniform half-space model and an anomalous-body model to test the validity of this method. Results show that the CFS-PML can reduce the average relative error to 2.87% and increase the accuracy of anomaly recognition.
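
    For readers unfamiliar with the recursive convolution technique mentioned above, the sketch below computes graded CFS-PML profiles and the standard recursive-convolution update coefficients in the generic CPML form (after Roden and Gedney); the grading exponents and maximum values are illustrative choices, not the paper's parameters.

```python
import math

EPS0 = 8.854187817e-12  # vacuum permittivity (F/m)

def cpml_coefficients(n_pml, dt, sigma_max, kappa_max, alpha_max, m=3):
    """Graded CFS-PML profiles and the recursive-convolution update
    coefficients b and a (generic CPML form, not necessarily the
    paper's exact discretisation)."""
    b, a = [], []
    for i in range(n_pml):
        depth = (i + 0.5) / n_pml            # normalized depth into the PML
        sigma = sigma_max * depth ** m       # polynomial conductivity grading
        kappa = 1.0 + (kappa_max - 1.0) * depth ** m
        alpha = alpha_max * (1.0 - depth)    # CFS term, largest at interface
        bi = math.exp(-(sigma / kappa + alpha) * dt / EPS0)
        ai = sigma / (sigma * kappa + kappa ** 2 * alpha) * (bi - 1.0)
        b.append(bi)
        a.append(ai)
    return b, a

b, a = cpml_coefficients(n_pml=10, dt=1e-9, sigma_max=1e-2,
                         kappa_max=5.0, alpha_max=1e-4)
```

    Each field step then updates an auxiliary convolution variable as psi = b*psi + a*(spatial difference), which is what makes the PML recursion cheap compared with evaluating the convolution directly.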

  16. Proximity Operations for Space Situational Awareness Spacecraft Rendezvous and Maneuvering using Numerical Simulations and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Carrico, T.; Langster, T.; Carrico, J.; Alfano, S.; Loucks, M.; Vallado, D.

    The authors present several spacecraft rendezvous and close-proximity maneuvering techniques modeled with a high-precision numerical integrator using full force models and closed-loop control with a Fuzzy Logic intelligent controller to command the engines. The authors document and compare the maneuvers, fuel use, and other parameters. This paper presents an innovative application of an existing capability to design, simulate and analyze proximity maneuvers, already in use for operational satellites performing other maneuvers. The system has been extended to demonstrate the capability to develop closed-loop control laws to maneuver a spacecraft in close proximity to another, including stand-off, docking, lunar landing and other operations applicable to space situational awareness, space-based surveillance, and operational satellite modeling. The fully integrated end-to-end trajectory ephemerides are available from the authors in electronic ASCII text by request. The benefits of this system include: a realistic physics-based simulation for the development and validation of control laws; a collaborative engineering environment for the design, development and tuning of spacecraft control-law parameters, the sizing of actuators (i.e., rocket engines), and sensor suite selection; an accurate simulation and visualization to communicate the complexity, criticality, and risk of spacecraft operations; a precise mathematical environment for research and development of future spacecraft maneuvering engineering tasks, operational planning and forensic analysis; and a closed-loop, knowledge-based control example for proximity operations. This proximity operations modeling and simulation environment will provide a valuable adjunct to programs in military space control, space situational awareness and civil space exploration engineering and decision-making processes.

  17. Modeling and FE Simulation of Quenchable High Strength Steels Sheet Metal Hot Forming Process

    NASA Astrophysics Data System (ADS)

    Liu, Hongsheng; Bao, Jun; Xing, Zhongwen; Zhang, Dejin; Song, Baoyu; Lei, Chengxi

    2011-08-01

    The high strength steel (HSS) sheet metal hot forming process is investigated by means of numerical simulations. For a reliable numerical process design, knowledge of the thermal and thermo-mechanical properties is essential. In this article, tensile tests are performed to examine the flow stress of the material HSS 22MnB5 at different strains, strain rates, and temperatures. A constitutive model based on a phenomenological approach is developed to describe the thermo-mechanical properties of 22MnB5 by fitting the experimental data. A 2D coupled thermo-mechanical finite element (FE) model is developed to simulate the HSS sheet metal hot forming process for a U-channel part. The ABAQUS/Explicit solver is used to conduct the hot forming stage simulations, and ABAQUS/Implicit is used to accurately predict the springback which occurs at the end of the hot forming stage. Material modeling and FE numerical simulations are carried out to investigate the effect of the processing parameters on the hot forming process. The processing parameters have a significant influence on the microstructure of the U-channel part. The springback after the hot forming stage is the main factor impairing the shape precision of the hot-formed part. A mechanism for springback is proposed and verified through numerical simulations and tensile loading-unloading tests. Creep strain is found in the tensile loading-unloading test under isothermal conditions and has a distinct effect on springback. According to the numerical and experimental results, it can be concluded that springback is mainly caused by different cooling rates and the nonhomogeneous shrinkage of the material during the hot forming process, while creep strain is the main factor influencing the amount of springback.

  18. Universality of the logarithmic velocity profile restored

    NASA Astrophysics Data System (ADS)

    Luchini, Paolo

    2017-11-01

    The logarithmic velocity profile of wall-bounded turbulent flow, despite its widespread adoption in research and in teaching, exhibits discrepancies with both experiments and numerical simulations that have been repeatedly observed in the literature; serious doubts ensued about its precise form and universality, leading to the formulation of alternate theories and hindering ongoing experimental efforts to measure von Kármán's constant. By comparing different geometries of pipe, plane-channel and plane-Couette flow, here we show that such discrepancies can be physically interpreted, and analytically accounted for, through an equally universal higher-order correction caused by the pressure gradient. Inclusion of this term produces a tenfold increase in the adherence of the predicted profile to existing experiments and numerical simulations in all three geometries. Universality of the logarithmic law then emerges beyond doubt and a satisfactorily simple formulation is established. Among the consequences of this formulation is a strongly increased confidence that the Reynolds number of present-day direct numerical simulations is actually high enough to uncover asymptotic behaviour, but research efforts are still needed in order to increase their accuracy.

  19. Proper Generalized Decomposition (PGD) for the numerical simulation of polycrystalline aggregates under cyclic loading

    NASA Astrophysics Data System (ADS)

    Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck

    2018-02-01

    The numerical modelling of the behaviour of materials at the microstructural scale has developed greatly over the last two decades. Unfortunately, conventional resolution methods cannot simulate polycrystalline aggregates beyond tens of loading cycles, and they lose quantitative accuracy because of the plastic behaviour. This work presents the development of a numerical solver for the Finite Element modelling of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts. The first consists in maintaining a constant stiffness matrix. The second uses a time/space model reduction method. In order to analyse the applicability and the performance of a space-time separated representation, the simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and two time increments per cycle are investigated. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the model reduction method becomes faster than the standard solver.
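
    The core operation of such a time/space model reduction is the construction of separated modes X(x)·T(t). The sketch below runs the alternating fixed-point update of a single PGD-style enrichment mode on a space-time snapshot matrix; the field is deliberately chosen separable so that one mode reconstructs it exactly. This is a minimal illustration, not the paper's FE solver.

```python
import math

# Space-time snapshot matrix u[i][j] = U(x_i, t_j) for a field that is
# exactly separable, so a single space-time mode suffices.
nx, nt = 40, 30
xs = [i * math.pi / (nx - 1) for i in range(nx)]
ts = [j * 0.1 for j in range(nt)]
U = [[math.sin(x) * math.exp(-t) for t in ts] for x in xs]

# Alternating (fixed-point) update for one separated mode X(x)*T(t),
# the core operation of a greedy PGD enrichment step.
T = [1.0] * nt
for _ in range(20):
    X = [sum(U[i][j] * T[j] for j in range(nt)) /
         sum(tj * tj for tj in T) for i in range(nx)]
    T = [sum(U[i][j] * X[i] for i in range(nx)) /
         sum(xi * xi for xi in X) for j in range(nt)]

# Relative error of the rank-1 reconstruction (Frobenius norm)
num = sum((U[i][j] - X[i] * T[j]) ** 2 for i in range(nx) for j in range(nt))
den = sum(U[i][j] ** 2 for i in range(nx) for j in range(nt))
rel_err = math.sqrt(num / den)
```

    In a real PGD solver the field is not separable, so modes are added greedily until the residual is small; the alternating update above is repeated once per enrichment.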

  20. On the precision of aero-thermal simulations for TMT

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Thompson, Hugh

    2016-08-01

    Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among others, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict Observatory Performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus computational cost, while dictating a reduced prediction error bar. The current study presents the strategy followed to minimize prediction time and computational resources, the subsequent physical and numerical limitations and finally the approach to mitigate the issues experienced. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.
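
    A mesh-independence study of the kind mentioned above is commonly quantified with Richardson extrapolation on three systematically refined grids; the sketch below shows that standard recipe on synthetic second-order data (the formula is generic, not TMT-specific).

```python
import math

def richardson(f_fine, f_mid, f_coarse, r):
    """Observed order of accuracy and extrapolated grid-independent
    value from solutions on three grids with constant refinement
    ratio r -- the standard ingredient of a mesh-independence study."""
    p = math.log((f_coarse - f_mid) / (f_mid - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_mid) / (r ** p - 1.0)
    return p, f_exact

# Synthetic second-order data: f(h) = 2.0 + 0.3 * h**2
f = lambda h: 2.0 + 0.3 * h ** 2
p, f0 = richardson(f(0.05), f(0.1), f(0.2), r=2.0)
```

    Recovering the expected order p ≈ 2 (and the exact limit 2.0) confirms the grids are in the asymptotic range; a p far from the scheme's formal order signals that further refinement is needed.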

  1. The least channel capacity for chaos synchronization.

    PubMed

    Wang, Mogei; Wang, Xingyuan; Liu, Zhenzhen; Zhang, Huaguang

    2011-03-01

    Recently, researchers have found that a channel with capacity exceeding the Kolmogorov-Sinai entropy of the drive system (h_KS) is theoretically necessary and sufficient to sustain unidirectional synchronization to arbitrarily high precision. In this study, we use symbolic dynamics and the automaton reset sequence to distinguish the information that is required to identify the current drive word and obtain synchronization. We then show that the least channel capacity that is sufficient to transmit the distinguished information and attain synchronization of arbitrarily high precision is h_KS. Numerical simulations provide support for our conclusions.
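
    The role of h_KS can be made concrete with a toy drive system. The sketch below estimates the KS entropy of the fully chaotic logistic map from its symbolic dynamics as a block-entropy increment; the true value is ln 2 ≈ 0.693 nats per symbol, which by the result above is the least sufficient channel capacity. The map and sample sizes are illustrative choices, not taken from the paper.

```python
import math
from collections import Counter

# Symbolic sequence of the fully chaotic logistic map x -> 4x(1-x),
# partitioned at the critical point x = 1/2. Its KS entropy is ln 2,
# so a channel of capacity >= ln 2 nats/symbol suffices for the
# synchronization discussed above.
x = 0.4
symbols = []
for _ in range(60000):
    x = 4.0 * x * (1.0 - x)
    symbols.append(0 if x < 0.5 else 1)

def block_entropy(seq, L):
    counts = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
    n = sum(counts.values())
    return -sum(c / n * math.log(c / n) for c in counts.values())

# Entropy rate estimated as the L-block entropy increment (in nats)
h_est = block_entropy(symbols, 8) - block_entropy(symbols, 7)
```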

  2. Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo

    2015-01-01

    Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, they can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that the numerical issues are ultimately due to a drop in the spatial resolution during the simulation, which drastically reduces the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited for massive particles, able to solve for their orbits in highly dynamical backgrounds in a fast and precise way. The new refinement criterion we designed enforces the region around each massive particle to remain at the maximum allowed resolution, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, effectively avoiding all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation has the advantage of not altering the physical evolution of the MBHs, accounting for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.

  3. A discrete time-varying internal model-based approach for high precision tracking of a multi-axis servo gantry.

    PubMed

    Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing

    2014-09-01

    In this paper, we consider discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real time experimental results are provided, achieving tracking errors of around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    PubMed

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
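
    The analytical error expression itself is not reproduced here, but its key scaling can be illustrated with a Monte Carlo sketch: the precision of the standard deviation estimate improves roughly as 1/√N with the number of detected photons N (a 1-D toy model without pixelation or background, both of which enter the paper's full expression).

```python
import math
import random

random.seed(1)

def sigma_estimates(n_photons, n_images, psf_sigma=1.0):
    """Sample standard deviation of photon arrival positions for many
    images of one immobile emitter (1-D toy model, no pixelation)."""
    out = []
    for _ in range(n_images):
        xs = [random.gauss(0.0, psf_sigma) for _ in range(n_photons)]
        m = sum(xs) / n_photons
        out.append(math.sqrt(sum((x - m) ** 2 for x in xs) / (n_photons - 1)))
    return out

def spread(vals):
    """Standard deviation across repeated measurements: the measurement
    precision the abstract is concerned with."""
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / (len(vals) - 1))

err_100 = spread(sigma_estimates(100, 300))
err_1000 = spread(sigma_estimates(1000, 300))
```

    For Gaussian statistics the spread is approximately psf_sigma/sqrt(2N), so a tenfold increase in detected photons should shrink it by roughly sqrt(10) ≈ 3.2.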

  5. Hybrid thrusters and reaction wheels strategy for large angle rapid reorientation with high precision

    NASA Astrophysics Data System (ADS)

    Ye, Dong; Sun, Zhaowei; Wu, Shunan

    2012-08-01

    The quaternion-based, high precision, large angle rapid reorientation of rigid spacecraft is the main problem investigated in this study. The operation is accomplished via a hybrid thrusters and reaction wheels strategy, where thrusters provide a primary maneuver torque in open loop, while reaction wheels provide fine control torque to achieve high precision in closed-loop control. The inaccuracy of the thrusters is handled by a variable structure control (VSC). In addition, a signum function is incorporated in the switching surface of the VSC to drive the maneuver toward the reference attitude trajectory along the shortest distance. Detailed proofs and numerical simulation examples are presented to illustrate all the technical aspects of this work.
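
    The effect of the signum function in the switching surface can be sketched on a single-axis double integrator (a deliberate simplification of the paper's quaternion-based rigid-body dynamics): the sign of s = de/dt + λe drives the error onto the sliding surface s = 0, after which it decays exponentially.

```python
# Sliding-mode sketch on a single-axis double integrator (not the
# paper's quaternion-based controller): the signum of the switching
# surface s = v + lam*x drives the error onto s = 0, where v = -lam*x
# and the error then decays exponentially.
lam, K, dt = 1.0, 2.0, 1e-3
x, v = 1.0, 0.0          # attitude error and its rate
for _ in range(10000):   # 10 s of simulated time
    s = v + lam * x
    u = -K * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)
    v += u * dt
    x += v * dt
```

    The discontinuous control chatters at the sampling rate, which is why practical designs (including hybrid actuation schemes like the one above) reserve the switching term for coarse authority and hand fine control to a smooth actuator.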

  6. Petascale turbulence simulation using a highly parallel fast multipole method on GPUs

    NASA Astrophysics Data System (ADS)

    Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji

    2013-03-01

    This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.
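
    The weak-scaling efficiencies quoted above (74% for the FMM-based vortex method versus 14% for the FFT-based spectral method) follow the standard definition: with the problem size growing in proportion to the process count, the ideal runtime is constant, so efficiency is the reference runtime divided by the measured runtime. The timings below are hypothetical, chosen only to reproduce a ~74% figure.

```python
def weak_scaling_efficiency(t_ref, t_p):
    """Weak scaling: problem size grows with process count, so ideal
    runtime is constant; efficiency is t(reference) / t(P)."""
    return t_ref / t_p

# Hypothetical timings (not from the paper): a code taking 100 s on the
# reference run and 135 s on many processes has ~74% weak efficiency.
eff = weak_scaling_efficiency(100.0, 135.0)
```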

  7. Computationally efficient method for optical simulation of solar cells and their applications

    NASA Astrophysics Data System (ADS)

    Semenikhin, I.; Zanuccoli, M.; Fiegna, C.; Vyurkov, V.; Sangiorgi, E.

    2013-01-01

    This paper presents two novel implementations of the Differential method to solve the Maxwell equations in nanostructured optoelectronic solid state devices. The first proposed implementation is based on an improved and computationally efficient T-matrix formulation that adopts multiple-precision arithmetic to tackle the numerical instability problem which arises due to evanescent modes. The second implementation adopts an iterative approach that allows one to achieve low computational complexity O(N log N) or better. The proposed algorithms can handle structures with arbitrary spatial variation of the permittivity. The developed two-dimensional numerical simulator is applied to analyze the dependence of the absorption characteristics of a thin silicon slab on the morphology of the front interface and on the angle of incidence of the radiation with respect to the device surface.
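
    Why multiple-precision arithmetic helps with evanescent modes can be seen in a toy example: evanescent solutions pair huge growing exponentials exp(+x) with tiny decaying ones exp(-x), and combinations that should recover the decaying part cancel catastrophically in double precision. The sketch below uses Python's decimal module as the multiple-precision stand-in; the T-matrix algebra itself is not reproduced.

```python
import math
from decimal import Decimal, getcontext

# Evanescent modes pair huge growing exponentials exp(+x) with tiny
# decaying ones exp(-x). In double precision, the identity
# cosh(x) - sinh(x) = exp(-x) is destroyed by cancellation: the result
# bears no relation to the true value ~1.9e-174.
x = 400
naive = math.cosh(x) - math.sinh(x)

getcontext().prec = 400                  # enough digits to span both scales
d = Decimal(x)
e_plus, e_minus = d.exp(), (-d).exp()
stable = (e_plus + e_minus) / 2 - (e_plus - e_minus) / 2   # recovers exp(-x)
```

    The working precision must cover the full dynamic range between the growing and decaying exponentials (here roughly 10^174 down to 10^-174), which is exactly the situation a T-matrix accumulates when deep evanescent layers are stacked.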

  8. Cell light scattering characteristic numerical simulation research based on FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong

    2017-01-01

    In this study, the finite-difference time-domain (FDTD) algorithm is used to solve the cell light scattering problem. Before carrying out the simulation comparison, it is necessary to identify the changes or differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. Preparation for the simulation involves building a simple cell model consisting of organelles, a nucleus and cytoplasm, and setting a suitable mesh precision. Setting up a total-field/scattered-field source as the excitation and a far-field projection analysis group is also important. Each step is grounded in mathematical principles such as numerical dispersion, the perfectly matched layer boundary condition and near-to-far-field extrapolation. The simulation results indicate that a change in the position of the nucleus increases the back-scattering intensity, and significant differences in the peak scattering intensity may result from changes in the size of the cytoplasm. The study may help uncover regularities from the simulation results, which can be meaningful for the early diagnosis of cancers.
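
    The heart of any FDTD solver is the leapfrog Yee update. The sketch below is a minimal 1-D version in normalized units with a soft Gaussian source; the study above uses a 3-D grid with a total-field/scattered-field source and far-field projection, none of which is reproduced here.

```python
import math

# Minimal 1-D Yee/FDTD update in normalized units (Courant number 0.5).
# The study above uses a 3-D grid with a total-field/scattered-field
# source; this sketch shows only the core leapfrog update.
N, STEPS, S = 400, 250, 0.5
ez = [0.0] * N
hy = [0.0] * N
for t in range(STEPS):
    for i in range(N - 1):
        hy[i] += S * (ez[i + 1] - ez[i])
    for i in range(1, N):
        ez[i] += S * (hy[i] - hy[i - 1])
    ez[100] += math.exp(-((t - 40) / 12.0) ** 2)   # soft Gaussian source

# The pulse travels S cells per step, so its right-going half should be
# near cell 100 + S * (STEPS - 40) ~ 205 by now, while cells far beyond
# the wavefront remain essentially undisturbed.
right_peak = max(abs(ez[i]) for i in range(150, 260))
far_field = abs(ez[320])
```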

  9. Augmentation method of XPNAV in Mars orbit based on Phobos and Deimos observations

    NASA Astrophysics Data System (ADS)

    Rong, Jiao; Luping, Xu; Zhang, Hua; Cong, Li

    2016-11-01

    Autonomous navigation for Mars probe spacecraft is required to reduce operation costs and enhance navigation performance in the future. X-ray pulsar-based navigation (XPNAV) is a potential candidate to meet this requirement. This paper addresses the use of Mars' natural satellites to improve XPNAV for Mars probe spacecraft. Two observation variables, the field angle and the direction vectors of Mars' natural satellites, are added to the XPNAV positioning system. The measurement model of the field angle and direction vectors is formulated by processing satellite images of Mars obtained from an optical camera. This measurement model is integrated into the spacecraft orbit dynamics to build the filter model. In order to estimate the position and velocity errors of the spacecraft and reduce the impact of system noise on navigation precision, an adaptive divided difference filter (ADDF) is applied. Numerical simulation results demonstrate that the performance of the ADDF is better than that of the unscented Kalman filter (UKF), the DDF and the EKF. In view of the invisibility of Mars' natural satellites in some cases, a visibility condition analysis is given and the augmented XPNAV in different visibility conditions is numerically simulated. The simulation results show that navigation precision is evidently improved by using the augmented XPNAV based on the field angle and the direction vectors of Mars' natural satellites, in comparison with conventional XPNAV.

  10. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE PAGES

    Gnedin, Nickolay Y.

    2016-04-01

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  11. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y.

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  12. Matter power spectrum and the challenge of percent accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying the main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10⁹ h⁻¹ M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
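
    The last step of such comparisons, measuring the power spectrum of a periodic density field, can be miniaturized as follows: a 1-D toy field, a direct DFT, and a Parseval consistency check. Real pipelines use 3-D FFTs with shot-noise and mass-assignment corrections, all of which are omitted here.

```python
import cmath
import math

# Toy 1-D periodic "density contrast" field with power at modes 3 and 7.
N = 64
delta = [math.sin(2 * math.pi * 3 * x / N)
         + 0.5 * math.cos(2 * math.pi * 7 * x / N) for x in range(N)]

def power_spectrum(f):
    """P(k) = |F_k|^2 via a direct (O(N^2)) DFT; real codes use FFTs."""
    n = len(f)
    P = []
    for k in range(n):
        Fk = sum(f[x] * cmath.exp(-2j * math.pi * k * x / n)
                 for x in range(n))
        P.append(abs(Fk) ** 2)
    return P

P = power_spectrum(delta)
# Parseval consistency check: sum_k |F_k|^2 == N * sum_x f_x^2
lhs = sum(P)
rhs = N * sum(v * v for v in delta)
```

    A Parseval check of this kind is a cheap internal-consistency test for any spectrum estimator, independent of the physics being simulated.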

  13. Faster and More Accurate Transport Procedures for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.

    2010-01-01

    Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  14. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

    Transverse dispersion represents an important mixing process for the transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory set-up of a quasi-two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken at the tank experiment to the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d⁻¹ and 10.5 m d⁻¹. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10⁻⁴ m and 1.48×10⁻⁵ m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications. Copyright © 2012 Elsevier B.V. All rights reserved.
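
    The analytical solution referred to above is, in a commonly used form for steady-state transverse mixing across a plume fringe, C/C0 = ½ erfc(y / (2√(αT·x))); whether this matches the paper's exact variant is an assumption. The sketch below generates noise-free synthetic profiles with the reported transverse dispersivity as ground truth and recovers it by a log-spaced grid search.

```python
import math

def conc(y, x, alpha_t):
    """Steady-state transverse-mixing profile across a plume fringe:
    C/C0 = 0.5 * erfc(y / (2 * sqrt(alpha_t * x))) -- a commonly used
    analytical form, not necessarily the paper's exact solution."""
    return 0.5 * math.erfc(y / (2.0 * math.sqrt(alpha_t * x)))

# Synthetic "measurements" at x = 0.5 m downstream, using the reported
# transverse dispersivity as ground truth, then recovery by a
# log-spaced grid search (a stand-in for the paper's fitting procedure).
alpha_true = 1.48e-5                             # m
x_obs = 0.5                                      # m
ys = [i * 1e-3 for i in range(-10, 11)]          # -10 mm .. +10 mm
data = [conc(y, x_obs, alpha_true) for y in ys]

best, best_sse = None, float('inf')
for i in range(400):
    alpha = 1e-6 * 10 ** (i / 200.0)             # 1e-6 .. 1e-4 m
    sse = sum((conc(y, x_obs, alpha) - d) ** 2 for y, d in zip(ys, data))
    if sse < best_sse:
        best, best_sse = alpha, sse
```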

  15. Investigation of flow characteristics of a single and two-adjacent natural draft dry cooling towers under cross wind condition

    NASA Astrophysics Data System (ADS)

    Mekanik, Abolghasem; Soleimani, Mohsen

    2007-11-01

    Wind effects on natural draught cooling towers involve very complex physics. The fluid flow and temperature distribution around and within a single and two adjacent (tandem and side-by-side) dry-cooling towers under cross wind are studied numerically in the present work. Cross wind can significantly reduce the cooling efficiency of natural-draft dry-cooling towers, and adjacent towers can affect the cooling efficiency of both. In this paper we present a complex computational model involving more than 750,000 finite volume cells under precisely defined boundary conditions. Since the flow is turbulent, the standard k-ɛ turbulence model is used. The numerical results are used to estimate the heat transfer between the radiators of the tower and the air surrounding it. The numerical simulation explains the main reason for the decline of the thermodynamic performance of a dry-cooling tower under cross wind. In this paper, the incompressible fluid flow is simulated, and the flow is assumed steady and three-dimensional.

  16. Phasemeter core for intersatellite laser heterodyne interferometry: modelling, simulations and experiments

    NASA Astrophysics Data System (ADS)

    Gerberding, Oliver; Sheard, Benjamin; Bykov, Iouri; Kullmann, Joachim; Esteban Delgado, Juan Jose; Danzmann, Karsten; Heinzel, Gerhard

    2013-12-01

    Intersatellite laser interferometry is a central component of future space-borne gravity instruments like the Laser Interferometer Space Antenna (LISA), evolved LISA, NGO and future geodesy missions. The inherently small laser wavelength allows us to measure distance variations with extremely high precision by interfering a reference beam with a measurement beam. The readout of such interferometers is often based on tracking phasemeters, which are able to measure the phase of an incoming beatnote with high precision over a wide range of frequencies. The implementation of such phasemeters is based on all-digital phase-locked loops (ADPLLs) hosted in FPGAs. Here, we present a precise model of an ADPLL that allows us to design such a readout algorithm, and we support our analysis by numerical performance measurements and experiments with analogue signals.
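
    As a rough illustration of the tracking-phasemeter principle (not the authors' FPGA implementation), the sketch below runs a software ADPLL with an I/Q phase detector and a PI loop filter on a synthetic complex beatnote; all gains, frequencies, and the complex-valued input are illustrative assumptions:

    ```python
    import numpy as np

    fs = 1.0e6                 # sampling rate [Hz]
    f_in = 12_345.0            # beatnote frequency to track [Hz] (test value)
    n = 100_000
    t = np.arange(n) / fs
    beatnote = np.exp(1j * (2 * np.pi * f_in * t + 0.7))   # complex beatnote

    # ADPLL: NCO + phase detector + PI loop filter (illustrative gains)
    phase = 0.0                # NCO phase [cycles]
    f0 = 10_000.0 / fs         # initial NCO frequency guess [cycles/sample]
    kp, ki = 0.05, 1e-4        # proportional / integral gains
    integ = 0.0
    for s in beatnote:
        # phase detector: wrapped phase difference between input and NCO, in cycles
        err = np.angle(s * np.exp(-2j * np.pi * phase)) / (2 * np.pi)
        integ += ki * err
        freq_est = f0 + kp * err + integ
        phase = (phase + freq_est) % 1.0

    f_tracked = (f0 + integ) * fs    # the integrator holds the frequency offset
    print(f"tracked frequency ~ {f_tracked:.1f} Hz")
    ```

    Because the loop is second order (type 2), the integrator converges to the frequency offset and the steady-state phase error vanishes, which is the behavior a tracking phasemeter needs over a wide beatnote frequency range.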

  17. Field structure at the ends of a precision superconducting dipole magnet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doinikov, N.I.; Eregin, V.E.; Sychevskii, S.E.

    1983-10-01

    Results are reported from a numerical simulation of the spatial field of a superconducting dipole magnet with a saddle-shaped winding employed in an accelerating and storage system (ASS). It is shown that the peak field in the winding can be kept to a fixed level and edge nonlinearities of the field can be suppressed by suitably shaping the front portions of the magnet.

  18. Selecting appropriate singular values of transmission matrix to improve precision of incident wavefront retrieval

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Du, Jinglei

    2018-06-01

    A method of selecting appropriate singular values of the transmission matrix to improve the precision of incident wavefront retrieval in focusing light through scattering media is proposed. The optimal singular values selected by this method effectively reduce the degree of ill-conditioning of the transmission matrix, which means that the incident wavefront retrieved from the optimal set of singular values is more accurate than that retrieved from other sets. The validity of the method is verified by numerical simulation and by actual measurements of the incident wavefront of coherent light passing through ground glass.
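
    The benefit of discarding small singular values can be illustrated with a toy ill-conditioned matrix. The matrix construction, noise level, and selection-by-scan below are illustrative assumptions, not the paper's measured transmission matrix or its selection criterion:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 40

    # toy "transmission matrix": random orthogonal factors with a rapidly
    # decaying singular-value spectrum, i.e. ill-conditioned by construction
    u, _ = np.linalg.qr(rng.normal(size=(n, n)))
    v, _ = np.linalg.qr(rng.normal(size=(n, n)))
    T = u @ np.diag(np.logspace(0, -6, n)) @ v.T

    x_true = rng.normal(size=n)                  # incident wavefront
    y = T @ x_true + 1e-4 * rng.normal(size=n)   # noisy output measurement

    U, S, Vt = np.linalg.svd(T)

    def retrieve(k):
        """Pseudo-invert T keeping only the k largest singular values."""
        S_inv = np.where(np.arange(n) < k, 1.0 / S, 0.0)
        return Vt.T @ (S_inv * (U.T @ y))

    err_all = np.linalg.norm(retrieve(n) - x_true)   # all singular values kept
    err_sel = min(np.linalg.norm(retrieve(k) - x_true) for k in range(1, n + 1))
    print(f"retrieval error: {err_sel:.2f} (selected) vs {err_all:.2f} (all)")
    ```

    Keeping every singular value amplifies measurement noise through the tiny singular values, while truncating at an appropriate rank trades a small reconstruction bias for a large reduction in noise amplification.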

  19. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Reynolds, Daniel R.

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data in this way leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
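
    A minimal sketch of the core idea, removing additive white noise from smooth data by zeroing high Fourier modes (the signal, noise level, and fixed cutoff are illustrative; the paper's contribution includes choosing the cutoff automatically):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1024
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)
    noisy = clean + 0.3 * rng.normal(size=n)     # additive white noise

    def spectral_filter(sig, keep):
        """Sharp low-pass: zero every Fourier mode at or above index `keep`."""
        F = np.fft.rfft(sig)
        F[keep:] = 0.0
        return np.fft.irfft(F, n=len(sig))

    rmse_raw = np.sqrt(np.mean((noisy - clean) ** 2))
    rmse_filt = np.sqrt(np.mean((spectral_filter(noisy, 16) - clean) ** 2))
    print(f"RMSE: {rmse_raw:.3f} unfiltered -> {rmse_filt:.3f} filtered")
    ```

    Because white noise spreads its power evenly over all Fourier modes while the smooth signal is confined to a few low modes, keeping only those low modes removes most of the noise power at essentially no cost to the signal.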

  20. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE PAGES

    Gardner, David J.; Reynolds, Daniel R.

    2017-01-05

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data in this way leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.

  1. New method of processing heat treatment experiments with numerical simulation support

    NASA Astrophysics Data System (ADS)

    Kik, T.; Moravec, J.; Novakova, I.

    2017-08-01

    In this work, the benefits of combining modern software for the numerical simulation of welding processes with laboratory research are described. A new method of processing heat-treatment experiments is proposed that yields relevant input data for numerical simulations of the heat treatment of large parts. Using experiments on small test samples, it is now possible to simulate cooling conditions comparable with the cooling of larger parts. Results from this method of testing make the boundary conditions of the real cooling process more accurate, and can also be used to improve software databases and to optimize computational models. The aim is a more precise computation of temperature fields for large hardened parts, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximum thickness of the processed part, and given cooling conditions. The paper also presents a comparison of standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results, showing how even small changes affect the distributions of temperature, metallurgical phases, hardness, and stress. The experiment also provides not only input data and data enabling optimization of the computational model, but at the same time verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.

  2. Study on longitudinal force simulation of heavy-haul train

    NASA Astrophysics Data System (ADS)

    Chang, Chongyi; Guo, Gang; Wang, Junbiao; Ma, Yingming

    2017-04-01

    The longitudinal dynamics model of heavy-haul trains and the air brake model used in longitudinal train dynamics (LTD) are established. The dry-friction damping hysteretic characteristic of steel-friction draft gears is simulated by the equation that describes the suspension forces in truck leaf springs. The draft gear model incorporates the dynamic loading force, the viscous friction of steel friction, and the damping force; on this basis, the numerical model of the draft gears is derived. The LTD equations are strongly non-linear. To solve the response of such a strongly non-linear system, a high-precision equilibrium-iteration method based on the Newmark-β method is presented and numerical analysis is performed. Longitudinal dynamic forces of a 20,000-tonne heavy-haul train were measured, and the models and solution method are verified against the test results.
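
    The Newmark-β scheme underlying the solution method can be sketched for a linear single-degree-of-freedom oscillator (average-acceleration variant, β = 1/4, γ = 1/2; the train model itself is far larger and non-linear, so this is only the integration kernel in miniature):

    ```python
    import numpy as np

    # Linear SDOF oscillator m*x'' + c*x' + k*x = 0 integrated with the
    # average-acceleration Newmark-beta scheme (beta = 1/4, gamma = 1/2)
    m, c, k = 1.0, 0.0, (2 * np.pi) ** 2      # natural frequency: 1 Hz
    beta, gamma = 0.25, 0.5
    dt, nsteps = 1e-3, 2000                   # integrate over 2 s (two periods)

    x, v = 1.0, 0.0                           # initial displacement / velocity
    a = (-c * v - k * x) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for _ in range(nsteps):
        # effective-stiffness form (linear case, so no equilibrium iteration)
        rhs = (m * (x / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
               + c * (gamma * x / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        x_new = rhs / k_eff
        a_new = (x_new - x) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new

    print(x)   # analytic solution x(t) = cos(2*pi*t), so x(2 s) should be ~1
    ```

    For the non-linear train model, each step would additionally iterate this update until dynamic equilibrium is satisfied, which is the equilibrium-iteration part of the method described above.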

  3. Numerical simulation of polishing U-tube based on solid-liquid two-phase

    NASA Astrophysics Data System (ADS)

    Li, Jun-ye; Meng, Wen-qing; Wu, Gui-ling; Hu, Jing-lei; Wang, Bao-zuo

    2018-03-01

    As an advanced technology for the ultra-precision machining of small-hole structural parts and parts with complex cavities, abrasive flow machining offers high efficiency, high quality, and low cost, and therefore plays an important role in many areas of precision machining. Based on the theory of solid-liquid two-phase flow coupling, a solid-liquid two-phase MIXTURE model is used to simulate the abrasive flow polishing process on the inner surface of a U-tube, and the temperature, turbulent viscosity, and turbulent dissipation rate during abrasive flow machining of the U-tube are compared and analyzed under different inlet pressures. The influence of inlet pressure on the surface quality of the workpiece during abrasive flow machining is studied and discussed, providing a theoretical basis for research on the abrasive flow machining process.

  4. Simulation and experimental analysis of nanoindentation and mechanical properties of amorphous NiAl alloys.

    PubMed

    Wang, Chih-Hao; Fang, Te-Hua; Cheng, Po-Chien; Chiang, Chia-Chin; Chao, Kuan-Chi

    2015-06-01

    This paper used numerical and experimental methods to investigate the mechanical properties of amorphous NiAl alloys during the nanoindentation process. A simulation was performed using the many-body tight-binding potential method. Temperature, plastic deformation, elastic recovery, and hardness were evaluated. The experimental method was based on nanoindentation measurements, allowing a precise determination of Young's modulus and hardness values for comparison with the simulation results. The indentation simulations showed a significant increase in NiAl hardness and elastic recovery with increasing Ni content; hardness and Young's modulus likewise increase with increasing Ni content. The simulation results are in good agreement with the experimental results. An adhesion test of the amorphous NiAl alloys at room temperature is also described.

  5. The electrical conductivity of in vivo human uterine fibroids.

    PubMed

    DeLonzor, Russ; Spero, Richard K; Williams, Joseph J

    2011-01-01

    The purpose of this study was to determine the value of electrical conductivity that can be used for numerical modelling in vivo radiofrequency ablation (RFA) treatments of human uterine fibroids. No experimental electrical conductivity data have previously been reported for human uterine fibroids. In this study electrical data (voltage) from selected in vivo clinical procedures on human uterine fibroids were used to numerically model the treatments. Measured versus calculated power dissipation profiles were compared to determine uterine fibroid electrical conductivity. Numerical simulations were conducted utilising a wide range of values for tissue thermal conductivity, heat capacity and blood perfusion coefficient. The simulations demonstrated that power dissipation was insensitive to the exact values of these parameters for the simulated geometry, treatment duration, and power level. Consequently, it was possible to determine tissue electrical conductivity without precise knowledge of the values for these parameters. Results of this study showed that an electrical conductivity for uterine fibroids of 0.305 S/m at 37°C and a temperature coefficient of 0.2%/°C can be used for modelling radiofrequency ablation of human uterine fibroids at a frequency of 460 kHz for temperatures from 37°C to 100°C.
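
    Reading the reported temperature coefficient as a linear model (an assumption; a percent-per-degree coefficient could also be applied exponentially), the conductivity at any treatment temperature follows directly from the two reported numbers:

    ```python
    def fibroid_conductivity(temp_c, sigma_37=0.305, tc=0.002):
        """Electrical conductivity [S/m] from the reported values: 0.305 S/m at
        37 degC with a +0.2 %/degC temperature coefficient, applied linearly."""
        return sigma_37 * (1.0 + tc * (temp_c - 37.0))

    # conductivity at body temperature and near the upper treatment temperature
    print(fibroid_conductivity(37.0), round(fibroid_conductivity(100.0), 4))
    ```

    Over the reported 37-100 °C range this linear model raises the conductivity by about 13%, which is the kind of temperature dependence an RFA solver would feed into its electric-field computation.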

  6. Numerical investigation of interactions between marine atmospheric boundary layer and offshore wind farm

    NASA Astrophysics Data System (ADS)

    Lyu, Pin; Chen, Wenli; Li, Hui; Shen, Lian

    2017-11-01

    In recent studies, Yang, Meneveau & Shen (Physics of Fluids, 2014; Renewable Energy, 2014) developed a hybrid numerical framework for simulation of offshore wind farm. The framework consists of simulation of nonlinear surface waves using a high-order spectral method, large-eddy simulation of wind turbulence on a wave-surface-fitted curvilinear grid, and an actuator disk model for wind turbines. In the present study, several more precise wind turbine models, including the actuator line model, actuator disk model with rotation, and nacelle model, are introduced into the computation. Besides offshore wind turbines on fixed piles, the new computational framework has the capability to investigate the interaction among wind, waves, and floating wind turbines. In this study, onshore, offshore fixed pile, and offshore floating wind farms are compared in terms of flow field statistics and wind turbine power extraction rate. The authors gratefully acknowledge financial support from China Scholarship Council (No. 201606120186) and the Institute on the Environment of University of Minnesota.

  7. Simulation of energy buildups in solid-state regenerative amplifiers for 2-μm emitting lasers

    NASA Astrophysics Data System (ADS)

    Springer, Ramon; Alexeev, Ilya; Heberle, Johannes; Pflaum, Christoph

    2018-02-01

    A numerical model for solid-state regenerative amplifiers is presented, which is able to precisely simulate the quantitative energy buildup of stretched femtosecond pulses over successive roundtrips in the cavity. The model is experimentally validated with a Ti:Sapphire regenerative amplifier. Additionally, the simulation of a Ho:YAG-based regenerative amplifier is conducted and compared to experimental data from the literature. Furthermore, a bifurcation study of the investigated Ho:YAG system is performed, which leads to the identification of stable and unstable operation regimes. The presented numerical model shows good agreement with the experimental results from the Ti:Sapphire regenerative amplifier. The pulse energy gained from the Ho:YAG system could also be closely approximated, with the remaining mismatch attributed to the monochromatic treatment of pulse amplification. Since the model is applicable to other solid-state gain media, it allows for the efficient design of future amplification systems based on regenerative amplification.

  8. Efficient Raman sideband cooling of trapped ions to their motional ground state

    NASA Astrophysics Data System (ADS)

    Che, H.; Deng, K.; Xu, Z. T.; Yuan, W. H.; Zhang, J.; Lu, Z. H.

    2017-07-01

    Efficient cooling of trapped ions is a prerequisite for various applications of the ions in precision spectroscopy, quantum information, and coherence control. Raman sideband cooling is an effective method to cool the ions to their motional ground state. We investigate both numerically and experimentally the optimization of Raman sideband cooling strategies and propose an efficient one, which can simplify the experimental setup as well as reduce the number of cooling pulses. Several cooling schemes are tested and compared through numerical simulations. The simulation result shows that the fixed-width pulses and varied-width pulses have almost the same efficiency for both the first-order and the second-order Raman sideband cooling. The optimized strategy is verified experimentally. A single 25Mg+ ion is trapped in a linear Paul trap and Raman sideband cooled, and the achieved average vibrational quantum numbers under different cooling strategies are evaluated. A good agreement between the experimental result and the simulation result is obtained.

  9. Numerical and experimental study on the wave attenuation in bone--FDTD simulation of ultrasound propagation in cancellous bone.

    PubMed

    Nagatani, Yoshiki; Mizuno, Katsunori; Saeki, Takashi; Matsukawa, Mami; Sakaguchi, Takefumi; Hosoi, Hiroshi

    2008-11-01

    In cancellous bone, longitudinal waves often separate into fast and slow waves depending on the alignment of bone trabeculae in the propagation path. This interesting phenomenon becomes an effective tool for the diagnosis of osteoporosis because wave propagation behavior depends on the bone structure. Since the fast wave mainly propagates in trabeculae, this wave is considered to reflect the structure of trabeculae. For a new diagnostic method using the information in this fast wave, therefore, it is necessary to understand its generation mechanism and propagation behavior precisely. In this study, the generation process of the fast wave was examined by numerical simulations using the elastic finite-difference time-domain (FDTD) method and by experimental measurements. As simulation models, three-dimensional X-ray computed tomography (CT) data of actual bone samples were used. Simulation and experimental results showed that the attenuation of the fast wave was always higher in the early stage of propagation and gradually decreased as the wave propagated in bone. This phenomenon is thought to arise from the complicated propagation paths of the fast wave in cancellous bone.

  10. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with the MA method. The new ISFD scheme takes advantage of the TE method's great accuracy at small wavenumbers, while retaining the MA method's property of keeping the numerical errors within a limited bound. Thus, it leads to great accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
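
    For reference, the conventional TE-based staggered-grid coefficients that the MTE/MA scheme improves upon can be computed by matching Taylor terms. This is the standard explicit construction, not the paper's optimized coefficients:

    ```python
    import numpy as np

    def staggered_te_coeffs(M):
        """Conventional Taylor-expansion (TE) coefficients for the order-2M
        staggered-grid first derivative:
            f'(x) ~ (1/h) * sum_{m=1..M} c_m * [f(x+(m-1/2)h) - f(x-(m-1/2)h)]
        Matching the Taylor terms h^1, h^3, ..., h^(2M-1) gives a small
        linear system for c_1..c_M."""
        A = np.array([[(m - 0.5) ** (2 * j + 1) for m in range(1, M + 1)]
                      for j in range(M)])
        b = np.zeros(M)
        b[0] = 0.5
        return np.linalg.solve(A, b)

    print(staggered_te_coeffs(2))   # classic 4th-order pair: 9/8 and -1/24
    ```

    These coefficients are exact for small wavenumbers but lose accuracy near the Nyquist limit, which is precisely the deficiency the minimax (Remez) optimization of the paper targets.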

  11. Simulation and analysis of a geopotential research mission

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.

    1986-01-01

    A computer simulation was performed for a Geopotential Research Mission (GRM) to enable study of the gravitational sensitivity of the range-rate measurement between two satellites and to provide a set of simulated measurements to assist in the evaluation of techniques developed for the determination of the gravity field. The simulation, identified as SGRM 8511, was conducted with two satellites in near-circular, frozen orbits at 160 km altitude, separated by 300 km. High-precision numerical integration of the polar orbits was used with a gravitational field complete to degree and order 180, and to degree 300 in orders 0 to 10. The set of simulated data for a mission duration of about 32 days was generated on a Cray X-MP computer. The characteristics of the simulation and the nature of the results are described.

  12. Water-hammer pressure waves interaction at cross-section changes in series in viscoelastic pipes

    NASA Astrophysics Data System (ADS)

    Meniconi, S.; Brunone, B.; Ferrante, M.

    2012-08-01

    In view of the scarcity of both experimental data and numerical models concerning the transient behavior of cross-sectional area changes in pressurized liquid flow, this paper presents laboratory data and numerical simulations of the interaction of a surge wave with a partial blockage due to a valve, with a single pipe contraction or expansion, and with a series of pipe contractions/expansions in close proximity. With regard to a single change of cross-sectional area, the laboratory data point out a behavior completely different from that of a partially closed in-line valve with the same area ratio: for the former, the pressure wave interaction is not governed by the steady-state local head loss. With regard to partial blockages, transient tests have shown that the smaller the length, the more intense the overlapping of pressure waves due to the expansion and contraction in series. Numerically, the need to take into account both viscoelasticity and unsteady friction is demonstrated, since classical water-hammer theory does not simulate the relevant damping of pressure peaks and gives rise to a time shift between numerical and laboratory data. The transient behavior of a single local head loss has been checked by considering tests carried out in a system with a partially closed in-line valve. As a result, the reliability of the quasi-steady-state approach for local head loss simulation has been demonstrated in viscoelastic pipes. The model parameters obtained from transients carried out in single-pipe systems have then been used to simulate transients in the more complex pipe systems. These numerical experiments show the great importance of the length of the small-bore pipe relative to that of the large-bore pipes. Precisely, until a gradually varied flow is established in the small-bore pipe, the smaller this length, the better the quality of the numerical simulation.

  13. Correction coefficient for see-through labyrinth seal

    NASA Astrophysics Data System (ADS)

    Hasnedl, Dan; Epikaridis, Premysl; Slama, Vaclav

    In steam turbine design, the flow-path design and blade shapes are influenced by the design mass flow through each turbine stage. If this mass flow could be predicted more precisely, the design could be optimized, yielding an efficiency benefit. This article is concerned with improving the prediction of losses caused by seal leakage. In common simulations of the thermodynamic cycle of a steam turbine, analytical formulas are used to model the seal leakage. This article therefore describes an improvement of the analytical formulas used in a turbine heat balance calculation. The results are verified by numerical simulations and by experimental data from a steam test rig.

  14. Smart reconfigurable parabolic space antenna for variable electromagnetic patterns

    NASA Astrophysics Data System (ADS)

    Kalra, Sahil; Datta, Rituparna; Munjal, B. S.; Bhattacharya, Bishakh

    2018-02-01

    An application of a reconfigurable parabolic space antenna for satellites is discussed in this paper. The present study focuses on shape morphing of a flexible parabolic antenna actuated with Shape Memory Alloy (SMA) wires. The antenna is able to transmit signals to the desired footprint on Earth with a desired gain value. An SMA-wire-based actuation with a locking device is developed for precise control of the antenna shape. The locking device holds the structure in the deformed configuration during power cutoff from the system. The maximum controllable deflection at any point using such an actuation system is about 25 mm, with a precision of ±100 μm. In order to control the shape of the antenna in a closed feedback loop, a Proportional, Integral and Derivative (PID) controller is developed using LabVIEW (NI) and experiments are performed. Numerical modeling and analysis of the structure are carried out using the finite element software ABAQUS. For data reduction and fast computation, the stiffness matrix generated by ABAQUS is condensed by the Guyan reduction technique, and shape optimization is performed using the Non-dominated Sorting Genetic Algorithm (NSGA-II). The close match between the numerical and experimental results shows the efficacy of the method. Thereafter, electromagnetic (EM) simulations of the deformed shape are carried out using the High Frequency Structure Simulator (HFSS). The proposed design is envisaged to be very effective for multipurpose satellite applications in future missions of the Indian Space Research Organization (ISRO).

  15. An interval precise integration method for transient unbalance response analysis of rotor system with uncertainty

    NASA Astrophysics Data System (ADS)

    Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun

    2018-07-01

    A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic on the expansion coefficients. A brief theoretical error analysis of the proposed method is provided, and its accuracy is further validated by comparison with the scanning method in simulations. Numerical results show that the IPIM maintains good accuracy in vibration prediction for the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance for other transient dynamic mechanical systems with uncertainties.
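
    The Chebyshev side of the method can be sketched in miniature: sample an uncertain-parameter response at Chebyshev nodes, build a polynomial surrogate, and take its extrema as interval bounds. The response function and degree below are illustrative; the paper applies this idea to transient rotor dynamics, not a static function:

    ```python
    import numpy as np

    def chebyshev_bounds(f, lo, hi, degree=8):
        """Bound f over the interval [lo, hi]: evaluate at Chebyshev nodes,
        build a Chebyshev-polynomial surrogate, and take its extrema."""
        k = np.arange(degree + 1)
        nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))  # on [-1, 1]
        params = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)
        coeffs = np.polynomial.chebyshev.chebfit(nodes, [f(p) for p in params], degree)
        dense = np.polynomial.chebyshev.chebval(np.linspace(-1, 1, 2001), coeffs)
        return dense.min(), dense.max()

    # toy "response": steady amplitude of a damped system vs uncertain stiffness
    resp = lambda k: 1.0 / np.sqrt((k - 4.0) ** 2 + 0.5)
    lo_b, hi_b = chebyshev_bounds(resp, 3.0, 5.0)
    print(f"response bounds: [{lo_b:.3f}, {hi_b:.3f}]")
    ```

    Because the surrogate is evaluated rather than propagated through interval arithmetic on the original equations, the approach is non-intrusive and avoids the overestimation typical of naive interval computations.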

  16. Exponentially more precise quantum simulation of fermions in the configuration interaction representation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; Berry, Dominic W.; Sanders, Yuval R.; Kivlichan, Ian D.; Scherer, Artur; Wei, Annie Y.; Love, Peter J.; Aspuru-Guzik, Alán

    2018-01-01

    We present a quantum algorithm for the simulation of molecular systems that is asymptotically more efficient than all previous algorithms in the literature in terms of the main problem parameters. As in Babbush et al (2016 New Journal of Physics 18, 033032), we employ a recently developed technique for simulating Hamiltonian evolution using a truncated Taylor series to obtain logarithmic scaling with the inverse of the desired precision. The algorithm of this paper involves simulation under an oracle for the sparse, first-quantized representation of the molecular Hamiltonian known as the configuration interaction (CI) matrix. We construct and query the CI matrix oracle to allow for on-the-fly computation of molecular integrals in a way that is exponentially more efficient than classical numerical methods. Whereas second-quantized representations of the wavefunction require Õ(N) qubits, where N is the number of single-particle spin-orbitals, the CI matrix representation requires Õ(η) qubits, where η ≪ N is the number of electrons in the molecule of interest. We show that the gate count of our algorithm scales at most as Õ(η²N³t).

  17. Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.

    PubMed

    Slażyński, Leszek; Bohte, Sander

    2012-01-01

    The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state of each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single-precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures for the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate, in better than real time, plausible spiking neural networks of up to 50,000 neurons, processing over 35 million spiking events per second.
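
    The additive membrane update that enables the parallelism can be sketched with a vectorized leaky-integrator toy model (illustrative parameters; the paper uses filter-based Spike Response Model neurons on actual GPU hardware, where the vectorized step maps to one parallel kernel launch):

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_steps = 1000, 500
    dt, tau, v_th = 1.0, 20.0, 1.0             # ms, ms, threshold (arb. units)
    decay = math.exp(-dt / tau)

    w_in = rng.uniform(0.0, 0.08, size=n_neurons).astype(np.float32)
    v = np.zeros(n_neurons, dtype=np.float32)  # single precision, as on GPUs
    spikes = 0
    for _ in range(n_steps):
        # additive exponential-filter update: each neuron's state advances
        # independently, so the whole vector updates in one parallel step
        v = decay * v + w_in
        fired = v >= v_th
        spikes += int(fired.sum())
        v[fired] = 0.0                         # reset fired neurons
    print(f"{spikes} spikes from {n_neurons} neurons in {n_steps} steps")
    ```

    Because the update for neuron i never reads the state of neuron j, there are no cross-thread dependencies within a time step, which is exactly the structure GPUs exploit.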

  18. COSMIC REIONIZATION ON COMPUTERS: NUMERICAL AND PHYSICAL CONVERGENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov; Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637; Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite-resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ∼20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, such as stellar masses and metallicities. Yet other properties of model galaxies, for example, their H i masses, are recovered in the weakly converged runs only within a factor of 2.

  19. Computational simulation of weld microstructure and distortion by considering process mechanics

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Mikami, Y.; Okano, S.; Itoh, S.

    2009-05-01

    Highly precise fabrication of welded materials is in great demand, and so microstructure and distortion controls are essential. Furthermore, consideration of process mechanics is important for intelligent fabrication. In this study, the microstructure and hardness distribution in multi-pass weld metal are evaluated by computational simulations under the conditions of multiple heat cycles and phase transformation. Because conventional CCT diagrams of weld metal are not available even for single-pass weld metal, new diagrams for multi-pass weld metals are created. The weld microstructure and hardness distribution are precisely predicted when using the created CCT diagram for multi-pass weld metal and calculating the weld thermal cycle. Weld distortion is also investigated by using numerical simulation with a thermal elastic-plastic analysis. In conventional evaluations of weld distortion, the average heat input has been used as the dominant parameter; however, it is difficult to consider the effect of molten pool configurations on weld distortion based only on the heat input. Thus, the effect of welding process conditions on weld distortion is studied by considering molten pool configurations, determined by temperature distribution and history.

  20. Investigation of flow in data rack

    NASA Astrophysics Data System (ADS)

    Manoch, Lukáš; Nožička, Jiří; Pohan, Petr

    2012-04-01

The main purpose of this paper was to set up a functioning numerical model of a data rack verified by experimental measurement. The verification of the numerical model was carried out by means of the PIV method (Particle Image Velocimetry). The numerical model was calibrated using assumed and preset values from the experimental measurement, which served as boundary conditions. The server model was conceived as a four-channel model with a controlled flow rate, without simulation of heat transfer. The flow rate in each channel was implemented by means of a pressure loss. The numerical model was further used for simulation of several phases and configurations of a data rack (21U rack space) fitted with two Dell Precision R5400 server workstations. The flow field at the inlet of the data rack in front of the workstations was observed and evaluated for configurations in which a 2U free space was left between the workstations and the remaining inlet space was either blanked off or fully opened. The results of this paper will serve for designing optimization treatments of the data rack from the viewpoint of cooling efficiency, both within the data rack and within the data center design.

  1. A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications

    NASA Astrophysics Data System (ADS)

    Robin, J.; Tanter, M.; Pernot, M.

    2017-09-01

Time reversal cavities (TRC) have been proposed as an efficient approach for 3D ultrasound therapy. They allow the precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest with a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device depends only on the temporal length of its impulse response. Such multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device's gain is usually a cumbersome, mostly empirical process requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. This model decouples ultrasound propagation in an empty cavity from multiple scattering in a scattering medium. It was validated numerically and experimentally using a 2D-TRC and numerically using a 3D-TRC. Finally, the model was used to rapidly determine the optimal parameters of the 3D-TRC, which were then confirmed by numerical simulations.

  2. influx_s: increasing numerical stability and precision for metabolic flux analysis in isotope labelling experiments.

    PubMed

    Sokol, Serguei; Millard, Pierre; Portais, Jean-Charles

    2012-03-01

The problem of stationary metabolic flux analysis based on isotope labelling experiments first appeared in the early 1950s and was basically solved in the early 2000s. Several algorithms and software packages are available for this problem. However, the generic stochastic algorithms (simulated annealing or evolutionary algorithms) currently used in these software packages require a lot of time to achieve acceptable precision. For deterministic algorithms, a common drawback is the lack of convergence stability for ill-conditioned systems or when started from a random point. In this article, we present a new deterministic algorithm with significantly increased numerical stability and accuracy of flux estimation compared with commonly used algorithms. It requires relatively short CPU time (from several seconds to several minutes on a standard PC architecture) to estimate fluxes in the central carbon metabolism network of Escherichia coli. The software package influx_s implementing this algorithm is distributed under an open-source licence at http://metasys.insa-toulouse.fr/software/influx/. Supplementary data are available at Bioinformatics online.

  3. Modelling of creep hysteresis in ferroelectrics

    NASA Astrophysics Data System (ADS)

    He, Xuan; Wang, Dan; Wang, Linxiang; Melnik, Roderick

    2018-05-01

In the current paper, a macroscopic model is proposed to simulate the hysteretic dynamics of ferroelectric ceramics with the creep phenomenon incorporated. The creep phenomenon in the hysteretic dynamics is attributed to the rate-dependent characteristics of the polarisation switching processes induced in the materials. A non-convex Helmholtz free energy based on Landau theory is proposed to model the switching dynamics. The governing equation of the single-crystal model is formulated by applying the Euler-Lagrange equation. The polycrystalline model is obtained by combining the single-crystal dynamics with a density function constructed to model the weighted contributions of different grains with different principal-axis orientations. In addition, numerical simulations of the hysteretic dynamics with creep are presented. A comparison of the numerical results with their experimental counterparts is also presented. It is shown that the creep phenomenon is captured precisely, validating the capability of the proposed model in a range of its potential applications.

  4. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    PubMed

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
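The inversion of a four-parameter logistic standard curve, and the propagation of response uncertainty into a concentration coefficient of variation, can be sketched as below. The parameter values are hypothetical and the delta-method derivative is taken numerically; this does not reproduce the paper's own closed-form expressions.

```python
def logistic4(x, a, b, c, d):
    """Four-parameter logistic: response at concentration x.
    a = response at zero dose, d = response at infinite dose,
    c = mid-point concentration (EC50), b = steepness parameter."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_logistic4(y, a, b, c, d):
    """Invert the standard curve: concentration giving response y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def concentration_cv(y, sigma_y, a, b, c, d, eps=1e-6):
    """Approximate CV of the estimated concentration by propagating the
    response uncertainty sigma_y through the inverse curve
    (delta method with a central-difference derivative)."""
    x = inverse_logistic4(y, a, b, c, d)
    dxdy = (inverse_logistic4(y + eps, a, b, c, d) -
            inverse_logistic4(y - eps, a, b, c, d)) / (2.0 * eps)
    return abs(dxdy) * sigma_y / x

# Hypothetical standard-curve parameters, for illustration only
a, b, c, d = 0.05, 1.2, 10.0, 2.0
x_true = 10.0
y = logistic4(x_true, a, b, c, d)
x_back = inverse_logistic4(y, a, b, c, d)
```

Evaluating `concentration_cv` near the upper asymptote versus at the curve mid-point illustrates the abstract's point that uncertainty grows at the extremes of the concentration range.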

  5. Underwater sympathetic detonation of pellet explosive

    NASA Astrophysics Data System (ADS)

    Kubota, Shiro; Saburi, Tei; Nagayama, Kunihito

    2017-06-01

The underwater sympathetic detonation of pellet explosives was recorded by high-speed photography. The diameter and thickness of the pellets were 20 and 10 mm, respectively. The experimental system consisted of a precise electric detonator, two grams of composition C4 booster and three pellets, which were set in a water tank. A Shimadzu HPV-X high-speed video camera was used at 10 Mfps. The underwater explosions of the precise electric detonator, the C4 booster and a single pellet were also recorded by high-speed photography to estimate the propagation processes of the underwater shock waves. Numerical simulation of the underwater sympathetic detonation of the pellet explosives was also carried out and compared with the experiment.

  6. Merging LIDAR digital terrain model with direct observed elevation points for urban flood numerical simulation

    NASA Astrophysics Data System (ADS)

    Arrighi, Chiara; Campo, Lorenzo

    2017-04-01

In recent years, concern about economic losses and loss of life due to urban floods has grown hand in hand with the numerical capability to simulate such events. The large amount of computational power needed to address the problem (simulating a flood in complex terrain such as a medium-to-large city) is only one of the issues. Others include the general lack of exhaustive observations during an event (exact extension, dynamics, water levels reached in different parts of the involved area), needed for calibration and validation of the model; the need to consider sewer effects; and the availability of a correct and precise description of the geometry of the problem. In large cities, topographic surveys are generally available only for a limited number of points, whereas a complete hydraulic simulation needs a detailed description of the terrain over the whole computational domain. LIDAR surveys can achieve this goal, providing a comprehensive description of the terrain, although they often lack precision. In this work an optimal merging of these two sources of geometrical information, measured elevation points and a LIDAR survey, is proposed, taking into account the error variance of both. The procedure is applied to a flood-prone city over an area of approximately 35 square km, starting with a LIDAR DTM with a spatial resolution of 1 m and 13000 measured points. The spatial pattern of the error (LIDAR vs. points) is analysed, and the merging method is tested with a series of jackknife procedures that take into account different densities of the available points. A discussion of the results is provided.
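The core of variance-weighted merging can be sketched as follows, assuming co-located LIDAR and surveyed elevations with known error variances. This is a minimal inverse-variance combination; the paper's full procedure additionally models the spatial pattern of the LIDAR error and interpolates between sparse points.

```python
import numpy as np

def merge_elevations(z_lidar, var_lidar, z_points, var_points):
    """Inverse-variance weighted merge of two elevation estimates for the
    same locations. Inputs are arrays of co-located elevations and their
    error variances; the combined estimate minimises the resulting variance."""
    w_l = 1.0 / var_lidar
    w_p = 1.0 / var_points
    z = (w_l * z_lidar + w_p * z_points) / (w_l + w_p)
    var = 1.0 / (w_l + w_p)
    return z, var

# Toy example: a noisy LIDAR cell (sigma = 0.3 m) against a precise
# surveyed point (sigma = 0.1 m); values are illustrative
z, var = merge_elevations(np.array([52.3]), np.array([0.09]),
                          np.array([52.0]), np.array([0.01]))
```

The merged estimate is pulled toward the more precise source, and its variance is smaller than either input variance, which is the property that motivates the merge.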

  7. Material flow data for numerical simulation of powder injection molding

    NASA Astrophysics Data System (ADS)

    Duretek, I.; Holzer, C.

    2017-01-01

The powder injection molding (PIM) process is a cost-efficient and important net-shape manufacturing process that is not yet completely understood. For the application of simulation programs to the powder injection molding process, apart from suitable physical models, exact material data and in particular knowledge of the flow behavior are essential in order to obtain precise numerical results. The flow processes of highly filled polymers are complex, and the occurring effects, such as shear flow with yield stress, wall slip and elastic effects, are very hard to separate. Furthermore, the occurrence of phase separation due to the multi-phase composition of compounds is quite probable. In this work, the flow behavior of a 316L stainless steel feedstock for powder injection molding was investigated. Additionally, the influence of pre-shearing on the flow behavior of PIM feedstocks under practical conditions was examined and evaluated with a special PIM injection molding machine rheometer. In order to better understand the key factors of PIM during the injection step, 3D non-isothermal numerical simulations were conducted with a commercial injection molding simulation software package using experimental feedstock properties. The simulation results were compared with the experimental results. The mold filling studies amply illustrate the effect of mold temperature on the filling behavior during the mold filling stage. Moreover, the rheological measurements showed that no zero-shear-viscosity plateau was observed at low shear rates; instead, the viscosity continued to increase strongly. This flow behavior could be described very well with the Cross-WLF approach with Herschel-Bulkley extension.
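A Cross-WLF viscosity model with a yield-stress (Herschel-Bulkley-style) extension can be sketched as below. The exact form of the extension used in the paper is not given in the abstract, so a simple additive yield term is assumed, and all parameter values are illustrative rather than fitted to the 316L feedstock.

```python
import math

def cross_wlf_hb(shear_rate, T, D1, A1, A2, T_star, tau_star, n, tau_y):
    """Shear viscosity (Pa·s) from the Cross-WLF model with an assumed
    Herschel-Bulkley-style extension: a tau_y / shear_rate term makes the
    apparent viscosity diverge at low shear rates, matching the
    'no zero-shear-viscosity plateau' behaviour reported for feedstocks."""
    # WLF temperature dependence of the zero-shear viscosity
    eta0 = D1 * math.exp(-A1 * (T - T_star) / (A2 + (T - T_star)))
    # Cross shear-thinning term
    eta_cross = eta0 / (1.0 + (eta0 * shear_rate / tau_star) ** (1.0 - n))
    # Yield-stress contribution dominates as shear_rate -> 0
    return tau_y / shear_rate + eta_cross

# Illustrative parameters (not fitted to any real feedstock)
params = dict(D1=1e10, A1=25.0, A2=51.6, T_star=400.0,
              tau_star=1e5, n=0.3, tau_y=500.0)
eta_low = cross_wlf_hb(0.1, 460.0, **params)     # low shear rate (1/s)
eta_high = cross_wlf_hb(1000.0, 460.0, **params)  # high shear rate (1/s)
```

With these parameters the apparent viscosity rises steeply toward low shear rates instead of levelling off, which is the qualitative behaviour the measurements showed.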

  8. Modelling runoff on ceramic tile roofs using the kinematic wave equations

    NASA Astrophysics Data System (ADS)

    Silveira, Alexandre; Abrantes, João; de Lima, João; Lira, Lincoln

    2016-04-01

Rainwater harvesting is an alternative water saving strategy that presents many advantages and can provide solutions to major water resources problems, such as fresh water scarcity, urban stream degradation and flooding. In recent years, these problems have become global challenges due to climate change, population growth and increasing urbanisation. Generally, roofs are the first surfaces to come into contact with rainwater; thus, they are the best candidates for rainwater harvesting. In this context, the correct evaluation of roof runoff quantity and quality is essential to effectively design rainwater harvesting systems. Despite this, many studies focus on the qualitative aspects to the detriment of the quantitative ones. Laboratory studies using rainfall simulators have been widely used to investigate rainfall-runoff processes. These studies enable a detailed exploration and systematic replication of a large range of hydrologic conditions, such as spatial and temporal rainfall characteristics, providing a fast way to obtain precise and consistent data that can be used to calibrate and validate numerical models. This study aims to evaluate the performance of a kinematic wave based numerical model in simulating runoff on sloping roofs, by comparing the numerical results with those obtained from laboratory rainfall simulations on a real-scale ceramic tile roof (Lusa tiles). For all studied slopes, the simulated discharge hydrographs showed good agreement with the observed ones: coefficient of determination and Nash-Sutcliffe efficiency values were close to 1.0. In particular, peak discharges, times to peak and peak durations were very well simulated.
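A kinematic wave model of the kind evaluated here can be sketched with a simple explicit upwind scheme for an impervious sloping plane. The geometry, roughness and rainfall values below are illustrative, not the laboratory settings of the study.

```python
import numpy as np

def kinematic_wave_runoff(L=2.0, S=0.3, n_manning=0.012, rain_mmh=60.0,
                          t_rain=120.0, t_end=200.0, nx=50, dt=0.02):
    """Explicit upwind finite-difference solution of the 1-D kinematic
    wave equations  dh/dt + dq/dx = r,  q = alpha * h**(5/3)
    (Manning friction) on an impervious plane of length L (m), slope S.
    Returns times (s) and outlet discharge per unit width (m^2/s)."""
    alpha = np.sqrt(S) / n_manning
    m = 5.0 / 3.0
    dx = L / nx
    h = np.zeros(nx)                  # water depth (m) in each cell
    times, q_out = [], []
    t = 0.0
    while t < t_end:
        r = rain_mmh / 1000.0 / 3600.0 if t < t_rain else 0.0  # m/s
        q = alpha * h ** m
        # upwind flux difference; flow is downslope, zero inflow upstream
        dqdx = np.diff(np.concatenate(([0.0], q))) / dx
        h = np.maximum(h - dt * dqdx + dt * r, 0.0)
        t += dt
        times.append(t)
        q_out.append(alpha * h[-1] ** m)
    return np.array(times), np.array(q_out)

times, q_out = kinematic_wave_runoff()
```

At steady state the outlet discharge per unit width approaches rainfall intensity times plane length (here 60 mm/h over 2 m), and the hydrograph recedes once rainfall stops, reproducing the rising limb, plateau and recession seen in roof-runoff hydrographs.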

  9. Variations of cosmic large-scale structure covariance matrices across parameter space

    NASA Astrophysics Data System (ADS)

    Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte

    2017-03-01

    The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.

  10. The development and application of CFD technology in mechanical engineering

    NASA Astrophysics Data System (ADS)

    Wei, Yufeng

    2017-12-01

Computational Fluid Dynamics (CFD) is the analysis of physical phenomena involved in fluid flow and heat conduction by numerical computation and graphical display. The complexity of the physical problems that can be simulated, and the precision of the numerical solutions, are directly related to computer hardware such as processor speed and memory. With the continuous improvement of computer performance and CFD technology, CFD has been widely applied in the fields of water conservancy engineering, environmental engineering and industrial engineering. This paper summarizes the development process of CFD, its theoretical basis and the governing equations of fluid mechanics, and introduces the various methods of numerical calculation and the related development of CFD technology. Finally, applications of CFD technology in mechanical engineering are summarized. It is hoped that this review will help researchers in the field of mechanical engineering.

  11. Atmospheric turbulence and high-precision ground-based solar polarimetry

    NASA Astrophysics Data System (ADS)

    Nagaraju, K.; Feller, A.; Ihle, S.; Soltau, H.

    2011-10-01

High-precision full-Stokes polarimetry at near diffraction-limited spatial resolution is important for understanding numerous physical processes on the Sun. In view of the next generation of ground-based solar telescopes, we have explored, through numerical simulation, how polarimetric accuracy is affected by atmospheric seeing, especially in the case of large-aperture telescopes with an increasing ratio between mirror diameter and Fried parameter. In this work we focus on higher-order wavefront aberrations. The numerical generation of time-dependent turbulence phase screens is based on the well-known power spectral method and on the assumption that the temporal evolution is mainly caused by wind-driven propagation of frozen-in turbulence across the telescope. To analyze the seeing-induced cross-talk between the Stokes parameters, we consider a polarization modulation scheme based on a continuously rotating waveplate with rotation frequencies between 1 Hz and several hundred Hz. Further, we have started the development of a new fast solar imaging polarimeter, based on pnCCD detector technology from PNSensor. The first detector will have a size of 264 x 264 pixels and will work at frame rates of up to 1 kHz, combined with a very low readout noise of 2-3 e- ENC. The camera readout electronics will allow for buffering and accumulation of images corresponding to the different phases of the fast polarization modulation. A high write-out rate (about 30 to 50 frames/s) will allow for post-facto image reconstruction. We will present the concept and the expected performance of the new polarimeter, based on the above-mentioned simulations of atmospheric seeing.
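The power spectral method for generating a turbulence phase screen can be sketched as follows: complex Gaussian white noise is filtered with the square root of the Kolmogorov phase power spectrum and inverse-transformed. This is a minimal illustration, not the authors' implementation; normalisation conventions vary between references, and production screens usually add subharmonic compensation for the poorly sampled low frequencies.

```python
import numpy as np

def phase_screen(N=256, dx=0.01, r0=0.1, seed=0):
    """One Kolmogorov phase screen by FFT filtering of white noise.
    N grid points of spacing dx (m); r0 is the Fried parameter (m).
    Uses the spectrum Phi(f) ~ 0.023 r0^(-5/3) f^(-11/3)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(N, d=dx)              # spatial frequencies (1/m)
    kx, ky = np.meshgrid(fx, fx)
    k = np.sqrt(kx ** 2 + ky ** 2)
    k[0, 0] = 1.0                             # avoid divide-by-zero at DC
    psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                           # remove the undefined piston term
    cn = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    # scale factor (normalisation conventions differ between references)
    screen = np.fft.ifft2(cn * np.sqrt(psd)) * N / dx
    return np.real(screen)                    # phase in radians

scr = phase_screen()
```

Time-dependent frozen-in turbulence, as assumed in the abstract, is then obtained by shifting such a screen across the aperture at the wind speed.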

  12. Numerical simulation of the Earth satellites motion using parallel computing. accounting of weak disturbances. (Russian Title: Прогнозирование движения ИСЗ с использованием параллельных вычислений. учет слабых возмущений)

    NASA Astrophysics Data System (ADS)

    Chuvashov, I. N.

    2010-12-01

The features of high-precision numerical simulation of Earth satellite motion using parallel computing are discussed, taking as an example the implementation of the software complex "Numerical model of the motion of system satellites" on the "Skiff Cyberia" cluster. It is shown that the use of a 128-bit word length makes it possible to account for weak perturbations from high-order harmonics in the expansion of the geopotential, and for variations of the geopotential harmonics arising from the tidal perturbations induced in the solid Earth and its oceans by the Moon and the Sun.
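Why an extended word length matters for weak perturbations can be illustrated with a toy accumulation in Python's decimal module, standing in for 128-bit arithmetic. The numbers are illustrative, not actual geopotential terms.

```python
from decimal import Decimal, getcontext

# A satellite-like accumulation: a large state value updated by many very
# small perturbation increments. In 64-bit floating point each increment
# falls below half an ulp of the state and is rounded away entirely; with
# ~34 significant digits (comparable to 128-bit quadruple precision) the
# increments are retained.
getcontext().prec = 34              # roughly IEEE quadruple precision

big = 1.0e8                         # dominant term of the state
tiny = 1.0e-9                       # weak perturbation per step
steps = 100_000

x_double = big
for _ in range(steps):
    x_double += tiny                # each add rounds the tiny term away

x_quad = Decimal(big)
d_tiny = Decimal(tiny)
for _ in range(steps):
    x_quad += d_tiny                # retained at 34-digit precision

lost = x_double - big               # perturbations vanished completely
kept = float(x_quad - Decimal(big)) # perturbations accumulated (~1e-4)
```

The double-precision sum is unchanged, while the high-precision sum has accumulated the full effect of the weak perturbations, which is the rationale for the 128-bit word length in the abstract.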

  13. Particle Interactions Mediated by Dynamical Networks: Assessment of Macroscopic Descriptions

    NASA Astrophysics Data System (ADS)

    Barré, J.; Carrillo, J. A.; Degond, P.; Peurichard, D.; Zatorska, E.

    2018-02-01

    We provide a numerical study of the macroscopic model of Barré et al. (Multiscale Model Simul, 2017, to appear) derived from an agent-based model for a system of particles interacting through a dynamical network of links. Assuming that the network remodeling process is very fast, the macroscopic model takes the form of a single aggregation-diffusion equation for the density of particles. The theoretical study of the macroscopic model gives precise criteria for the phase transitions of the steady states, and in the one-dimensional case, we show numerically that the stationary solutions of the microscopic model undergo the same phase transitions and bifurcation types as the macroscopic model. In the two-dimensional case, we show that the numerical simulations of the macroscopic model are in excellent agreement with the predicted theoretical values. This study provides a partial validation of the formal derivation of the macroscopic model from a microscopic formulation and shows that the former is a consistent approximation of an underlying particle dynamics, making it a powerful tool for the modeling of dynamical networks at a large scale.

  14. A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram

This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.

  15. Particle Interactions Mediated by Dynamical Networks: Assessment of Macroscopic Descriptions.

    PubMed

    Barré, J; Carrillo, J A; Degond, P; Peurichard, D; Zatorska, E

    2018-01-01

    We provide a numerical study of the macroscopic model of Barré et al. (Multiscale Model Simul, 2017, to appear) derived from an agent-based model for a system of particles interacting through a dynamical network of links. Assuming that the network remodeling process is very fast, the macroscopic model takes the form of a single aggregation-diffusion equation for the density of particles. The theoretical study of the macroscopic model gives precise criteria for the phase transitions of the steady states, and in the one-dimensional case, we show numerically that the stationary solutions of the microscopic model undergo the same phase transitions and bifurcation types as the macroscopic model. In the two-dimensional case, we show that the numerical simulations of the macroscopic model are in excellent agreement with the predicted theoretical values. This study provides a partial validation of the formal derivation of the macroscopic model from a microscopic formulation and shows that the former is a consistent approximation of an underlying particle dynamics, making it a powerful tool for the modeling of dynamical networks at a large scale.

  16. Real-time 3-D space numerical shake prediction for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, the wave is assumed to propagate on the 2-D surface of the earth in these methods. In fact, since seismic waves propagate in the 3-D sphere of the earth, 2-D modeling of the wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and overprediction is alleviated when the 3-D space model is used.

  17. Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics

    PubMed Central

    2015-01-01

    We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van’t Hoff equation. Excellent agreement between the direct and van’t Hoff methods is demonstrated for both host–guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design. PMID:26523125
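The indirect van't Hoff route described above can be sketched as a linear fit of ln K against 1/T. The ΔG values below are synthetic, generated from an assumed enthalpy and entropy, not results from the paper.

```python
import numpy as np

# With K = exp(-dG / (R T)),  ln K = -dH/(R T) + dS/R,
# so dH is -R times the slope of ln K versus 1/T.
R = 1.987204e-3                      # gas constant, kcal / (mol K)
dH_true, dS_true = -10.0, -0.01      # assumed kcal/mol, kcal/(mol K)

T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])   # temperatures (K)
dG = dH_true - T * dS_true           # synthetic binding free energies
lnK = -dG / (R * T)

# linear fit of ln K against 1/T; slope = -dH/R, intercept = dS/R
slope, intercept = np.polyfit(1.0 / T, lnK, 1)
dH_vanthoff = -R * slope
dS_vanthoff = R * intercept
```

With noiseless inputs the fit recovers the assumed enthalpy and entropy exactly; in practice, comparing this van't Hoff estimate against the direct end-point enthalpy (with proper error bars on both) provides the self-consistency check described in the abstract.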

  18. Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses

    PubMed Central

    Das, Jayajit

    2016-01-01

    Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results. PMID:26958894

  19. Invited article: Dielectric material characterization techniques and designs of high-Q resonators for applications from micro to millimeter-waves frequencies applicable at room and cryogenic temperatures.

    PubMed

    Le Floch, Jean-Michel; Fan, Y; Humbert, Georges; Shan, Qingxiao; Férachou, Denis; Bara-Maillet, Romain; Aubourg, Michel; Hartnett, John G; Madrangeas, Valerie; Cros, Dominique; Blondy, Jean-Marc; Krupka, Jerzy; Tobar, Michael E

    2014-03-01

Dielectric resonators are key elements in many applications in micro to millimeter wave circuits, including ultra-narrow band filters and frequency-determining components for precision frequency synthesis. Distributed-layered and bulk low-loss crystalline and polycrystalline dielectric structures have become very important for building these devices. Proper design requires careful electromagnetic characterization of low-loss material properties. This includes exact simulation with precision numerical software and precise measurements of resonant modes. For example, we have developed the Whispering Gallery mode technique for microwave applications, which has now become the standard for characterizing low-loss structures. This paper presents some of the most common characterization techniques used in the micro to millimeter wave regime, at room and cryogenic temperatures, for designing high-Q dielectric loaded cavities.

  20. Accuracy of Estimating Solar Radiation Pressure for GEO Debris with Tumbling Effect

    NASA Astrophysics Data System (ADS)

    Chao, Chia-Chun George

    2009-03-01

The accuracy of estimating solar radiation pressure for GEO debris is examined and demonstrated, via numerical simulations, by fitting a batch (months) of simulated position vectors. These simulated position vectors are generated from a "truth orbit" with added white noise using high-precision numerical integration tools. After the long-arc fit of the simulated observations (position vectors), one can accurately and reliably determine how close the estimated value of solar radiation pressure is to the truth. Results of this study show that the inherent accuracy in estimating the solar radiation pressure coefficient can be as good as 1% if a long-arc fit span of up to 180 days is used and the satellite is not tumbling. The corresponding position prediction accuracy, in terms of maximum error over 30 days, can be as good as 1 km in-track, 0.3 km radial and 0.1 km cross-track. Similar accuracies can be expected when the object is tumbling, as long as the rate of attitude change differs from the orbit rate. Results of this study reveal an important phenomenon: solar radiation pressure significantly affects the orbit motion when the spin rate is equal to the orbit rate.

  1. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    PubMed Central

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-01-01

The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision.
In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626

  2. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-07

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. 
In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation.
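    The noise-to-slope ratio can be illustrated with a small sketch. The toy below simulates two hypothetical methods with a linear measurement model, fits slope and residual noise by least squares, and ranks them by NSR. Note the simplification: this toy fit uses known true values, whereas the whole point of the NGS technique is to estimate these parameters without truth; the method names and parameters are made up.

```python
import random

random.seed(0)

def simulate_method(truth, slope, intercept, noise_sd):
    """Hypothetical method: measured = slope * true + intercept + Gaussian noise."""
    return [slope * t + intercept + random.gauss(0.0, noise_sd) for t in truth]

def noise_to_slope_ratio(truth, measured):
    """Fit measured ~ a * true + b by ordinary least squares; NSR = residual sd / a."""
    n = len(truth)
    mx = sum(truth) / n
    my = sum(measured) / n
    sxx = sum((t - mx) ** 2 for t in truth)
    sxy = sum((t - mx) * (m - my) for t, m in zip(truth, measured))
    a = sxy / sxx
    b = my - a * mx
    resid_var = sum((m - (a * t + b)) ** 2 for t, m in zip(truth, measured)) / (n - 2)
    return resid_var ** 0.5 / a

truth = [random.uniform(1.0, 10.0) for _ in range(500)]
nsr = {
    "method_A": noise_to_slope_ratio(truth, simulate_method(truth, 1.0, 0.2, 0.05)),
    "method_B": noise_to_slope_ratio(truth, simulate_method(truth, 0.9, 0.5, 0.40)),
}
ranking = sorted(nsr, key=nsr.get)  # smaller NSR = better precision
```

    A smaller NSR means less measurement noise per unit of sensitivity to the true value, which is why it ranks methods on precision without needing the intercept (bias) to be known.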

  3. Algorithms for radiative transfer simulations for aerosol retrieval

    NASA Astrophysics Data System (ADS)

    Mukai, Sonoyo; Sano, Itaru; Nakata, Makiko

    2012-11-01

    Aerosol retrieval from satellite data, i.e. aerosol remote sensing, is divided into three parts: satellite data analysis, aerosol modeling, and multiple light-scattering calculation in the atmosphere model, the last of which is called radiative transfer simulation. The aerosol model is compiled from more than ten years of accumulated measurements provided by the worldwide aerosol monitoring network AERONET. The radiative transfer simulations take into account Rayleigh scattering by molecules, Mie scattering by aerosols in the atmosphere, and reflection by the Earth's surface. The aerosol properties are thus estimated by comparing satellite measurements with the numerical values of radiation simulations in the Earth-atmosphere-surface model. Precise simulation of the multiple light-scattering processes is necessary, but it requires long computation times, especially in an optically thick atmosphere model. Efficient algorithms for radiative transfer problems are therefore indispensable for retrieving aerosols from space.

  4. Guiding-center equations for electrons in ultraintense laser fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, J.E.; Fisch, N.J.

    1994-01-01

    The guiding-center equations are derived for electrons in arbitrarily intense laser fields also subject to external fields and ponderomotive forces. Exhibiting the relativistic mass increase of the oscillating electrons, a simple frame-invariant equation is shown to govern the behavior of the electrons for sufficiently weak background fields and ponderomotive forces. The parameter regime for which such a formulation is valid is made precise, and some predictions of the equation are checked by numerical simulation.

  5. The combustion program at CTR

    NASA Technical Reports Server (NTRS)

    Poinsot, Thierry J.

    1993-01-01

    Understanding and modeling of turbulent combustion are key problems in the computation of numerous practical systems. Because of the lack of analytical theories in this field and of the difficulty of performing precise experiments, direct numerical simulation (DNS) appears to be one of the most attractive tools to use in addressing this problem. The general objective of DNS of reacting flows is to improve our knowledge of turbulent combustion but also to use this information for turbulent combustion models. For the foreseeable future, numerical simulation of the full three-dimensional governing partial differential equations with variable density and transport properties as well as complex chemistry will remain intractable; thus, various levels of simplification will remain necessary. On one hand, the requirement to simplify is not necessarily a handicap: numerical simulations allow the researcher a degree of control in isolating specific physical phenomena that is inaccessible in experiments. CTR has pursued an intensive research program in the field of DNS for turbulent reacting flows since 1987. DNS of reacting flows is quite different from DNS of non-reacting flows: without reaction, the equations to solve are clearly the five conservation equations of the Navier Stokes system for compressible situations (four for incompressible cases), and the limitation of the approach is the Reynolds number (or in other words the number of points in the computation). For reacting flows, the choice of the equations, the species (each species will require one additional conservation equation), the chemical scheme, and the configuration itself is more complex.

  6. Experimental and numerical investigations of wire bending by linear winding of rectangular tooth coils

    NASA Astrophysics Data System (ADS)

    Komodromos, A.; Tekkaya, A. E.; Hofmann, J.; Fleischer, J.

    2018-05-01

    Since electric motors are gaining in importance in many fields of application, e.g. hybrid electric vehicles, optimization of the linear coil winding process greatly contributes to an increase in productivity and flexibility. To investigate the forming behavior of the winding wire, the material behavior is characterized in different experimental setups. Numerical examinations of the linear winding process are carried out in a case study for a rectangular bobbin in order to analyze the influence of forming parameters on the resulting properties of the wound coil. Besides the numerical investigation of the linear winding method using the finite element method (FEM), a multi-body dynamics (MBD) simulation is carried out. The multi-body dynamics simulation is necessary to represent the movement of the bodies as well as the connection of the components during winding. The finite element method is used to represent the material behavior of the copper wire and the plastic strain distribution within the wire. It becomes clear that the MBD simulation alone is not sufficient for analyzing the process and the wire behavior in its entirety. Important parameters that define the final coil properties cannot be analyzed precisely, e.g. the clearance between coil bobbin and wire, as well as the wire deformation in the form of a diameter reduction, which negatively affects the ohmic resistance. Finally, the numerical investigations are validated experimentally by linear winding tests.

  7. Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions

    PubMed Central

    Liu, Weidong; Luo, Xi

    2014-01-01

    This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
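    The column-wise idea can be sketched in a few lines: each column β of the precision matrix is obtained by minimizing ½βᵀΣ̂β − βᵀeⱼ + λ‖β‖₁ with coordinate descent and soft-thresholding. This is only a minimal illustration of that column-wise objective, not the authors' Sparse Column-wise Inverse Operator implementation (which adds cross-validated tuning and convergence theory); the toy covariance is made up.

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator S(z, lam)."""
    return (abs(z) - lam) * (1 if z > 0 else -1) if abs(z) > lam else 0.0

def precision_column(sigma, j, lam, n_iter=200):
    """Coordinate descent for min_b 0.5 * b'Sb - b[j] + lam * ||b||_1."""
    p = len(sigma)
    b = [0.0] * p
    for _ in range(n_iter):
        for k in range(p):
            # partial residual: e_j[k] minus the contribution of the other coordinates
            r = (1.0 if k == j else 0.0) - sum(
                sigma[k][m] * b[m] for m in range(p) if m != k)
            b[k] = soft_threshold(r, lam) / sigma[k][k]
    return b

# toy sample covariance: identity, so the precision matrix should be
# approximately the identity, shrunk slightly by the penalty lam
sigma = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
omega = [precision_column(sigma, j, lam=0.01) for j in range(3)]
```

    Because each column is solved independently, the computation parallelizes trivially across columns, which is one reason column-wise estimators scale well in high dimensions.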

  8. Wave transience in a compressible atmosphere. I - Transient internal wave, mean-flow interaction. II - Transient equatorial waves in the quasi-biennial oscillation

    NASA Technical Reports Server (NTRS)

    Dunkerton, T. J.

    1981-01-01

    Analytical and numerical solutions are obtained in an approximate quasi-linear model, to describe the way in which vertically propagating waves give rise to mean flow accelerations in an atmosphere due to the effects of wave transience. These effects in turn result from compressibility and vertical group velocity feedback, and culminate in the spontaneous formation and descent of regions of strong mean wind shear. The numerical solutions display mean flow accelerations due to Kelvin waves in the equatorial stratosphere, with wave absorption altering the transience mechanism in such significant respects as causing the upper atmospheric mean flow acceleration to be very sensitive to the precise magnitude and distribution of the damping mechanisms. The numerical simulations of transient equatorial waves in the quasi-biennial oscillation are also considered.

  9. Detecting vortices in superconductors: Extracting one-dimensional topological singularities from a discretized complex scalar field

    DOE PAGES

    Phillips, Carolyn L.; Peterka, Tom; Karpeyev, Dmitry; ...

    2015-02-20

    In type II superconductors, the dynamics of superconducting vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter. Extracting their precise positions and motion from discretized numerical simulation data is an important, but challenging, task. In the past, vortices have mostly been detected by analyzing the magnitude of the complex scalar field representing the order parameter and visualized by corresponding contour plots and isosurfaces. However, these methods, primarily used for small-scale simulations, blur the fine details of the vortices, scale poorly to large-scale simulations, and do not easily enable isolating and tracking individual vortices. In this paper, we present a method for exactly finding the vortex core lines from a complex order parameter field. With this method, vortices can be easily described at a resolution even finer than the mesh itself. The precise determination of the vortex cores allows the interplay of the vortices inside a model superconductor to be visualized in higher resolution than has previously been possible. Finally, by representing the field as the set of vortices, this method also massively reduces the data footprint of the simulations and provides the data structures for further analysis and feature tracking.

  10. Dynamic Simulation of a Wave Rotor Topped Turboshaft Engine

    NASA Technical Reports Server (NTRS)

    Greendyke, R. B.; Paxson, D. E.; Schobeiri, M. T.

    1997-01-01

    The dynamic behavior of a wave rotor topped turboshaft engine is examined using a numerical simulation. The simulation utilizes an explicit, one-dimensional, multi-passage, CFD based wave rotor code in combination with an implicit, one-dimensional, component level dynamic engine simulation code. Transient responses to rapid fuel flow rate changes and compressor inlet pressure changes are simulated and compared with those of a similarly sized, untopped, turboshaft engine. Results indicate that the wave rotor topped engine responds in a stable, and rapid manner. Furthermore, during certain transient operations, the wave rotor actually tends to enhance engine stability. In particular, there is no tendency toward surge in the compressor of the wave rotor topped engine during rapid acceleration. In fact, the compressor actually moves slightly away from the surge line during this transient. This behavior is precisely the opposite to that of an untopped engine. The simulation is described. Issues associated with integrating CFD and component level codes are discussed. Results from several transient simulations are presented and discussed.

  11. Speeding up N-body simulations of modified gravity: chameleon screening models

    NASA Astrophysics Data System (ADS)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  12. Using CAD software to simulate PV energy yield - The case of product integrated photovoltaic operated under indoor solar irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reich, N.H.; van Sark, W.G.J.H.M.; Turkenburg, W.C.

    2010-08-15

    In this paper, we show that photovoltaic (PV) energy yields can be simulated using standard rendering and ray-tracing features of Computer Aided Design (CAD) software. To this end, three-dimensional (3-D) sceneries are ray-traced in CAD. The PV power output is then modeled by translating irradiance intensity data of rendered images back into numerical data. To ensure accurate results, the solar irradiation data used as input is compared to numerical data obtained from rendered images, showing excellent agreement. As expected, ray-tracing precision in the CAD software also proves to be very high. To demonstrate PV energy yield simulations using this innovative concept, solar radiation time course data of a few days was modeled in 3-D to simulate distributions of irradiance incident on flat, single- and double-bend shapes and a PV-powered computer mouse located on a window sill. Comparisons of measured to simulated PV output of the mouse show that simulation accuracies can also be very high in practice. Theoretically, this concept has great potential, as it can be adapted to suit a wide range of solar energy applications, such as sun-tracking and concentrator systems, Building Integrated PV (BIPV) or Product Integrated PV (PIPV). However, graphical user interfaces of 'CAD-PV' software tools are not yet available.

  13. An efficient mixed-precision, hybrid CPU-GPU implementation of a nonlinearly implicit one-dimensional particle-in-cell algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Chacon, Luis; Barnes, Daniel C

    2012-01-01

    Recently, a fully implicit, energy- and charge-conserving particle-in-cell method has been developed for multi-scale, full-f kinetic simulations [G. Chen, et al., J. Comput. Phys. 230, 18 (2011)]. The method employs a Jacobian-free Newton-Krylov (JFNK) solver and is capable of using very large timesteps without loss of numerical stability or accuracy. A fundamental feature of the method is the segregation of particle orbit integrations from the field solver, while remaining fully self-consistent. This provides great flexibility, and dramatically improves the solver efficiency by reducing the degrees of freedom of the associated nonlinear system. However, it requires a particle push per nonlinear residual evaluation, which makes the particle push the most time-consuming operation in the algorithm. This paper describes a very efficient mixed-precision, hybrid CPU-GPU implementation of the implicit PIC algorithm. The JFNK solver is kept on the CPU (in double precision), while the inherent data parallelism of the particle mover is exploited by implementing it in single precision on a graphics processing unit (GPU) using CUDA. Performance-oriented optimizations, with the aid of an analytical performance model, the roofline model, are employed. Despite being highly dynamic, the adaptive, charge-conserving particle mover algorithm achieves up to 300-400 GOp/s (including single-precision floating-point, integer, and logic operations) on an Nvidia GeForce GTX580, corresponding to 20-25% absolute GPU efficiency (against the peak theoretical performance) and 50-70% intrinsic efficiency (against the algorithm's maximum operational throughput, which neglects all latencies). This is about 200-300 times faster than an equivalent serial CPU implementation. When the single-precision GPU particle mover is combined with a double-precision CPU JFNK field solver, overall performance gains of about 100× vs. the double-precision CPU-only serial version are obtained, with no apparent loss of robustness or accuracy when applied to a challenging long-time-scale ion acoustic wave simulation.
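    The motivation for keeping the sensitive field solve in double precision while the per-particle work runs in single precision can be illustrated with a stdlib-only sketch that emulates float32 arithmetic by round-tripping values through struct. The concrete numbers are illustrative only, not from the paper:

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

n, step = 100_000, 0.1
exact = n * step  # 10000.0

# naive accumulation with every partial sum rounded to single precision
s32 = 0.0
for _ in range(n):
    s32 = to_f32(s32 + to_f32(step))

# the same accumulation carried out in double precision
s64 = 0.0
for _ in range(n):
    s64 += step

err32, err64 = abs(s32 - exact), abs(s64 - exact)
# the single-precision error is orders of magnitude larger: a mixed-precision
# design keeps such sensitive reductions in double, while embarrassingly
# parallel per-particle work tolerates single precision
```

    The same round-tripping trick is essentially what reduced-precision emulators do to test how far an algorithm's precision can be lowered before accuracy degrades.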

  14. Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Yang, Lie

    2018-05-01

    To achieve accurate and completely autonomous navigation for spacecraft, inertial/celestial integrated navigation has attracted increasing attention. In this study, a missile-borne inertial/stellar refraction integrated navigation scheme is proposed. The Position Dilution of Precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition under which PDOP reaches its minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulation is conducted. The results of the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) are compared and the impact factors of navigation accuracy are studied in the simulation. The simulation results indicate that the proposed observation scheme provides accurate positioning performance, and the results of EKF and UKF are similar.
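    The paper derives PDOP specifically for stellar refraction observations; as a generic illustration of the dilution-of-precision idea, the sketch below computes a position-only DOP from unit sightline vectors via PDOP = sqrt(trace((GᵀG)⁻¹)). The geometry is hypothetical and the clock term is omitted, so this is not the paper's formulation:

```python
import math

def pdop(sightlines):
    """Position-only DOP from unit line-of-sight vectors (rows of G)."""
    # A = G^T G (3x3 symmetric normal matrix)
    A = [[sum(u[i] * u[j] for u in sightlines) for j in range(3)] for i in range(3)]
    # cofactor matrix via the cyclic-index closed form for 3x3 matrices
    c = [[A[(i + 1) % 3][(j + 1) % 3] * A[(i + 2) % 3][(j + 2) % 3]
          - A[(i + 1) % 3][(j + 2) % 3] * A[(i + 2) % 3][(j + 1) % 3]
          for j in range(3)] for i in range(3)]
    det = sum(A[0][j] * c[0][j] for j in range(3))  # cofactor expansion, row 0
    # trace of A^{-1}; the adjugate of a symmetric matrix is symmetric
    trace_inv = sum(c[i][i] for i in range(3)) / det
    return math.sqrt(trace_inv)

# three mutually orthogonal sightlines give G^T G = I, so PDOP = sqrt(3)
print(pdop([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # → 1.7320508075688772
```

    Geometry with nearly parallel sightlines makes GᵀG nearly singular and PDOP large, which is exactly why an observation scheme is optimized around the minimum-PDOP condition.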

  15. Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation.

    PubMed

    Lu, Jiazhen; Yang, Lie

    2018-05-01

    To achieve accurate and completely autonomous navigation for spacecraft, inertial/celestial integrated navigation has attracted increasing attention. In this study, a missile-borne inertial/stellar refraction integrated navigation scheme is proposed. The Position Dilution of Precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition under which PDOP reaches its minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulation is conducted. The results of the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) are compared and the impact factors of navigation accuracy are studied in the simulation. The simulation results indicate that the proposed observation scheme provides accurate positioning performance, and the results of EKF and UKF are similar.

  16. On the limits of probabilistic forecasting in nonlinear time series analysis II: Differential entropy.

    PubMed

    Amigó, José M; Hirata, Yoshito; Aihara, Kazuyuki

    2017-08-01

    In a previous paper, the authors studied the limits of probabilistic prediction in nonlinear time series analysis in a perfect model scenario, i.e., in the ideal case that the uncertainty of an otherwise deterministic model is due only to the finite precision of the observations. The model consisted of the symbolic dynamics of a measure-preserving transformation with respect to a finite partition of the state space, and the quality of the predictions was measured by the so-called ignorance score, which is a conditional entropy. In practice, though, partitions are dispensed with by considering numerical and experimental data to be continuous, which prompts us to trade off in this paper the Shannon entropy for the differential entropy. Despite technical differences, we show that the core of the previous results also holds in this extended scenario for sufficiently high precision. The corresponding imperfect model scenario will be revisited too because it is relevant for the applications. The theoretical part and its application to probabilistic forecasting are illustrated with numerical simulations and a new prediction algorithm.
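    The Shannon-to-differential-entropy trade mentioned here rests on the standard relation H(X_Δ) ≈ h(X) − log Δ for a continuous variable quantized with bin width Δ. A small sketch (Gaussian example with made-up parameters, entropies in nats) checks the relation numerically:

```python
import math
import random

random.seed(1)
sigma = 1.0
delta = 0.05  # bin width, i.e. the finite precision of the observations
samples = [random.gauss(0.0, sigma) for _ in range(200_000)]

# discrete Shannon entropy of the quantized observations
counts = {}
for x in samples:
    k = math.floor(x / delta)
    counts[k] = counts.get(k, 0) + 1
n = len(samples)
H = -sum(c / n * math.log(c / n) for c in counts.values())

# differential entropy of a Gaussian: h = 0.5 * log(2*pi*e*sigma^2)
h = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

# H(X_Delta) ~= h(X) - log(Delta) for small Delta
approx_h = H + math.log(delta)
```

    The discrete entropy diverges like −log Δ as the precision increases, which is why differential entropy is the natural precision-independent quantity for continuous data.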

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez, A.; Acero, J.; Alberdi, B.

    High precision coil current control, stability and ripple content are very important aspects of a stellarator design. The TJ-II coils will be supplied by network-commutated current converters and therefore the coil currents will contain harmonics which have to be kept to a very low level. An analytical investigation as well as numerous simulations with EMTP, SABER® and other software have been done in order to predict the harmonic currents and to verify compliance with the specified maximum levels. The calculations and the results are presented.

  18. Exact analytic solutions of Maxwell's equations describing propagating nonparaxial electromagnetic beams.

    PubMed

    Garay-Avendaño, Roger L; Zamboni-Rached, Michel

    2014-07-10

    In this paper, we propose a method that is capable of describing in exact and analytic form the propagation of nonparaxial scalar and electromagnetic beams. The main features of the method presented here are its mathematical simplicity and the fast convergence in the cases of highly nonparaxial electromagnetic beams, enabling us to obtain high-precision results without the necessity of lengthy numerical simulations or other more complex analytical calculations. The method can be used in electromagnetism (optics, microwaves) as well as in acoustics.

  19. Solar electric propulsion for terminal flight to rendezvous with comets and asteroids. [using guidance algorithm

    NASA Technical Reports Server (NTRS)

    Bennett, A.

    1973-01-01

    A guidance algorithm that provides precise rendezvous in the deterministic case while requiring only relative state information is developed. A navigation scheme employing only onboard relative measurements is built around a Kalman filter set in measurement coordinates. The overall guidance and navigation procedure is evaluated in the face of measurement errors by a detailed numerical simulation. Results indicate that onboard guidance and navigation for the terminal phase of rendezvous is possible with reasonable limits on measurement errors.

  20. Nonisothermal glass molding for the cost-efficient production of precision freeform optics

    NASA Astrophysics Data System (ADS)

    Vu, Anh-Tuan; Kreilkamp, Holger; Dambon, Olaf; Klocke, Fritz

    2016-07-01

    Glass molding has become a key replication-based technology to satisfy the intensively growing demand for complex precision optics in today's photonics market. However, the state-of-the-art replicative technologies are still limited, mainly because they cannot meet the requirements of mass production. This paper introduces a newly developed nonisothermal glass molding process in which a complex-shaped optic is produced in a very short process cycle. The innovative molding technology promises cost-efficient production because of increased mold lifetime, lower energy consumption, and high throughput from a fast process chain. At the early stage of the process development, the research focuses on integrating finite element simulation into the process chain to reduce time- and labor-intensive costs. By virtue of numerical modeling, defects including chill ripples and glass sticking in the nonisothermal molding process can be predicted and their consequences avoided. In addition, the influences of process parameters and glass preforms on the surface quality, form accuracy, and residual stress are discussed. A series of experiments was carried out to validate the simulation results. The successful modeling therefore provides a systematic strategy for glass preform design, mold compensation, and optimization of the process parameters. In conclusion, the integration of simulation into the entire nonisothermal glass molding process chain will significantly increase manufacturing efficiency as well as reduce the time-to-market for the mass production of complex precision, yet low-cost, glass optics.

  1. Precision comparison of the power spectrum in the EFTofLSS with simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foreman, Simon; Senatore, Leonardo; Perrier, Hideki, E-mail: sfore@stanford.edu, E-mail: senatore@stanford.edu, E-mail: hideki.perrier@unige.ch

    2016-05-01

    We study the prediction of the dark matter power spectrum at two-loop order in the Effective Field Theory of Large Scale Structures (EFTofLSS) using high precision numerical simulations. In our universe, short distance non-linear fluctuations, not under perturbative control, affect long distance fluctuations through an effective stress tensor that needs to be parametrized in terms of counterterms that are functions of the long distance fluctuating fields. We find that at two-loop order it is necessary to include three counterterms: a linear term in the overdensity, δ, a quadratic term, δ², and a higher derivative term, ∂²δ. After the inclusion of these three terms, the EFTofLSS at two-loop order matches simulation data up to k ≅ 0.34 h Mpc⁻¹ at redshift z = 0, up to k ≅ 0.55 h Mpc⁻¹ at z = 1, and up to k ≅ 1.1 h Mpc⁻¹ at z = 2. At these wavenumbers, the cosmic variance of the simulation is at least as small as 10⁻³, providing for the first time a high precision comparison between theory and data. The actual reach of the theory is affected by theoretical uncertainties associated with not having included higher order terms in perturbation theory, for which we provide an estimate, and by potentially overfitting the data, which we also try to address. Since in the EFTofLSS the coupling constants associated with the counterterms are unknown functions of time, we show how a simple parametrization gives a sensible description of their time-dependence. Overall, the k-reach of the EFTofLSS is much larger than that of previous analytical techniques, showing that the amount of cosmological information amenable to high-precision analytical control might be much larger than previously believed.
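    Fitting counterterm coefficients is, schematically, a linear least-squares problem: the residual between the simulated power spectrum and the loop calculation is modeled as a linear combination of counterterm templates with unknown coefficients. The sketch below uses synthetic noiseless data and made-up template shapes (it is not the EFTofLSS pipeline) and recovers known coefficients by solving the 2×2 normal equations:

```python
# wavenumbers and two counterterm-like templates (illustrative shapes/units)
ks = [0.05 * i for i in range(1, 21)]
t1 = [k ** 2 for k in ks]  # roughly mimics a k^2 P_lin-type counterterm shape
t2 = [k ** 4 for k in ks]  # an assumed higher-order shape

c1_true, c2_true = 1.5, -0.7
# noiseless synthetic "simulation minus loop calculation" residual
residual = [c1_true * a + c2_true * b for a, b in zip(t1, t2)]

# normal equations for min_c || residual - c1*t1 - c2*t2 ||^2
s11 = sum(a * a for a in t1)
s12 = sum(a * b for a, b in zip(t1, t2))
s22 = sum(b * b for b in t2)
r1 = sum(a * d for a, d in zip(t1, residual))
r2 = sum(b * d for b, d in zip(t2, residual))
det = s11 * s22 - s12 * s12
c1 = (s22 * r1 - s12 * r2) / det
c2 = (s11 * r2 - s12 * r1) / det
```

    With real, noisy simulation data the same fit determines the counterterm amplitudes at each redshift, and the overfitting concern mentioned in the abstract corresponds to adding more templates than the data can constrain.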

  2. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is proposed based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low precision sensors compared with the traditional unscented Kalman filter (UKF).

  3. Analysis of the dynamic behavior of structures using the high-rate GNSS-PPP method combined with a wavelet-neural model: Numerical simulation and experimental tests

    NASA Astrophysics Data System (ADS)

    Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.

    2018-03-01

    Recently, the high-rate global navigation satellite system precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aimed to increase the accuracy of extracting the oscillation properties of structural movements based on the high-rate (10 Hz) GNSS-PPP monitoring technique. A model based on the combination of wavelet packet transform (WPT) de-noising and neural network (NN) prediction was proposed to improve the detection of the dynamic behavior of structures with the GNSS-PPP method. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were utilized to confirm the efficiency of the proposed model design and the monitoring technique in detecting the dynamic behavior of structures. The results revealed that, when combined with the proposed model, the GNSS-PPP method can accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.

  4. A new traffic model with a lane-changing viscosity term

    NASA Astrophysics Data System (ADS)

    Ko, Hung-Tang; Liu, Xiao-He; Guo, Ming-Min; Wu, Zheng

    2015-09-01

    In this paper, a new continuum traffic flow model is proposed, with a lane-changing source term in the continuity equation and a lane-changing viscosity term in the acceleration equation. Based on previous literature, the source term addresses the impact of the speed difference and density difference between adjacent lanes, which provides better precision for free lane-changing simulation; the viscosity term turns lane-changing behavior into a “force” that may influence the speed distribution. Using a flux-splitting scheme for the model discretization, two cases are investigated numerically. The case under a homogeneous initial condition shows that the numerical results of our model agree well with the analytical ones; the case with a small initial disturbance shows that our model can simulate the evolution of perturbation, including propagation, dissipation, cluster effect and the stop-and-go phenomenon. Project supported by the National Natural Science Foundation of China (Grant Nos. 11002035 and 11372147) and the Hui-Chun Chin and Tsung-Dao Lee Chinese Undergraduate Research Endowment (Grant No. CURE 14024).

  5. Studies in premixed combustion. [Benjamin Levich Inst. for Physico-Chemical Hydrodynamics, City College of CUNY, New York, New York

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivashinsky, G.I.

    1993-01-01

    During the period under review, significant progress has been made in studying the intrinsic dynamics of premixed flames and the problems of flame-flow interaction. (1) A weakly nonlinear model for Bunsen burner stabilized flames was proposed and employed for the simulation of three-dimensional polyhedral flames -- one of the most graphic manifestations of thermal-diffusive instability in premixed combustion. (2) A high-precision large-scale numerical simulation of the Bunsen burner tip structure was conducted. The results obtained supported the earlier conjecture that the tip opening observed in low Lewis number systems is a purely optical effect involving neither flame extinction nor leakage of unburned fuel. (3) A one-dimensional model describing a reaction wave moving through a unidirectional periodic flow field was proposed and studied numerically. For long-wavelength fields the system exhibits a peculiar non-uniqueness of possible propagation regimes. The transition from one regime to another occurs in a hysteretic manner.

  6. SAR and temperature distribution in the rat head model exposed to electromagnetic field radiation by 900 MHz dipole antenna.

    PubMed

    Yang, Lei; Hao, Dongmei; Wu, Shuicai; Zhong, Rugang; Zeng, Yanjun

    2013-06-01

    Rats are often used in electromagnetic field (EMF) exposure experiments. In a study of the effect of 900 MHz EMF exposure on learning and memory in SD rats, the specific absorption rate (SAR) and the temperature rise in the rat head were numerically evaluated. A digital anatomical model of an SD rat was reconstructed from MRI images. The finite-difference time-domain (FDTD) numerical method was applied to assess the SAR and the temperature rise during exposure. Measurements and simulations were conducted to characterize the net radiated power of the dipole and provide a precise dosimetric result. The whole-body average SAR and the localized SAR averaged over 1, 0.5 and 0.05 g of mass are given for different organs/tissues. The results reveal that, for the given exposure setup, no significant temperature rise occurs. The reconstructed anatomical rat model can be used in EMF simulations, and the dosimetric results provide useful information for studies of biological effects.
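    The FDTD method referred to here can be illustrated in one space dimension with normalized units and the "magic" Courant number of 1, far simpler than the 3-D dosimetric model used in the study but showing the same leapfrog E/H update:

```python
import math

def fdtd_1d(n_cells=200, steps=40, width=8.0):
    """Normalized 1-D Yee FDTD at Courant number 1: a Gaussian E pulse
    splits into two half-amplitude pulses travelling one cell per step."""
    c0 = n_cells // 2
    E = [math.exp(-((i - c0) / width) ** 2) for i in range(n_cells)]
    H = [0.0] * (n_cells - 1)
    for _ in range(steps):
        for i in range(n_cells - 1):       # update H from the curl of E
            H[i] += E[i + 1] - E[i]
        for i in range(1, n_cells - 1):    # update E from the curl of H
            E[i] += H[i] - H[i - 1]
    return E

E = fdtd_1d()
right_peak = max(E[110:])   # right-travelling half of the split pulse
```

    A dosimetric code adds material properties (conductivity, permittivity per tissue voxel), a realistic source, and absorbing boundaries, but the core update loop has this structure.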

  7. Toward Microscopic Equations of State for Core-Collapse Supernovae from Chiral Effective Field Theory

    NASA Astrophysics Data System (ADS)

    Aboona, Bassam; Holt, Jeremy

    2017-09-01

    Chiral effective field theory provides a modern framework for understanding the structure and dynamics of nuclear many-body systems. Recent works have had much success in applying the theory to describe the ground- and excited-state properties of light and medium-mass atomic nuclei when combined with ab initio numerical techniques. Our aim is to extend the application of chiral effective field theory to describe the nuclear equation of state required for supercomputer simulations of core-collapse supernovae. Given the large range of densities, temperatures, and proton fractions probed during stellar core collapse, microscopic calculations of the equation of state require large computational resources on the order of one million CPU hours. We investigate the use of graphics processing units (GPUs) to significantly reduce the computational cost of these calculations, which will enable a more accurate and precise description of this important input to numerical astrophysical simulations. Cyclotron Institute at Texas A&M, NSF Grant: PHY 1659847, DOE Grant: DE-FG02-93ER40773.

  8. Computational and experimental investigation of free vibration and flutter of bridge decks

    NASA Astrophysics Data System (ADS)

    Helgedagsrud, Tore A.; Bazilevs, Yuri; Mathisen, Kjell M.; Øiseth, Ole A.

    2018-06-01

    A modified rigid-object formulation is developed, and employed as part of the fluid-object interaction modeling framework from Akkerman et al. (J Appl Mech 79(1):010905, 2012. https://doi.org/10.1115/1.4005072) to simulate free vibration and flutter of long-span bridges subjected to strong winds. To validate the numerical methodology, companion wind tunnel experiments have been conducted. The results show that the computational framework captures very precisely the aeroelastic behavior in terms of aerodynamic stiffness, damping and flutter characteristics. Considering its relative simplicity and accuracy, we conclude from our study that the proposed free-vibration simulation technique is a valuable tool in engineering design of long-span bridges.

  9. Preliminary Experimental Results for Charge Drag in a Simulated Low Earth Orbit Environment

    NASA Astrophysics Data System (ADS)

    Azema-Rovira, Monica

    Interest in the Low Earth Orbit (LEO) environment is growing in the science community as well as in the private sector. The number of spacecraft launched to these altitudes (150-700 km) keeps growing, and the region is accumulating space debris. In this scenario, precise knowledge of the location of all LEO objects is a key factor in avoiding catastrophic collisions and safely performing station-keeping maneuvers. Detailed study of atmospheric models in LEO can improve the calculation of the disturbance forces on an orbiting object. Recent numerical studies indicate that one of the largest non-conservative forces on a spacecraft, the charge drag phenomenon, is underestimated. Validating these numerical models experimentally will help improve them for future spacecraft mission design. For this reason, the motivation of this thesis is to characterize a plasma source to be used later for charge drag measurements. The characterization was performed at the University of Colorado Colorado Springs in the Chamber for Atmospheric and Orbital Space Simulation. As part of this process, a nano-Newton thrust stand was characterized as a plasma diagnostic tool and compared with Langmuir probe data.

  10. Numerical optimization of actuator trajectories for ITER hybrid scenario profile evolution

    NASA Astrophysics Data System (ADS)

    van Dongen, J.; Felici, F.; Hogeweij, G. M. D.; Geelen, P.; Maljaars, E.

    2014-12-01

    Optimal actuator trajectories for an ITER hybrid scenario ramp-up are computed using a numerical optimization method. For both L-mode and H-mode scenarios, the time trajectory of plasma current, EC heating and current drive distribution is determined that minimizes a chosen cost function, while satisfying constraints. The cost function is formulated to reflect two desired properties of the plasma q profile at the end of the ramp-up. The first objective is to maximize the ITG turbulence threshold by maximizing the volume-averaged s/q ratio. The second objective is to achieve a stationary q profile by having a flat loop voltage profile. Actuator and physics-derived constraints are included, imposing limits on plasma current, ramp rates, internal inductance and q profile. This numerical method uses the fast control-oriented plasma profile evolution code RAPTOR, which is successfully benchmarked against more complete CRONOS simulations for L-mode and H-mode ITER hybrid scenarios. It is shown that the optimized trajectories computed using RAPTOR also result in an improved ramp-up scenario for CRONOS simulations using the same input trajectories. Furthermore, the optimal trajectories are shown to vary depending on the precise timing of the L-H transition.

  11. Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.

    PubMed

    Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing

    2016-10-01

    The method, based on the rotation of the angular spectrum in the frequency domain, is generally used for the diffraction simulation between the tilted planes. Due to the rotation of the angular spectrum, the interval between the sampling points in the Fourier domain is not even. For the conventional fast Fourier transform (FFT)-based methods, a spectrum interpolation is needed to get the approximate sampling value on the equidistant sampling points. However, due to the numerical error caused by the spectrum interpolation, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between the tilted planes is transformed into a problem about the discrete Fourier transform on the uneven sampling points, which can be evaluated effectively and precisely through the nonuniform fast Fourier transform method (NUFFT). The most important advantage of this method is that the conventional spectrum interpolation is avoided and the high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Also, its calculation efficiency is comparable with that of the conventional FFT-based methods. Numerical examples as well as a discussion about the calculation accuracy and the sampling method are presented.
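    The quantity a NUFFT approximates is the discrete Fourier transform evaluated on non-equispaced points. A direct O(N·M) type-2 evaluation is trivial to write and serves as the slow reference against which a fast implementation is validated (an illustrative sketch, not the paper's code):

```python
import cmath

def nudft2(coeffs, freqs):
    """Type-2 nonuniform DFT: evaluate sum_k c_k * exp(-2j*pi*f*k)
    at arbitrary (non-equispaced) frequencies f."""
    return [sum(c * cmath.exp(-2j * cmath.pi * f * k)
                for k, c in enumerate(coeffs))
            for f in freqs]

coeffs = [1.0, 0.0, 0.0, 0.0]        # a unit impulse has a flat spectrum
uneven = [0.0, 0.13, 0.31, 0.77]     # uneven sample frequencies
spec = nudft2(coeffs, uneven)
```

    An NUFFT reproduces these sums to a requested tolerance in near-FFT time by gridding onto an oversampled uniform mesh, which is exactly what replaces the error-prone spectrum interpolation discussed above.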

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Sownak; Li, Baojiu; He, Jian-hua

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f ( R ) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f ( R ) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/ h is now 5 times faster than before, while a Millennium-resolution simulation for f ( R ) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
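    The speed-up comes from replacing iterative sweeps with a closed-form solve of the discretised equation cell by cell; for common f(R) parameter choices the redefined equation reduces to a depressed cubic per grid cell (an assumption here — the actual coefficients come from the f(R) discretisation). A generic Cardano-style closed-form solve of such a cubic:

```python
import math

def cardano_real_root(p, q):
    """Real root of the depressed cubic t**3 + p*t + q = 0
    (unique when p > 0, the relevant regime here)."""
    cbrt = lambda x: math.copysign(abs(x) ** (1.0 / 3.0), x)
    disc = math.sqrt((q / 2.0) ** 2 + (p / 3.0) ** 3)
    return cbrt(-q / 2.0 + disc) + cbrt(-q / 2.0 - disc)

# one "cell": solve t^3 + 2t - 5 = 0 in closed form, no iteration needed
root = cardano_real_root(2.0, -5.0)
residual = root ** 3 + 2.0 * root - 5.0
```

    Replacing a Newton-Gauss-Seidel inner loop with one such formula per cell removes the convergence-rate bottleneck entirely, which is the essence of the reported gain.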

  13. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford

    2018-04-01

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  14. From LIDAR Scanning to 3d FEM Analysis for Complex Surface and Underground Excavations

    NASA Astrophysics Data System (ADS)

    Chun, K.; Kemeny, J.

    2017-12-01

    Light detection and ranging (LIDAR) is a prevalent remote-sensing technology in the geological fields due to its high precision and ease of use. One major application is using the detailed geometrical information of underground structures as the basis for generating three-dimensional numerical models for FEM analysis. To date, however, straightforward techniques for reconstructing numerical models from scanned data of underground structures have not been well established or tested. In this paper, we propose a comprehensive approach integrating LIDAR scanning with finite element numerical analysis, specifically converting LIDAR 3D point clouds of objects containing complex surface geometry into finite element models. The methodology has been applied to Kartchner Caverns in Arizona for stability analysis. Numerical simulations were performed using the finite element code ABAQUS. The results indicate that the proposed LIDAR-based workflow is effective and provides a reference for similar engineering projects in practice.

  15. Forest chimpanzees (Pan troglodytes verus) remember the location of numerous fruit trees.

    PubMed

    Normand, Emmanuelle; Ban, Simone Dagui; Boesch, Christophe

    2009-11-01

    It is assumed that spatial memory contributes crucially to animal cognition, since animals' habitats entail a large number of dispersed and unpredictable food sources. Spatial memory has been investigated under controlled conditions, with performance levels varying across species and experimental conditions. However, the number of food sources investigated is very low compared to what exists under natural conditions, where food resources are so abundant that it is difficult to precisely identify what is available. Using a detailed botanical map containing 12,499 trees known to be used by the Taï chimpanzees, we created virtual maps of all productive fruit trees to simulate potential strategies that wild chimpanzees could use to reach resources without spatial memory. First, we simulated different assumptions concerning the chimpanzees' preference for a particular tree species, and, second, we varied the detection field to control for the possible use of smell to detect fruiting trees. For all these assumptions, we compared simulated distances travelled, frequencies of trees visited, and revisit rates with what we actually observed in wild chimpanzees. Our results show that chimpanzees visit rare tree species more frequently, travel shorter distances to reach them, and revisit the same trees more often than if they had no spatial memory. In addition, we demonstrate that chimpanzees travel longer distances to reach resources where they will eat for longer periods of time, and revisit resources more frequently where they ate for a long period during their first visit. Therefore, this study shows that forest chimpanzees possess a precise spatial memory that allows them to remember the location of numerous resources and use this information to select the most attractive ones.

  16. A mechanically tunable and efficient ceramic probe for MR-microscopy at 17 Tesla

    NASA Astrophysics Data System (ADS)

    Kurdjumov, Sergei; Glybovski, Stanislav; Hurshkainen, Anna; Webb, Andrew; Abdeddaim, Redha; Ciobanu, Luisa; Melchakova, Irina; Belov, Pavel

    2017-09-01

    In this contribution we propose and study numerically a new probe (radiofrequency coil) for magnetic resonance microscopy at a field of 17 T. The probe is based on two coupled donut resonators made of a high-permittivity, low-loss ceramic, excited by a non-resonant inductively coupled loop attached to a coaxial cable. Full-wave numerical simulation showed that the probe can be precisely tuned to the Larmor frequency of protons (723 MHz) by adjusting the gap between the two resonators. Moreover, the impedance of the probe can be matched by varying the distance from one of the resonators to the loop. As a result, a compact and mechanically tunable resonant probe using no lumped capacitors for tuning and matching was demonstrated for 17 Tesla applications. The new probe was numerically compared to a conventional solenoidal probe and showed better efficiency.

  17. High-speed extended-term time-domain simulation for online cascading analysis of power system

    NASA Astrophysics Data System (ADS)

    Fu, Chuan

    A high-speed extended-term (HSET) time domain simulator (TDS), intended to become part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events; (ii) the ability to simulate both fast and slow dynamics 1-3 hours in advance; (iii) rigorous protection-system modeling; (iv) intelligence for corrective-action identification, storage, and fast retrieval; and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing power system dynamics, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses higher-order precision (h⁴) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate the use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation for online purposes, this thesis presents principles for designing numerical solvers of the differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). 
We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the 13,029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task via scale with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task via the time axis using a highly precise integration method, the Kuntzmann-Butcher order-8 method (KB8). The strategy of partitioning events is designed to partition the whole simulation along the time axis through a simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimum communication time is needed.
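    HH4, the two-stage Gauss-Legendre (Hammer-Hollingsworth) collocation method, has Butcher coefficients A = [[1/4, 1/4 − √3/6], [1/4 + √3/6, 1/4]], b = [1/2, 1/2]. A minimal sketch on the scalar test equation y' = −y, solving the implicit stage equations by fixed-point iteration (illustrative only; a production solver like HSET-TDS would use Newton-type stage solves):

```python
import math

S3 = math.sqrt(3.0) / 6.0
A = [[0.25, 0.25 - S3], [0.25 + S3, 0.25]]   # Gauss-2 Butcher matrix
B = [0.5, 0.5]                               # quadrature weights

def hh4_step(f, y, h, iters=20):
    """One step of the 2-stage Gauss (order-4, A-stable) IRK; the implicit
    stage slopes are found by fixed-point iteration (fine for small h)."""
    k = [f(y), f(y)]                         # initial guess for stage slopes
    for _ in range(iters):
        k = [f(y + h * (A[i][0] * k[0] + A[i][1] * k[1])) for i in range(2)]
    return y + h * (B[0] * k[0] + B[1] * k[1])

y, h = 1.0, 0.1
for _ in range(10):                          # integrate y' = -y over [0, 1]
    y = hh4_step(lambda v: -v, y, h)
err = abs(y - math.exp(-1.0))                # global error of the order-4 method
```

    With h = 0.1 the error is far below what the order-2 trapezoidal rule achieves at the same step size, which is the h⁴-precision argument made in the text.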

  18. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitude of a small satellite should be stabilized to high accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites must estimate their attitude rate under strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate the angular rate from blurred star images taken by a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing how blurred a star image appears. Because the proposed method utilizes existing mission devices, the satellite does not require additional precise rate sensors, which makes precise stabilization easier to achieve under the strict constraints of small satellites. The research studied the relationship between the estimation accuracy and the parameters used to achieve an attitude rate estimation with a precision better than 1 × 10⁻⁶ rad/s. The method can be applied to any attitude sensor that uses an optical system, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems expected to arise with real small satellites by performing numerical simulations.
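    The geometric relation underlying blur-based rate estimation is simple: a star streak of L pixels at a plate scale of s rad/pixel over an exposure of T seconds implies an angular rate ω ≈ L·s/T. A toy numeric check with hypothetical telescope parameters (not Nano-JASMINE values; the paper's actual estimator assesses blur quality rather than measuring streak length directly):

```python
import math

def rate_from_blur(streak_px, plate_scale_rad, exposure_s):
    """Angular rate implied by a star streak: omega = L * s / T."""
    return streak_px * plate_scale_rad / exposure_s

# hypothetical telescope: 1 arcsec/pixel plate scale, 1 s exposure
ARCSEC = math.radians(1.0 / 3600.0)
omega = rate_from_blur(streak_px=0.2, plate_scale_rad=ARCSEC, exposure_s=1.0)
# a sub-pixel (0.2 px) blur already corresponds to rates near 1e-6 rad/s,
# i.e. the precision regime quoted in the abstract
```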

  19. A quasi-spectral method for Cauchy problem of 2/D Laplace equation on an annulus

    NASA Astrophysics Data System (ADS)

    Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei

    2005-01-01

    Real numbers are usually represented in the computer by a finite number of digits as hexadecimal floating-point numbers. Accordingly, numerical analysis often suffers from rounding errors, which particularly deteriorate the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effect of rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we show the effectiveness of multi-precision arithmetic using two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. A few numerical examples lead to the conclusion that multi-precision arithmetic resolves these numerical solutions well when combined with a high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
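    The benefit of raising the working precision is easy to reproduce with Python's stdlib `decimal` module (a stand-in here for the multi-precision system credited to Dr Fujiwara): a cancellation that destroys all information in 64-bit floating point is computed exactly once enough digits are carried.

```python
from decimal import Decimal, getcontext

# catastrophic cancellation: the "+ 1" is entirely lost in 64-bit floats,
# because 1e16 exceeds 2**53 and the float spacing there is 2
lost = (1e16 + 1.0) - 1e16           # evaluates to 0.0, not 1.0

getcontext().prec = 50               # carry 50 significant decimal digits
kept = (Decimal(10) ** 16 + 1) - Decimal(10) ** 16   # exactly 1
```

    Ill-posed problems amplify exactly this kind of loss, which is why the high-order schemes in the paper only pay off when paired with extended precision.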

  20. Design, simulation and evaluation of uniform magnetic field systems for head-free eye movement recordings with scleral search coils.

    PubMed

    Eibenberger, Karin; Eibenberger, Bernhard; Rucci, Michele

    2016-08-01

    The precise measurement of eye movements is important for investigating vision, oculomotor control and vestibular function. The magnetic scleral search coil technique is one of the most precise techniques for recording eye movements, with very high spatial (≈ 1 arcmin) and temporal (>kHz) resolution. The technique is based on measuring the voltage induced in a search coil by a large magnetic field; the search coil is embedded in a contact lens worn by a human subject, and the measured voltage is directly related to the orientation of the eye in space. This requires a magnetic field with high homogeneity in the center, since field inhomogeneity would otherwise give the false impression of an eye rotation when the head merely translates. To circumvent this problem, a bite bar typically restricts head movement to a minimum. However, the need often emerges to precisely record eye movements under natural viewing conditions, and to this end one needs a magnetic field that is uniform over a large area. In this paper, we present numerical and finite element simulations of the magnetic flux density of different coil geometries that could be used for search coil recordings. Based on the results, we built a 2.2 × 2.2 × 2.2 meter coil frame with a set of 3 × 4 coils to generate a 3D magnetic field and compared the measured flux density with our simulation results. In agreement with the simulations, the system yields a highly uniform field, enabling high-resolution recordings of eye movements.
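    Field uniformity of the kind sought here can be checked with the on-axis Biot-Savart formula for a circular loop, B(z) = μ₀IR²/(2(R² + z²)^{3/2}); a Helmholtz pair (two coaxial loops spaced one radius apart) makes the field flat to fourth order at the centre. A quick numerical check with idealized loops (not the authors' 3 × 4 coil geometry):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def loop_bz(z, radius=1.0, current=1.0):
    """On-axis field of a circular current loop centred at z = 0."""
    return MU0 * current * radius**2 / (2.0 * (radius**2 + z**2) ** 1.5)

def helmholtz_bz(z, radius=1.0, current=1.0):
    """Pair of identical loops at z = +/- R/2 (Helmholtz spacing)."""
    return (loop_bz(z - radius / 2.0, radius, current)
            + loop_bz(z + radius / 2.0, radius, current))

b0 = helmholtz_bz(0.0)
ripple = abs(helmholtz_bz(0.05) / b0 - 1.0)   # 5% of R off-centre
```

    At 5% of the radius off-centre the Helmholtz pair deviates by only parts in 10⁵, versus parts in 10³ for a single loop, which is why multi-coil designs are the starting point for head-free recording volumes.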

  1. Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses.

    PubMed

    Das, Jayajit

    2016-03-08

    Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  2. Numerical study on 3D composite morphing actuators

    NASA Astrophysics Data System (ADS)

    Oishi, Kazuma; Saito, Makoto; Anandan, Nishita; Kadooka, Kevin; Taya, Minoru

    2015-04-01

    There are a number of actuators using the deformation of electroactive polymers (EAP), but few papers have focused on the performance of 3D morphing actuators based on an analytical approach, due mainly to their complexity. The present paper introduces a numerical analysis of the large-scale deformation and motion of a 3D half-dome-shaped actuator composed of a thin soft membrane (passive material) and EAP strip actuators (EAP active coupons with electrodes on both surfaces), where the location of the active EAP strips is a key parameter. The Simulia/Abaqus Static and Implicit analysis code, whose main feature is high-precision contact analysis among structures, is used, focusing on the whole process of the membrane touching and wrapping around the object. The unidirectional properties of the EAP coupon actuator are used as the input material properties for the simulation and for the verification of our numerical model, where verification is made by comparison with an existing 2D solution. The numerical results demonstrate the whole deformation process of the membrane wrapping around not only smoothly shaped objects like a sphere or an egg, but also irregularly shaped objects. A parametric study reveals the proper placement of the EAP coupon actuators, with modification of the dome shape to induce the relevant large-scale deformation. The numerical simulation of the 3D soft actuators shown in this paper could be applied to a wider range of soft 3D morphing actuators.

  3. A comparative study between two smoothing strategies for the simulation of contact with large sliding

    NASA Astrophysics Data System (ADS)

    Batailly, Alain; Magnain, Benoît; Chevaugeon, Nicolas

    2013-05-01

    The numerical simulation of contact problems is still a delicate matter, especially when large transformations are involved. In that case, large relative slidings can occur between contact surfaces, and the discretization error induced by usual finite elements may not be satisfactory. In particular, usual elements lead to a facetization of the contact surface, meaning an unavoidable discontinuity of the normal vector to this surface. Uncertainty over the precision of the results, irregularity of the displacement of the contact nodes and even numerical oscillations of the contact reaction force may result from such a discontinuity. Among the existing methods for tackling this issue, one may consider mortar elements (Fischer and Wriggers, Comput Methods Appl Mech Eng 195:5020-5036, 2006; McDevitt and Laursen, Int J Numer Methods Eng 48:1525-1547, 2000; Puso and Laursen, Comput Methods Appl Mech Eng 93:601-629, 2004), smoothing of the contact surfaces with an additional geometrical entity (B-splines or NURBS) (Belytschko et al., Int J Numer Methods Eng 55:101-125, 2002; Kikuchi, Penalty/finite element approximations of a class of unilateral contact problems. Penalty method and finite element method, ASME, New York, 1982; Legrand, Modèles de prédiction de l'interaction rotor/stator dans un moteur d'avion. PhD thesis, École Centrale de Nantes, Nantes, 2005; Muñoz, Comput Methods Appl Mech Eng 197:979-993, 2008; Wriggers and Krstulovic-Opara, J Appl Math Mech (ZAMM) 80:77-80, 2000), and the use of isogeometric analysis (Temizer et al., Comput Methods Appl Mech Eng 200:1100-1112, 2011; Hughes et al., Comput Methods Appl Mech Eng 194:4135-4195, 2005; de Lorenzis et al., Int J Numer Meth Eng, in press, 2011). In the present paper, we focus on these last two methods, which are combined with a finite element code using the bi-potential method for contact management (Feng et al., Comput Mech 36:375-383, 2005). 
A comparative study of the pros and cons of each method regarding geometrical precision and numerical stability of the contact solution is proposed. The scope of this study is limited to 2D contact problems, for which we consider several types of finite elements. Test cases are given to illustrate this comparative study.

  4. A piloted simulator evaluation of a ground-based 4-D descent advisor algorithm

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Green, Steven M.; Erzberger, Heinz

    1990-01-01

    A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA-Ames. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. The ability of the 4D descent advisor algorithm to provide adequate control of arrival time for aircraft not equipped with on-board 4D guidance systems is investigated. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.
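    The kernel of such a 4D advisor is predicting time-to-fly through a wind field by integrating dt = ds/(airspeed + wind component) along the route. A toy version of that quadrature with hypothetical numbers (the real advisor couples full aerodynamic and propulsive models to this integration):

```python
def time_to_fly(distance_m, tas_ms, wind_ms, n=1000):
    """Estimated flight time along a straight segment: midpoint-rule
    integration of dt = ds / groundspeed with a varying wind profile."""
    ds = distance_m / n
    t = 0.0
    for i in range(n):
        s = (i + 0.5) * ds                # midpoint of sub-segment i
        t += ds / (tas_ms + wind_ms(s))   # groundspeed = TAS + wind component
    return t

# 100 km at 200 m/s true airspeed, with and without a 20 m/s tailwind
t_tail = time_to_fly(100e3, 200.0, lambda s: 20.0)
t_calm = time_to_fly(100e3, 200.0, lambda s: 0.0)
```

    Arrival-time errors from a misestimated wind then follow directly: the difference t_calm − t_tail is the sensitivity the piloted study probes with real wind-profile errors.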

  5. Development of variable-width ribbon heating elements for liquid-metal and gas-cooled fast breeder reactor fuel-pin simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCulloch, R.W.; Post, D.W.; Lovell, R.T.

    1981-04-01

    Variable-width ribbon heating elements that provide a chopped-cosine variable heat flux profile have been fabricated for fuel pin simulators used in test loops by the Breeder Reactor Program Thermal-Hydraulic Out-of-Reactor Safety test facility and the Gas-Cooled Fast Breeder Reactor-Core Flow Test Loop. Thermal, mechanical, and electrical design considerations are used to derive an analytical expression that precisely describes ribbon contour in terms of the major fabrication parameters. These parameters are used to generate numerical control tapes that control ribbon cutting and winding machines. Infrared scanning techniques are developed to determine the optimum transient thermal profile of the coils and relate this profile to that generated by the coils in completed fuel pin simulators.
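The link between ribbon width and the chopped-cosine flux profile can be sketched as follows. Assuming a constant-current, constant-thickness ribbon, local power per unit length scales with local resistance, i.e. inversely with ribbon width, so a chopped-cosine heat-flux shape implies a width profile proportional to the reciprocal of the flux. This is a simplification of the paper's analytical contour expression; the function and parameter names below are illustrative.

```python
import numpy as np

def ribbon_width(z, heated_length, chop_ratio, w_peak):
    """Width profile (arbitrary units) for a constant-current ribbon heater.
    Local power per unit length ~ 1/width, so a chopped-cosine axial flux
    shape q(z) implies w(z) = w_peak / q_norm(z).
    chop_ratio = minimum flux / peak flux (the 'chop' level)."""
    q_norm = np.maximum(np.cos(np.pi * z / heated_length), chop_ratio)
    return w_peak / q_norm

z = np.linspace(-0.45, 0.45, 91)   # axial position, heated_length = 1
w = ribbon_width(z, heated_length=1.0, chop_ratio=0.3, w_peak=2.0)
```

The ribbon is narrowest (highest resistance per unit length, hence highest flux) at mid-length and widens toward the ends, flattening where the cosine is chopped.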

  6. Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zhu, G.; Chen, X.

    2011-12-01

    We implement a non-staggered, split-node finite difference method to solve the dynamic rupture problem on non-planar faults. The split-node approach is widely used in dynamic rupture simulation because it represents the fault plane more precisely than alternatives such as the thick-fault or stress-glut formulations. The finite difference method is a popular numerical method for kinematic and dynamic problems in seismology, but previous work has focused mainly on staggered-grid schemes because of their simplicity and computational efficiency. Compared with non-staggered schemes, however, staggered grids are less well suited to describing boundary conditions, especially irregular boundaries such as non-planar faults. Zhang and Chen (2006) proposed a MacCormack-type high-order non-staggered finite difference method on curved grids that solves irregular boundary problems precisely. Building on this non-staggered grid method, we successfully simulate the spontaneous rupture problem. Since the fault plane is itself a boundary condition, which may be irregular, the method should be able to simulate rupture on arbitrarily bending fault planes. We first validate the method in Cartesian coordinates; in the case of bending faults, curvilinear grids are used.
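The MacCormack predictor-corrector scheme mentioned above can be illustrated on the simplest possible problem, 1-D linear advection on a periodic grid (a toy analogue only; the paper's scheme is high-order and operates on curved grids):

```python
import numpy as np

def maccormack_advect(u, c, dx, dt, steps):
    """Second-order MacCormack predictor-corrector for u_t + c*u_x = 0
    on a periodic grid."""
    u = u.copy()
    lam = c * dt / dx
    for _ in range(steps):
        # predictor: forward difference
        up = u - lam * (np.roll(u, -1) - u)
        # corrector: backward difference on the predicted field
        u = 0.5 * (u + up - lam * (up - np.roll(up, 1)))
    return u

x = np.linspace(0, 1, 200, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)          # Gaussian pulse
u1 = maccormack_advect(u0, c=1.0, dx=x[1] - x[0], dt=0.5 * (x[1] - x[0]),
                       steps=100)           # pulse advects by 0.25
```

The alternation of one-sided differences in predictor and corrector yields second-order accuracy on a single (non-staggered) grid, which is what makes curvilinear boundary-conforming extensions natural.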

  7. Design and modeling of an efficiency horizontal thermal micro-actuator with integrated piezoresistors for precise control.

    PubMed

    Zhang, Yan; Lee, Dong-Weon

    2010-05-01

    An integrated system made up of a double-hot-arm electro-thermal microactuator and a piezoresistor embedded at the base of the 'cold arm' is proposed. Electro-thermo-mechanical modeling and optimization are developed to elaborate the operation mechanism of the hybrid system through numerical simulations. For given materials, the geometry design mostly determines the performance of the sensor and the actuator, which can be considered separately, because the heating energy that drives thermal expansion has little influence at the base of the 'cold arm', which is where the stress is maximum. The piezoresistor is positioned there for large sensitivity, to monitor the in-plane movement of the system and characterize the actuator response precisely in real time. The force method is used to analyze the thermally induced mechanical expansion in the redundant structure. The integrated actuating mechanism, in turn, is designed for high-speed imaging. Based on the simulation results, the actuator operates reliably at drive levels below 5 mA, and the stress sensitivity is about 40 MPa per micron.

  8. Study on the flood simulation techniques for estimation of health risk in Dhaka city, Bangladesh

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Suetsugi, T.; Sunada, K.; ICRE

    2011-12-01

    Although some studies have examined the spread of infectious disease during flooding, the relation between flooding and the spread of infection has not yet been clarified. The improvement of the calculation precision of inundation, and its relation to epidemiologically surveyed infectious disease, are therefore investigated in a case study in Dhaka city, Bangladesh. Inundation was computed using a 2D numerical flood simulation model. Because of the lack of digital data sets for flood simulation, the "sensitivity to inundation" of hydraulic factors such as the drainage channel, the dike, and the building occupied ratio was examined. Each element was incorporated progressively into the flood simulation model, and the results were compared against the inundation classification from an existing study (Mollah et al., 2007). The results show that the "dike" and "drainage channel" factors have a remarkable influence on water level near each facility, and that inundation level and duration affect wide areas when the "building occupied ratio" is also considered. A correlation between maximum inundation depth and health risk (DALY, mortality, morbidity) was found, but the inundation model has not yet been validated for this case; it needs to be validated against observed inundation depths. Drainage facilities such as the sewer network and the pumping system will also be considered in further research to improve the precision of the inundation model.

  9. LISA on Table: an optical simulator for LISA

    NASA Astrophysics Data System (ADS)

    Halloin, H.; Jeannin, O.; Argence, B.; Bourrier, V.; de Vismes, E.; Prat, P.

    2017-11-01

    LISA, the first space project for detecting gravitational waves, faces two main technical challenges: the free-falling masses and an outstanding precision on phase-shift measurements (a few pm over 5 Mkm in the LISA band). The technology of the free-falling masses, i.e. their isolation from forces other than gravity and the capability of the spacecraft to precisely follow the test masses, will soon be tested with the technological LISA Pathfinder mission. The required phase-measurement performance will be achieved by at least two stabilization stages: a pre-stabilization of the laser frequency at a level of 10^-13 (relative frequency stability) will be further improved by numerical algorithms, such as Time Delay Interferometry, which have been theoretically and numerically demonstrated to reach the required performance level (10^-21). Nevertheless, these algorithms, though already tested with numerical models of LISA, require experimental validation, including 'realistic' hardware elements. Such an experiment would make it possible to evaluate the expected noise level and the possible interactions between subsystems. To this end, the APC is currently developing an optical benchtop experiment, called LISA On Table (LOT), which is representative of the three LISA spacecraft. A first module of the LOT experiment has been mounted and is being characterized. After completion, this facility may be used by the LISA community to test hardware (photodiodes, phasemeters) or software (reconstruction algorithms) components.
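The idea behind Time Delay Interferometry can be sketched with a toy model (integer sample delays and a single laser noise source; the actual LISA combinations are more involved): properly delayed combinations of the single-arm signals cancel the common laser noise exactly, even for unequal arms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
p = rng.standard_normal(n)     # laser frequency noise samples
d1, d2 = 33, 57                # round-trip delays (in samples) of two unequal arms

def delay(x, k):
    """Delay a sampled series by k samples (zero-padded at the start)."""
    out = np.zeros_like(x)
    out[k:] = x[:len(x) - k]
    return out

# single-arm Michelson-like signals: each is dominated by laser noise p
y1 = delay(p, d1) - p
y2 = delay(p, d2) - p

# TDI-like combination: the laser noise cancels identically
X = (y1 - delay(y1, d2)) - (y2 - delay(y2, d1))
```

After the start-up transient of length d1 + d2 samples, X is zero to machine precision even though each raw signal is noise-dominated; a gravitational-wave signal entering the arms differently would survive the combination.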

  10. Development of 3-axis precise positioning seismic physical modeling system in the simulation of marine seismic exploration

    NASA Astrophysics Data System (ADS)

    Kim, D.; Shin, S.; Ha, J.; Lee, D.; Lim, Y.; Chung, W.

    2017-12-01

    Seismic physical modeling is a laboratory-scale experiment that deals with the actual physical phenomena that may occur in the field. In seismic physical modeling, field conditions are downscaled, so even a small error in the laboratory may correspond to a large error in the actual field. Accordingly, the positions of the source and the receiver must be precisely controlled in scale modeling. In this study, we have developed a seismic physical modeling system capable of precise 3-axis position control. For automatic and precise position control of an ultrasonic transducer (source and receiver) in the directions of the three axes (x, y, and z), a motor was mounted on each axis. The motors provide positional precision of 2'' for the x and y axes and 0.05 mm for the z axis. Because the system can automatically and precisely control positions along all three axes, simulations can be carried out using the latest exploration techniques, such as OBS and broadband seismic. For the signal generation section, a waveform generator that can produce a maximum of two sources was used, and for the data acquisition section, which receives and stores reflected signals, an A/D converter that can receive a maximum of four signals was used. As multiple sources and receivers can be used at the same time, the system supports diverse exploration methods, such as single-channel, multichannel, and 3-D exploration. A computer control program based on LabVIEW was created to control the position of the transducer, set the data acquisition parameters, and check the exploration data and progress in real time. A marine environment was simulated using a water tank 1 m wide, 1 m long, and 0.9 m high. 
To evaluate the performance and applicability of the seismic physical modeling system developed in this study, single-channel and multichannel explorations were carried out in the simulated marine environment, and the accuracy of the modeling system was verified by comparing the acquired exploration data with numerical modeling data.

  11. Development of a novel three-dimensional deformable mirror with removable influence functions for high precision wavefront correction in adaptive optics system

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Zhou, Chenlu; Gong, Mali; Ma, Xingkun; Bian, Qi

    2016-07-01

    Deformable mirrors (DMs) are widely used wavefront correctors in adaptive optics systems, especially in astronomy, imaging, and laser optics. A new DM structure, the 3D DM, is proposed; it has removable actuators and can correct different aberrations with different actuator arrangements. A 3D DM consists of several reflection mirrors, each with a single actuator and independent of the others. Two actuator arrangement algorithms are compared: a random disturbance algorithm (RDA) and a global arrangement algorithm (GAA). The correction effects of the two algorithms are analyzed and compared through numerical simulation. The simulation results show that a 3D DM with removable actuators can markedly improve the correction effect.

  12. Qualitative simulation for process modeling and control

    NASA Technical Reports Server (NTRS)

    Dalle Molle, D. T.; Edgar, T. F.

    1989-01-01

    A qualitative model is developed for a first-order system with a proportional-integral controller, without precise knowledge of the process or controller parameters. Simulation of the qualitative model yields all of the solutions to the system equations. In developing the qualitative model, a necessary condition for the occurrence of oscillatory behavior is identified. Initializations that cannot exhibit oscillatory behavior produce a finite set of behaviors. When the phase-space trajectory of the oscillatory solutions is properly constrained, these initializations produce an infinite but comprehensible set of asymptotically stable behaviors. While the predictions include all possible behaviors of the real system, a class of spurious behaviors has been identified. When limited numerical information is included in the model, the number of predictions is significantly reduced.

  13. Cymatics for the cloaking of flexural vibrations in a structured plate

    PubMed Central

    Misseroni, D.; Colquitt, D. J.; Movchan, A. B.; Movchan, N. V.; Jones, I. S.

    2016-01-01

    Based on rigorous theoretical findings, we present a proof-of-concept design for a structured square cloak enclosing a void in an elastic lattice. We implement high-precision fabrication and experimental testing of an elastic invisibility cloak for flexural waves in a mechanical lattice. This is accompanied by verifications and numerical modelling performed through finite element simulations. The primary advantage of our square lattice cloak, over other designs, is the straightforward implementation and the ease of construction. The elastic lattice cloak, implemented experimentally, shows high efficiency. PMID:27068339

  14. Spontaneous oscillations in microfluidic networks

    NASA Astrophysics Data System (ADS)

    Case, Daniel; Angilella, Jean-Regis; Motter, Adilson

    2017-11-01

    Precisely controlling flows within microfluidic systems is often difficult, which typically leaves systems heavily reliant on numerous external pumps and computers. Here, I present a simple microfluidic network that exhibits flow rate switching, bistability, and spontaneous oscillations controlled by a single pressure. That is, solely by changing the driving pressure, it is possible to switch between an oscillating and a steady flow state. Such functionality does not rely on external hardware and may even serve as an on-chip memory or timing mechanism. I use an analytic model and rigorous fluid dynamics simulations to demonstrate these results.

  15. Feasibility of graphene CRLH metamaterial waveguides and leaky wave antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, Derrick A.; Itoh, Tatsuo; Hon, Philip W. C.

    2016-07-07

    The feasibility of composite right/left-handed (CRLH) metamaterial waveguides based upon graphene plasmons is demonstrated via numerical simulation. Designs that operate in the terahertz frequency range are presented, along with their various dimensions. Dispersion relations, radiative and free-carrier losses, and free-carrier-based tunability are characterized. Finally, the radiative characteristics are evaluated, along with the feasibility of using the structure as a leaky-wave antenna. While CRLH waveguides are feasible in the terahertz range, their ultimate utility will require precise nanofabrication and excellent-quality graphene to mitigate free-carrier losses.

  16. Explosion localization via infrasound.

    PubMed

    Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M

    2009-11-01

    Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, with the source location obtained from the intersection of the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
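The standard back-azimuth approach reduces, in the simplest two-array case, to intersecting two bearing lines. A minimal sketch (flat-Earth geometry with hypothetical coordinates; real processing handles many arrays and bearing uncertainty):

```python
import math

def locate_from_bearings(p1, b1, p2, b2):
    """Intersect two bearing lines (azimuths in degrees, clockwise from north,
    i.e. the +y axis) emanating from array centers p1 and p2.
    Returns the (x, y) intersection point."""
    d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    # solve p1 + t1*d1 = p2 + t2*d2 by Cramer's rule
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# a source due north of array 1 is seen at 0 deg from array 1
# and at 315 deg from an array 1 km to the east
src = locate_from_bearings((0.0, 0.0), 0.0, (1000.0, 0.0), 315.0)
```

The meta-array technique of the abstract instead uses all sensors jointly (e.g. via cross-correlation time delays), which is what yields the improved near-field precision.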

  17. Precisely cyclic sand: self-organization of periodically sheared frictional grains.

    PubMed

    Royer, John R; Chaikin, Paul M

    2015-01-06

    The disordered static structure and chaotic dynamics of frictional granular matter have occupied scientists for centuries, yet there are few organizational principles or guiding rules for this highly hysteretic, dissipative material. We show that cyclic shear of a granular material leads to dynamic self-organization into several phases with different spatial and temporal order. Using numerical simulations, we present a phase diagram in strain-friction space that shows chaotic dispersion, crystal formation, vortex patterns, and most unusually a disordered phase in which each particle precisely retraces its unique path. However, the system is not reversible. Rather, the trajectory of each particle, and the entire frictional, many-degrees-of-freedom system, organizes itself into a limit cycle absorbing state. Of particular note is the fact that the cyclic states are spatially disordered, whereas the ordered states are chaotic.

  18. Precisely cyclic sand: Self-organization of periodically sheared frictional grains

    PubMed Central

    Royer, John R.; Chaikin, Paul M.

    2015-01-01

    The disordered static structure and chaotic dynamics of frictional granular matter have occupied scientists for centuries, yet there are few organizational principles or guiding rules for this highly hysteretic, dissipative material. We show that cyclic shear of a granular material leads to dynamic self-organization into several phases with different spatial and temporal order. Using numerical simulations, we present a phase diagram in strain–friction space that shows chaotic dispersion, crystal formation, vortex patterns, and most unusually a disordered phase in which each particle precisely retraces its unique path. However, the system is not reversible. Rather, the trajectory of each particle, and the entire frictional, many-degrees-of-freedom system, organizes itself into a limit cycle absorbing state. Of particular note is the fact that the cyclic states are spatially disordered, whereas the ordered states are chaotic. PMID:25538298

  19. Leptonic-decay-constant ratio f(K+)/f(π+) from lattice QCD with physical light quarks.

    PubMed

    Bazavov, A; Bernard, C; DeTar, C; Foley, J; Freeman, W; Gottlieb, Steven; Heller, U M; Hetrick, J E; Kim, J; Laiho, J; Levkova, L; Lightman, M; Osborn, J; Qiu, S; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R

    2013-04-26

    A calculation of the ratio of leptonic decay constants f(K+)/f(π+) makes possible a precise determination of the ratio of Cabibbo-Kobayashi-Maskawa (CKM) matrix elements |V(us)|/|V(ud)| in the standard model, and places a stringent constraint on the scale of new physics that would lead to deviations from unitarity in the first row of the CKM matrix. We compute f(K+)/f(π+) numerically in unquenched lattice QCD using recently generated gauge-field ensembles that include four flavors of dynamical quarks: up, down, strange, and charm. We analyze data at four lattice spacings a ≈ 0.06, 0.09, 0.12, and 0.15 fm with simulated pion masses down to the physical value 135 MeV. We obtain f(K+)/f(π+) = 1.1947(26)(37), where the errors are statistical and total systematic, respectively. This is our first physics result from our N(f) = 2+1+1 ensembles, and the first calculation of f(K+)/f(π+) from lattice-QCD simulations at the physical point. Our result is the most precise lattice-QCD determination of f(K+)/f(π+), with an error comparable to the current world average. When combined with experimental measurements of the leptonic branching fractions, it leads to a precise determination of |V(us)|/|V(ud)| = 0.2309(9)(4), where the errors are theoretical and experimental, respectively.
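The two quoted uncertainties on the decay-constant ratio combine in quadrature; a quick arithmetic check of the numbers in the abstract:

```python
import math

# f(K+)/f(pi+) = 1.1947 with statistical and total systematic errors
ratio, stat, syst = 1.1947, 0.0026, 0.0037

# combine independent errors in quadrature
total = math.sqrt(stat ** 2 + syst ** 2)   # ~0.0045
rel = total / ratio                        # ~0.38% relative precision
print(f"{ratio} +/- {total:.4f}  ({100 * rel:.2f}%)")
```

A combined uncertainty of about 0.0045 on 1.1947 corresponds to a relative precision of roughly 0.4%, consistent with the claim of an error comparable to the world average.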

  20. Fuel-Optimal Altitude Maintenance of Low-Earth-Orbit Spacecrafts by Combined Direct/Indirect Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Ha; Park, Chandeok; Park, Sang-Young

    2015-12-01

    This work presents fuel-optimal altitude maintenance of Low-Earth-Orbit (LEO) spacecraft experiencing non-negligible air drag and J2 perturbation. A pseudospectral (direct) method is first applied to roughly estimate an optimal fuel consumption strategy, which is then employed as an initial guess for a precise solution. Based on the physical specifications of KOrea Multi-Purpose SATellite-2 (KOMPSAT-2), a Korean artificial satellite, numerical simulations show that the satellite ascends with full thrust in the early stage of the maneuver period and then descends with null thrust. While the thrust profile is presumably bang-off, it is difficult to determine the switching time precisely using a pseudospectral method alone. This is expected, since the optimal switching epoch does not, in general, coincide with one of the collocation points prescribed by the pseudospectral method. To precisely determine the switching time and the associated optimal thrust history, a shooting (indirect) method is then employed, with the initial guess obtained from the pseudospectral method. This hybrid process determines the optimal fuel consumption and thrust profiles for LEO spacecraft efficiently and precisely.

  1. Numerical and experimental approaches to study soil transport and clogging in granular filters

    NASA Astrophysics Data System (ADS)

    Kanarska, Y.; Smith, J. J.; Ezzedine, S. M.; Lomov, I.; Glascoe, L. G.

    2012-12-01

    Failure of a dam by erosion ranks among the most serious accidents in civil engineering. The best way to prevent internal erosion is to use adequate granular filters in the transition areas where large hydraulic gradients can appear. In case of cracking and erosion, if the filter is capable of retaining the eroded particles, the crack will seal and the dam's safety will be ensured. Numerical modeling has proved to be a cost-effective tool for improving our understanding of physical processes. Traditionally, the treatment of flow and particle transport in porous media has focused on modeling the media as a continuum. Practical models typically address flow and transport based on Darcy's law, as a function of a pressure gradient and a medium-dependent permeability parameter, with additional macroscopic constitutive relations describing porosity and permeability changes during the migration of a suspension through the porous medium. However, most such relations rely on empirical correlations, which often need to be recalibrated for each application. Grain-scale modeling can be used to gain insight into the scale dependence of continuum macroscale parameters. A finite element solution of the Navier-Stokes equations for fluid flow, together with a Lagrange multiplier technique for solid particles, was applied to the simulation of soil filtration in the filter layers of a gravity dam. The numerical approach was validated through comparison of numerical simulations with experimental results on base soil particle clogging in the filter layers performed at ERDC. The numerical simulation correctly predicted the flow and pressure decay due to particle clogging, and the computed base soil particle distribution was almost identical to that measured in the laboratory experiment. It is believed that the agreement between simulations and experimental data demonstrates the applicability of the proposed approach for predicting soil transport and clogging in embankment dams. 
To gain a more precise understanding of soil transport in granular filters, we investigated the sensitivity of particle clogging mechanisms to various factors such as the particle size ratio, the amplitude of the hydraulic gradient, the particle concentration, and contact properties. By averaging the results derived from the grain-scale simulations, we investigated how those factors affect the semi-empirical multiphase model parameters in the large-scale simulation tool. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The Department of Homeland Security Science and Technology Directorate provided funding for this research.

  2. Multi-hump potentials for efficient wave absorption in the numerical solution of the time-dependent Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Silaev, A. A.; Romanov, A. A.; Vvedenskii, N. V.

    2018-03-01

    In the numerical solution of the time-dependent Schrödinger equation by grid methods, an important problem is the reflection and wrap-around of the wave packets at the grid boundaries. Non-optimal absorption of the wave function leads to possible large artifacts in the results of numerical simulations. We propose a new method for the construction of the complex absorbing potentials for wave suppression at the grid boundaries. The method is based on the use of the multi-hump imaginary potential which contains a sequence of smooth and symmetric humps whose widths and amplitudes are optimized for wave absorption in different spectral intervals. We show that this can ensure a high efficiency of absorption in a wide range of de Broglie wavelengths, which includes wavelengths comparable to the width of the absorbing layer. Therefore, this method can be used for high-precision simulations of various phenomena where strong spreading of the wave function takes place, including the phenomena accompanying the interaction of strong fields with atoms and molecules. The efficiency of the proposed method is demonstrated in the calculation of the spectrum of high-order harmonics generated during the interaction of hydrogen atoms with an intense infrared laser pulse.
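A minimal sketch of the idea (a 1-D split-step Fourier solver with a two-hump imaginary absorber at each grid boundary; the grid size, hump widths, and amplitudes are illustrative, not the paper's optimized values):

```python
import numpy as np

# Split-step Fourier solver for i dpsi/dt = -(1/2) psi_xx - i V_abs(x) psi
# (hbar = m = 1), with a two-hump imaginary absorbing layer at each edge.
n, L, dt, steps = 1024, 200.0, 0.05, 1600
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

def hump(center, width, amp):
    return amp * np.exp(-((x - center) / width) ** 2)

# multi-hump absorber: humps of different widths target different wavelengths
v_abs = (hump(-95, 3, 0.5) + hump(-88, 8, 0.2) +
         hump(95, 3, 0.5) + hump(88, 8, 0.2))

psi = np.exp(-x ** 2 / 8) * np.exp(1j * 2.0 * x)   # packet moving right
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2))      # unit discrete norm

for _ in range(steps):
    psi *= np.exp(-v_abs * dt / 2)                 # absorb (half step)
    psi = np.fft.ifft(np.exp(-1j * k ** 2 * dt / 2) * np.fft.fft(psi))
    psi *= np.exp(-v_abs * dt / 2)                 # absorb (half step)

norm = float(np.sum(np.abs(psi) ** 2))             # residual norm after absorption
```

By the end of the run the packet has reached the absorbing layers and its norm has dropped by orders of magnitude, instead of wrapping around the periodic grid and contaminating the interior.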

  3. Development of the functional simulator for the Galileo attitude and articulation control system

    NASA Technical Reports Server (NTRS)

    Namiri, M. K.

    1983-01-01

    A simulation program for verifying and checking the performance of the Galileo Spacecraft's Attitude and Articulation Control Subsystem's (AACS) flight software is discussed. The program, which is called Functional Simulator (FUNSIM), provides a simple method of interfacing user-supplied mathematical models coded in FORTRAN which describes spacecraft dynamics, sensors, and actuators; this is done with the AACS flight software, coded in HAL/S (High-level Advanced Language/Shuttle). It is thus able to simulate the AACS flight software accurately to the HAL/S statement level in the environment of a mainframe computer system. FUNSIM also has a command and data subsystem (CDS) simulator. It is noted that the input/output data and timing are simulated with the same precision as the flight microprocessor. FUNSIM uses a variable stepsize numerical integration algorithm complete with individual error bound control on the state variable to solve the equations of motion. The program has been designed to provide both line printer and matrix dot plotting of the variables requested in the run section and to provide error diagnostics.

  4. Precision of the anchor influences the amount of adjustment.

    PubMed

    Janiszewski, Chris; Uy, Dan

    2008-02-01

    The anchoring-and-adjustment heuristic has been used to account for a wide variety of numerical judgments. Five studies show that adjustment away from a numerical anchor is smaller if the anchor is precise than if it is rounded. Evidence suggests that precise anchors, compared with rounded anchors, are represented on a subjective scale with a finer resolution. If adjustment consists of a series of iterative mental movements along a subjective scale, then an adjustment from a precise anchor should result in a smaller overall correction than an adjustment from a rounded anchor.

  5. Gamma-ray spectroscopy measurements and simulations for uranium mining

    NASA Astrophysics Data System (ADS)

    Marchais, T.; Pérot, B.; Carasco, C.; Allinei, P.-G.; Chaussonnet, P.; Ma, J.-L.; Toubon, H.

    2018-01-01

    AREVA Mines and the Nuclear Measurement Laboratory of CEA Cadarache are collaborating to improve the sensitivity and precision of uranium concentration evaluation by means of gamma measurements. This paper reports gamma-ray spectra, recorded with a high-purity coaxial germanium detector, on standard cement blocks with increasing uranium content, and the corresponding MCNP simulations. The detailed MCNP model of the detector and experimental setup has been validated by calculation vs. experiment comparisons. An optimization of the detector MCNP model is presented in this paper, as well as a comparison of different nuclear data libraries to explain missing or exceeding peaks in the simulation. Energy shifts observed between the fluorescence X-rays produced by MCNP and atomic data are also investigated. The qualified numerical model will be used in further studies to develop new gamma spectroscopy approaches aiming at reducing acquisition times, especially for ore samples with low uranium content.

  6. Performance Simulation & Engineering Analysis/Design and Verification of a Shock Mitigation System for a Rover Landing on Mars

    NASA Astrophysics Data System (ADS)

    Ullio, Roberto; Gily, Alessandro; Jones, Howard; Geelen, Kelly; Larranaga, Jonan

    2014-06-01

    In the frame of the ESA Mars Robotic Exploration Preparation (MREP) programme and within its Technology Development Plan [1], the activity "E913-007MM Shock Mitigation Operating Only at Touchdown by use of minimalist/dispensable Hardware" (SMOOTH) was conducted under the framework of Rover technologies and to support the ESA MREP Mars Precision Lander (MPL) Phase A system study, with the objectives to:
    • study the behaviour of the Sample Fetching Rover (SFR) landing on Mars on its wheels
    • investigate and implement into the design of the SFR Locomotion Sub-System (LSS) an impact energy absorption system (SMOOTH)
    • verify by simulation the performances of SMOOTH
    The main purpose of this paper is to present the obtained numerical simulation results and to explain how these results have been utilized, first to iterate on the design of the SMOOTH concept and then to validate its performances.

  7. Improving mixing efficiency of a polymer micromixer by use of a plastic shim divider

    NASA Astrophysics Data System (ADS)

    Li, Lei; Lee, L. James; Castro, Jose M.; Yi, Allen Y.

    2010-03-01

    In this paper, a critical modification to a polymer based affordable split-and-recombination static micromixer is described. To evaluate the improvement, both the original and the modified design were carefully investigated using an experimental setup and numerical modeling approach. The structure of the micromixer was designed to take advantage of the process capabilities of both ultraprecision micromachining and microinjection molding process. Specifically, the original and the modified design were numerically simulated using commercial finite element method software ANSYS CFX to assist the re-designing of the micromixers. The simulation results have shown that both designs are capable of performing mixing while the modified design has a much improved performance. Mixing experiments with two different fluids were carried out using the original and the modified mixers again showed a significantly improved mixing uniformity by the latter. The measured mixing coefficient for the original design was 0.11, and for the improved design it was 0.065. The developed manufacturing process based on ultraprecision machining and microinjection molding processes for device fabrication has the advantage of high-dimensional precision, low cost and manufacturing flexibility.
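The mixing coefficient quoted above is, in one common convention, an intensity-of-segregation measure: the standard deviation of the normalized concentration field, where 0 means perfectly mixed and 0.5 means fully segregated for a 50/50 binary mixture. The paper's exact definition may differ; the sketch below uses this convention:

```python
import numpy as np

def mixing_coefficient(concentration):
    """Intensity-of-segregation style mixing index: standard deviation of the
    normalized concentration field (0 = perfectly mixed, 0.5 = fully
    segregated for a 50/50 binary mixture)."""
    c = np.asarray(concentration, dtype=float)
    return float(np.std(c))

segregated = np.array([1.0] * 50 + [0.0] * 50)   # two unmixed streams
mixed = np.full(100, 0.5)                        # perfectly mixed
print(mixing_coefficient(segregated))  # 0.5
print(mixing_coefficient(mixed))       # 0.0
```

On this scale, the reported drop from 0.11 to 0.065 corresponds to the modified mixer halving the residual concentration non-uniformity.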

  8. Navigation strategy and filter design for solar electric missions

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Hagar, H., Jr.

    1972-01-01

    Methods that have been proposed to improve navigation accuracy for low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMUs), which directly measure the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures is compared with that of a combined IMU-Standard Orbit Determination approach. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm yields a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm yields better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro-platform alignment of 0.01 deg and accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
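A first-order Gauss-Markov process of the kind used here to model the unknown accelerations is exponentially correlated noise. A minimal sampler (the time constant and noise level are illustrative, not the mission values):

```python
import numpy as np

def gauss_markov(tau, sigma, dt, n, seed=0):
    """Sample a first-order Gauss-Markov process: exponentially correlated
    noise with correlation time tau and stationary standard deviation sigma."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)                  # one-step correlation factor
    q = sigma * np.sqrt(1.0 - phi ** 2)      # keeps stationary variance sigma^2
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for i in range(1, n):
        x[i] = phi * x[i - 1] + q * rng.standard_normal()
    return x

# illustrative unmodeled acceleration history (m/s^2)
a = gauss_markov(tau=100.0, sigma=1e-7, dt=1.0, n=5000)
```

In a filter, phi and q enter the state transition and process-noise matrices for the acceleration states, which is what lets the estimator track slowly varying unmodeled thrust errors.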

  9. Time-reversal transcranial ultrasound beam focusing using a k-space method

    PubMed Central

    Jing, Yun; Meral, F. Can; Clement, Greg. T.

    2012-01-01

    This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The data received outside the skull contain the correction information; they can be phase-conjugated (time-reversed) and then physically generated to achieve tight focusing inside the skull, under the assumption of quasi-plane transmission, in which shear waves are absent or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave-propagation simulation, the k-space method is shown to be significantly more accurate even at relatively coarse spatial resolution, leading to dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as 2.56 grid points per wavelength, allowing treatment-planning computation on the order of minutes. PMID:22290477
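The phase-conjugation (time-reversal) step can be illustrated in one dimension. This is a toy sketch, not the paper's k-space propagator: the skull is replaced by an invented all-pass phase screen, and propagation is modeled as multiplication by that screen in the frequency domain.

```python
import numpy as np

# Toy 1-D illustration: the "skull" is an invented unit-modulus phase
# screen H; propagation multiplies the spectrum by H.
n = 256
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - n // 2) / 4.0) ** 2)   # synthetic point source

rng = np.random.default_rng(1)
H = np.ones(n, dtype=complex)
H[1:n // 2] = np.exp(1j * rng.uniform(-np.pi, np.pi, n // 2 - 1))
H[n // 2 + 1:] = np.conj(H[1:n // 2][::-1])        # Hermitian symmetry

received = np.fft.ifft(np.fft.fft(pulse) * H).real  # scattered by the screen
# Phase conjugation (time reversal) plus a second pass through the same
# medium cancels the distortion, since conj(H) * H = |H|^2 = 1:
refocused = np.fft.ifft(np.conj(np.fft.fft(received)) * H).real

print(int(np.argmax(refocused)))  # 128: the pulse refocuses at the source
```

The received trace looks like noise, yet re-emitting its conjugate refocuses all the energy at the original source location.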

  10. Shock interaction with deformable particles using a constrained interface reinitialization scheme

    NASA Astrophysics Data System (ADS)

    Sridharan, P.; Jackson, T. L.; Zhang, J.; Balachandar, S.; Thakur, S.

    2016-02-01

    In this paper, we present axisymmetric numerical simulations of shock propagation in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. We use the Mie-Gruneisen equation of state to describe both the medium and the particle. The numerical method is a finite-volume based solver on a Cartesian grid, that allows for multi-material interfaces and shocks, and uses a novel constrained reinitialization scheme to precisely preserve particle mass and volume. We compute the unsteady inviscid drag coefficient as a function of time, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. We also compute the mass-averaged particle pressure and show that the observed oscillations inside the particle are on the particle-acoustic time scale. Finally, we present simplified point-particle models that can be used for macroscale simulations. In the Appendix, we extend the isothermal or isentropic assumption concerning the point-force models to non-ideal equations of state, thus justifying their use for the current problem.

  11. On the Minimal Accuracy Required for Simulating Self-gravitating Systems by Means of Direct N-body Methods

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-01

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (from round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute-force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as energy is conserved during integration to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
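The energy-conservation criterion can be checked in miniature with a symplectic two-body integration. The leapfrog scheme and step size below are illustrative, not the paper's brute-force arbitrary-precision code:

```python
import numpy as np

def two_body_leapfrog(dt=1e-3, steps=5000):
    """Kick-drift-kick leapfrog for a two-body problem (G = M = 1),
    returning the relative energy drift, i.e. the kind of conservation
    check used to judge whether a solution is trustworthy."""
    r = np.array([1.0, 0.0])
    v = np.array([0.0, 1.0])          # circular orbit initial conditions

    def acc(r):
        return -r / np.linalg.norm(r) ** 3

    def energy(r, v):
        return 0.5 * v @ v - 1.0 / np.linalg.norm(r)

    e0 = energy(r, v)
    for _ in range(steps):
        v = v + 0.5 * dt * acc(r)     # kick
        r = r + dt * v                # drift
        v = v + 0.5 * dt * acc(r)     # kick
    return abs((energy(r, v) - e0) / e0)

drift = two_body_leapfrog()
print(drift < 0.1)  # True: well inside the 1/10 tolerance quoted above
```

Being symplectic, leapfrog keeps the energy error bounded rather than secularly growing, which is why even a modest step size stays far below the 1/10 threshold.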

  12. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    DOE PAGES

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; ...

    2018-02-26

    In high-elevation, boreal, and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully coupled, multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. In this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs, resulting from differences in the governing equations, discretization issues, or the freezing curve used by some codes.

  13. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor

    In high-elevation, boreal, and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully coupled, multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. In this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs, resulting from differences in the governing equations, discretization issues, or the freezing curve used by some codes.

  14. Experimental and Numerical Simulations of Phase Transformations Occurring During Continuous Annealing of DP Steel Strips

    NASA Astrophysics Data System (ADS)

    Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej

    2016-04-01

    Due to their exceptional strength combined with good workability, Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing these steels is a complex process which requires precise control of technological parameters during thermo-mechanical treatment. The design of such processes can be significantly improved by numerical models of phase transformations. The objective of this paper was to evaluate the predictive capabilities of such models with respect to their applicability to simulating thermal cycles for AHSS. Two models were considered: the former an upgrade of the JMAK equation, the latter an upgrade of the Leblond model. The models can be applied to any AHSS, though the examples quoted in the paper refer to Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond the limitations of continuous annealing lines; the objective was to validate the models' behavior under more complex cooling conditions. The second set included experimental simulations of the thermal cycle characteristic of continuous annealing lines, and the capability of the models to properly describe phase transformations in this process was evaluated. The third set included data from an industrial continuous annealing line. Validation and verification confirmed the models' good predictive capabilities. Since it does not require application of the additivity rule, the upgraded Leblond model was selected as the better one for simulating industrial processes in AHSS production.

  15. Performance evaluation of GPU parallelization, space-time adaptive algorithms, and their combination for simulating cardiac electrophysiology.

    PubMed

    Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo

    2018-02-01

    The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, two different techniques have traditionally been exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU, and space adaptivity; multicore, GPU, space adaptivity, and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh (i.e., complex geometry), and sinus-rhythm and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33x, whereas the speedups attained by the space-time adaptive algorithms were approximately 48x. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165x and 498x. The tested methods were able to reduce the execution time of a simulation by more than 498x for a complex cellular model in a slab geometry and by 165x in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Large eddy simulation of shock train in a convergent-divergent nozzle

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyed Mahmood; Roohi, Ehsan

    2014-12-01

    This paper discusses the suitability of Large Eddy Simulation (LES) turbulence modeling for the accurate simulation of shock-train phenomena in a convergent-divergent nozzle. To this aim, we selected an experimentally tested geometry and performed an LES simulation for the same geometry. The structure and pressure recovery of the shock train in the nozzle captured by the LES model are compared with experimental data, analytical expressions, and numerical solutions obtained using various alternative turbulence models, including k-ɛ RNG, k-ω SST, and the Reynolds stress model (RSM). Compared with the experimental data, the LES solution not only predicts the location of the first shock precisely, but is also quite accurate before and after the shock train. After validating the LES solution, we investigate the effects of the inlet total pressure on the shock-train starting point and length. The effects of changes in the back pressure, nozzle inlet angle (NIA), and wall temperature on the behavior of the shock train are investigated in detail.

  17. The instanton method and its numerical implementation in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Grafke, Tobias; Grauer, Rainer; Schäfer, Tobias

    2015-08-01

    A precise characterization of structures occurring in turbulent fluid flows at high Reynolds numbers is one of the last open problems of classical physics. In this review we discuss recent developments related to the application of instanton methods to turbulence. Instantons are saddle-point configurations of the underlying path integrals. They are equivalent to minimizers of the related Freidlin-Wentzell action and are known to be able to characterize rare events in such systems. While there is an impressive body of work concerning their analytical description, this review focuses on the question of how to compute these minimizers numerically. In a short introduction we present the relevant mathematical and physical background before discussing the stochastic Burgers equation in detail. We present algorithms to compute instantons numerically via an efficient solution of the corresponding Euler-Lagrange equations. A second focus is the discussion of a recently developed numerical filtering technique that allows instantons to be extracted from direct numerical simulations. We then present modifications of the algorithms that make them efficient when applied to two- or three-dimensional (2D or 3D) fluid dynamical problems. We illustrate these ideas using the 2D Burgers equation and the 3D Navier-Stokes equations.

  18. GOCE gravity field simulation based on actual mission scenario

    NASA Astrophysics Data System (ADS)

    Pail, R.; Goiginger, H.; Mayrhofer, R.; Höck, E.; Schuh, W.-D.; Brockmann, J. M.; Krasbutter, I.; Fecher, T.; Gruber, T.

    2009-04-01

    In the framework of the ESA-funded project "GOCE High-level Processing Facility", an operational hardware and software system for the scientific processing (Level 1B to Level 2) of GOCE data has been set up by the European GOCE Gravity Consortium EGG-C. One key component of this software system is the computation of a spherical harmonic Earth gravity field model and the corresponding full variance-covariance matrix from the precise GOCE orbit and calibrated and corrected satellite gravity gradiometry (SGG) data. In the framework of the time-wise approach, a combination of several processing strategies has been set up for optimum exploitation of the information content of the GOCE data: the Quick-Look Gravity Field Analysis is applied to derive a fast diagnosis of the GOCE system performance and to monitor the quality of the input data, while in the Core Solver processing a rigorous high-precision solution of the very large normal equation systems is derived by applying parallel processing techniques on a PC cluster. Before the availability of real GOCE data, the expected GOCE gravity field performance is evaluated by means of a realistic numerical case study based on the actual GOCE orbit and mission scenario and on simulation data stemming from the most recent ESA end-to-end simulation. Results from this simulation as well as recently developed features of the software system are presented. Additionally, some aspects of data combination with complementary data sources are addressed.

  19. Slow transition of the Osborne Reynolds pipe flow: A direct numerical simulation study.

    NASA Astrophysics Data System (ADS)

    Wu, Xiaohua; Moin, Parviz; Adrian, Ronald J.; Baltzer, Jon R.

    2015-11-01

    Osborne Reynolds' pipe transition experiment marked the onset of fundamental turbulence research, yet the precise dynamics carrying the laminar state to fully developed turbulence has remained elusive. Our spatially developing direct numerical simulation of this problem reveals interesting connections with theory and experiments. In particular, during transition the energy norms of localized, weak, finite-amplitude inlet perturbations grow exponentially, rather than algebraically, with axial distance, in agreement with the edge-state-based temporal results of Schneider et al. (PRL, 034502, 2007). When the inlet disturbance is in the core region, helical vortex filaments evolve into large-scale reverse hairpin vortices. The interaction of these reverse hairpins among themselves or with the near-wall flow produces small-scale hairpin packets. When the inlet disturbance is near the wall, an optimally positioned quasi-spanwise structure is stretched into a Lambda vortex, which grows into a turbulent spot of concentrated small-scale hairpin vortices. Waves of hairpin-like structures were observed by Mullin (Annu. Rev. Fluid Mech., Vol. 43, 2011) in experiments with very weak blowing and suction. This vortex dynamics is broadly analogous to that in boundary-layer bypass transition and in the secondary-instability and breakdown stage of natural transition. Further details of our simulation are reported in Wu et al. (PNAS, 1509451112, 2015).

  20. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  1. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  2. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  3. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  4. 15 CFR 711.5 - Numerical precision of submitted data.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false Numerical precision of submitted data. 711.5 Section 711.5 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS...

  5. A Stochastic Model of Eye Lens Growth

    PubMed Central

    Šikić, Hrvoje; Shi, Yanrong; Lubura, Snježana; Bassnett, Steven

    2015-01-01

    The size and shape of the ocular lens must be controlled with precision if light is to be focused sharply on the retina. The lifelong growth of the lens depends on the production of cells in the anterior epithelium. At the lens equator, epithelial cells differentiate into fiber cells, which are added to the surface of the existing fiber cell mass, increasing its volume and area. We developed a stochastic model relating the rates of cell proliferation and death in various regions of the lens epithelium to deposition of fiber cells and lens growth. Epithelial population dynamics were modeled as a branching process with emigration and immigration between various proliferative zones. Numerical simulations were in agreement with empirical measurements and demonstrated that, operating within the strict confines of lens geometry, a stochastic growth engine can produce the smooth and precise growth necessary for lens function. PMID:25816743
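The branching-process idea, proliferation and death in the epithelium feeding emigration of fiber cells at the equator, can be sketched in a few lines. All rates below are invented for illustration; the paper's model resolves distinct proliferative zones that this toy lumps together.

```python
import numpy as np

def simulate_lens_growth(days=100, seed=0):
    """Toy branching-process sketch of lens growth: epithelial cells divide,
    die, or emigrate at the equator to be deposited as fiber cells.
    All per-cell daily rates are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    epithelium, fibers = 10_000, 0
    p_divide, p_die, p_migrate = 0.02, 0.005, 0.01
    fiber_history = []
    for _ in range(days):
        births = rng.binomial(epithelium, p_divide)
        deaths = rng.binomial(epithelium, p_die)
        emigrants = rng.binomial(epithelium, p_migrate)  # become fiber cells
        epithelium += births - deaths - emigrants
        fibers += emigrants
        fiber_history.append(fibers)
    return epithelium, fibers, fiber_history

epi, fib, hist = simulate_lens_growth()
print(fib > 0 and hist == sorted(hist))  # True: fiber mass grows monotonically
```

Because fiber cells are only ever added, the fiber-mass trajectory is smooth and monotone even though every underlying event is stochastic, which is the qualitative point of the model.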

  6. Technology of focus detection for 193nm projection lithographic tool

    NASA Astrophysics Data System (ADS)

    Di, Chengliang; Yan, Wei; Hu, Song; Xu, Feng; Li, Jinglong

    2012-10-01

    With the shortening printing wavelength and increasing numerical aperture of lithographic tools, the depth of focus (DOF) drops rapidly, reaching a scale of several hundred nanometers, while the repeatable accuracy of focusing and leveling must be one-tenth of the DOF, approximately several tens of nanometers. This article first reviews several focusing technologies and compares their advantages and disadvantages. The accuracy of the dual-grating focusing method is then derived through theoretical calculation. The dual-grating focusing method based on photoelastic modulation is analyzed in two stages, coarse focusing and precise focusing, establishing an image-processing model for the former and a photoelastic-modulation model for the latter. Finally, the focusing algorithm is simulated with MATLAB. In conclusion, the dual-grating focusing method offers high precision, high efficiency, and non-contact measurement of the focal plane, meeting the demands of focusing in 193 nm projection lithography.

  7. Estimation of Dynamic Friction Process of the Akatani Landslide Based on the Waveform Inversion and Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mangeney, A.; Moretti, L.; Matsushi, Y.

    2014-12-01

    Understanding physical parameters such as frictional coefficients, velocity change, and dynamic history is an important issue for assessing and managing the risks posed by deep-seated catastrophic landslides. Previously, landslide motion has been inferred qualitatively from topographic changes caused by the event, and occasionally from eyewitness reports; however, these conventional approaches are unable to evaluate source processes and dynamic parameters. In this study, we use broadband seismic recordings to trace the dynamic process of the deep-seated Akatani landslide that occurred on the Kii Peninsula, Japan, one of the best-recorded large slope failures. Based on previous waveform inversions and on precise topographic surveys performed before and after the event, we applied numerical simulations using the SHALTOP numerical model (Mangeney et al., 2007). This model describes homogeneous continuous granular flows on 3D topography based on a depth-averaged thin-layer approximation. We assume a Coulomb friction law with a constant friction coefficient, i.e., friction independent of the sliding velocity. We varied the friction coefficient in the simulation so that the resulting force acting on the surface agrees with the single force estimated from the seismic waveform inversion. The figure shows the force history of the east-west component after band-pass filtering between 10 and 100 seconds. The force history of the simulation with friction coefficient 0.27 (thin red line) agrees best with the result of the seismic waveform inversion (thick gray line). Although the amplitudes differ slightly, the phases are coherent for the main three pulses, evidence that the point-source approximation works reasonably well for this particular event.
    The friction coefficient during sliding was estimated to be 0.38 based on the seismic waveform inversion performed in a previous study and on the sliding-block model (Yamada et al., 2013), whereas the friction coefficient estimated from the numerical simulation was about 0.27. This discrepancy may be due to the digital elevation model, or to other forces, such as pressure gradients and centrifugal acceleration, included in the model. However, quantitative interpretation of this difference requires further investigation.
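The sliding-block comparison can be sketched in a few lines of Coulomb-friction kinematics. The slope angle and duration below are invented for illustration and are not the Akatani geometry:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def block_velocity(mu, slope_deg, duration, dt=0.01):
    """Velocity history of a rigid block sliding down a uniform slope under
    a constant Coulomb friction coefficient mu; acceleration is
    g*(sin(theta) - mu*cos(theta)) while that quantity is positive.
    Geometry and duration are illustrative, not the Akatani topography."""
    theta = np.radians(slope_deg)
    a = G * (np.sin(theta) - mu * np.cos(theta))  # constant while sliding
    steps = int(duration / dt)
    return np.array([max(a, 0.0) * k * dt for k in range(steps + 1)])

# A lower friction coefficient gives a faster slide, which is the sense of
# the 0.27 (simulation) versus 0.38 (block model) comparison above:
v_027 = block_velocity(mu=0.27, slope_deg=30.0, duration=20.0)
v_038 = block_velocity(mu=0.38, slope_deg=30.0, duration=20.0)
print(v_027[-1] > v_038[-1])  # True
```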

  8. Determination of full piezoelectric complex parameters using gradient-based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.

    2016-02-01

    At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by knowledge of the material properties. In the case of piezoelectric ceramics, determining the full model in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful approach to obtaining piezoceramic properties consists of comparing the experimentally measured impedance curve with the results of a numerical model built using the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full set of piezoelectric complex parameters in the FEM model. Once implemented, the method requires only the experimental data (impedance modulus and phase acquired with an impedometer), the material density, the geometry, and initial values for the properties. The method combines a FEM routine, implemented using an 8-noded axisymmetric element, with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is to minimize the quadratic difference between the experimental and numerical electrical conductance and resistance curves (so as to account for both resonance and antiresonance frequencies). To assure convergence, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or infeasible solution. Two experimental examples, using PZ27 and APC850 samples, are presented to test the precision of the method and to check its dependency on the frequency range used, respectively.
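The objective described, a quadratic misfit of both conductance and resistance so that resonance and antiresonance are weighted, might be sketched as follows; the sample impedance values are invented:

```python
import numpy as np

def misfit(z_num, z_exp):
    """Quadratic misfit combining electrical conductance G = Re(1/Z), whose
    peaks mark resonances, and resistance R = Re(Z), whose peaks mark
    antiresonances (a sketch of the objective described in the abstract)."""
    g_num, g_exp = (1.0 / z_num).real, (1.0 / z_exp).real
    r_num, r_exp = z_num.real, z_exp.real
    return np.sum((g_num - g_exp) ** 2) + np.sum((r_num - r_exp) ** 2)

# Identical curves give zero misfit; a perturbed FEM model does not.
z = np.array([100 + 5j, 50 - 20j, 80 + 0j])  # invented impedance samples
assert misfit(z, z) == 0.0
print(misfit(z * 1.01, z) > 0.0)  # True
```

Minimizing only the impedance modulus would let resonance errors dominate; combining G and R keeps both frequency families in the gradient, which is the design choice the abstract highlights.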

  9. Parallel high-precision orbit propagation using the modified Picard-Chebyshev method

    NASA Astrophysics Data System (ADS)

    Koblick, Darin C.

    2012-03-01

    The modified Picard-Chebyshev method, when run in parallel, is thought to be more accurate and faster than the most efficient sequential numerical integration techniques when applied to orbit propagation problems. Previous experiments have shown that the modified Picard-Chebyshev method can achieve up to a one-order-of-magnitude speedup over the 12th-order Runge-Kutta-Nyström method. For this study, the accuracy and computational time of the modified Picard-Chebyshev method are evaluated using the Java Astrodynamics Toolkit high-precision force model to assess its runtime performance. Simulation results for the modified Picard-Chebyshev method, implemented in MATLAB with the MATLAB Parallel Computing Toolbox, are compared against the most efficient first- and second-order Ordinary Differential Equation (ODE) solvers. A total of six processors was used to assess the runtime performance of the modified Picard-Chebyshev method. It was found that for all orbit propagation test cases in which the gravity model was simulated at higher degree and order (above 225, to increase computational overhead), the modified Picard-Chebyshev method was faster, by as much as a factor of two, than the other ODE solvers tested.
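A sequential toy version of the Picard-Chebyshev idea, applied to a scalar ODE rather than orbit propagation, can be sketched with NumPy's Chebyshev utilities: represent the trajectory at Chebyshev-Lobatto nodes, integrate the Picard update spectrally, and iterate. The parallel appeal is that all node evaluations of the force function within one iteration are independent.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def picard_chebyshev(f, y0, n_nodes=32, iters=40):
    """Toy Picard-Chebyshev iteration for y' = f(t, y), y(-1) = y0,
    on [-1, 1]: fit f at Chebyshev-Lobatto nodes, integrate the Chebyshev
    series, and repeat y <- y0 + integral from -1 to t of f(s, y(s)) ds."""
    x = np.cos(np.pi * np.arange(n_nodes + 1) / n_nodes)  # Lobatto nodes
    y = np.full_like(x, y0, dtype=float)
    for _ in range(iters):
        coef = C.chebfit(x, f(x, y), n_nodes)   # spectral fit of f(t, y(t))
        icoef = C.chebint(coef)                 # antiderivative coefficients
        y = y0 + C.chebval(x, icoef) - C.chebval(-1.0, icoef)
    return x, y

# y' = y, y(-1) = 1  =>  y(t) = exp(t + 1)
x, y = picard_chebyshev(lambda t, y: y, 1.0)
print(np.max(np.abs(y - np.exp(x + 1.0))) < 1e-8)  # True
```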

  10. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are among the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals require accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, which typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, -0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ -0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass, low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery.
    Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit from using SEOBNRv2 waveform templates when focused on neutron star-black hole systems with q ≲ 7 and χBH ≈ [-0.9, +0.6]. For larger black hole spins and/or binary mass ratios, we recommend the models be further investigated as NR simulations in that region of the parameter space become available.

  11. Numerical and experimental analyses of lighting columns in terms of passive safety

    NASA Astrophysics Data System (ADS)

    Jedliński, Tomasz Ireneusz; Buśkiewicz, Jacek

    2018-01-01

    Modern lighting columns have a very beneficial influence on road safety. Currently, columns are designed to keep the driver safe in the event of a collision. This work compares experimental results of vehicle impact on a lighting column with FEM simulations performed using the Ansys LS-DYNA program. Given the high cost of experiments and the time-consuming research process, the software proves to be a very useful tool in the development of pole structures intended to absorb the kinetic energy of the vehicle in a precisely prescribed way.

  12. Parameter-tolerant design of high contrast gratings

    NASA Astrophysics Data System (ADS)

    Chevallier, Christyves; Fressengeas, Nicolas; Jacquet, Joel; Almuneau, Guilhem; Laaroussi, Youness; Gauthier-Lafaye, Olivier; Cerutti, Laurent; Genty, Frédéric

    2015-02-01

    This work is devoted to the design of high-contrast grating (HCG) mirrors taking into account the technological constraints and fabrication tolerances. First, a global optimization algorithm was combined with a numerical analysis of grating structures (RCWA) to automatically design HCG mirrors. Then, the tolerances of the grating dimensions were studied in detail to develop a robust optimization algorithm with which high-contrast gratings exhibiting not only high efficiency but also large tolerance values could be designed. Finally, several structures integrating previously designed HCGs were simulated to validate and illustrate the interest of such gratings.
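The tolerance-aware idea, scoring a design by its worst efficiency over fabrication deviations rather than its nominal value, might be sketched like this; the quadratic efficiency model and deviation magnitudes are inventions for illustration, standing in for an RCWA solver:

```python
def robust_score(efficiency, params, deltas):
    """Worst-case figure of merit over fabrication deviations: evaluate the
    design at its nominal parameters and at each parameter perturbed by
    +/- its tolerance, and return the minimum efficiency, so an optimizer
    favors wide, flat optima over sharp ones."""
    candidates = [list(params)]
    for i, d in enumerate(deltas):
        for sign in (-d, +d):
            p = list(params)
            p[i] += sign
            candidates.append(p)
    return min(efficiency(p) for p in candidates)

# Toy efficiency model peaking at period = 1.0, thickness = 0.5 (invented):
eff = lambda p: 1.0 - (p[0] - 1.0) ** 2 - 4.0 * (p[1] - 0.5) ** 2

nominal = eff([1.0, 0.5])
robust = robust_score(eff, [1.0, 0.5], [0.05, 0.05])
print(nominal, round(robust, 4))  # 1.0 0.99
```

Ranking designs by `robust_score` instead of `eff` is what steers the optimizer toward geometries that survive over- or under-etching.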

  13. Robust rotation of rotor in a thermally driven nanomotor

    PubMed Central

    Cai, Kun; Yu, Jingzhou; Shi, Jiao; Qin, Qing-Hua

    2017-01-01

    In the fabrication of a thermally driven rotary nanomotor with dimensions of a few nanometers, fabrication and control precision may strongly influence the rotor's stability of rotational frequency (SRF). To investigate the effects of uncertainty in major factors, including temperature, tube length, axial distance between tubes, tube diameter and the inward radial deviation (IRD) of atoms in the stators, on the frequency's stability, theoretical analysis integrated with numerical experiments is carried out. From the results obtained via molecular dynamics simulation, some key points are illustrated for future fabrication of the thermally driven rotary nanomotor. PMID:28393898

  14. Multiclassifier fusion in human brain MR segmentation: modelling convergence.

    PubMed

    Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander

    2006-01-01

    Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
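The convergence behaviour such a model captures can be illustrated with a toy calculation. Everything below is an illustrative assumption, not the authors' model: independent binary segmentations, each labelling a voxel correctly with probability p, fused by majority vote:

```python
import random

def fused_accuracy(p, n, trials=20000, seed=1):
    """Estimate the accuracy of majority-vote fusion of n independent
    binary segmentations, each correct with probability p."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(rng.random() < p for _ in range(n))
        if votes * 2 > n:                                # strict majority is correct
            correct += 1
        elif votes * 2 == n and rng.random() < 0.5:      # break ties at random
            correct += 1
    return correct / trials

for n in (1, 5, 15, 29):
    print(n, round(fused_accuracy(0.8, n), 3))
```

Accuracy rises steeply with the first few inputs and then saturates, which is the diminishing-return behaviour a convergence model must fit.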

  15. On the construction of a ground truth framework for evaluating voxel-based diffusion tensor MRI analysis methods.

    PubMed

    Van Hecke, Wim; Sijbers, Jan; De Backer, Steve; Poot, Dirk; Parizel, Paul M; Leemans, Alexander

    2009-07-01

    Although many studies are starting to use voxel-based analysis (VBA) methods to compare diffusion tensor images between healthy and diseased subjects, it has been demonstrated that VBA results depend heavily on parameter settings and implementation strategies, such as the applied coregistration technique, smoothing kernel width, statistical analysis, etc. In order to investigate the effect of different parameter settings and implementations on the accuracy and precision of the VBA results quantitatively, ground truth knowledge regarding the underlying microstructural alterations is required. To address the lack of such a gold standard, simulated diffusion tensor data sets are developed, which can model an array of anomalies in the diffusion properties of a predefined location. These data sets can be employed to evaluate the numerous parameters that characterize the pipeline of a VBA algorithm and to compare the accuracy, precision, and reproducibility of different post-processing approaches quantitatively. We are convinced that the use of these simulated data sets can improve the understanding of how different diffusion tensor image post-processing techniques affect the outcome of VBA. In turn, this may possibly lead to a more standardized and reliable evaluation of diffusion tensor data sets of large study groups with a wide range of white matter altering pathologies. The simulated DTI data sets will be made available online (http://www.dti.ua.ac.be).

  16. Development of High Precision Tsunami Runup Calculation Method Coupled with Structure Analysis

    NASA Astrophysics Data System (ADS)

    Arikawa, Taro; Seki, Katsumi; Chida, Yu; Takagawa, Tomohiro; Shimosako, Kenichiro

    2017-04-01

    The 2011 Great East Japan Earthquake (GEJE) has shown that tsunami disasters are not limited to inundation damage in a specified region, but may destroy a wide area, causing a major disaster. Evaluating land structures and the damage to them requires highly precise evaluation of three-dimensional fluid motion, an expensive process. Our research goals were thus to couple STOC-CADMAS (Arikawa and Tomita, 2016) with structural analysis (Arikawa et al., 2009) to efficiently calculate all stages from the tsunami source to runup, including the deformation of structures, and to verify their applicability. We also investigated the stability of breakwaters at Kamaishi Bay. Fig. 1 shows the whole calculation system. The STOC-ML simulator approximates pressure as hydrostatic and calculates the wave profiles based on an equation of continuity, thereby lowering calculation cost; it primarily covers the region from the epicenter to shallow water. STOC-IC solves for pressure based on a Poisson equation to account for shallower, more complex topography, while slightly reducing computation cost by setting the water surface from an equation of continuity; it calculates the area near a port. CS3D solves the Navier-Stokes equations and sets the water surface by VOF to deal with the runup area, with its complex surfaces of overflows and bores. STR performs the structural analysis, including geotechnical analysis based on Biot's formulation. By coupling these, the system efficiently calculates the tsunami profile from propagation to inundation. The numerical results were compared with the physical experiments by Arikawa et al. (2012) and showed good agreement. Finally, the system was applied to the local situation at Kamaishi Bay: most of the breakwaters were washed away in the simulation, which was similar to the actual damage at Kamaishi Bay. REFERENCES: T. Arikawa and T. Tomita (2016): "Development of High Precision Tsunami Runup Calculation Method Based on a Hierarchical Simulation", Journal of Disaster Research, Vol. 11, No. 4. T. Arikawa, K. Hamaguchi, K. Kitagawa, T. Suzuki (2009): "Development of Numerical Wave Tank Coupled with Structure Analysis Based on FEM", Journal of JSCE, Ser. B2 (Coastal Engineering), Vol. 65, No. 1. T. Arikawa et al. (2012): "Failure Mechanism of Kamaishi Breakwaters due to the Great East Japan Earthquake Tsunami", 33rd International Conference on Coastal Engineering, No. 1191.

  17. On continuous and discontinuous approaches for modeling groundwater flow in heterogeneous media using the Numerical Manifold Method: Model development and comparison

    NASA Astrophysics Data System (ADS)

    Hu, Mengsu; Wang, Yuan; Rutqvist, Jonny

    2015-06-01

    One major challenge in modeling groundwater flow within heterogeneous geological media is that of modeling arbitrarily oriented or intersected boundaries and inner material interfaces. The Numerical Manifold Method (NMM) has recently emerged as a promising method for such modeling, owing to its ability to handle boundaries, its flexibility in constructing physical cover functions (continuous or with gradient jump), its meshing efficiency with a fixed mathematical mesh (covers), its convenience for enhancing approximation precision, and its integration precision, achieved by simplex integration. In this paper, we report on developing and comparing two new approaches for boundary constraints using the NMM, namely a continuous approach with jump functions and a discontinuous approach with Lagrange multipliers. In the discontinuous Lagrange multiplier method (LMM), the material interfaces are regarded as discontinuities which divide mathematical covers into different physical covers. We define and derive stringent forms of Lagrange multipliers to link the divided physical covers, thus satisfying the continuity requirement of the refraction law. In the continuous Jump Function Method (JFM), the material interfaces are regarded as inner interfaces contained within physical covers. We define jump terms to represent the discontinuity of the head gradient across an interface to satisfy the refraction law. We then make a theoretical comparison between the two approaches in terms of global degrees of freedom, treatment of multiple material interfaces, treatment of small areas, treatment of moving interfaces, the feasibility of coupling with mechanical analysis, and applicability to other numerical methods. The newly derived boundary-constraint approaches are coded into an NMM model for groundwater flow analysis, and tested for precision and efficiency on different simulation examples.
We first test the LMM for a Dirichlet boundary and then test both LMM and JFM for an idealized heterogeneous model, comparing the numerical results with analytical solutions. Then we test both approaches for a heterogeneous model and compare the results for hydraulic head and specific discharge. We show that both approaches are suitable for modeling material boundaries, given the high accuracy of the boundary constraints, the capability to deal with arbitrarily oriented or complexly intersected boundaries, and their efficiency using a fixed mathematical mesh.

  18. Application of Numerical Integration and Data Fusion in Unit Vector Method

    NASA Astrophysics Data System (ADS)

    Zhang, J.

    2012-01-01

    The Unit Vector Method (UVM) is a family of orbit determination methods designed by Purple Mountain Observatory (PMO) that have been applied extensively. It obtains the condition equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. The high-precision data can then play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement and unified the two dynamically, further improving precision and efficiency. In this thesis, further research has been done based on the UVM. Firstly, with the improvement of observational methods and techniques, the types and precision of observational data have improved substantially, demanding correspondingly higher precision in orbit determination. Analytical perturbation theory cannot meet this requirement, so numerical integration of the perturbations has been introduced into the UVM. The accuracy of the dynamical model now matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly; the accuracy of orbit determination is improved further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the defects of the weighting strategy in the original UVM have been clarified and resolved: the calculation of the approximate state transition matrix is simplified, and the weighting strategy has been improved for data of different dimensions and different precision.
Orbit determination results with both simulated and real data show that this work is effective: (1) After numerical integration is introduced into the UVM, the accuracy of orbit determination improves markedly and suits the high-accuracy data of available observation apparatus; compared with classical differential improvement with numerical integration, the calculation speed is also improved markedly. (2) After the data fusion method is introduced into the UVM, the weight distribution accords rationally with the accuracy of the different kinds of data, all data are fully used, and the new method also exhibits good numerical stability and a rational weight distribution.

  19. Numerical simulation of transonic compressor under circumferential inlet distortion and rotor/stator interference using harmonic balance method

    NASA Astrophysics Data System (ADS)

    Wang, Ziwei; Jiang, Xiong; Chen, Ti; Hao, Yan; Qiu, Min

    2018-05-01

    Simulating the unsteady flow of a compressor under circumferential inlet distortion and rotor/stator interference requires a full-annulus grid with a dual-time method. This process is time consuming and needs a large amount of computational resources. The harmonic balance method simulates the unsteady flow in a compressor on a single-passage grid with a series of coupled steady simulations, which greatly increases the computational efficiency in comparison with the dual-time method. However, most simulations with the harmonic balance method have been conducted on flows under either circumferential inlet distortion or rotor/stator interference alone. Based on an in-house CFD code, the harmonic balance method is applied here to the simulation of flow in NASA Stage 35 under both circumferential inlet distortion and rotor/stator interference. Because the unsteady flow is influenced by two different unsteady disturbances, computational instability arises; the instability can be avoided by coupling the harmonic balance method with an optimization algorithm. The computational result of the harmonic balance method is compared with the result of a full-annulus simulation: the harmonic balance method resolves the flow under circumferential inlet distortion and rotor/stator interference as precisely as the full-annulus simulation, with a speed-up of about 8 times.

  20. Space-borne profiling of atmospheric thermodynamic variables with Raman lidar: performance simulations.

    PubMed

    Di Girolamo, Paolo; Behrendt, Andreas; Wulfmeyer, Volker

    2018-04-02

    The performance of a space-borne water vapour and temperature lidar exploiting the vibrational and pure rotational Raman techniques in the ultraviolet is simulated. This paper discusses simulations under a variety of environmental and climate scenarios. Simulations demonstrate the capability of Raman lidars deployed on board low-Earth-orbit satellites to provide global-scale water vapour mixing ratio and temperature measurements in the lower to middle troposphere, with accuracies exceeding most observational requirements for numerical weather prediction (NWP) and climate research applications. These performances are especially attractive for measurements in the lower troposphere, where they would close the most critical gaps in the current Earth observation system. In all climate zones, considering vertical and horizontal resolutions of 200 m and 50 km, respectively, mean water vapour mixing ratio profiling precision from the surface up to an altitude of 4 km is simulated to be 10%, while temperature profiling precision is simulated to be 0.40-0.75 K in the altitude interval up to 15 km. Performances in the presence of clouds are also simulated. Measurements are found to be possible above and below cirrus clouds with an optical thickness of 0.3. This combination of accuracy and vertical resolution cannot be achieved with any other space-borne remote sensing technique and will provide a breakthrough in our knowledge of global and regional water and energy cycles, as well as in the quality of short- to medium-range weather forecasts. Besides providing a comprehensive set of simulations, this paper also provides an insight into specific possible technological solutions that are proposed for the implementation of a space-borne Raman lidar system. 
These solutions refer to technological breakthroughs gained during the last decade in the design and development of specific lidar devices and sub-systems, primarily in high-power, high-efficiency solid-state laser sources, low-weight large aperture telescopes, and high-gain, high-quantum efficiency detectors.

  1. Recent progress on air-bearing slumping of segmented thin-shell mirrors for x-ray telescopes: experiments and numerical analysis

    NASA Astrophysics Data System (ADS)

    Zuo, Heng E.; Yao, Youwei; Chalifoux, Brandon D.; DeTienne, Michael D.; Heilmann, Ralf K.; Schattenburg, Mark L.

    2017-08-01

    Slumping (or thermal-shaping) of thin glass sheets onto high precision mandrels was used successfully by NASA Goddard Space Flight Center to fabricate the NuSTAR telescope. But this process requires long thermal cycles and produces mid-range spatial frequency errors due to the anti-stick mandrel coatings. Over the last few years, we have designed and tested non-contact horizontal slumping of round flat glass sheets floating on thin layers of nitrogen between porous air-bearings using fast position control algorithms and precise fiber sensing techniques during short thermal cycles. We recently built a finite element model with ADINA to simulate the viscoelastic behavior of glass during the slumping process. The model utilizes fluid-structure interaction (FSI) to understand the deformation and motion of glass under the influence of air flow. We showed that for the 2D axisymmetric model, experimental and numerical approaches have comparable results. We also investigated the impact of bearing permeability on the resulting shape of the wafers. A novel vertical slumping set-up is also under development to eliminate the undesirable influence of gravity. Progress towards generating mirrors for good angular resolution and low mid-range spatial frequency errors is reported.

  2. Propagating synchrony in feed-forward networks

    PubMed Central

    Jahnke, Sven; Memmesheimer, Raoul-Martin; Timme, Marc

    2013-01-01

    Coordinated patterns of precisely timed action potentials (spikes) emerge in a variety of neural circuits but their dynamical origin is still not well understood. One hypothesis states that synchronous activity propagating through feed-forward chains of groups of neurons (synfire chains) may dynamically generate such spike patterns. Additionally, synfire chains offer the possibility to enable reliable signal transmission. So far, mostly densely connected chains, often with all-to-all connectivity between groups, have been theoretically and computationally studied. Yet, such prominent feed-forward structures have not been observed experimentally. Here we analytically and numerically investigate under which conditions diluted feed-forward chains may exhibit synchrony propagation. In addition to conventional linear input summation, we study the impact of non-linear, non-additive summation accounting for the effect of fast dendritic spikes. The non-linearities promote synchronous inputs to generate precisely timed spikes. We identify how non-additive coupling relaxes the conditions on connectivity such that it enables synchrony propagation at connectivities substantially lower than required for linearly coupled chains. Although the analytical treatment is based on a simple leaky integrate-and-fire neuron model, we show how to generalize our methods to biologically more detailed neuron models and verify our results by numerical simulations with, e.g., Hodgkin Huxley type neurons. PMID:24298251
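The advantage of synchronous over dispersed inputs can be sketched with a toy single leaky integrate-and-fire neuron (all parameter values below are illustrative assumptions, not the network simulations of the paper):

```python
def lif_peak(spike_times, w=0.4, tau=20.0, dt=0.1, t_end=60.0):
    """Peak membrane potential of a leaky integrate-and-fire neuron,
    dv/dt = -v/tau, receiving instantaneous inputs of weight w."""
    v, peak, t = 0.0, 0.0, 0.0
    remaining = sorted(spike_times)
    while t < t_end:
        while remaining and remaining[0] <= t:
            v += w                # each arriving spike kicks the potential
            remaining.pop(0)
        peak = max(peak, v)
        v -= dt * v / tau         # leak (forward Euler)
        t += dt
    return peak

sync = lif_peak([10.0] * 5)               # five inputs arriving together
spread = lif_peak([10, 14, 18, 22, 26])   # the same inputs dispersed in time
# the synchronous volley reaches a higher peak, so it is far more likely
# to cross a firing threshold and propagate to the next group
```

With non-additive dendritic amplification of synchronous volleys, this gap widens further, which is why sparser chains can still propagate synchrony.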

  3. EFT of large scale structures in redshift space

    NASA Astrophysics Data System (ADS)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun

    2018-03-01

    We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ=6 . We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k —depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z =0.56 and up to ℓ=2 matches the data at the percent level approximately up to k ˜0.13 h Mpc-1 or k ˜0.18 h Mpc-1 , depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  4. E-GRASP/Eratosthenes: GRGS numerical simulations and millimetric TRF realization

    NASA Astrophysics Data System (ADS)

    Pollet, A.; Coulot, D.; Biancale, R.; Mandea, M.

    2017-12-01

    Accurately measuring and understanding changes in sea level, ice sheets and other elements of the dynamic Earth system requires a stable Terrestrial Reference Frame (TRF). To reach the goals for the TRF realization of 1 mm accuracy and 0.1 mm/year stability (GGOS, Meeting the Requirements of a Global Society on a Changing Planet in 2020, Plag and Pearlman, 2009), the European Geodetic Reference Antenna in Space (E-GRASP) was recently proposed in response to the ESA EE9 call. This space mission is designed to build an enduring and stable TRF by carrying very precise sensor systems for all the key geodetic techniques used to define and monitor the TRF (DORIS, GNSS, SLR and VLBI). In this study, we present the numerical simulations carried out by the French Groupe de Recherche en Géodésie Spatiale (GRGS). We simulated the measurements of the four geodetic techniques (DORIS and SLR measurements to E-GRASP, VLBI interferometric measurements on E-GRASP, and GPS measurements from ground stations and from E-GRASP) over five years. Next, we evaluated the expected accuracy and stability of the TRF provided by the processing of these measurements. In addition, we show the expected impact of the on-board instrument calibration on the TRF. Finally, we simulated the measurements of the two LAGEOS and four DORIS satellites and of quasars for VLBI, and we computed two multi-technique combinations, one with E-GRASP measurements and one without, to evaluate the contribution of this satellite to a combination.

  5. A Numerical Method for the Simulation of Skew Brownian Motion and its Application to Diffusive Shock Acceleration of Charged Particles

    NASA Astrophysics Data System (ADS)

    McEvoy, Erica L.

    Stochastic differential equations are becoming a popular tool for modeling the transport and acceleration of cosmic rays in the heliosphere. In diffusive shock acceleration, cosmic rays diffuse across a region of discontinuity where the upstream diffusion coefficient abruptly changes to the downstream value. Because the method of stochastic integration has not yet been developed to handle these types of discontinuities, I utilize methods and ideas from probability theory to develop a conceptual framework for the treatment of such discontinuities. Using this framework, I then produce some simple numerical algorithms that allow one to incorporate and simulate a variety of discontinuities (or boundary conditions) using stochastic integration. These algorithms were then modified to create a new algorithm which incorporates the discontinuous change in diffusion coefficient found in shock acceleration (known as Skew Brownian Motion). The originality of this algorithm lies in the fact that it is the first of its kind to be statistically exact, so that one obtains accuracy without the use of approximations (other than the machine precision error). I then apply this algorithm to model the problem of diffusive shock acceleration, modifying it to incorporate the additional effect of the discontinuous flow speed profile found at the shock. A steady-state solution is obtained that accurately simulates this phenomenon. This result represents a significant improvement over previous approximation algorithms, and will be useful for the simulation of discontinuous diffusion processes in other fields, such as biology and finance.
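The abstract does not spell out the statistically exact scheme, but the standard approximate building block it improves upon can be sketched: the skewed ("Harris") random walk, which steps symmetrically away from the interface and asymmetrically at it, converges to skew Brownian motion. The step size and the mapping from the diffusion-coefficient jump to the skewness parameter beta below are illustrative assumptions, not the thesis algorithm:

```python
import random

def skew_walk(beta, n_steps, dx=0.01, seed=2):
    """Skewed ("Harris") random walk approximating skew Brownian motion:
    symmetric +/-dx steps away from the interface at x = 0, but a step
    taken at the interface goes right with probability (1 + beta) / 2."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_steps):
        if abs(x) < dx / 2:   # at the interface
            x += dx if rng.random() < (1 + beta) / 2 else -dx
        else:                 # ordinary diffusion away from it
            x += dx if rng.random() < 0.5 else -dx
    return x

# For a diffusion-coefficient jump, one assumed choice is
# beta = (sqrt(D_down) - sqrt(D_up)) / (sqrt(D_down) + sqrt(D_up)).
ends = [skew_walk(0.6, 5000, seed=s) for s in range(200)]
frac_right = sum(e > 0 for e in ends) / len(ends)
# with beta = 0.6 the process favours the positive side: P(X > 0) ~ 0.8
```

A statistically exact method, by contrast, avoids the spatial discretization error entirely, which is the improvement claimed above.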

  6. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams.

    PubMed

    Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An

    2017-11-08

    A microstructure beam is one of the fundamental elements in MEMS devices such as cantilever sensors, RF/optical switches, varactors and resonators. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators due to inevitable process deviations. Feasible numerical methods are required to improve the yield and profits of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly clamped polybeam is used to verify the accuracy of GPC against Monte Carlo (MC) approaches. Performance predictions are made for the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. An appropriate choice of 4th-order GPC expansion with orthogonal terms also greatly reduces the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the 4th-order GPC method. The 4th-order GPC approximation attains the mean test value of the residual stress with a probability of about 54.3%, and the corresponding yield exceeds 90% within two standard deviations of the mean.
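As an illustration of the GPC idea (not the authors' beam model: the response function, its parameters and the use of probabilists' Hermite polynomials are assumptions for the sketch), a 4th-order Hermite chaos expansion of a scalar response to a Gaussian process deviation reproduces Monte Carlo statistics at a fraction of the sampling cost:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def gpc_coeffs(f, order):
    """Project f(Z), Z ~ N(0,1), onto probabilists' Hermite polynomials:
    c_n = E[f(Z) * He_n(Z)] / n!  (Galerkin projection by quadrature)."""
    x, w = He.hermegauss(order + 8)      # Gauss-Hermite nodes and weights
    w = w / w.sum()                      # normalise to a probability measure
    return [float(np.dot(w, f(x) * He.hermeval(x, np.eye(order + 1)[n])))
            / math.factorial(n) for n in range(order + 1)]

# hypothetical beam response as a function of a normalised process deviation
f = lambda z: np.exp(0.3 * z)
c = gpc_coeffs(f, 4)

# mean and standard deviation follow directly from the spectral coefficients
gpc_mean = c[0]
gpc_std = math.sqrt(sum(c[n] ** 2 * math.factorial(n) for n in range(1, 5)))

# Monte Carlo reference for comparison
mc = f(np.random.default_rng(0).standard_normal(200_000))
```

The handful of deterministic quadrature evaluations replaces hundreds of thousands of MC samples, which is the labor reduction reported above.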

  7. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams

    PubMed Central

    Gao, Lili

    2017-01-01

    A microstructure beam is one of the fundamental elements in MEMS devices such as cantilever sensors, RF/optical switches, varactors and resonators. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators due to inevitable process deviations. Feasible numerical methods are required to improve the yield and profits of MEMS devices. In this work, process deviations are treated as stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly clamped polybeam is used to verify the accuracy of GPC against Monte Carlo (MC) approaches. Performance predictions are made for the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. An appropriate choice of 4th-order GPC expansion with orthogonal terms also greatly reduces the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the 4th-order GPC method. The 4th-order GPC approximation attains the mean test value of the residual stress with a probability of about 54.3%, and the corresponding yield exceeds 90% within two standard deviations of the mean. PMID:29117096

  8. Rapid inundation estimates at harbor scale using tsunami wave heights offshore simulation and Green's law approach

    NASA Astrophysics Data System (ADS)

    Gailler, Audrey; Hébert, Hélène; Loevenbruck, Anne

    2013-04-01

    Improvements in the availability of sea-level observations and advances in numerical modeling techniques are increasing the potential for tsunami warnings to be based on numerical model forecasts. Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are most amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is compounded when detailed grids are required for the precise modeling of the coastline response on the scale of an individual harbor. In fact, when facing the problem of the interaction of the tsunami wavefield with a shoreline, any numerical simulation must be performed over an increasingly fine grid, which in turn mandates a reduced time step and the use of a fully non-linear code. Such calculations then become prohibitively time-consuming, which is clearly unacceptable in the framework of real-time warning. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami wave heights on the high seas and tsunami warning maps at the scale of the western Mediterranean and NE Atlantic basins. We present here a preliminary work that performs quick estimates of the inundation at individual harbors from these deep-water wave height simulations. The method involves an empirical correction relation derived from Green's law, expressing conservation of the wave energy flux, to extend the gridded wave field into the harbor with respect to the nearby deep-water grid node.
The main limitation of this method is that its application to a given coastal area would require a large database of previous observations in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gauge records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, a set of synthetic mareograms is calculated for both hypothetical and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids, with a coarse resolution over deep-water regions and an increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). The synthetic dataset is then used to approximate the empirical parameters of the correction equation. Results of inundation estimates in several French Mediterranean harbors obtained with the fast Green's-law-derived method are presented and compared with values given by the time-consuming nested-grid simulations.
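Green's law itself is a one-line computation. A sketch follows; the site-correction factor `alpha` stands in for the empirical parameters fitted to the synthetic mareogram database, and its multiplicative form here is an assumption:

```python
def greens_law_height(h_offshore, depth_offshore, depth_coastal, alpha=1.0):
    """Green's law: conservation of wave energy flux in shoaling water gives
    H_coast = alpha * H_offshore * (d_offshore / d_coastal) ** 0.25,
    where alpha = 1 is plain Green's law and alpha is otherwise an
    empirical site correction."""
    return alpha * h_offshore * (depth_offshore / depth_coastal) ** 0.25

# a 0.5 m tsunami amplitude at a 100 m deep offshore grid node,
# extended to a 5 m deep harbor node:
h = greens_law_height(0.5, 100.0, 5.0)   # ~1.06 m
```

The quarter-power dependence is why even a sparse offshore grid can yield a useful first-order harbor estimate in real time.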

  9. A quantum inspired model of radar range and range-rate measurements with applications to weak value measurements

    NASA Astrophysics Data System (ADS)

    Escalante, George

    2017-05-01

    Weak Value Measurements (WVMs) with pre- and post-selected quantum mechanical ensembles were proposed by Aharonov, Albert, and Vaidman in 1988 and have found numerous applications in both theoretical and applied physics. In the field of precision metrology, WVM techniques have been demonstrated and proven valuable as a means to shift, amplify, and detect signals and to make precise measurements of small effects in both quantum and classical systems, including particle spin, the Spin-Hall effect of light, optical beam deflections, frequency shifts, field gradients, and many others. In principle, WVM amplification techniques are also possible in radar and could be a valuable tool for precision measurements. However, relatively limited research has been done in this area. This article presents a quantum-inspired model of radar range and range-rate measurements of arbitrary strength, including standard and pre- and post-selected measurements. The model is used to extend WVM amplification theory to radar, with the receive filter performing the post-selection role. It is shown that the description of range and range-rate measurements based on the quantum-mechanical measurement model and formalism produces the same results as the conventional approach used in radar based on signal processing and filtering of the reflected signal at the radar receiver. Numerical simulation results using simple point scatterer configurations are presented, applying the quantum-inspired model of radar range and range-rate measurements that occur in the weak measurement regime. Potential applications and benefits of the quantum-inspired approach to radar measurements are presented, including improved range and Doppler measurement resolution.

  10. Study of different deposition parameterizations on an atmospheric mesoscale Eulerian air quality model: Madrid case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    San Jose, R.; Cortes, J.; Moreno, J.

    1996-12-31

The importance of an adequate parameterization of the deposition process for simulating three-dimensional pollution fields in a mesoscale context is beyond doubt. An accurate parameterization of the deposition flux is essential for a precise determination of the removal flux and for allowing longer simulation periods of the atmospheric processes. In addition, an accurate deposition pattern allows a much more precise diagnosis of the impact of different pollutants on the types of terrain actually present in complex environments such as urban areas and their surroundings. In this contribution, we have implemented a complex resistance deposition model into an Air Quality System (ANA) applied over a large city, Madrid (Spain). The model domain is 80 x 100 km, much larger than the actual urban domain. The ANA model is composed of four modules: a meteorological module, which solves the Navier-Stokes equations numerically and predicts the three-dimensional wind, temperature, and humidity fields at every time step; an emission module, which produces hourly emissions at high spatial resolution (250 x 250 m) with land-use information (for biogenic emissions) from a Landsat-5 satellite image; a photochemical module, based on the CBM-IV mechanism and solved numerically with the SMVGEAR method; and a deposition module based on the resistance approach. The resistance module takes into account the land-use classification, global solar radiation, terrain humidity, terrain pH, pollutant characteristics, the leaf area index, and the reactivity of the pollutant.

  11. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that it avoids testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
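The two-stage idea can be sketched as follows: an ordinary least squares fit, a kernel-weighted local-linear fit of the squared residuals to estimate the unknown variance function, then weighted (generalized) least squares. The simulated data, bandwidth, and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])            # regression design matrix
sigma = 0.2 + 0.8 * x                           # true heteroscedastic std. dev.
y = X @ np.array([1.0, 2.0]) + sigma * rng.standard_normal(n)

# Stage 1: OLS fit, then a local-linear (kernel-weighted) fit of the squared
# residuals to estimate the variance function without assuming its form.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = (y - X @ beta_ols) ** 2
h = 0.1                                         # kernel bandwidth (illustrative)
var_hat = np.empty(n)
for i, x0 in enumerate(x):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # Gaussian kernel weights
    Z = np.column_stack([np.ones(n), x - x0])
    g, *_ = np.linalg.lstsq(Z * np.sqrt(w)[:, None], r2 * np.sqrt(w), rcond=None)
    var_hat[i] = max(g[0], 1e-6)                # local intercept = variance estimate

# Stage 2: generalized (weighted) least squares with the estimated variances
sw = 1.0 / np.sqrt(var_hat)
beta_gls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print(np.round(beta_gls, 2))
```

The estimated coefficients should land close to the true values (1, 2), with the GLS step downweighting the noisier observations.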

  12. Witnessing eigenstates for quantum simulation of Hamiltonian spectra

    PubMed Central

    Santagati, Raffaele; Wang, Jianwei; Gentile, Antonio A.; Paesani, Stefano; Wiebe, Nathan; McClean, Jarrod R.; Morley-Short, Sam; Shadbolt, Peter J.; Bonneau, Damien; Silverstone, Joshua W.; Tew, David P.; Zhou, Xiaoqi; O’Brien, Jeremy L.; Thompson, Mark G.

    2018-01-01

    The efficient calculation of Hamiltonian spectra, a problem often intractable on classical machines, can find application in many fields, from physics to chemistry. We introduce the concept of an “eigenstate witness” and, through it, provide a new quantum approach that combines variational methods and phase estimation to approximate eigenvalues for both ground and excited states. This protocol is experimentally verified on a programmable silicon quantum photonic chip, a mass-manufacturable platform, which embeds entangled state generation, arbitrary controlled unitary operations, and projective measurements. Both ground and excited states are experimentally found with fidelities >99%, and their eigenvalues are estimated with 32 bits of precision. We also investigate and discuss the scalability of the approach and study its performance through numerical simulations of more complex Hamiltonians. This result shows promising progress toward quantum chemistry on quantum computers. PMID:29387796

  13. An application of small-gap equations in sealing devices

    NASA Technical Reports Server (NTRS)

    Vionnet, Carlos A.; Heinrich, Juan C.

    1993-01-01

    The study of a thin, incompressible Newtonian fluid layer trapped between two almost parallel, sliding surfaces has been actively pursued in the last decades. This subject includes lubrication applications such as slider bearings or the sealing of non-pressurized fluids with rubber rotary shaft seals. In the present work we analyze numerically the flow of lubricant fluid through a micro-gap of sealing devices. The first stage of this study is carried out assuming that a 'small-gap' parameter delta attains an extreme value in the Navier-Stokes equations. The precise meaning of small-gap is achieved by the particular limit delta = 0 which, within the bounds of the hypotheses, predicts transport of lubricant through the sealed area by centrifugal instabilities. Numerical results obtained with the penalty function approximation in the finite element method are presented. In particular, the influence of inflow and outflow boundary conditions, and their impact in the simulated flow, are discussed.

  14. Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leimkuhler, Benedict, E-mail: b.leimkuhler@ed.ac.uk; Shang, Xiaocheng, E-mail: x.shang@brown.edu

    2016-11-01

We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low-friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.

  15. Near-Field Phase-Change Optical Recording of 1.36 Numerical Aperture

    NASA Astrophysics Data System (ADS)

    Ichimura, Isao; Kishima, Koichiro; Osato, Kiyoshi; Yamamoto, Kenji; Kuroda, Yuji; Saito, Kimihiro

    2000-02-01

Recording at a bit length of 125 nm was demonstrated through near-field phase-change (PC) optical recording at a wavelength of 657 nm using a supersphere solid immersion lens (SIL). The lens unit consists of a standard objective and a 2.5-mm-diameter SIL. Since this lens size still prevents the unit from being mounted on an air-bearing slider, we developed a one-axis positioning actuator and an active capacitance servo for precise gap control to thoroughly investigate near-field recording. An electrode was fabricated on the bottom of the SIL, forming a capacitor with the facing disk material. This setup realized a stable air gap below 50 nm, and a new method of simulating the modulation transfer function (MTF) optimized the PC disk structure at this gap height. The obtained jitter of 8.8% and a clear eye pattern prove that our system successfully attained the designed numerical aperture (NA) of 1.36.

  16. Numerical study of the polarization effect of GPR systems on the detection of buried objects

    NASA Astrophysics Data System (ADS)

    Sagnard, Florence

    2017-04-01

This work continues the studies carried out in our department over the last few years on object detection in civil engineering structures and soils. In parallel with the building of the second version of the Sense-City test site, where several pipeline networks will be buried [1], we are developing numerical models using the FIT and FDTD approaches to study more precisely the contribution of polarization diversity to the detection of conductive and dielectric buried objects with the GPR technique. The simulations are based on an ultra-wideband SFCW GPR system that has been designed and evaluated in our laboratory. A parametric study is proposed to evaluate the influence of the antenna configuration and geometry when considering polarization diversity in the detection and characterization of canonical objects. [1] http://www.sense-city.univ-paris-est.fr/index.php

  17. An application of small-gap equations in sealing devices

    NASA Astrophysics Data System (ADS)

    Vionnet, Carlos A.; Heinrich, Juan C.

    1993-11-01

    The study of a thin, incompressible Newtonian fluid layer trapped between two almost parallel, sliding surfaces has been actively pursued in the last decades. This subject includes lubrication applications such as slider bearings or the sealing of non-pressurized fluids with rubber rotary shaft seals. In the present work we analyze numerically the flow of lubricant fluid through a micro-gap of sealing devices. The first stage of this study is carried out assuming that a 'small-gap' parameter delta attains an extreme value in the Navier-Stokes equations. The precise meaning of small-gap is achieved by the particular limit delta = 0 which, within the bounds of the hypotheses, predicts transport of lubricant through the sealed area by centrifugal instabilities. Numerical results obtained with the penalty function approximation in the finite element method are presented. In particular, the influence of inflow and outflow boundary conditions, and their impact in the simulated flow, are discussed.

  18. Development of Numerical Methods to Estimate the Ohmic Breakdown Scenarios of a Tokamak

    NASA Astrophysics Data System (ADS)

    Yoo, Min-Gu; Kim, Jayhyun; An, Younghwa; Hwang, Yong-Seok; Shim, Seung Bo; Lee, Hae June; Na, Yong-Su

    2011-10-01

The ohmic breakdown is a fundamental method of initiating the plasma in a tokamak. For robust breakdown, ohmic breakdown scenarios have to be carefully designed by optimizing the magnetic field configurations to minimize the stray magnetic fields. This research focuses on the development of numerical methods to assess ohmic breakdown scenarios through precise analysis of the magnetic field configurations. This is essential for robust and optimal breakdown and start-up of fusion devices, especially ITER and future devices equipped with a low toroidal electric field (ET <= 0.3 V/m). A field-line-following analysis code based on Townsend avalanche theory and a particle simulation code are developed to analyze the breakdown characteristics of actual complex magnetic field configurations, including the stray magnetic fields, in tokamaks. They are applied to the ohmic breakdown scenarios of tokamaks such as KSTAR and VEST and compared with experiments.

  19. Self-energy-modified Poisson-Nernst-Planck equations: WKB approximation and finite-difference approaches.

    PubMed

    Xu, Zhenli; Ma, Manman; Liu, Pei

    2014-07-01

We propose a modified Poisson-Nernst-Planck (PNP) model to investigate charge transport in electrolytes with an inhomogeneous dielectric environment. The model includes the ionic polarization due to the dielectric inhomogeneity and the ion-ion correlation. This is achieved through the self-energy of test ions, obtained by solving a generalized Debye-Hückel (DH) equation. We develop numerical methods for the system composed of the PNP and DH equations. In particular, to address the numerical challenge of solving the high-dimensional DH equation, we develop an analytical WKB approximation and a numerical approach based on the selective inversion of sparse matrices. The model and numerical methods are validated by simulating charge diffusion in electrolytes between two electrodes, for which the effects of dielectrics and correlation are investigated by comparing the results with the predictions of the classical PNP theory. We find that, when the interface separation is comparable to the Bjerrum length, the results of the modified equations differ significantly from the classical PNP predictions, mostly due to the dielectric effect. It is also shown that when the ion self-energy is of weak or moderate strength, the WKB approximation attains high accuracy compared with precise finite-difference results.

  20. ON THE MINIMAL ACCURACY REQUIRED FOR SIMULATING SELF-GRAVITATING SYSTEMS BY MEANS OF DIRECT N-BODY METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-10

The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (from round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute-force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
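The 1/10 energy-conservation criterion can be monitored in any direct integration by comparing the total energy before and after the run. A minimal sketch with a kick-drift-kick leapfrog integrator on a two-body circular orbit (G = 1; all values illustrative, not from the paper):

```python
import numpy as np

def accel(pos, mass):
    """Pairwise Newtonian accelerations (G = 1), direct O(N^2) sum."""
    a = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                d = pos[j] - pos[i]
                a[i] += mass[j] * d / np.linalg.norm(d) ** 3
    return a

def energy(pos, vel, mass):
    """Total energy: kinetic plus pairwise gravitational potential."""
    kin = 0.5 * np.sum(mass * np.sum(vel ** 2, axis=1))
    pot = sum(-mass[i] * mass[j] / np.linalg.norm(pos[i] - pos[j])
              for i in range(len(mass)) for j in range(i + 1, len(mass)))
    return kin + pot

# Equal-mass binary on a circular orbit of separation 1
mass = np.array([1.0, 1.0])
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
v = np.sqrt(0.5)                       # circular-orbit speed for these masses
vel = np.array([[0.0, -v], [0.0, v]])

e0 = energy(pos, vel, mass)
dt = 1e-3
for _ in range(5000):                  # kick-drift-kick leapfrog steps
    vel += 0.5 * dt * accel(pos, mass)
    pos = pos + dt * vel
    vel += 0.5 * dt * accel(pos, mass)

drift = abs((energy(pos, vel, mass) - e0) / e0)
print(drift < 0.1)                     # well within the 1/10 criterion → True
```

A symplectic scheme such as leapfrog keeps the relative energy drift bounded, which is why it comfortably satisfies a threshold as loose as 1/10 even for long integrations.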

  1. Numerical studies of film formation in context of steel coating

    NASA Astrophysics Data System (ADS)

    Aniszewski, Wojciech; Zaleski, Stephane; Popinet, Stephane

    2017-11-01

In this work, we present a detailed example of a numerical study of film formation in the context of metal coating. Liquid metal is drawn from a reservoir onto a retracting solid sheet, forming a coating film characterized by phenomena such as longitudinal thickness variation (in 3D) or waves akin to those predicted by Kapitza and Kapitza (visible in two dimensions as well). While the industry-standard configuration for zinc coating is marked by the coexistence of a medium capillary number (Ca = 0.03) and a film Reynolds number above 1000, we also present parametric studies to establish more clearly to what degree the numerical method influences the film regimes obtained in the target configuration. The simulations have been performed using Basilisk, a grid-adapting, strongly optimized code derived from Gerris. Mesh adaptation allows for arbitrary precision in relevant regions such as the contact line or the meniscus, while a coarse grid is applied elsewhere. This adaptation strategy, as the results indicate, is the only realistic approach for a numerical method to cover the wide range of necessary scales, from the predicted film thickness (hundreds of microns) to the domain size (meters).

  2. Linearized lattice Boltzmann method for micro- and nanoscale flow and heat transfer.

    PubMed

    Shi, Yong; Yap, Ying Wan; Sader, John E

    2015-07-01

The ability to characterize heat transfer in flowing gases is important for a wide range of applications involving micro- and nanoscale devices. Gas flows away from the continuum limit can be captured using the Boltzmann equation, whose analytical solution poses a formidable challenge. An efficient and accurate numerical simulation of the Boltzmann equation is thus highly desirable. In this article, the linearized Boltzmann Bhatnagar-Gross-Krook equation is used to develop a hierarchy of thermal lattice Boltzmann (LB) models based on half-space Gauss-Hermite (GH) quadrature, ranging from low to high algebraic precision, using double distribution functions. Simplified versions of the LB models in the continuum limit are also derived and shown to be consistent with existing thermal LB models for noncontinuum heat transfer reported in the literature. The accuracy of the proposed LB hierarchy is assessed by simulating thermal Couette flows for a wide range of Knudsen numbers. The effects of the underlying quadrature schemes (half-space GH vs. full-space GH) and of the continuum-limit simplifications on computational accuracy are also elaborated. The numerical findings in this article provide direct evidence of the improved computational capability of the proposed LB models for modeling noncontinuum flows and heat transfer at small length scales.
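The hierarchy above rests on Gauss-Hermite quadrature, whose point is that a handful of nodes integrates polynomials against a Gaussian weight exactly. NumPy ships only the standard full-space rule (half-space rules require custom node computation), but it already illustrates the idea:

```python
import numpy as np

# Gauss-Hermite quadrature: integrate f(x) * exp(-x^2) over the real line
# with n nodes; the rule is exact for polynomials up to degree 2n - 1.
nodes, weights = np.polynomial.hermite.hermgauss(8)

# Second moment of the weight: integral of x^2 * exp(-x^2) dx = sqrt(pi) / 2
approx = np.sum(weights * nodes ** 2)
exact = np.sqrt(np.pi) / 2
print(abs(approx - exact) < 1e-12)  # → True (exact to machine precision)
```

In LB model construction, the algebraic precision of the chosen quadrature determines which velocity-space moments of the distribution function are reproduced exactly, which is what the "low to high algebraic precision" hierarchy varies.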

  3. A theoretical study of special acoustic effects caused by the staircase of the El Castillo pyramid at the Maya ruins of Chichen-Itza in Mexico.

    PubMed

    Declercq, Nico F; Degrieck, Joris; Briers, Rudy; Leroy, Oswald

    2004-12-01

It is known that a handclap in front of the stairs of the great pyramid of Chichen Itza produces a chirp echo that sounds more or less like the call of a quetzal bird. The present work describes precise diffraction simulations and attempts to answer the critical question of what physical effects cause the formation of the chirp echo. A comparison is made with experimental results obtained by David Lubman. Numerical simulations show that the echo depends strongly on the kind of incident sound; simulations are performed for a (delta-function-like) pulse and also for a real handclap. The effect of reflections on the ground in front of the pyramid is also discussed. The present work also explains why an observer seated on the lowest step of the pyramid hears the sound of raindrops falling into a water-filled bucket, instead of footstep sounds, when people situated higher up the pyramid climb the stairs.

  4. A theoretical study of special acoustic effects caused by the staircase of the El Castillo pyramid at the Maya ruins of Chichen-Itza in Mexico

    NASA Astrophysics Data System (ADS)

    Declercq, Nico F.; Degrieck, Joris; Briers, Rudy; Leroy, Oswald

    2004-12-01

It is known that a handclap in front of the stairs of the great pyramid of Chichen Itza produces a chirp echo that sounds more or less like the call of a quetzal bird. The present work describes precise diffraction simulations and attempts to answer the critical question of what physical effects cause the formation of the chirp echo. A comparison is made with experimental results obtained by David Lubman. Numerical simulations show that the echo depends strongly on the kind of incident sound; simulations are performed for a (delta-function-like) pulse and also for a real handclap. The effect of reflections on the ground in front of the pyramid is also discussed. The present work also explains why an observer seated on the lowest step of the pyramid hears the sound of raindrops falling into a water-filled bucket, instead of footstep sounds, when people situated higher up the pyramid climb the stairs.

  5. Three-dimensional numerical simulation for plastic injection-compression molding

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Yu, Wenjie; Liang, Junjie; Lang, Jianlin; Li, Dequn

    2018-03-01

Compared with conventional injection molding, injection-compression molding can mold optical parts with higher precision and lower residual flow stress. However, the melt flow process in a closed cavity becomes more complex because of the moving cavity boundary during compression and the nonlinear problems caused by the non-Newtonian polymer melt. In this study, a 3D simulation method was developed for injection-compression molding. In this method, an arbitrary Lagrangian-Eulerian formulation was introduced to model the moving-boundary flow problem in the compression stage. The non-Newtonian characteristics and compressibility of the polymer melt were considered. The melt flow and pressure distribution in the cavity were investigated using the proposed simulation method and compared with those of injection molding. Results reveal that the fountain flow effect becomes significant when the cavity thickness increases during compression. The back flow also plays an important role in the flow pattern and in the redistribution of cavity pressure. The discrepancy in pressures at different points along the flow path is complicated, rather than monotonically decreasing as in injection molding.

  6. Solitary water wave interactions

    NASA Astrophysics Data System (ADS)

    Craig, W.; Guyenne, P.; Hammack, J.; Henderson, D.; Sulem, C.

    2006-05-01

    This article concerns the pairwise nonlinear interaction of solitary waves in the free surface of a body of water lying over a horizontal bottom. Unlike solitary waves in many completely integrable model systems, solitary waves for the full Euler equations do not collide elastically; after interactions, there is a nonzero residual wave that trails the post-collision solitary waves. In this report on new numerical and experimental studies of such solitary wave interactions, we verify that this is the case, both in head-on collisions (the counterpropagating case) and overtaking collisions (the copropagating case), quantifying the degree to which interactions are inelastic. In the situation in which two identical solitary waves undergo a head-on collision, we compare the asymptotic predictions of Su and Mirie [J. Fluid Mech. 98, 509 (1980)] and Byatt-Smith [J. Fluid Mech. 49, 625 (1971)], the wavetank experiments of Maxworthy [J. Fluid Mech. 76, 177 (1976)], and the numerical results of Cooker, Weidman, and Bale [J. Fluid Mech. 342, 141 (1997)] with independent numerical simulations, in which we quantify the phase change, the run-up, and the form of the residual wave and its Fourier signature in both small- and large-amplitude interactions. This updates the prior numerical observations of inelastic interactions in Fenton and Rienecker [J. Fluid Mech. 118, 411 (1982)]. In the case of two nonidentical solitary waves, our precision wavetank experiments are compared with numerical simulations, again observing the run-up, phase lag, and generation of a residual from the interaction. Considering overtaking solitary wave interactions, we compare our experimental observations, numerical simulations, and the asymptotic predictions of Zou and Su [Phys. Fluids 29, 2113 (1986)], and again we quantify the inelastic residual after collisions in the simulations. 
Geometrically, our numerical simulations of overtaking interactions fit into the three categories of Korteweg-deVries two-soliton solutions defined in Lax [Commun. Pure Appl. Math. 21, 467 (1968)], with, however, a modification in the parameter regime. In all cases we have considered, collisions are seen to be inelastic, although the degree to which interactions depart from elastic is very small. Finally, we give several theoretical results: (i) a relationship between the change in amplitude of solitary waves due to a pairwise collision and the energy carried away from the interaction by the residual component, and (ii) a rigorous estimate of the size of the residual component of pairwise solitary wave collisions. This estimate is consistent with the analytic results of Schneider and Wayne [Commun. Pure Appl. Math. 53, 1475 (2000)], Wright [SIAM J. Math. Anal. 37, 1161 (2005)], and Bona, Colin, and Lannes [Arch. Rat. Mech. Anal. 178, 373 (2005)]. However, in light of our numerical data, both (i) and (ii) indicate a need to reevaluate the asymptotic results in Su and Mirie [J. Fluid Mech. 98, 509 (1980)] and Zou and Su [Phys. Fluids 29, 2113 (1986)].

  7. Numerical Study Comparing RANS and LES Approaches on a Circulation Control Airfoil

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Nishino, Takafumi

    2011-01-01

    A numerical study over a nominally two-dimensional circulation control airfoil is performed using a large-eddy simulation code and two Reynolds-averaged Navier-Stokes codes. Different Coanda jet blowing conditions are investigated. In addition to investigating the influence of grid density, a comparison is made between incompressible and compressible flow solvers. The incompressible equations are found to yield negligible differences from the compressible equations up to at least a jet exit Mach number of 0.64. The effects of different turbulence models are also studied. Models that do not account for streamline curvature effects tend to predict jet separation from the Coanda surface too late, and can produce non-physical solutions at high blowing rates. Three different turbulence models that account for streamline curvature are compared with each other and with large eddy simulation solutions. All three models are found to predict the Coanda jet separation location reasonably well, but one of the models predicts specific flow field details near the Coanda surface prior to separation much better than the other two. All Reynolds-averaged Navier-Stokes computations produce higher circulation than large eddy simulation computations, with different stagnation point location and greater flow acceleration around the nose onto the upper surface. The precise reasons for the higher circulation are not clear, although it is not solely a function of predicting the jet separation location correctly.

  8. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities

    PubMed Central

    Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin

    2013-01-01

    Previous research has found a relationship between individual differences in children’s precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the present study we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of two years. Additionally, at the last time point, we tested children’s informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3; Ginsburg & Baroody, 2003). We found that children’s numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned, non-symbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. PMID:24076381

  9. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities.

    PubMed

    Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin

    2013-12-01

Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2 years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematics reasoning that children come to master through instruction. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Verifying the error bound of numerical computation implemented in computer systems

    DOEpatents

    Sawada, Jun

    2013-03-12

A verification tool receives a finite-precision definition for an approximation of an infinite-precision numerical function implemented in a processor, in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite-precision numerical function. The verification tool splits the domain into at least two segments, each non-overlapping with any other segment, and converts, for each segment, the polynomial of bounded functions into a simplified formula comprising a polynomial, an inequality, and a constant for the selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment, and reports the segments that violate a bounding condition.
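The split-and-bound strategy the patent describes can be caricatured in a few lines: partition the verification domain into segments, bound the approximation error on each, and report violations. Here dense sampling stands in for the tool's rigorous bound computation, and the polynomial, domain, and bound are illustrative:

```python
import numpy as np

def check_error_bound(f, poly_coeffs, domain, n_segments, bound):
    """Split the domain into non-overlapping segments and flag those on
    which the polynomial approximation of f exceeds the error bound.
    (Dense sampling stands in for a rigorous bound computation.)"""
    lo, hi = domain
    edges = np.linspace(lo, hi, n_segments + 1)
    violations = []
    for a, b in zip(edges[:-1], edges[1:]):
        xs = np.linspace(a, b, 1000)
        err = np.max(np.abs(f(xs) - np.polyval(poly_coeffs, xs)))
        if err > bound:
            violations.append((a, b))
    return violations

# Degree-7 Taylor polynomial of sin about 0: accurate near 0, worse far out
coeffs = [-1/5040, 0, 1/120, 0, -1/6, 0, 1, 0]   # highest power first
bad = check_error_bound(np.sin, coeffs, (0.0, 4.0), 8, bound=1e-3)
print(len(bad) > 0)  # → True: the outer segments violate the bound
```

Only the segments far from the expansion point are flagged, which is exactly the per-segment reporting behavior the claim describes.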

  11. ADRC for spacecraft attitude and position synchronization in libration point orbits

    NASA Astrophysics Data System (ADS)

    Gao, Chen; Yuan, Jianping; Zhao, Yakun

    2018-04-01

This paper addresses the problem of spacecraft attitude and position synchronization between a leader and a follower in libration point orbits. Using dual quaternions, the dimensionless relative coupled dynamical model is derived with computation efficiency and accuracy in mind. A model-independent dimensionless cascade pose-feedback active disturbance rejection controller is then designed for the spacecraft attitude and position tracking problems, considering parameter uncertainties and external disturbances. Numerical simulations for the final approach phase of spacecraft rendezvous and docking and of formation flying are performed, and the results show high-precision tracking errors and satisfactory convergence rates under bounded control torque and force, which validates the proposed approach.

  12. Towards Commissioning the Fermilab Muon G-2 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stratakis, D.; Convery, M. E.; Morgan, J. P.

    2017-01-01

Starting this summer, Fermilab will host a key experiment dedicated to the search for signals of new physics: the Fermilab Muon g-2 Experiment. Its aim is to precisely measure the anomalous magnetic moment of the muon. In full operation, in order to avoid contamination, the newly born secondary beam is injected into a 505 m long Delivery Ring (DR), wherein it makes several revolutions before being sent to the experiment. Part of the commissioning scenario will exercise a running mode in which the passage through the DR is skipped. With the aid of numerical simulations, we provide estimates of the expected performance.

  13. Numerical simulation and experiment on effect of ultrasonic in polymer extrusion processing

    NASA Astrophysics Data System (ADS)

    Wan, Yue; Fu, ZhiHong; Wei, LingJiao; Zang, Gongzheng; Zhang, Lei

    2018-01-01

    The influence of ultrasonic waves on the flow-field parameters and on the precision of extruded products is studied. First, the effects of vibration power on the average outlet velocity, the average viscosity over the die section, the average shear rate, and the die inlet pressure were studied using the Polyflow software. Second, the effects of ultrasonic strength on die temperature and die pressure drop were studied experimentally at different head temperatures and screw speeds. Finally, the relationship between die pressure and extrusion flow rate under different ultrasonic powers was studied experimentally.

  14. Monte Carlo simulation of a noisy quantum channel with memory.

    PubMed

    Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco

    2015-10-01

    The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.
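The record above applies a Metropolis-Hastings Monte Carlo approach to estimate an entropy numerically. As a generic illustration of that sampling technique only (the paper's channel model and parallel implementation are not reproduced here), the sketch below runs a random-walk Metropolis-Hastings chain on a standard normal target and estimates a simple expectation:

```python
# Minimal Metropolis-Hastings sketch (illustrative only; the cited work
# applies MH to the output-state distribution of a Markov-correlated
# depolarizing channel).
import math
import random

def metropolis_hastings(log_p, x0, n_steps, step=1.0, seed=1):
    """Sample a density proportional to exp(log_p) via a Gaussian random walk."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(cand)/p(x)), computed in log space.
        if math.log(rng.random() + 1e-300) < log_p(cand) - log_p(x):
            x = cand
        samples.append(x)
    return samples

# Target: standard normal; estimate E[x^2], which should be close to 1.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 30000)
second_moment = sum(s * s for s in samples) / len(samples)
```

The same machinery, with a different target density, underlies entropy estimates of the kind described in the abstract; parallelization then amounts to running many independent chains.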

  15. Population geography of calamity: the sixteenth and seventeenth century Yucatan.

    PubMed

    Whitmore, T M

    1996-12-01

    "This historical demography for Yucatan [Mexico] at the time of Spanish contact presents a number of problems. There were multiple Maya-Spaniard contacts before the Spaniards established a continuous presence after the protracted conquest of the Yucatan. The area of Yucatan that was controlled by the Spanish at any one time is not precisely known, and Yucatan offered 'refuge' areas where the indigenous population could avoid Spanish control and counts. These issues are addressed here by considering different regions of the Yucatan and using a numerical computer simulation to generate new estimates of population that result from migration, warfare, agricultural calamity, and epidemics." excerpt

  16. Generalized three-dimensional lattice Boltzmann color-gradient method for immiscible two-phase pore-scale imbibition and drainage in porous media

    NASA Astrophysics Data System (ADS)

    Leclaire, Sébastien; Parmigiani, Andrea; Malaspinas, Orestis; Chopard, Bastien; Latt, Jonas

    2017-03-01

    This article presents a three-dimensional numerical framework for the simulation of immiscible fluid-fluid compounds in complex geometries, based on the multiple-relaxation-time lattice Boltzmann method to model the fluid dynamics and the color-gradient approach to model multicomponent flow interaction. New lattice weights for the lattices D3Q15, D3Q19, and D3Q27 that improve the Galilean invariance of the color-gradient model, as well as weights for modeling the interfacial tension, are derived and provided in the Appendix. In particular, the presented method proposes an approach to model the interaction between the fluid compound and the solid, and to maintain a precise contact angle between the two-component interface and the wall. Contrary to previous approaches proposed in the literature, this method yields accurate solutions even in complex geometries and does not suffer from numerical artifacts like nonphysical mass transfer along the solid wall, which is crucial for modeling imbibition-type problems. The article also proposes an approach to model inflow and outflow boundaries with the color-gradient method by generalizing the regularized boundary conditions. The numerical framework is first validated for three-dimensional (3D) stationary-state (Jurin's law) and time-dependent (Washburn's law and capillary waves) problems. Then, the usefulness of the method for practical problems of pore-scale flow imbibition and drainage in porous media is demonstrated. Through the simulation of nonwetting displacement in two-dimensional random porous-media networks, we show that the model properly reproduces three main invasion regimes (stable displacement, capillary fingering, and viscous fingering) as well as the saturating-zone transition between these regimes. Finally, the ability to simulate immiscible two-component flow imbibition and drainage is validated, with excellent results, by numerical simulations in a Berea sandstone, a frequently used benchmark in this field, whose complex geometry originates from a 3D scan of a porous sandstone. The methods presented in this article were implemented in the open-source PALABOS library, a general C++ matrix-based library well adapted to massively parallel fluid-flow computation.
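The stationary-state validation mentioned above targets Jurin's law for capillary rise, h = 2γ cos(θ) / (ρ g r). As a quick reference computation (values below are textbook properties of water, not the paper's simulation parameters):

```python
# Jurin's law: equilibrium capillary rise in a tube of radius r (SI units).
# Used here only to show the analytic benchmark the lattice Boltzmann
# simulations are validated against.
import math

def jurin_height(gamma, theta_deg, rho, g, r):
    """Capillary rise height h = 2*gamma*cos(theta) / (rho * g * r)."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * r)

# Water at room temperature in a fully wetting 0.5 mm radius tube:
h = jurin_height(gamma=0.0728, theta_deg=0.0, rho=1000.0, g=9.81, r=0.5e-3)
```

For these values the rise is about 3 cm; a simulation that maintains a precise contact angle at the wall should reproduce this height at stationary state.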

  17. Investigating dynamic underground coal fires by means of numerical simulation

    NASA Astrophysics Data System (ADS)

    Wessling, S.; Kessels, W.; Schmidt, M.; Krause, U.

    2008-01-01

    Uncontrolled burning or smoldering of coal seams, otherwise known as coal fires, represents a worldwide natural hazard. Efficient application of fire-fighting strategies and prevention of mining hazards require that the temporal evolution of fire propagation can be predicted with sufficient precision. A promising approach for investigating this temporal evolution is the numerical simulation of the physical and chemical processes involved. In the context of the Sino-German Research Initiative `Innovative Technologies for Detection, Extinction and Prevention of Coal Fires in North China,' a numerical model has been developed for simulating underground coal fires at large scales. The objective of such modelling is to investigate observables, like the fire propagation rate, with respect to the thermal and hydraulic parameters of the adjacent rock. In the model, hydraulic, thermal and chemical processes are accounted for, with the last process complemented by laboratory experiments. Numerically, one key challenge in modelling coal fires is to circumvent the small time steps resulting from the resolution of fast reaction kinetics at high temperatures. In our model, this problem is solved by means of an `operator-splitting' approach, in which transport and reactive processes of oxygen are calculated independently. At high temperatures, operator-splitting has the decisive advantage of allowing the global time step to be chosen according to oxygen transport, so that time-consuming simulation of the fast reaction kinetics is avoided. Also, because the oxygen distribution within a coal fire has been shown to remain constant over long periods, an additional extrapolation algorithm for the coal concentration has been applied. In this paper, we demonstrate that the operator-splitting approach is particularly suitable for investigating the influence of the hydraulic parameters of adjacent rocks on coal fire propagation.
    A study shows that dynamic propagation strongly depends on permeability variations. For the assumed model, no fire exists for permeabilities k < 10⁻¹⁰ m², whereas the fire propagation velocity reaches 340 m a⁻¹ for k = 10⁻⁸ m² and drops below 3 m a⁻¹ for k = 5 × 10⁻¹⁰ m². Additionally, strong temperature variations are observed over the permeability range 5 × 10⁻¹⁰ m² < k < 10⁻⁸ m².
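The operator-splitting idea described above, advancing slow transport and stiff reaction separately so the global step can follow transport, can be sketched in one dimension. This is an illustrative toy (a diffusing oxygen column with one fast-consumption "fire" cell), not the authors' coal-fire code:

```python
# Toy operator splitting: one explicit diffusion (transport) step sized by
# the transport CFL, followed by an exact solve of the stiff reaction
# dc/dt = -k*c, so no tiny reaction sub-steps are needed.
import math

def diffusion_step(c, D, dx, dt):
    """Explicit diffusion update with zero-flux boundaries."""
    n = len(c)
    out = []
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]
        right = c[i + 1] if i < n - 1 else c[i]
        out.append(c[i] + D * dt / dx ** 2 * (left - 2 * c[i] + right))
    return out

def reaction_step(c, k, dt):
    """Exact update for dc/dt = -k*c over the full transport step dt."""
    return [ci * math.exp(-ki * dt) for ci, ki in zip(c, k)]

c = [1.0] * 10               # oxygen concentration along a 1-D column
k = [0.0] * 10
k[5] = 50.0                  # stiff oxygen consumption at the 'fire' cell
dt, dx, D = 0.1, 1.0, 1.0    # global step chosen for transport, not reaction
for _ in range(20):
    c = reaction_step(diffusion_step(c, D, dx, dt), k, dt)
```

Even though k*dt = 5 would make an explicit reaction update wildly unstable, the split scheme stays stable and bounded because the stiff part is integrated exactly.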

  18. Precise determination of the heat delivery during in vivo magnetic nanoparticle hyperthermia with infrared thermography

    NASA Astrophysics Data System (ADS)

    Rodrigues, Harley F.; Capistrano, Gustavo; Mello, Francyelli M.; Zufelato, Nicholas; Silveira-Lacerda, Elisângela; Bakuzis, Andris F.

    2017-05-01

    Non-invasive and real-time monitoring of the heat delivery during magnetic nanoparticle hyperthermia (MNH) is of fundamental importance to predict clinical outcomes for cancer treatment. Infrared thermography (IRT) can determine the surface temperature due to three-dimensional heat delivery inside a subcutaneous tumor, an argument that is supported by numerical simulations. However, for precise temperature determination, it is of crucial relevance to use a correct experimental configuration. This work reports an MNH study using a sarcoma 180 murine tumor containing 3.9 mg of intratumorally injected manganese-ferrite nanoparticles. MNH was performed at low field amplitude and non-uniform field configuration. Five 30 min in vivo magnetic hyperthermia experiments were performed, monitoring the surface temperature with a fiber optical sensor and thermal camera at distinct angles with respect to the animal’s surface. The results indicate that temperature errors as large as 7 °C can occur if the experiment is not properly designed. A new IRT error model is found to explain the data. More importantly, we show how to precisely monitor temperature with IRT during hyperthermia, which could positively impact heat dosimetry and clinical planning.

  19. A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers

    NASA Technical Reports Server (NTRS)

    Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen

    2016-01-01

    We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
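The abstract above notes that no single error source sets the floor; the instrumental precision comes from combining many individual terms. A common way to express such a budget is a root-sum-square (RSS) of independent 1-sigma terms. The sketch below is only illustrative: the term names and values are placeholders, not NEID's actual budget entries.

```python
# Root-sum-square combination of independent error-budget terms.
# Term names and values are made-up placeholders for illustration.
import math

def rss(terms):
    """Combine independent 1-sigma error terms in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

budget_cm_s = {
    "photon noise": 40.0,            # cm/s, illustrative only
    "wavelength calibration": 10.0,
    "detector effects": 12.0,
    "telluric contamination": 8.0,
}
total = rss(budget_cm_s.values())
```

Because the terms add in quadrature, the total exceeds the largest single term but stays well below the plain sum, which is why shrinking the dominant term matters most.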

  20. Precise determination of the heat delivery during in vivo magnetic nanoparticle hyperthermia with infrared thermography.

    PubMed

    Rodrigues, Harley F; Capistrano, Gustavo; Mello, Francyelli M; Zufelato, Nicholas; Silveira-Lacerda, Elisângela; Bakuzis, Andris F

    2017-05-21

    Non-invasive and real-time monitoring of the heat delivery during magnetic nanoparticle hyperthermia (MNH) is of fundamental importance to predict clinical outcomes for cancer treatment. Infrared thermography (IRT) can determine the surface temperature due to three-dimensional heat delivery inside a subcutaneous tumor, an argument that is supported by numerical simulations. However, for precise temperature determination, it is of crucial relevance to use a correct experimental configuration. This work reports an MNH study using a sarcoma 180 murine tumor containing 3.9 mg of intratumorally injected manganese-ferrite nanoparticles. MNH was performed at low field amplitude and non-uniform field configuration. Five 30 min in vivo magnetic hyperthermia experiments were performed, monitoring the surface temperature with a fiber optical sensor and thermal camera at distinct angles with respect to the animal's surface. The results indicate that temperature errors as large as 7 °C can occur if the experiment is not properly designed. A new IRT error model is found to explain the data. More importantly, we show how to precisely monitor temperature with IRT during hyperthermia, which could positively impact heat dosimetry and clinical planning.

  1. Rapid inundation estimates at harbor scale using tsunami wave heights offshore simulation and coastal amplification laws

    NASA Astrophysics Data System (ADS)

    Gailler, A.; Loevenbruck, A.; Hebert, H.

    2013-12-01

    Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where tsunami waves are most amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is exacerbated when detailed grids are required for precise modeling of the coastline response of an individual harbor. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami warning at the scale of the western Mediterranean and NE Atlantic basins. We present here a preliminary work that performs quick estimates of the inundation at individual harbors from these high-sea forecasting tsunami simulations. The method involves an empirical correction based on theoretical amplification laws (either Green's or Synolakis' laws). The main limitation is that its application to a given coastal area would require a large database of previous observations, in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gage records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, we use a set of synthetic mareograms, calculated for both hypothetical and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids of increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors).
    Nonlinear shallow-water tsunami modeling performed on a single 2' coarse bathymetric grid is compared to the values given by time-consuming nested-grid simulations (and to observations when available), in order to check to what extent the simple approach based on the amplification laws can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (using Green's law), or the mean bathymetric slope to consider near the studied coast (when using Synolakis' law).
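Green's law, the first amplification law mentioned above, scales a shoaling wave height with the fourth root of the depth ratio. A minimal sketch of how an offshore model height is extrapolated toward the coast (the depth and height values are illustrative, not the study's calibrated parameters):

```python
# Green's law: h2 = h1 * (d1 / d2) ** 0.25, i.e. wave height grows as the
# water depth decreases, here used to extrapolate a coarse offshore model
# height to a shallow target depth without any fine coastal grid.
def greens_law(h_offshore, depth_offshore, depth_coastal):
    """Amplify an offshore tsunami height to a shallower target depth."""
    return h_offshore * (depth_offshore / depth_coastal) ** 0.25

# A 0.5 m wave computed at 100 m depth, extrapolated to 1 m depth:
h_coast = greens_law(0.5, 100.0, 1.0)
```

The empirical correction discussed in the record then adjusts this raw extrapolation with parameters fitted to the synthetic mareogram database.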

  2. Computationally efficient simulation of unsteady aerodynamics using POD on the fly

    NASA Astrophysics Data System (ADS)

    Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando

    2016-12-01

    Modern industrial aircraft design requires a large number of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large number of simulations is necessary, the CFD cost is simply unaffordable in an industrial production environment and must be significantly reduced. Thus, a less expensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced-order model that combines ‘on the fly’ a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited number of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (compared with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high Reynolds numbers, and compressibility. A first step in this direction is presented here, considering the unsteady compressible, two-dimensional flow around an oscillating airfoil using a CFD solver on a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.
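The core POD ingredient named above, extracting dominant modes from solver snapshots, is commonly computed via an SVD of the snapshot matrix. The sketch below illustrates only that building block (not the adaptive "on the fly" machinery or Galerkin projection), on synthetic rank-one snapshot data:

```python
# POD via SVD of a snapshot matrix: columns are solution snapshots, and the
# leading left singular vectors are the POD modes. Illustrative only.
import numpy as np

def pod_modes(snapshots, n_modes):
    """Return the leading POD modes and all singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes], s

# Synthetic snapshots: one spatial profile with a time-varying amplitude,
# so a single POD mode captures the whole dataset.
x = np.linspace(0.0, 1.0, 50)
snaps = np.column_stack([np.sin(np.pi * x) * (1 + 0.1 * t) for t in range(8)])
modes, sv = pod_modes(snaps, 1)

# Project onto the retained mode and reconstruct.
recon = modes @ (modes.T @ snaps)
err = np.linalg.norm(snaps - recon) / np.linalg.norm(snaps)
```

In a reduced-order model, the governing equations are then Galerkin-projected onto these few modes, which is where the large acceleration factors quoted in the abstract come from.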

  3. Evaluation of the entropy consistent euler flux on 1D and 2D test problems

    NASA Astrophysics Data System (ADS)

    Roslan, Nur Khairunnisa Hanisah; Ismail, Farzad

    2012-06-01

    Most CFD simulations yield good predictions of pressure and velocity when compared to experimental data. Unfortunately, these results will most likely not adhere to the second law of thermodynamics, thereby compromising the authenticity of the predicted data. Currently, the test of a good CFD code is to check how much entropy is generated in a smooth flow and to hope that the numerical entropy produced is of the correct sign when a shock is encountered. Herein, a shock-capturing code written in C++ based on a recent entropy consistent Euler flux is developed to simulate 1D and 2D flows. Unlike other finite volume schemes in commercial CFD codes, this entropy consistent (EC) flux function precisely satisfies the discrete second law of thermodynamics. The EC flux has an entropy-conserving part, preserving entropy for smooth flows, and a numerical diffusion part that accurately produces the proper amount of entropy, consistent with the second law. Several numerical simulations of the entropy consistent flux have been tested on two-dimensional test cases. The first case is a Mach 3 flow over a forward-facing step. The second case is a flow over a NACA 0012 airfoil, while the third case is a hypersonic flow passing over a 2D cylinder. Local flow quantities such as velocity and pressure are analyzed and then compared mainly with the Roe flux. The results herein show that the EC flux does not capture the unphysical rarefaction shock, unlike the Roe flux, and does not easily succumb to the carbuncle phenomenon. In addition, the EC flux maintains good performance in cases where the Roe flux is known to be superior.

  4. Towards precision constraints on gravity with the Effective Field Theory of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Bose, Benjamin; Koyama, Kazuya; Lewandowski, Matthew; Vernizzi, Filippo; Winther, Hans A.

    2018-04-01

    We compare analytical computations with numerical simulations for dark-matter clustering, in general relativity and in the normal branch of DGP gravity (nDGP). Our analytical framework is the Effective Field Theory of Large-Scale Structure (EFTofLSS), which we use to compute the one-loop dark-matter power spectrum, including the resummation of infrared bulk displacement effects. We compare this to a set of 20 COLA simulations at redshifts z = 0, z = 0.5, and z = 1, and fit the free parameter of the EFTofLSS, called the speed of sound, in both ΛCDM and nDGP at each redshift. At one loop at z = 0, the reach of the EFTofLSS is k_reach ≈ 0.14 Mpc⁻¹ for both ΛCDM and nDGP. Along the way, we compare two different infrared resummation schemes and two different treatments of the time dependence of the perturbative expansion, concluding that they agree to approximately 1% over the scales of interest. Finally, we use the ratio of the COLA power spectra to make a precision measurement of the difference between the speeds of sound in ΛCDM and nDGP, and verify that this is proportional to the modification of the linear coupling constant of the Poisson equation.

  5. Precision in ground-based solar polarimetry: simulating the role of adaptive optics.

    PubMed

    Krishnappa, Nagaraju; Feller, Alex

    2012-11-20

    Accurate measurement of polarization in spectral lines is important for the reliable inference of magnetic fields on the Sun. For ground-based observations, polarimetric precision is severely limited by the presence of Earth's atmosphere. Atmospheric turbulence (seeing) produces signal fluctuations which, combined with the nonsimultaneous nature of the measurement process, cause intermixing of the Stokes parameters known as seeing-induced polarization cross talk. Previous analysis of this effect [Appl. Opt. 43, 3817 (2004)] suggests that cross talk is reduced not only by an increase in modulation frequency but also by compensating the seeing-induced image aberrations with an adaptive optics (AO) system. However, in those studies the effect of image aberrations of higher order than those corrected by the AO system was not taken into account. We present in this paper an analysis of seeing-induced cross talk in the presence of higher-order image aberrations through numerical simulation. In this analysis we find that the amount of cross talk among Stokes parameters is practically independent of the degree of image aberration corrected by an AO system. However, higher-order AO corrections increase the signal-to-noise ratio by reducing seeing-caused image smearing. Further, we find, in agreement with the earlier results, that cross talk is reduced considerably by increasing the modulation frequency.

  6. Faster and more accurate transport procedures for HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.

    2010-12-01

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water.
The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  7. Faster and more accurate transport procedures for HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, T.C., E-mail: Tony.C.Slaba@nasa.go; Blattnig, S.R., E-mail: Steve.R.Blattnig@nasa.go; Badavi, F.F., E-mail: Francis.F.Badavi@nasa.go

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  8. Adaptive temporal refinement in injection molding

    NASA Astrophysics Data System (ADS)

    Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek

    2018-05-01

    Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in the spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to higher simulation precision while preserving computational efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used to verify our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves to provide reliable timing measurements and to assess the efficiency of the filling simulation of complex 3D molds when applying adaptive temporal refinement.

  9. Free energy calculation of single molecular interaction using Jarzynski's identity method: the case of HIV-1 protease inhibitor system

    NASA Astrophysics Data System (ADS)

    Li, De-Chang; Ji, Bao-Hua

    2012-06-01

    Jarzynski's identity (JI) method has been suggested as a promising tool for reconstructing the free energy landscape of biomolecular interactions in numerical simulations and experiments. However, the JI method has not yet been well tested in complex systems such as ligand-receptor molecular pairs. In this paper, we applied a large number of steered molecular dynamics (SMD) simulations to dissociate the protease of human immunodeficiency type I virus (HIV-1 protease) and its inhibitors. We showed that, because of the intrinsic complexity of the ligand-receptor system, the energy barrier predicted by the JI method at high pulling rates is much higher than experimental results. However, with a slower pulling rate and fewer switch times of simulations, the predictions of the JI method approach the experimental values. These results suggest that the JI method is more appropriate for reconstructing the free energy landscape using data taken from experiments, since the pulling rates used in experiments are often much slower than those in SMD simulations. Furthermore, we showed that a higher loading stiffness produces a higher precision in the calculated energy landscape, because it yields a lower mean value and narrower bandwidth of the work distribution in SMD simulations.
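Jarzynski's identity estimates an equilibrium free-energy difference from repeated nonequilibrium pulling works via ΔF = -kT ln⟨exp(-W/kT)⟩. A minimal sketch of the estimator (the work values below are made-up, not the HIV-1 protease data):

```python
# Jarzynski free-energy estimator from a set of nonequilibrium work values.
# Illustrative only: works are invented numbers in units where kT = 1.
import math

def jarzynski_free_energy(works, kT):
    """Delta F = -kT * ln( average of exp(-W/kT) over trajectories )."""
    avg = sum(math.exp(-w / kT) for w in works) / len(works)
    return -kT * math.log(avg)

works = [4.1, 3.8, 5.0, 4.4, 3.9]        # pulling works from repeated runs
dF = jarzynski_free_energy(works, kT=1.0)
mean_W = sum(works) / len(works)         # second law guarantees dF <= <W>
```

The exponential average is dominated by rare low-work trajectories, which is why, as the abstract notes, fast pulling with few repeats tends to overestimate barriers while slower pulling converges toward experiment.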

  10. An in-depth analysis of temperature effect on DIBL in UTBB FD SOI MOSFETs based on experimental data, numerical simulations and analytical models

    NASA Astrophysics Data System (ADS)

    Pereira, A. S. N.; de Streel, G.; Planes, N.; Haond, M.; Giacomini, R.; Flandre, D.; Kilchytska, V.

    2017-02-01

    The Drain Induced Barrier Lowering (DIBL) behavior in Ultra-Thin Body and Buried oxide (UTBB) transistors is investigated in detail in the temperature range up to 150 °C, for the first time to the best of our knowledge. The analysis is based on experimental data, physical device simulation, compact model (SPICE) simulation and previously published models. Contrary to the MASTAR prediction, experiments reveal a DIBL increase with temperature. Physical device simulations of different thin-film fully-depleted (FD) devices outline the generality of such behavior. SPICE simulations, with the UTSOI DK2.4 model, only partially adhere to the experimental trends. Several analytical models available in the literature are assessed for DIBL vs. temperature prediction. Although it is the closest to experiments, Fasarakis' model overestimates the DIBL(T) dependence for the shortest devices and underestimates it for the upsized gate lengths frequently used in ultra-low-voltage (ULV) applications. This model is improved in our work by introducing a temperature-dependent inversion charge at threshold. The improved model shows very good agreement with experimental data, with a high gain in precision for the gate lengths under test.

  11. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary-approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.

  12. Validation of the Six Sigma Z-score for the quality assessment of clinical laboratory timeliness.

    PubMed

    Ialongo, Cristiano; Bernardini, Sergio

    2018-03-28

    The International Federation of Clinical Chemistry and Laboratory Medicine has recently introduced turnaround time (TAT) as a mandatory quality indicator for the postanalytical phase. The classic TAT indicators, namely the average, median, 90th percentile and proportion of acceptable tests (PAT), have been in use for almost 40 years and to date remain the mainstay for gauging laboratory timeliness. In this study, we investigated the performance of the Six Sigma Z-score, which was previously introduced as a device for the quantitative assessment of timeliness. A numerical simulation was obtained by modeling an actual TAT data set with the log-logistic probability density function (PDF). Five thousand replicates for each size of the artificial TAT random sample (n=20, 50, 250 and 1000) were generated, and different laboratory conditions were simulated by manipulating the PDF to generate more or less variable data. The Z-score and the classic TAT indicators were assessed for precision (%CV), robustness toward right-tailing (precision at different sample variability), sensitivity and specificity. The Z-score showed sensitivity and specificity comparable to PAT (≈80% with n≥250), but superior precision, which remained within 20% for moderately small samples (n≥50); furthermore, the Z-score was less affected by the value of the cutoff used for setting the acceptable TAT, as well as by the sample variability reflected in the magnitude of right-tailing. The Z-score was a valid indicator of laboratory timeliness and a suitable device to improve as well as to maintain the achieved quality level.
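
    The simulation design above can be sketched in a few lines of standard-library Python: draw TAT samples from a log-logistic distribution by inverse-CDF sampling, then compute the classic indicators and a Six Sigma style Z-score. The parameter values and the Z formula (inverse normal of PAT plus the conventional 1.5-sigma shift) are illustrative assumptions, not the study's exact procedure:

```python
import random
from statistics import NormalDist, median

def sample_loglogistic(alpha, beta, n, rng):
    """Draw n turnaround times via the log-logistic inverse CDF.
    alpha is the scale (median TAT); larger beta means less right-tailing."""
    return [alpha * (u / (1 - u)) ** (1 / beta)
            for u in (rng.random() for _ in range(n))]

def tat_indicators(tats, cutoff):
    """Classic TAT indicators plus an illustrative Six Sigma Z-score:
    Z = Phi^-1(PAT) + 1.5 (the conventional 1.5-sigma long-term shift)."""
    tats = sorted(tats)
    n = len(tats)
    pat = sum(t <= cutoff for t in tats) / n   # proportion of acceptable tests
    p90 = tats[int(0.90 * n)]                  # 90th percentile
    z = NormalDist().inv_cdf(min(max(pat, 1e-9), 1 - 1e-9)) + 1.5
    return {"median": median(tats), "p90": p90, "PAT": pat, "Z": z}

rng = random.Random(42)
tats = sample_loglogistic(alpha=45.0, beta=6.0, n=1000, rng=rng)  # minutes
ind = tat_indicators(tats, cutoff=60.0)
print(f"PAT={ind['PAT']:.2f}  Z={ind['Z']:.2f}")
```

    Repeating this draw 5000 times per sample size and recording the spread of each indicator reproduces the kind of %CV comparison the study reports.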

  13. Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD

    NASA Astrophysics Data System (ADS)

    Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.

    2017-12-01

    We present a three-dimensional (3D) leapfrog Alternating Direction Implicit Finite-Difference Time-Domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both traditional explicit FDTD and ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so a longer time step can be used. Compared with ADI-FDTD, we reduce the number of equations from 12 to 6, which makes the leapfrog ADI-FDTD method easier to apply in general simulations. First, we determine the initial conditions, which are adopted from the existing method presented by Wang and Tripp (1993). Second, we discretize the Maxwell equations into new finite-difference equations using the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time steps while retaining the unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; since different absorbing parameters affect the absorbing ability, suitable parameters were found after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107 × 107 × 53 grid points with a conductivity of 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the calculation time decreases nearly fourfold and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
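
    The CFL restriction that the leapfrog ADI-FDTD scheme escapes is easy to state concretely: for explicit 3D FDTD the time step must satisfy dt ≤ 1 / (c √(1/dx² + 1/dy² + 1/dz²)). A small sketch (the 25 m grid spacing is an illustrative value, not taken from the paper):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def cfl_max_dt(dx, dy, dz, c=C0):
    """Largest stable time step for explicit 3D FDTD (the CFL bound).
    Leapfrog ADI-FDTD is unconditionally stable, so its dt may exceed this."""
    return 1.0 / (c * math.sqrt(1 / dx**2 + 1 / dy**2 + 1 / dz**2))

# Example: a coarse 25 m airborne-EM grid (illustrative numbers)
dt_limit = cfl_max_dt(25.0, 25.0, 25.0)
print(f"explicit-FDTD dt limit: {dt_limit:.3e} s")
```

    Halving the grid spacing halves the bound, which is why late-time airborne-EM responses on fine grids become expensive for explicit FDTD while implicit schemes can keep a fixed, larger step.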

  14. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation are therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation, by estimating the numerical approximation error, computational-model-induced errors and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.

  15. Track reconstruction in the inhomogeneous magnetic field for Vertex Detector of NA61/SHINE experiment at CERN SPS

    NASA Astrophysics Data System (ADS)

    Merzlaya, Anastasia; NA61/SHINE Collaboration

    2017-01-01

    The heavy-ion programme of the NA61/SHINE experiment at CERN SPS is expanding to allow precise measurements of exotic particles with decay lengths of a few hundred microns. A Vertex Detector for open charm measurements at the SPS is being constructed by the NA61/SHINE Collaboration to meet the challenges of high spatial resolution of secondary vertices and high efficiency of track registration. This task is solved by the application of coordinate-sensitive CMOS Monolithic Active Pixel Sensors with extremely low material budget in the new Vertex Detector. A small-acceptance version of the Vertex Detector is being tested this year; later it will be expanded to a large-acceptance version. Simulation studies will be presented. A method of track reconstruction in the inhomogeneous magnetic field for the Vertex Detector was developed and implemented. Numerical calculations show the possibility of high-precision measurements in heavy-ion collisions of strange and multi-strange particles, as well as heavy flavours such as charmed particles.

  16. An analysis of the Hubble Space Telescope fine guidance sensor fine lock mode

    NASA Technical Reports Server (NTRS)

    Taff, L. G.

    1991-01-01

    There are two guiding modes of the Hubble Space Telescope (HST) used for the acquisition of astronomical data by one of its six scientific instruments. The more precise one is called Fine Lock. Command and control problems in the onboard electronics have limited Fine Lock to brighter stars, V less than 13.0 mag, instead of fulfilling its goal of V = 14.5 mag. Consequently, the less precise guiding mode of Coarse Track (approximately 40 milliarcseconds) has to be used fairly frequently. Indeed, almost half of the scientific observations to be made with the HST will be compromised. The only realistic and extensive simulations of the Fine Lock guidance mode are reported here. The theoretical analysis underlying the Monte Carlo experiments and the numerical computations clearly shows both that the control electronics are severely under-engineered and how to adjust the various control parameters to successfully extend Fine Lock guiding performance back to V = 14.0 mag and sometimes beyond.

  17. Performance Analysis for the New g-2 Experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stratakis, Diktys; Convery, Mary; Crmkovic, J.

    2016-06-01

    The new g-2 experiment at Fermilab aims to measure the muon anomalous magnetic moment to a precision of ±0.14 ppm - a fourfold improvement over the 0.54 ppm precision obtained in the g-2 BNL E821 experiment. Achieving this goal requires the delivery of highly polarized 3.094 GeV/c muons with a narrow ±0.5% Δp/p acceptance to the g-2 storage ring. In this study, we describe a muon capture and transport scheme that should meet this requirement. First, we present the conceptual design of our proposed scheme, wherein we describe its basic features. Then, we detail its performance numerically by simulating the pion production in the (g-2) production target, the muon collection by the downstream beamline optics, as well as the beam polarization and spin-momentum correlation up to the storage ring. The sensitivity of the performance of our proposed channel to key parameters such as magnet apertures and magnet positioning errors is analyzed.

  18. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA-assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system are experimentally evaluated on a linear actuator against a significantly more expensive optical encoder.
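
    The PCA stage described above reduces the 9-dimensional sensor output to a few principal components before the ANN mapping. A minimal NumPy sketch of that reduction, on synthetic stand-in data (the dipole-like field model, noise level, and geometry here are invented for illustration; the paper uses measured fields and an ANN stage not shown):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 9-sensor field readings along a 1-D travel axis (synthetic data)
positions = np.linspace(0.0, 0.1, 500)                  # metres of travel
sensors = np.stack([1.0 / (0.02 + (positions - x0) ** 2)  # dipole-like falloff
                    for x0 in np.linspace(0.0, 0.1, 9)], axis=1)
sensors += rng.normal(scale=0.5, size=sensors.shape)    # additive Gaussian noise

def pca_reduce(X, k):
    """Project the n x 9 sensor matrix onto its first k principal components
    via the SVD of the mean-centred data; also return the variance explained."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S[:k] ** 2 / np.sum(S ** 2)
    return Xc @ Vt[:k].T, explained

scores, explained = pca_reduce(sensors, k=3)
print(scores.shape, explained.sum())
```

    The reduced scores, rather than the raw 9-channel readings, would then be the inputs to the ANN position map, which is what makes the mapping computationally efficient.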

  19. 3D shape measurements with a single interferometric sensor for in-situ lathe monitoring

    NASA Astrophysics Data System (ADS)

    Kuschmierz, R.; Huang, Y.; Czarske, J.; Metschke, S.; Löffler, F.; Fischer, A.

    2015-05-01

    Temperature drifts, tool deterioration, unknown vibrations and spindle play are major effects that decrease the achievable precision of computerized numerically controlled (CNC) lathes and lead to shape deviations between the processed workpieces. Since currently no measurement system exists for fast, precise, in-situ 3D shape monitoring with keyhole access, much effort has to be made to simulate and compensate these effects. We therefore introduce an optical interferometric sensor for absolute 3D shape measurements, which was integrated into a working lathe. In accordance with the spindle rotational speed, a measurement rate of 2,500 Hz was achieved. In-situ absolute shape, surface profile and vibration measurements are presented. While thermal drifts of the sensor led to errors of several microns in the absolute shape, reference measurements with a coordinate machine show that the surface profile could be measured with an uncertainty below one micron. Additionally, the spindle play of 0.8 µm was measured with the sensor.

  20. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced-order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of a ROM, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROM. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and for a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated-mean-squared-error prediction algorithm while showing the same level of prediction accuracy.
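
    The adding-point idea can be illustrated with a bare-bones fuzzy c-means: cluster the sample points jointly with their ROM error, and place new samples near the centre of the high-error cluster. Everything below (the error surface, the cluster count, the selection rule) is an invented toy, not the paper's algorithm:

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and the membership
    matrix u (n x c), where u[i, j] is the degree to which sample i
    belongs to cluster j (each row sums to 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = u ** m
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centres, u

rng = np.random.default_rng(2)
pts = rng.random((200, 1))                     # normalized 1-D design space
err = np.exp(-((pts - 0.8) ** 2) / 0.005)      # toy ROM error peaking near x = 0.8
centres, u = fuzzy_cmeans(np.hstack([pts, err]), c=3)
worst = centres[np.argmax(centres[:, 1]), 0]   # centre of the high-error cluster
print(f"add samples near x = {worst:.2f}")
```

    The centre whose error coordinate is largest flags the low-precision region, which is where the adding-point strategy would place the next snapshots.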

  1. High Precision Thermal, Structural and Optical Analysis of an External Occulter Using a Common Model and the General Purpose Multi-Physics Analysis Tool Cielo

    NASA Technical Reports Server (NTRS)

    Hoff, Claus; Cady, Eric; Chainyk, Mike; Kissil, Andrew; Levine, Marie; Moore, Greg

    2011-01-01

    The efficient simulation of multidisciplinary thermo-opto-mechanical effects in precision deployable systems has for years been limited by numerical toolsets that do not necessarily share the same finite element basis, level of mesh discretization, data formats, or compute platforms. Cielo, a general purpose integrated modeling tool funded by the Jet Propulsion Laboratory and the Exoplanet Exploration Program, addresses shortcomings in the current state of the art via features that enable the use of a single, common model for thermal, structural and optical aberration analysis, producing results of greater accuracy, without the need for results interpolation or mapping. This paper will highlight some of these advances, and will demonstrate them within the context of detailed external occulter analyses, focusing on in-plane deformations of the petal edges for both steady-state and transient conditions, with subsequent optical performance metrics including intensity distributions at the pupil and image plane.

  2. Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning

    NASA Astrophysics Data System (ADS)

    Bradley, Ben K.

    Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits with that of existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions.
This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and orbit propagation, yielding savings in computation time and memory. Orbit propagation and position transformation simulations are analyzed to generate a complete set of recommendations for performing the ITRS/GCRS transformation for a wide range of needs, encompassing real-time on-board satellite operations and precise post-processing applications. In addition, a complete derivation of the ITRS/GCRS frame transformation time-derivative is detailed for use in velocity transformations between the GCRS and ITRS and is applied to orbit propagation in the rotating ITRS. EOP interpolation methods and ocean tide corrections are shown to impact the ITRS/GCRS transformation accuracy at the level of 5 cm and 20 cm on the surface of the Earth and at the Global Positioning System (GPS) altitude, respectively. The precession-nutation and EOP simplifications yield maximum propagation errors of approximately 2 cm and 1 m after 15 minutes and 6 hours in low-Earth orbit (LEO), respectively, while reducing computation time and memory usage. Finally, for orbit propagation in the ITRS, a simplified scheme is demonstrated that yields propagation errors under 5 cm after 15 minutes in LEO. This approach is beneficial for orbit determination based on GPS measurements. We conclude with a summary of recommendations on EOP usage and bias-precession-nutation implementations for achieving a wide range of transformation and propagation accuracies at several altitudes. This comprehensive set of recommendations allows satellite operators, astrodynamicists, and scientists to make informed decisions when choosing the best implementation for their application, balancing accuracy and computational complexity.

  3. Electric field numerical simulation of disc type electrostatic spinning spinneret

    NASA Astrophysics Data System (ADS)

    Wei, L.; Deng, ZL; Qin, XH; Liang, ZY

    2018-01-01

    Electrospinning is a new type of free-end spinning built on an electric field. Unlike the traditional single-needle spinneret, this study uses a new disc-type free-surface spinneret to produce multiple jets, which greatly improves the production efficiency of nanofibers. The electric-field distribution of the spinneret is crucial to the formation and trajectory of the jets. To probe the electric field intensity of the disc-type spinneret, the computational software Ansoft Maxwell 12 is adopted for a precise and intuitive analysis. The results show that the rounded cambered surface of the spinning solution at the edge of each layer of the spinneret, where the curvature is greatest, has the highest electric field intensity. By simulating the electric field distribution for different spinneret parameters, such as the number of layers and the height and radius of the spinneret, the influence of each parameter on electrostatic spinning is obtained.

  4. Application of Spontaneous Raman Scattering to the Flowfield in a Scramjet Combustor

    NASA Astrophysics Data System (ADS)

    Sander, T.; Sattelmayer, T.

    2002-07-01

    The investigation of the ignition and reaction of fuel injected into the combustor of a scramjet at a flight Mach number of 8 requires high-temperature test air at supersonic speed. One economical way to simulate these inlet conditions experimentally is the use of vitiators, which preheat the air by burning hydrogen. Downstream of the precombustor the flow is accelerated in a Laval nozzle to a Mach number of 2.15 and enters the combustor. For the numerical simulation of a supersonic reacting flow, precise information concerning the physical properties during ignition and reaction is required. Optical measurements are best suited for delivering this information, as they do not disturb the supersonic flow the way probes do and as their application is not limited by thermal stress. Raman scattering offers the possibility of measuring the static temperature and the concentrations of the majority species.

  5. Compact and controlled microfluidic mixing and biological particle capture

    NASA Astrophysics Data System (ADS)

    Ballard, Matthew; Owen, Drew; Mills, Zachary Grant; Hesketh, Peter J.; Alexeev, Alexander

    2016-11-01

    We use three-dimensional simulations and experiments to develop a multifunctional microfluidic device that performs rapid and controllable microfluidic mixing and specific particle capture. Our device uses a compact microfluidic channel decorated with magnetic features. A rotating magnetic field precisely controls individual magnetic microbeads orbiting around the features, enabling effective continuous-flow mixing of fluid streams over a compact mixing region. We use computer simulations to elucidate the underlying physical mechanisms that lead to effective mixing and compare them with experimental mixing results. We study the effect of various system parameters on microfluidic mixing to design an efficient micromixer. We also experimentally and numerically demonstrate that orbiting microbeads can effectively capture particles transported by the fluid, which has major implications in pre-concentration and detection of biological particles including various cells and bacteria, with applications in areas such as point-of-care diagnostics, biohazard detection, and food safety. Support from NSF and USDA is gratefully acknowledged.

  6. Fabrication of an infrared Shack-Hartmann sensor by combining high-speed single-point diamond milling and precision compression molding processes.

    PubMed

    Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y

    2018-05-01

    A novel fabrication method combining high-speed single-point diamond milling and precision compression molding for the fabrication of discontinuous freeform microlens arrays was proposed. Compared with slow tool servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions based on the combination of rotational and translational motions of both the high-speed spindle and linear slides. By employing this method, each micro-lenslet was regarded as a microstructure cell by passing the axis of the virtual spindle through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase the machining efficiency. After the mold insert was machined, it was employed to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was proven to be accurate in detecting an infrared wavefront by both experiments and numerical simulation. The combined results showed that precision compression molding of chalcogenide glasses could be an economic and precise optical fabrication technology for high-volume production of infrared optics.

  7. A novel left heart simulator for the multi-modality characterization of native mitral valve geometry and fluid mechanics.

    PubMed

    Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P

    2013-02-01

    Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 μm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendineae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, this work represents the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations.

  8. A Novel Left Heart Simulator for the Multi-modality Characterization of Native Mitral Valve Geometry and Fluid Mechanics

    PubMed Central

    Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P.

    2012-01-01

    Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 µm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendineae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry for direct comparison of resultant leaflet kinematics. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole, with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, these data represent the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations. PMID:22965640

  9. The application of electrolytic photoetching and photopolishing to AISI 304 stainless steel and the electrolytic photoetching of amorphous cobalt alloy

    NASA Astrophysics Data System (ADS)

    Thomaz, Marita Duarte Canhao da Silva Pereira Fernandes

    The results presented cover broad aspects of a quantitative investigation into the electrolytic etching and polishing of metals and alloys through photographically produced dielectric stencils (photoresists). A study of the potential field generated between a cathode and relatively smaller anode sites, such as those defined by a dielectric stencil, was carried out. Numerical, analytical and graphical methods yielded answers to the factors determining lateral dissolution (undercut) at the anode/stencil interface. A quasi-steady-state numerical model simulating the transient behavior of the partially masked electrodes undergoing dissolution was obtained. AISI 304 stainless steel was electrolytically photoetched in 10% w/w HCl electrolyte. The optimised process parameters were utilised for quantifying the effects of galvanostatic etching of an anode such as that defined by a relatively narrow adherent resist stencil. Stainless steel was also utilised in investigating electrolytic photopolishing. A polishing electrolyte (orthophosphoric acid-glycerol) was modified by the addition of a surfactant, which yielded surface texture values of 70 nm (Ra) and high levels of specular reflectance. These results were used in the production of features upon the metal surface through photographically produced precision stencils. The process was applied to the production of edge filters requiring high-quality surface textures in precision recesses. Some of the new amorphous materials exhibited high resistance to dissolution in commercially used spray etching formulations. One of these materials is a cobalt-based alloy produced by chill block spinning. This material was also investigated and electroetched in 10% w/w HCl solution. Although passivity was not overcome, by selecting suitable operating parameters the successful electrolytic photoetching of precision magnetic recording head laminations was achieved. Similarly, a polycrystalline nickel-based alloy also exhibiting passivity in commercially used etchants was successfully etched in the above electrolyte.

  10. Discrete Analysis of Damage and Shear Banding in Argillaceous Rocks

    NASA Astrophysics Data System (ADS)

    Dinç, Özge; Scholtès, Luc

    2018-05-01

    A discrete approach is proposed to study damage and failure processes taking place in argillaceous rocks which present a transversely isotropic behavior. More precisely, a dedicated discrete element method is utilized to provide a micromechanical description of the mechanisms involved. The purpose of the study is twofold: (1) presenting a three-dimensional discrete element model able to simulate the anisotropic macro-mechanical behavior of the Callovo-Oxfordian claystone as a particular case of argillaceous rocks; (2) studying how progressive failure develops in such material. Material anisotropy is explicitly taken into account in the numerical model through the introduction of weakness planes distributed at the interparticle scale following predefined orientation and intensity. Simulations of compression tests under plane-strain and triaxial conditions are performed to clarify the development of damage and the appearance of shear bands through micromechanical analyses. The overall mechanical behavior and shear banding patterns predicted by the numerical model are in good agreement with respect to experimental observations. Both tensile and shear microcracks emerging from the modeling also present characteristics compatible with microstructural observations. The numerical results confirm that the global failure of argillaceous rocks is well correlated with the mechanisms taking place at the local scale. Specifically, strain localization is shown to directly result from shear microcracking developing with a preferential orientation distribution related to the orientation of the shear band. In addition, localization events presenting characteristics similar to shear bands are observed from the early stages of the loading and might thus be considered as precursors of strain localization.

  11. EFT of large scale structures in redshift space [On the EFT of large scale structures in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco

    Here, we further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc⁻¹ or k ~ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  12. On the EFT of large scale structures in redshift space

    DOE PAGES

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; ...

    2018-03-15

    Here, we further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc⁻¹ or k ~ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
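    The multipole decomposition behind the ℓ = 0, 2, … comparisons above can be illustrated with a short numerical sketch: each multipole is a Legendre projection of the anisotropic power spectrum over the line-of-sight angle μ. The Kaiser form and the values β = 0.5, P_lin = 1 below are illustrative stand-ins, not quantities from the paper.

```python
import math

def legendre(ell, mu):
    # Legendre polynomials for the monopole, quadrupole and hexadecapole
    return {0: 1.0,
            2: 0.5 * (3 * mu**2 - 1),
            4: 0.125 * (35 * mu**4 - 30 * mu**2 + 3)}[ell]

def multipole(p_of_mu, ell, n=2000):
    # P_ell = (2 ell + 1)/2 * Integral_{-1}^{1} P(mu) L_ell(mu) d mu,
    # evaluated with a composite midpoint rule
    h = 2.0 / n
    mids = (-1.0 + (i + 0.5) * h for i in range(n))
    return (2 * ell + 1) / 2.0 * h * sum(p_of_mu(mu) * legendre(ell, mu) for mu in mids)

beta, p_lin = 0.5, 1.0                           # illustrative values only
kaiser = lambda mu: (1 + beta * mu**2)**2 * p_lin  # toy redshift-space spectrum

p0 = multipole(kaiser, 0)
p2 = multipole(kaiser, 2)
# linear-theory expectations: (1 + 2b/3 + b^2/5) P and (4b/3 + 4b^2/7) P
print(p0, p2)
```

    For this linear-theory toy spectrum the projection reproduces the classic Kaiser multipole coefficients, which makes the quadrature easy to check against closed-form results.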

  13. Three-dimensional representation of the human cochlea using micro-computed tomography data: presenting an anatomical model for further numerical calculations.

    PubMed

    Braun, Katharina; Böhnke, Frank; Stark, Thomas

    2012-06-01

    We present a complete geometric model of the human cochlea, including the segmentation and reconstruction of the fluid-filled chambers scala tympani and scala vestibuli, the lamina spiralis ossea and the vibrating structure (cochlear partition). Future fluid-structure coupled simulations require a reliable geometric model of the cochlea. The aim of this study was to present an anatomical model of the human cochlea, which can be used for further numerical calculations. Using high resolution micro-computed tomography (µCT), we obtained images of a cut human temporal bone with a spatial resolution of 5.9 µm. Images were manually segmented to obtain the three-dimensional reconstruction of the cochlea. Due to the high resolution of the µCT data, a detailed examination of the geometry of the twisted cochlear partition near the oval and the round window as well as a precise illustration of the helicotrema was possible. After reconstruction of the lamina spiralis ossea, the cochlear partition and the curved geometry of the scala vestibuli and the scala tympani were presented. The obtained data sets were exported as stereolithography (STL) files. These files represent a complete framework for future numerical simulations of mechanical (acoustic) wave propagation on the cochlear partition in the form of mathematical mechanical cochlea models. Additional quantitative information concerning heights, lengths and volumes of the scalae was found and compared with previous results.

  14. Finite element analysis and simulation of rheological properties of bulk molding compound (BMC)

    NASA Astrophysics Data System (ADS)

    Ergin, M. Fatih; Aydin, Ismail

    2013-12-01

    Bulk molding compound (BMC) is one of the important composite materials with various engineering applications. BMC is a thermoset plastic resin blend of various inert fillers, fiber reinforcements, catalysts, stabilizers and pigments that form a viscous molding compound. Depending on the end-use application, bulk molding compounds are formulated to achieve close dimensional control, flame and scratch resistance, electrical insulation, corrosion and stain resistance, superior mechanical properties, low shrink and color stability. Its excellent flow characteristics, dielectric properties, and flame resistance make this thermoset material well-suited to a wide variety of applications requiring precision in detail and dimensions as well as high performance. When a BMC is used for these purposes, the rheological behavior and properties of the BMC are the main concern. In this paper, finite element analysis of the rheological properties of a bulk molding compound material was studied. For this purpose, standard samples of the composite material were obtained by means of uniaxial hot pressing. Three-point flexural tests were then carried out using a universal testing machine. Finite element analyses were then performed with defined material properties within a specific constitutive material model. Experimental and numerical results were then compared. Good correlation between the numerical simulation and the experimental results was obtained. It was expected from this study that the effects of various process parameters and boundary conditions on the rheological behavior of bulk molding compounds could be determined by means of numerical analysis without detailed experimental work.

  15. Preschoolers' precision of the approximate number system predicts later school mathematics performance.

    PubMed

    Mazzocco, Michèle M M; Feigenson, Lisa; Halberda, Justin

    2011-01-01

    The Approximate Number System (ANS) is a primitive mental system of nonverbal representations that supports an intuitive sense of number in human adults, children, infants, and other animal species. The numerical approximations produced by the ANS are characteristically imprecise and, in humans, this precision gradually improves from infancy to adulthood. Throughout development, wide ranging individual differences in ANS precision are evident within age groups. These individual differences have been linked to formal mathematics outcomes, based on concurrent, retrospective, or short-term longitudinal correlations observed during the school age years. However, it remains unknown whether this approximate number sense actually serves as a foundation for these school mathematics abilities. Here we show that ANS precision measured at preschool, prior to formal instruction in mathematics, selectively predicts performance on school mathematics at 6 years of age. In contrast, ANS precision does not predict non-numerical cognitive abilities. To our knowledge, these results provide the first evidence for early ANS precision, measured before the onset of formal education, predicting later mathematical abilities.

  16. Preschoolers' Precision of the Approximate Number System Predicts Later School Mathematics Performance

    PubMed Central

    Mazzocco, Michèle M. M.; Feigenson, Lisa; Halberda, Justin

    2011-01-01

    The Approximate Number System (ANS) is a primitive mental system of nonverbal representations that supports an intuitive sense of number in human adults, children, infants, and other animal species. The numerical approximations produced by the ANS are characteristically imprecise and, in humans, this precision gradually improves from infancy to adulthood. Throughout development, wide ranging individual differences in ANS precision are evident within age groups. These individual differences have been linked to formal mathematics outcomes, based on concurrent, retrospective, or short-term longitudinal correlations observed during the school age years. However, it remains unknown whether this approximate number sense actually serves as a foundation for these school mathematics abilities. Here we show that ANS precision measured at preschool, prior to formal instruction in mathematics, selectively predicts performance on school mathematics at 6 years of age. In contrast, ANS precision does not predict non-numerical cognitive abilities. To our knowledge, these results provide the first evidence for early ANS precision, measured before the onset of formal education, predicting later mathematical abilities. PMID:21935362

  17. Notes From the Field: Secondary Task Precision for Cognitive Load Estimation During Virtual Reality Surgical Simulation Training.

    PubMed

    Rasmussen, Sebastian R; Konge, Lars; Mikkelsen, Peter T; Sørensen, Mads S; Andersen, Steven A W

    2016-03-01

    Cognitive load (CL) theory suggests that working memory can be overloaded in complex learning tasks such as surgical technical skills training, which can impair learning. Valid and feasible methods for estimating the CL in specific learning contexts are necessary before the efficacy of CL-lowering instructional interventions can be established. This study aims to explore secondary task precision for the estimation of CL in virtual reality (VR) surgical simulation and also investigate the effects of CL-modifying factors such as simulator-integrated tutoring and repeated practice. Twenty-four participants were randomized for visual assistance by a simulator-integrated tutor function during the first 5 of 12 repeated mastoidectomy procedures on a VR temporal bone simulator. Secondary task precision was found to be significantly lower during simulation compared with nonsimulation baseline, p < .001. Contrary to expectations, simulator-integrated tutoring and repeated practice did not have an impact on secondary task precision. This finding suggests that even though considerable changes in CL are reflected in secondary task precision, it lacks sensitivity. In contrast, secondary task reaction time could be more sensitive, but requires substantial postprocessing of data. Therefore, future studies on the effect of CL modifying interventions should weigh the pros and cons of the various secondary task measurements. © The Author(s) 2015.

  18. A comparative study of integrators for constructing ephemerides with high precision.

    NASA Astrophysics Data System (ADS)

    Huang, Tian-Yi

    1990-09-01

    There are four criteria for evaluating integrators: the local truncation error, numerical stability, the complexity of computation and the quality of adaptation. A review and comparative study of several numerical integration methods that are popular for constructing ephemerides with high precision, such as Adams, Cowell, Runge-Kutta-Fehlberg, Gragg-Bulirsch-Stoer extrapolation, Everhart, Taylor series and Krogh, is presented.
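    The first criterion above, local truncation error, can be seen in a minimal experiment not taken from the paper: integrating y' = y over one unit of time with explicit Euler and with classical fourth-order Runge-Kutta at the same step size, so the gap in global error reflects the gap in truncation order.

```python
import math

def euler_step(f, t, y, h):
    # first-order explicit Euler: local truncation error O(h^2)
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # classical fourth-order Runge-Kutta: local truncation error O(h^5)
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, t0, y0, t1, n):
    # march from t0 to t1 in n fixed steps with the given one-step method
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: y   # y' = y, exact solution y(1) = e
err_euler = abs(integrate(euler_step, f, 0.0, 1.0, 1.0, 100) - math.e)
err_rk4 = abs(integrate(rk4_step, f, 0.0, 1.0, 1.0, 100) - math.e)
print(err_euler, err_rk4)
```

    At the same cost per step count, the RK4 error is many orders of magnitude below the Euler error; ephemeris work then weighs such gains against stability and the ease of step-size adaptation.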

  19. Evaluating performance of stormwater sampling approaches using a dynamic watershed model.

    PubMed

    Ackerman, Drew; Stein, Eric D; Ritter, Kerry J

    2011-09-01

    Accurate quantification of stormwater pollutant levels is essential for estimating overall contaminant discharge to receiving waters. Numerous sampling approaches exist that attempt to balance accuracy against the costs associated with the sampling method. This study employs a novel and practical approach of evaluating the accuracy of different stormwater monitoring methodologies using stormflows and constituent concentrations produced by a fully validated continuous-simulation watershed model. A major advantage of using a watershed model to simulate pollutant concentrations is that a large number of storms representing a broad range of conditions can be applied in testing the various sampling approaches. Seventy-eight distinct methodologies were evaluated by "virtual samplings" of 166 simulated storms of varying size, intensity and duration, representing 14 years of storms in Ballona Creek near Los Angeles, California. The 78 methods can be grouped into four general strategies: volume-paced compositing, time-paced compositing, pollutograph sampling, and microsampling. The performance of each sampling strategy was evaluated by comparing (1) the median relative error between the virtually sampled and the true modeled event mean concentration (EMC) of each storm (accuracy), (2) the median absolute deviation about the median (MAD) of the relative error (precision), and (3) the percentage of storms for which sampling methods were within 10% of the true EMC (a combined measure of accuracy and precision). Finally, costs associated with site setup, sampling, and laboratory analysis were estimated for each method. Pollutograph sampling consistently outperformed the other three methods in terms of both accuracy and precision, but was the most costly method evaluated. Time-paced sampling consistently underestimated, while volume-paced sampling overestimated, the storm EMCs. Microsampling performance approached that of pollutograph sampling at a substantial cost savings. The most efficient method for routine stormwater monitoring, in terms of a balance between performance and cost, was volume-paced microsampling with variable sample pacing to ensure that the entirety of the storm was captured. Pollutograph sampling is recommended if the data are to be used for detailed analysis of runoff dynamics.
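    The event mean concentration the comparisons above are built on is the flow-weighted average concentration over a storm, and volume-paced compositing approximates it by taking one equal aliquot each time a fixed volume has passed. A minimal sketch using a made-up six-interval storm, not data from the study:

```python
# hypothetical hydrograph and pollutograph for one storm (made-up numbers)
flows = [1.0, 3.0, 6.0, 4.0, 2.0, 1.0]         # discharge volume per interval
concs = [50.0, 120.0, 90.0, 60.0, 40.0, 30.0]  # concentration per interval, mg/L

def true_emc(flows, concs):
    # event mean concentration: flow-weighted average over the whole storm
    return sum(q * c for q, c in zip(flows, concs)) / sum(flows)

def volume_paced_emc(flows, concs, pace):
    # take one equal aliquot each time another `pace` units of volume pass;
    # compositing equal aliquots makes the estimate a plain average
    samples, accum = [], 0.0
    for q, c in zip(flows, concs):
        accum += q
        while accum >= pace:
            samples.append(c)
            accum -= pace
    return sum(samples) / len(samples)

print(true_emc(flows, concs), volume_paced_emc(flows, concs, pace=2.0))
```

    A coarser pacing misses within-interval variation; the resulting relative error against the true EMC is exactly the kind of sampling error the study quantifies across its 78 virtual methodologies.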

  20. The empirical Bayes estimators of fine-scale population structure in high gene flow species.

    PubMed

    Kitada, Shuichi; Nakamichi, Reiichiro; Kishino, Hirohisa

    2017-11-01

    An empirical Bayes (EB) pairwise F_ST estimator was previously introduced and evaluated for its performance by numerical simulation. In this study, we conducted coalescent simulations and generated genetic population structure mechanistically, and compared the performance of the EB F_ST with Nei's G_ST, Nei and Chesser's bias-corrected G_ST (G_ST_NC), Weir and Cockerham's θ (θ_WC) and θ with finite sample correction (θ_WC_F). We also introduced EB estimators for Hedrick's G'_ST and Jost's D. We applied these estimators to publicly available SNP genotypes of Atlantic herring. We also examined the power to detect the environmental factors causing the population structure. Our coalescent simulations revealed that the finite sample correction of θ_WC is necessary to assess population structure using pairwise F_ST values. For microsatellite markers, EB F_ST performed the best among the present estimators regarding both bias and precision under high gene flow scenarios (F_ST ≤ 0.032). For 300 SNPs, EB F_ST had the highest precision in all cases, but the bias was negative and greater than those for G_ST_NC and θ_WC_F in all cases. G_ST_NC and θ_WC_F performed very similarly at all levels of F_ST. As the number of loci increased up to 10 000, the precision of G_ST_NC and θ_WC_F became slightly better than for EB F_ST for cases with F_ST ≥ 0.004, even though the size of the bias remained constant. The EB estimators described the fine-scale population structure of the herring and revealed that ~56% of the genetic differentiation was caused by sea surface temperature and salinity. The R package finepop for implementing all estimators used here is available on CRAN. © 2017 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
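    Several of the estimators compared above are variants of one quantity: the fraction of total heterozygosity that lies between demes. A minimal sketch of Nei's G_ST for a single biallelic locus and two equal-sized demes, with illustrative allele frequencies and no bias correction (so it is not any of the corrected estimators from the study):

```python
def gst(p1, p2):
    # Nei's G_ST = (H_T - H_S) / H_T for one biallelic locus, two equal demes:
    # H_S is the mean within-deme expected heterozygosity,
    # H_T the expected heterozygosity of the pooled population
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)
    return (h_t - h_s) / h_t

print(gst(0.5, 0.5))   # identical demes: no differentiation
print(gst(0.4, 0.6))   # mild differentiation, the high-gene-flow regime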

  1. MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system

    PubMed Central

    Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron

    2011-01-01

    We report an optimized version of the molecular dynamics program MOIL that runs on a shared memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is a heterogeneous computing system on a single node, with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on the accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent, atomically detailed models. Especially for long simulations, energy conservation is critical due to the phenomenon known as “energy drift”, in which energy errors accumulate linearly as a function of simulation time. To achieve long time dynamics with acceptable accuracy the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double precision summation of real-space non-bonded interactions improves energy conservation. In our best option, the energy drift using a 1 fs time step while constraining the distances of all bonds is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
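    The benefit the abstract attributes to double-precision summation can be demonstrated in isolation. The sketch below is unrelated to the MOIL code itself: it compares naive accumulation against Kahan compensated summation, which carries each addition's rounding error forward instead of discarding it.

```python
def naive_sum(xs):
    # straight left-to-right accumulation: rounding error of each add is lost
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    # compensated (Kahan) summation: c tracks the low-order bits lost
    # by each addition and feeds them back into the next one
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

n = 1_000_000
data = [0.1] * n   # 0.1 is not exactly representable, so errors accumulate
err_naive = abs(naive_sum(data) - n * 0.1)
err_kahan = abs(kahan_sum(data) - n * 0.1)
print(err_naive, err_kahan)
```

    In an MD energy accumulation the same mechanism keeps per-step rounding errors from drifting systematically over millions of time steps.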

  2. Microhartree precision in density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Gulans, Andris; Kozhevnikov, Anton; Draxl, Claudia

    2018-04-01

    To address ultimate precision in density functional theory calculations we employ the full-potential linearized augmented plane-wave + local-orbital (LAPW + lo) method and justify its usage as a benchmark method. LAPW + lo and two completely unrelated numerical approaches, the multiresolution analysis (MRA) and the linear combination of atomic orbitals, yield total energies of atoms with mean deviations of 0.9 and 0.2 μHa, respectively. Spectacular agreement with the MRA is also reached for total and atomization energies of the G2-1 set consisting of 55 molecules. With the example of α-iron we demonstrate the capability of LAPW + lo to reach μHa/atom precision also for periodic systems, which also allows for the distinction between the numerical precision and the accuracy of a given functional.

  3. Number-Density Measurements of CO2 in Real Time with an Optical Frequency Comb for High Accuracy and Precision

    NASA Astrophysics Data System (ADS)

    Scholten, Sarah K.; Perrella, Christopher; Anstie, James D.; White, Richard T.; Al-Ashwal, Waddah; Hébert, Nicolas Bourbeau; Genest, Jérôme; Luiten, Andre N.

    2018-05-01

    Real-time and accurate measurements of gas properties are highly desirable for numerous real-world applications. Here, we use an optical frequency comb to demonstrate absolute number-density and temperature measurements of a sample gas with state-of-the-art precision and accuracy. The technique is demonstrated by measuring the number density of ¹²C¹⁶O₂ with an accuracy of better than 1% and a precision of 0.04% in a measurement and analysis cycle of less than 1 s. This technique is transferable to numerous molecular species, thus offering an avenue for near-universal gas concentration measurements.

  4. Wave optics simulation of atmospheric turbulence and reflective speckle effects in carbon dioxide lidar

    NASA Astrophysics Data System (ADS)

    Nelson, Douglas Harold

    Laser speckle can influence lidar measurements from a diffuse hard target. Atmospheric optical turbulence will also affect the lidar return signal. This investigation develops a numerical simulation that models the propagation of a lidar beam and accounts for both reflective speckle and atmospheric turbulence effects. The simulation, previously utilized to simulate the effects of atmospheric optical turbulence alone, is based on implementing a Huygens-Fresnel approximation to laser propagation. A series of phase screens, with the appropriate atmospheric statistical characteristics, is used to simulate the effect of atmospheric optical turbulence. A single random phase screen is used to simulate scattering of the entire beam from a rough surface. These investigations compare the output of the numerical model with separate CO2 lidar measurements of atmospheric turbulence and reflective speckle. This work also compares the output of the model with separate analytical predictions for atmospheric turbulence and reflective speckle. Good agreement is found between the model and the experimental data. Good agreement is also found with analytical predictions. Additionally, results of simulation of the combined effects on a finite-aperture lidar system show agreement with experimental observations of increasing RMS noise with increasing turbulence level and with the behavior of the experimental integrated-intensity probability distribution. Simulation studies are included that demonstrate the usefulness of the model, examine its limitations and provide greater insight into the process of combined atmospheric optical turbulence and reflective speckle. One highlight of these studies is an examination of the limitations of the simulation, which shows that, in general, precision increases with increasing grid size. The study of the backscatter intensity enhancement predicted by analytical theory shows it to behave as a multi-path effect, like scintillation, with the highest contributions from atmospheric optical turbulence weighted at the middle of the propagation path. Aperture geometry also affects the signal-to-noise ratio, with thin annular apertures exhibiting lower RMS noise than circular apertures of the same active area. The simulation is capable of studying a variety of lidar schemes, including varying atmospheric optical turbulence along the propagation path as well as diverse transmitter and receiver geometries.
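    The single random phase screen used above for the rough target produces fully developed speckle, whose signature is a negative-exponential intensity distribution with standard deviation equal to the mean. A self-contained sketch of that statistic (a random-phasor-sum toy model, independent of the lidar simulation itself):

```python
import cmath
import math
import random

random.seed(1)

def speckle_intensity(n_scatterers=200):
    # coherent sum of unit-amplitude phasors with uniform random phases:
    # the standard model of a return from a rough (diffuse) surface
    field = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_scatterers))
    return abs(field)**2 / n_scatterers   # normalized so the mean intensity is ~1

samples = [speckle_intensity() for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((s - mean)**2 for s in samples) / len(samples)
# fully developed speckle: negative-exponential intensity PDF, so std ~ mean
print(mean, math.sqrt(var))
```

    This unit contrast (σ_I/⟨I⟩ ≈ 1) is the speckle noise floor that a single-pulse, single-aperture return inherits, and is what aperture averaging in the full simulation works to reduce.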

  5. Acoustic mode coupling induced by shallow water nonlinear internal waves: sensitivity to environmental conditions and space-time scales of internal waves.

    PubMed

    Colosi, John A

    2008-09-01

    While many results have been intuited from numerical simulation studies, the precise connections between shallow-water acoustic variability and the space-time scales of nonlinear internal waves (NLIWs), as well as the background environmental conditions, have not been clearly established analytically. Two-dimensional coupled mode propagation through NLIWs is examined using a perturbation series solution in which each order n is associated with nth-order multiple scattering. Importantly, the perturbation solution gives resonance conditions that pick out specific NLIW scales that cause coupling, and seabed attenuation is demonstrated to broaden these resonances, fundamentally changing the coupling behavior at low frequency. Sound-speed inhomogeneities caused by internal solitary waves (ISWs) are primarily considered, and the dependence of mode coupling on ISW amplitude, range width, depth structure, location relative to the source, and packet characteristics is delineated as a function of acoustic frequency. In addition, it is seen that significant energy transfer to modes with initially low or zero energy involves at least a second-order scattering process. Under moderate scattering conditions, comparisons of first-order, single-scattering theoretical predictions to direct numerical simulation demonstrate the accuracy of the approach for acoustic frequencies up to 400 Hz and for single as well as multiple ISW wave packets.

  6. Numerical simulation of soft palate movement and airflow in human upper airway by fluid-structure interaction method

    NASA Astrophysics Data System (ADS)

    Sun, Xiuzhen; Yu, Chi; Wang, Yuefang; Liu, Yingxi

    2007-08-01

    In this paper, the authors present airflow field characteristics of human upper airway and soft palate movement attitude during breathing. On the basis of the data taken from the spiral computerized tomography images of a healthy person and a patient with Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS), three-dimensional models of upper airway cavity and soft palate are reconstructed by the method of surface rendering. Numerical simulation is performed for airflow in the upper airway and displacement of soft palate by fluid-structure interaction analysis. The reconstructed three-dimensional models precisely preserve the original configuration of upper airways and soft palate. The results of the pressure and velocity distributions in the airflow field are quantitatively determined, and the displacement of soft palate is presented. Pressure gradients of airway are lower for the healthy person and the airflow distribution is quite uniform in the case of free breathing. However, the OSAHS patient remarkably escalates both the pressure and velocity in the upper airway, and causes higher displacement of the soft palate. The present study is useful in revealing pathogenesis and quantitative mutual relationship between configuration and function of the upper airway as well as in diagnosing diseases related to anatomical structure and function of the upper airway.

  7. A scheme for synchronizing clocks connected by a packet communication network

    NASA Astrophysics Data System (ADS)

    dos Santos, R. V.; Monteiro, L. H. A.

    2012-07-01

    Consider a communication system in which transmitter equipment sends fixed-size packets of data at a uniform rate to receiver equipment. Consider also that these devices are connected by a packet-switched network, which introduces a random delay to each packet. Here we propose an adaptive clock recovery scheme capable of synchronizing the frequencies and phases of these devices within specified limits of precision. This scheme for achieving frequency and phase synchronization is based on measurements of the packet arrival times at the receiver, which are used to control the dynamics of a digital phase-locked loop. The scheme's performance is evaluated via numerical simulations performed using realistic parameter values.
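    The mechanism described above, driving a digital phase-locked loop with jittered packet arrival times, can be sketched with a second-order (proportional-integral) loop. The gains, jitter model and starting offset below are arbitrary illustrative choices, not the parameters of the paper:

```python
import random

random.seed(0)

true_period = 1.0   # transmitter sends one packet per period (the clock to recover)
jitter = 0.05       # each packet suffers a random network delay in [0, jitter]
kp, ki = 0.1, 0.01  # proportional and integral gains of the loop filter

est_period = 1.3    # receiver's local period starts 30% off
expected = 0.0      # predicted arrival time of the next packet
for k in range(5000):
    arrival = k * true_period + random.uniform(0.0, jitter)
    err = arrival - expected            # phase detector: measured vs predicted
    est_period += ki * err              # integral path pulls in the frequency offset
    expected += est_period + kp * err   # advance the local clock, nudged in phase
print(est_period)                       # settles near true_period despite the jitter
```

    With these gains the linearized loop has both poles inside the unit circle, so the frequency estimate locks and the residual wander is set by the delay jitter filtered through the loop bandwidth.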

  8. Devil's staircases and continued fractions in Josephson junctions

    NASA Astrophysics Data System (ADS)

    Shukrinov, Yu. M.; Medvedeva, S. Yu.; Botha, A. E.; Kolahchi, M. R.; Irie, A.

    2013-12-01

    Detailed numerical simulations of the IV characteristics of a Josephson junction under external electromagnetic radiation show the devil's staircase within different bias current intervals. We find that the observed steps very precisely form continued fractions. Increasing the amplitude of the radiation shifts the devil's staircase to higher Shapiro steps. An algorithm for the appearance and detection of subharmonics with increasing radiation amplitude is proposed. We demonstrate that the subharmonic steps registered in the well-known experiments by Dayem and Wiegand [Phys. Rev. 155, 419 (1967), 10.1103/PhysRev.155.419] and Clarke [Phys. Rev. B 4, 2963 (1971), 10.1103/PhysRevB.4.2963] also form continued fractions.
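    The continued-fraction organization of the subharmonic steps is easy to reproduce in miniature: each rational locking ratio p/q on the staircase has a finite continued-fraction expansion. The snippet below expands an arbitrary illustrative ratio, 7/17, and rebuilds it from its terms (the ratio is not one reported in the paper):

```python
from fractions import Fraction

def continued_fraction(x, depth=12):
    # expansion x = a0 + 1/(a1 + 1/(a2 + ...)) by repeated
    # integer-part extraction and reciprocation
    terms = []
    for _ in range(depth):
        a = int(x)
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return terms

def convergent(terms):
    # rebuild the value of a finite expansion as an exact fraction
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

terms = continued_fraction(Fraction(7, 17))
print(terms, convergent(terms))   # [0, 2, 2, 3] and 7/17
```

    Using exact Fraction arithmetic keeps the expansion finite for rationals; truncating it early gives the convergents, i.e. the coarser steps of the staircase hierarchy.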

  9. Quantum resonances of Landau damping in the electromagnetic response of metallic nanoslabs.

    PubMed

    Castillo-López, S G; Makarov, N M; Pérez-Rodríguez, F

    2018-05-15

    The resonant quantization of Landau damping in far-infrared absorption spectra of metal nano-thin films is predicted within the Kubo formalism. Specifically, it is found that the discretization of the electromagnetic and electron wave numbers inside a metal nanoslab produces quantum nonlocal resonances well-resolved at slab thicknesses smaller than the electromagnetic skin depth. Landau damping manifests itself precisely as such resonances, tracing the spectral curve obtained within the semiclassical Boltzmann approach. For slab thicknesses much greater than the skin depth, the classical regime emerges. Here the results of the quantum model and the Boltzmann approach coincide. Our analytical study is in perfect agreement with corresponding numerical simulations.

  10. Reflection and transmission coefficients for guided waves reflected by defects in viscoelastic material plates.

    PubMed

    Hosten, Bernard; Moreau, Ludovic; Castaings, Michel

    2007-06-01

    The paper presents a Fourier transform-based signal processing procedure for quantifying the reflection and transmission coefficients and mode conversion of guided waves diffracted by defects in plates made of viscoelastic materials. The case of the S(0) Lamb wave mode incident on a notch in a Perspex plate is considered. The procedure is applied to numerical data produced by a finite element code that simulates the propagation of attenuated guided modes and their diffraction by the notch, including mode conversion. Its validity and precision are checked by way of an energy balance computation and by comparison with results obtained using an orthogonality relation-based processing method.

  11. Quantum-Noise-Limited Sensitivity-Enhancement of a Passive Optical Cavity by a Fast-Light Medium

    NASA Technical Reports Server (NTRS)

    Smith, David D.; Luckay, H. A.; Chang, Hongrok; Myneni, Krishna

    2016-01-01

    We demonstrate that, for a passive optical cavity containing an intracavity dispersive atomic medium, the increase in scale factor near the critical anomalous dispersion is not cancelled by mode broadening or attenuation, resulting in an overall increase in the predicted quantum-noise-limited sensitivity. Enhancements of over two orders of magnitude are measured in the scale factor, which translates to greater than an order-of-magnitude enhancement in the predicted quantum-noise-limited measurement precision, by temperature tuning a low-pressure vapor of non-interacting atoms in a low-finesse cavity close to the critical anomalous dispersion condition. The predicted enhancement in sensitivity is confirmed through Monte Carlo numerical simulations.

  12. Quantum-Noise-Limited Sensitivity Enhancement of a Passive Optical Cavity by a Fast-Light Medium

    NASA Technical Reports Server (NTRS)

    Smith, David D.; Luckay, H. A.; Chang, Hongrok; Myneni, Krishna

    2016-01-01

    We demonstrate that, for a passive optical cavity containing a dispersive atomic medium, the increase in scale factor near the critical anomalous dispersion is not cancelled by mode broadening or attenuation, resulting in an overall increase in the predicted quantum-noise-limited sensitivity. Enhancements of over two orders of magnitude are measured in the scale factor, which translates to greater than an order-of-magnitude enhancement in the predicted quantum-noise-limited measurement precision, by temperature tuning a low-pressure vapor of non-interacting atoms in a low-finesse cavity close to the critical anomalous dispersion condition. The predicted enhancement in sensitivity is confirmed through Monte Carlo numerical simulations.

  13. Versatile multi-wavelength ultrafast fiber laser mode-locked by carbon nanotubes

    PubMed Central

    Liu, Xueming; Han, Dongdong; Sun, Zhipei; Zeng, Chao; Lu, Hua; Mao, Dong; Cui, Yudong; Wang, Fengqiu

    2013-01-01

    Multi-wavelength lasers have widespread applications (e.g. fiber telecommunications, pump-probe measurements, terahertz generation). Here, we report a nanotube-mode-locked all-fiber ultrafast oscillator emitting three wavelengths at the central wavelengths of about 1540, 1550, and 1560 nm, which are tunable by stretching fiber Bragg gratings. The output pulse duration is around 6 ps with a spectral width of ~0.5 nm, agreeing well with the numerical simulations. The triple-laser system is controlled precisely and insensitive to environmental perturbations with <0.04% amplitude fluctuation. Our method provides a simple, stable, low-cost, multi-wavelength ultrafast-pulsed source for spectroscopy, biomedical research and telecommunications. PMID:24056500

  14. Gas filter radiometer for carbon monoxide measurements during the 1979 Summer Monsoon Experiment (MONEX)

    NASA Technical Reports Server (NTRS)

    Reichle, H. G., Jr.; Wallio, H. A.; Casas, J. C.; Condon, E. P.

    1986-01-01

    The instrumental and data-reduction techniques used in obtaining remote measurements of carbon monoxide during the 1979 Summer Monsoon Experiment are described. The form of the signal function (the variation of signal with altitude) and the impact of variations in the vertical distribution of carbon monoxide are discussed. Estimates of the experimental accuracy are made both by assessment of error sources through the use of numerical simulations and by comparison with concurrent measurements made by means of gas chromatography. It is found that the radiometric measurements tend to be about 9 percent lower than the direct measurements and to have a precision of about 8 percent.

  15. 3D light harnessing based on coupling engineering between 1D-2D Photonic Crystal membranes and metallic nano-antenna.

    PubMed

    Belarouci, Ali; Benyattou, Taha; Letartre, Xavier; Viktorovitch, Pierre

    2010-09-13

    A new approach is proposed for the optimum addressing of a metallic nano-antenna (NA) with a free-space optical beam. This approach relies on the use of an intermediate resonator structure that provides the appropriate modal conversion of the incoming beam. More precisely, the intermediate resonator consists of a Photonic Crystal (PC) membrane resonant structure that takes advantage of surface-addressable slow Bloch modes. First, a phenomenological approach, including a deep physical understanding of the NA-PC coupling and its optimization, is presented. In a second step, the main features of this analysis are confirmed by numerical simulations (FDTD).

  16. Nested large-eddy simulations of nighttime shear-instability waves and transient warming in a steep valley

    NASA Astrophysics Data System (ADS)

    Zhou, Bowen; Chow, Fotini

    2012-11-01

    This numerical study investigates the nighttime flow dynamics in a steep valley. The Owens Valley in California is highly complex, and represents a challenging terrain for large-eddy simulations (LES). To ensure a faithful representation of the nighttime atmospheric boundary layer (ABL), realistic external boundary conditions are provided through grid nesting. The model obtains initial and lateral boundary conditions from reanalysis data, and bottom boundary conditions from a land-surface model. We demonstrate the ability to extend a mesoscale model to LES resolutions through a systematic grid-nesting framework, achieving accurate simulations of the stable ABL over complex terrain. Nighttime cold-air flow was channeled through a gap on the valley sidewall. The resulting katabatic current induced a cross-valley flow. Directional shear against the down-valley flow in the lower layers of the valley led to breaking Kelvin-Helmholtz waves at the interface, which are captured only on the LES grid. Later that night, the flow transitioned from down-slope to down-valley near the western sidewall, leading to a transient warming episode. Simulation results are verified against field observations and show good spatial and temporal precision. Supported by NSF grant ATM-0645784.

  17. AX-GADGET: a new code for cosmological simulations of Fuzzy Dark Matter and Axion models

    NASA Astrophysics Data System (ADS)

    Nori, Matteo; Baldi, Marco

    2018-05-01

    We present a new module of the parallel N-Body code P-GADGET3 for cosmological simulations of light bosonic non-thermal dark matter, often referred to as Fuzzy Dark Matter (FDM). The dynamics of FDM features a highly non-linear Quantum Potential (QP) that suppresses the growth of structure at small scales. Most previous attempts at FDM simulations either evolved suppressed initial conditions, completely neglecting the dynamical effects of the QP throughout cosmic evolution, or resorted to numerically challenging full-wave solvers. Our code, AX-GADGET, provides an alternative, following the FDM evolution without impairing the overall performance. This is done by computing the QP acceleration through the Smoothed Particle Hydrodynamics (SPH) routines, with improved schemes to ensure precise and stable derivatives. As an extension of the P-GADGET3 code, it inherits all the additional physics modules implemented to date, opening a wide range of possibilities to constrain FDM models and explore their degeneracies with other physical phenomena. Simulations are compared with analytical predictions and with the results of other codes, validating the QP as a crucial player in structure formation at small scales.

  18. Realistic Analytical Polyhedral MRI Phantoms

    PubMed Central

    Ngo, Tri M.; Fung, George S. K.; Han, Shuo; Chen, Min; Prince, Jerry L.; Tsui, Benjamin M. W.; McVeigh, Elliot R.; Herzka, Daniel A.

    2015-01-01

    Purpose: Analytical phantoms have closed-form Fourier transform expressions and are used to simulate MRI acquisitions. Existing 3D analytical phantoms are unable to accurately model shapes of biomedical interest. It is demonstrated that polyhedral analytical phantoms have closed-form Fourier transform expressions and can accurately represent 3D biomedical shapes. Theory: The derivations of the Fourier transform of a polygon and polyhedron are presented. Methods: The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D/2D MRI acquisitions was described. Results: Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D/2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high-frequency edge aliasing, as compared to equivalent voxelized/rasterized phantoms. Conclusion: Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. PMID:26479724

  19. NIHAO VI. The hidden discs of simulated galaxies

    NASA Astrophysics Data System (ADS)

    Obreja, Aura; Stinson, Gregory S.; Dutton, Aaron A.; Macciò, Andrea V.; Wang, Liang; Kang, Xi

    2016-06-01

    Detailed studies of galaxy formation require clear definitions of the structural components of galaxies. Precisely defined components also enable better comparisons between observations and simulations. We use a subsample of 18 cosmological zoom-in simulations from the Numerical Investigation of a Hundred Astrophysical Objects (NIHAO) project to derive a robust method for defining stellar kinematic discs in galaxies. Our method uses Gaussian Mixture Models in a 3D space of dynamical variables. The NIHAO galaxies have the right stellar mass for their halo mass, and their angular momenta and Sérsic indices match observations. While the photometric disc-to-total ratios are close to 1 for all the simulated galaxies, the kinematic ratios are around ˜0.5. Thus, exponential structure does not imply a cold kinematic disc. Above M* ˜ 109.5 M⊙, the decomposition leads to thin discs and spheroids that have clearly different properties, in terms of angular momentum, rotational support, ellipticity, [Fe/H] and [O/Fe]. At M* ≲ 109.5 M⊙, the decomposition selects discs and spheroids with less distinct properties. At these low masses, both the discs and spheroids have exponential profiles with high minor-to-major axis ratios, i.e. thickened discs.
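The decomposition above uses Gaussian Mixture Models in a 3D space of dynamical variables; as a minimal 1D illustration with synthetic data (not NIHAO outputs, and only two components), such a mixture can be fit by expectation-maximization:

```python
import math, random

def em_two_gaussians(data, iters=200):
    """EM for a two-component 1D Gaussian mixture."""
    mu = [min(data), max(data)]      # crude but deterministic initialization
    sig = [1.0, 1.0]
    weights = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [weights[k] / (sig[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sig[k]) ** 2) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update mixing weights, means and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            weights[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sig[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
    return mu, sig, weights

random.seed(1)
# Synthetic toy populations: a "cold" and a "hot" component, invented for the example
data = [random.gauss(-2.0, 0.5) for _ in range(500)] + \
       [random.gauss(2.0, 0.5) for _ in range(500)]
mu, sig, weights = em_two_gaussians(data)
print(all(abs(abs(m) - 2.0) < 0.2 for m in mu))  # True: both means recovered
```

In the paper's setting the same idea runs in three dimensions with several components; this sketch only shows the mechanics of the E and M steps.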

  20. Thermodynamic analysis of shark skin texture surfaces for microchannel flow

    NASA Astrophysics Data System (ADS)

    Yu, Hai-Yan; Zhang, Hao-Chun; Guo, Yang-Yu; Tan, He-Ping; Li, Yao; Xie, Gong-Nan

    2016-09-01

    Studies of shark-skin textured surfaces for flow drag reduction offer inspiration for overcoming technical challenges in production applications. In this paper, three infinite-parallel-plate flow models with shark-skin-inspired microstructure were established according to the cross-sectional shape of the microstructure: a blade model, a wedge model and a smooth model. Simulations were carried out using FLUENT, which simplified the computation compared with direct numerical simulation; the shear-stress transport k-omega turbulence model was chosen to get the best performance from the simulations. Since the drag reduction mechanism is generally discussed from a kinetics point of view, which cannot directly interpret the cause of the losses, a drag reduction rate was established based on the second law of thermodynamics. Considering abrasion and fabrication precision in practical applications, three abraded geometry models were constructed and tested, and the microstructure achieving the best drag reduction rate while remaining suited to manufacturing was identified. Bionic shark-skin surfaces with mechanical abrasion may therefore draw more attention from industrial designers and gain wide application for their drag-reducing characteristics.

  1. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package, adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

    A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in the gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited by heavy computational loads. This study investigates a variety of means to reduce the computational load and enlarge the simulation area. The first is applying an LBM that treats the two phases as having the same density, which keeps the computation numerically stable at large time steps; the applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual densities. The second is establishing the maximum Capillary number that maintains flow patterns similar to the precise simulation, since the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0 × 10^-3, whereas actual operation corresponds to Ca = 10^-5 to 10^-8. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, the effects of pore uniformity in the GDL are demonstrated as an example of a large-scale simulation covering a channel.

  3. A FEM-based method to determine the complex material properties of piezoelectric disks.

    PubMed

    Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C

    2014-08-01

    Numerical simulations allow modeling of piezoelectric devices and ultrasonic transducers. However, the accuracy of the results is limited by precise knowledge of the elastic, dielectric and piezoelectric properties of the piezoelectric material. To introduce the energy losses, these properties can be represented by complex numbers, where the real part essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured by an impedance analyzer. The method consists of finding the material properties that minimize the error between the experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis determining the influence of each parameter on a set of resonant modes. The sensitivity results are used to implement a preliminary algorithm that approaches the solution, preventing the search from being trapped in a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between numerical and experimental results shows excellent agreement both for the electrical impedance curve and for the displacement profile over the disk surface. This agreement shows that, although only the electrical impedance curve is considered in the adjustment procedure, the obtained material properties allow simulating the displacement amplitude accurately. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Distribution of Plasmoids in Post-Coronal Mass Ejection Current Sheets

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, A.; Guo, L.; Huang, Y.

    2013-12-01

    Recently, the fragmentation of a current sheet in the high-Lundquist-number regime caused by the plasmoid instability has been proposed as a possible mechanism for fast reconnection. In this work, we investigate this scenario by comparing the distribution of plasmoids obtained from Large Angle and Spectrometric Coronagraph (LASCO) observational data of a coronal mass ejection event with a resistive magnetohydrodynamic simulation of a similar event. The LASCO/C2 data are analyzed using visual inspection, whereas the numerical data are analyzed using both visual inspection and a more precise topological method. Contrasting the observational data with numerical data analyzed with both methods, we identify a major limitation of the visual inspection method, due to the difficulty in resolving smaller plasmoids. This result raises questions about reports of log-normal distributions of plasmoids and other coherent features in the recent literature. Based on nonlinear scaling relations of the plasmoid instability, we infer a lower bound on the current sheet width, assuming the underlying mechanism of current sheet broadening is resistive diffusion.

  5. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    NASA Astrophysics Data System (ADS)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems are established for the stator, the mover and the corner coil. To obtain a complete electromagnetic model, the coil is divided into two segments: the straight coil segment and the corner coil segment. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, a compound Simpson numerical integration method is proposed in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can readily be applied in practice.
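The abstract does not reproduce the force integrals themselves, but the compound (composite) Simpson rule it relies on is standard and can be sketched as follows; the integrand here is a generic placeholder, not the motor's force density.

```python
import math

def composite_simpson(f, a, b, n):
    """Compound (composite) Simpson rule on [a, b] with n subintervals (n even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)  # 4 on odd nodes, 2 on even
    return s * h / 3.0

# Placeholder integrand: integrate sin(x) over [0, pi]; the exact value is 2.
approx = composite_simpson(math.sin, 0.0, math.pi, 64)
print(abs(approx - 2.0) < 1e-6)  # True
```

The rule's O(h^4) convergence is what makes it attractive for real-time evaluation: halving the subinterval width reduces the error by roughly a factor of 16.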

  6. Back-support large laser mirror unit: mounting modeling and analysis

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Zhang, Zheng; Long, Kai; Liu, Tianye; Li, Jun; Liu, Changchun; Xiong, Zhao; Yuan, Xiaodong

    2018-01-01

    In a high-power laser system, the surface wavefront of large optics is closely linked to their structural design and mounting method. The back-support transport mirror design is presently being investigated in China's high-power laser system as a means to hold the optical component firmly while minimizing the distortion of its reflecting surface. We have proposed a comprehensive analytical framework integrating numerical modeling and precise metrology for evaluating the mirror's mounting performance, treating the surface distortion as a key decision variable. The combination of numerical simulation and field tests demonstrates that this framework provides a detailed and accurate approach to evaluating the performance of the transport mirror. It is also verified that the back-support transport mirror is compatible with state-of-the-art optical quality specifications. This study paves the way for future research to solidify the design of back-support large laser optics in China's next-generation inertial confinement fusion facility.

  7. Critical exponents of the explosive percolation transition

    NASA Astrophysics Data System (ADS)

    da Costa, R. A.; Dorogovtsev, S. N.; Goltsev, A. V.; Mendes, J. F. F.

    2014-04-01

    In a new type of percolation phase transition, observed in a set of nonequilibrium models, each new connection between vertices is chosen from a number of possibilities by an Achlioptas-like algorithm. This causes preferential merging of small components and delays the emergence of the percolation cluster. Early simulations led to the conclusion that the percolation cluster in this irreversible process is born discontinuously, by a discontinuous phase transition, which resulted in the term "explosive percolation transition." We have shown that this transition is actually continuous (second order), though with an anomalously small critical exponent of the percolation cluster. Here we propose an efficient numerical method enabling us to find the critical exponents and other characteristics of this second-order transition for a representative set of explosive percolation models with different numbers of choices. The method is based on gluing together the numerical solutions of evolution equations for the cluster size distribution and their power-law asymptotics. For each of the models, we obtain the critical exponents and the critical point with high precision.
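The paper's method matches full evolution-equation solutions to power-law asymptotics; as a much simpler, self-contained illustration of extracting a critical exponent, the sketch below fits the exponent of noiseless synthetic power-law data by least squares in log-log coordinates (the value β = 0.05 and the critical point are invented for this example, not taken from the paper).

```python
import math

def fit_power_law_exponent(x, y):
    """Least-squares slope of log y vs log x, i.e. the exponent b in y ~ x^b."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic "order parameter" S = (t - tc)^beta with a hypothetical beta = 0.05
beta = 0.05
deltas = [10 ** (-k / 4) for k in range(4, 20)]   # distances t - tc to the critical point
S = [d ** beta for d in deltas]
print(abs(fit_power_law_exponent(deltas, S) - beta) < 1e-9)  # True
```

Real percolation data carry finite-size corrections and noise, which is precisely why the paper glues numerical solutions to asymptotics instead of fitting raw data this naively.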

  8. (3+1)D hydrodynamic simulation of relativistic heavy-ion collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schenke, Bjoern; Jeon, Sangyong; Gale, Charles

    2010-07-15

    We present MUSIC, an implementation of the Kurganov-Tadmor algorithm for relativistic 3+1-dimensional fluid dynamics in heavy-ion collision scenarios. This Riemann-solver-free, second-order, high-resolution scheme is characterized by a very small numerical viscosity and treats shocks and discontinuities very well. We also incorporate a sophisticated algorithm for determining the freeze-out surface using a three-dimensional triangulation of the hypersurface. Implementing a recent lattice-based equation of state, we compute p_T spectra and pseudorapidity distributions for Au+Au collisions at √s = 200 GeV and present results for the anisotropic flow coefficients v_2 and v_4 as functions of both p_T and pseudorapidity η. We were able to determine v_4 with high numerical precision, finding that it does not strongly depend on the choice of initial condition or equation of state.

  9. Simulations of incompressible Navier Stokes equations on curved surfaces using discrete exterior calculus

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Mohamed, Mamdouh; Hirani, Anil

    2015-11-01

    We present examples of numerical solutions of incompressible flow on 2D curved domains. The Navier-Stokes equations are first rewritten using the exterior calculus notation, replacing vector calculus differential operators by the exterior derivative, Hodge star and wedge product operators. A conservative discretization of Navier-Stokes equations on simplicial meshes is developed based on discrete exterior calculus (DEC). The discretization is then carried out by substituting the corresponding discrete operators based on the DEC framework. By construction, the method is conservative in that both the discrete divergence and circulation are conserved up to machine precision. The relative error in kinetic energy for inviscid flow test cases converges in a second order fashion with both the mesh size and the time step. Numerical examples include Taylor vortices on a sphere, Stuart vortices on a sphere, and flow past a cylinder on domains with varying curvature. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1401-01.
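The conservation-by-construction property comes from discrete analogues of vector-calculus identities; a minimal structured-grid sketch (not the authors' simplicial DEC code) of the identity behind an exactly divergence-free velocity field is:

```python
# 2D structured-grid analogue of the DEC identity d(d psi) = 0:
# build edge velocities from a vertex stream function (a discrete curl),
# then check that the discrete divergence at every cell vanishes.
import random

random.seed(0)
N = 8
psi = [[random.random() for _ in range(N + 1)] for _ in range(N + 1)]  # vertex values

# Staggered grid: u lives on vertical cell faces, v on horizontal cell faces
u = [[psi[j + 1][i] - psi[j][i] for i in range(N + 1)] for j in range(N)]
v = [[-(psi[j][i + 1] - psi[j][i]) for i in range(N)] for j in range(N + 1)]

max_div = 0.0
for j in range(N):
    for i in range(N):
        div = (u[j][i + 1] - u[j][i]) + (v[j + 1][i] - v[j][i])
        max_div = max(max_div, abs(div))
print(max_div < 1e-14)  # True: divergence-free to machine precision
```

Because each vertex value of psi enters the cell divergence once with each sign, the cancellation is exact by construction rather than up to truncation error, mirroring the discrete divergence and circulation conservation claimed for the DEC scheme.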

  10. Structural and mechanical features of the order-disorder transition in experimental hard-sphere packings

    NASA Astrophysics Data System (ADS)

    Hanifpour, M.; Francois, N.; Robins, V.; Kingston, A.; Vaez Allaei, S. M.; Saadatfar, M.

    2015-06-01

    Here we present an experimental and numerical investigation of the grain-scale geometrical and mechanical properties of partially crystallized structures made of macroscopic frictional grains. Crystallization is inevitable in arrangements of monosized hard spheres with packing densities exceeding Bernal's limiting density ϕBernal ≈ 0.64. We study packings of monosized hard spheres whose density spans a wide range (0.59 < ϕ < 0.72). These experiments harness x-ray computed tomography, three-dimensional image analysis, and numerical simulations to access precisely the geometry and the 3D structure of internal forces within the sphere packings. We show that clear geometrical transitions coincide with modifications of the mechanical backbone of the packing at both the grain and global scale. Notably, two transitions are identified, at ϕBernal ≈ 0.64 and ϕc ≈ 0.68. These results provide insights into how geometrical and mechanical features at the grain scale conspire to yield partially crystallized structures that are mechanically stable.

  11. Continuity and Change in Children's Longitudinal Neural Responses to Numbers

    ERIC Educational Resources Information Center

    Emerson, Robert W.; Cantlon, Jessica F.

    2015-01-01

    Human children possess the ability to approximate numerical quantity nonverbally from a young age. Over the course of early childhood, children develop increasingly precise representations of numerical values, including a symbolic number system that allows them to conceive of numerical information as Arabic numerals or number words. Functional…

  12. Tracker controls development and control architecture for the Hobby-Eberly Telescope Wide Field Upgrade

    NASA Astrophysics Data System (ADS)

    Mock, Jason R.; Beno, Joe; Rafferty, Tom H.; Cornell, Mark E.

    2010-07-01

    To enable the Hobby-Eberly Telescope Wide Field Upgrade, the University of Texas Center for Electromechanics and McDonald Observatory are developing a precision tracker system - a 15,000 kg robot to position a 3,100 kg payload within 10 microns of a desired dynamic track. Performance requirements to meet science needs and safety requirements that emerged from detailed Failure Modes and Effects Analysis resulted in a system of 14 precision controlled actuators and 100 additional analog and digital devices (primarily sensors and safety limit switches). This level of system complexity and emphasis on fail-safe operation is typical of large modern telescopes and numerous industrial applications. Due to this complexity, demanding accuracy requirements, and stringent safety requirements, a highly versatile and easily configurable centralized control system that easily links with modeling and simulation tools during the hardware and software design process was deemed essential. The Matlab/Simulink simulation environment, coupled with dSPACE controller hardware, was selected for controls development and realization. The dSPACE real-time operating system collects sensor information; motor commands are transmitted over a PROFIBUS network to servo amplifiers and drive motor status is received over the same network. Custom designed position feedback loops, supplemented by feed forward force commands for enhanced performance, and algorithms to accommodate self-locking gearboxes (for safety), reside in dSPACE. To interface the dSPACE controller directly to absolute Heidenhain sensors with EnDat 2.2 protocol, a custom communication board was developed. This paper covers details of software and hardware, design choices and analysis, and supporting simulations (primarily Simulink).

  13. High-Speed Rotor Analytical Dynamics on Flexible Foundation Subjected to Internal and External Excitation

    NASA Astrophysics Data System (ADS)

    Jivkov, Venelin S.; Zahariev, Evtim V.

    2016-12-01

    The paper presents a geometrical approach to the dynamics simulation of a rigid-flexible system comprising a high-speed rotating machine with eccentricity and considerable inertia and mass. The machine is mounted on a vertical flexible pillar of considerable height. The stiffness and damping of the column, as well as of the rotor bearings and the shaft, are taken into account. Non-stationary vibrations and transitional processes are analyzed. The major frequency and modal mode of the flexible column are used for analytical reduction of its mass, stiffness and damping properties. The rotor and the foundation are modelled as rigid bodies, while the flexibility of the bearings is estimated from experiments and the requirements of the manufacturer. The transition effects resulting from limited power are analyzed by asymptotic methods of averaging. Analytical expressions for the amplitudes and unstable vibrations throughout resonance are derived by a quasi-static approach for increasing and decreasing exciting frequency. The analytical functions make it possible to analyze the influence of the design parameters of many structural applications, such as wind power generators, gas turbines and turbo-generators. A numerical procedure is applied to verify the effectiveness and precision of the simulation process. Nonlinear and transitional effects are analyzed and compared with the analytical results. External excitations, such as wave propagation and earthquakes, are discussed. Finite elements in relative and absolute coordinates are applied to model the flexible column and the high-speed rotating machine. Generalized Newton-Euler dynamics equations are used to derive the precise dynamics equations. Examples of simulation of the system vibrations and non-stationary behaviour are presented.

  14. Thermostatted delta f

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krommes, J.A.

    2000-01-18

    The delta f simulation method is revisited. Statistical coarse-graining is used to rigorously derive the equation for the fluctuation delta f in the particle distribution. It is argued that completely collisionless simulation is incompatible with the achievement of true statistically steady states with nonzero turbulent fluxes because the variance of the particle weights w grows with time. To ensure such steady states, it is shown that for dynamically collisionless situations a generalized thermostat or W-stat may be used in lieu of a full collision operator to absorb the flow of entropy to unresolved fine scales in velocity space. The simplest W-stat can be implemented as a self-consistently determined, time-dependent damping applied to w. A precise kinematic analogy to thermostatted nonequilibrium molecular dynamics (NEMD) is pointed out, and the justification of W-stats for simulations of turbulence is discussed. An extrapolation procedure is proposed such that the long-time, steady-state, collisionless flux can be deduced from several short W-statted runs with large effective collisionality, and a numerical demonstration is given.

  15. Research on pressure tactile sensing technology based on fiber Bragg grating array

    NASA Astrophysics Data System (ADS)

    Song, Jinxue; Jiang, Qi; Huang, Yuanyang; Li, Yibin; Jia, Yuxi; Rong, Xuewen; Song, Rui; Liu, Hongbin

    2015-09-01

    A pressure tactile sensor based on a fiber Bragg grating (FBG) array is introduced in this paper, and a numerical simulation of its elastic body was carried out with finite element software (ANSYS). On the basis of the simulation, fiber Bragg grating strings were embedded in flexible silicone to fabricate the sensor, and a testing system was built. A series of calibration tests was performed with a high-precision universal press machine. The tactile sensor array perceives external pressure, which is demodulated by a fiber grating demodulation instrument, and three-dimensional plots were generated to visualize the position and magnitude of the load. A dynamic contact experiment was also conducted to simulate a robot encountering objects in an unknown environment. The experimental results show that the sensor has good linearity, repeatability and dynamic response, with a pressure sensitivity of 0.03 nm/N. The sensor also offers immunity to electromagnetic interference, good flexibility, simple structure and low cost, and is expected to be used in wearable artificial skin in the future.

  16. Aerodynamic analysis of an isolated vehicle wheel

    NASA Astrophysics Data System (ADS)

    Leśniewicz, P.; Kulak, M.; Karczewski, M.

    2014-08-01

    Increasing fuel prices force manufacturers to look into all aspects of car aerodynamics, including wheels, tyres and rims, in order to minimize drag. Reducing a vehicle's aerodynamic drag decreases fuel consumption while improving driving safety and comfort. To properly illustrate the impact of rotating-wheel aerodynamics on the car body, a precise analysis of an isolated wheel should be performed beforehand. To represent wheel rotation in contact with the ground, the CFD simulations presented here used a Moving Wall boundary condition as well as the Multiple Reference Frame (MRF) approach. A sliding-mesh approach would be preferable but is too costly at the moment. Global and local flow quantities obtained from the simulations were compared with an experiment to assess the validity of the numerical model. The results illustrate the dependence of the drag and lift coefficients on the type of simulation. The MRF approach proved to be the better solution, giving results closer to the experiment. Investigation of the model with a contact area between the wheel and the ground helps to illustrate the impact of rotating-wheel aerodynamics on the car body.

  17. A 1D radiative transfer benchmark with polarization via doubling and adding

    NASA Astrophysics Data System (ADS)

    Ganapol, B. D.

    2017-11-01

    Highly precise numerical solutions to the radiative transfer equation with polarization present a special challenge. Here, we establish a precise numerical solution to the radiative transfer equation with combined Rayleigh and isotropic scattering in a 1D-slab medium with simple polarization. The 2-Stokes vector solution for the fully discretized radiative transfer equation in space and direction derives from the method of doubling and adding, enhanced through convergence acceleration. Benchmark solutions found in the literature are then updated to seven places for reflectance and transmittance, as well as for the angular flux. Finally, we conclude with the numerical solution in a partially randomly absorbing heterogeneous medium.
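    The doubling principle used above can be illustrated in scalar form (the paper works with 2-Stokes vectors and adds convergence acceleration, which is omitted here): starting from the reflectance r and transmittance t of a very thin layer, the properties of a layer of twice the thickness follow from summing the geometric series of inter-reflections, and repeated doubling builds up a thick slab. The thin-layer values below are illustrative, not the paper's benchmark configuration.

    ```python
    # Scalar sketch of the doubling method for a homogeneous slab.
    def double_layer(r, t):
        """Combine two identical layers (scalar adding equations)."""
        denom = 1.0 - r * r          # geometric series over inter-reflections
        r2 = r + t * r * t / denom
        t2 = t * t / denom
        return r2, t2

    # Start from a thin, slightly absorbing layer and double it 10 times
    r, t = 0.001, 0.998              # thin-layer values (illustrative)
    for _ in range(10):              # 2**10 = 1024 thin layers
        r, t = double_layer(r, t)
    ```

    Because each doubling step only combines already-computed layer operators, the cost of reaching optical thickness 2**n is linear in n, which is what makes the method attractive for high-precision benchmarks.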

  18. Mountain bicycle frame testing as an example of practical implementation of hybrid simulation using RTFEM

    NASA Astrophysics Data System (ADS)

    Mucha, Waldemar; Kuś, Wacław

    2018-01-01

    The paper presents a practical implementation of hybrid simulation using the Real Time Finite Element Method (RTFEM). Hybrid simulation is a technique for investigating dynamic material and structural properties of mechanical systems by performing numerical analysis and an experiment at the same time. It applies to mechanical systems with elements too difficult or impossible to model numerically: these elements are tested experimentally, while the rest of the system is simulated numerically, and data are exchanged between the experiment and the numerical simulation in real time. The authors use the Finite Element Method to perform the numerical simulation. The paper presents the general algorithm for hybrid simulation using RTFEM together with improvements, developed by the authors, that reduce computation time. It focuses on the practical implementation of the presented methods through the testing of a mountain bicycle frame, where the shock absorber is tested experimentally while the rest of the frame is simulated numerically.
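    The exchange loop described above can be sketched schematically: at each time step the numerical substructure is advanced, the interface state is sent to the physical specimen, and the measured force is fed back. In this sketch the specimen (the shock absorber) is replaced by a mock spring-damper, and all names and parameters are illustrative assumptions, not the authors' code.

    ```python
    # Schematic hybrid-simulation loop with a mock physical specimen.
    import math, random  # random unused; math kept for clarity of units

    def mock_specimen_force(disp, vel, k=20e3, c=150.0):
        """Stand-in for the experimentally tested shock absorber (assumed)."""
        return -k * disp - c * vel

    def hybrid_step(x, v, f_interface, m=2.0, dt=1e-3, f_ext=0.0):
        """Advance the numerical substructure (1 DOF here) by one step."""
        a = (f_ext + f_interface) / m
        v_new = v + a * dt           # semi-implicit Euler update
        x_new = x + v_new * dt
        return x_new, v_new

    x, v = 0.01, 0.0                      # initial interface displacement (m)
    for _ in range(1000):                 # 1 s of simulated time
        f = mock_specimen_force(x, v)     # "measurement" from the specimen
        x, v = hybrid_step(x, v, f)       # numerical substructure update
    ```

    In a real RTFEM implementation the numerical step must complete within the physical time step, which is precisely why the computation-time reductions discussed in the paper matter.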

  19. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  20. A new numerically stable implementation of the T-matrix method for electromagnetic scattering by spheroidal particles

    NASA Astrophysics Data System (ADS)

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2013-07-01

    We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10⁻¹⁰ relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.
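    The loss of precision described above is catastrophic cancellation: numerically evaluating a quantity whose leading part vanishes exactly. A one-line analogue of the paper's fix: for small x, computing 1 - cos(x) directly subtracts two nearly equal numbers and loses all significant digits, whereas the algebraically identical form 2·sin²(x/2) keeps full relative precision. The remedy is the same in spirit as the paper's, namely removing the analytically vanishing part before computing.

    ```python
    # Demonstration of catastrophic cancellation and its reformulation.
    import math

    x = 1e-8
    naive = 1.0 - math.cos(x)            # cancels: cos(x) rounds to 1.0
    stable = 2.0 * math.sin(x / 2) ** 2  # algebraically identical, no cancellation
    exact = x**2 / 2                     # leading-order Taylor value, 5e-17
    ```

    Here `naive` evaluates to exactly zero in double precision, while `stable` agrees with the Taylor value to machine accuracy; the EBCM integrands suffer the same effect on a much larger scale.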

  1. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of its mechanical effects on the differential settlement of large continuous structure foundations. Such analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method, which rigorously and efficiently integrates geotechnical investigations of different precision and their sources of uncertainty. Individual CPT soundings were modeled as probability density curves by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and the likelihood PDF linking the CPT positions to the potential value at the prediction point were built from the borehole experiments; the posterior probability density curve at the prediction point was then calculated, after numerical integration over the CPT probability density curves, within a Bayesian interpolation framework. The results were compared with Gaussian sequential stochastic simulation, and the differences between treating single CPT soundings as normal distributions versus maximum-entropy probability density curves are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization of a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.
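    The Bayesian update at a prediction point can be sketched on a grid: a prior PDF for Es (informed by borehole data) is multiplied by a likelihood (informed by a CPT sounding) and renormalized. Gaussian shapes and all numbers below are illustrative assumptions; the paper builds the single-sounding PDFs by maximum entropy rather than assuming normality.

    ```python
    # Grid-based Bayesian update of a point estimate of Es (illustrative).
    import math

    def gauss(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    dx = 0.01
    es = [4.0 + dx * i for i in range(801)]            # candidate Es values, 4-12 MPa
    prior = [gauss(x, 8.0, 1.5) for x in es]           # borehole-based prior (assumed)
    like = [gauss(x, 6.5, 0.8) for x in es]            # CPT-based likelihood (assumed)
    post = [p * l for p, l in zip(prior, like)]
    z = sum(post) * dx                                 # normalizing constant
    post = [p / z for p in post]

    # Posterior mean shifts from the prior toward the CPT information
    post_mean = sum(x * p for x, p in zip(es, post)) * dx
    ```

    The posterior mean lands between the prior and likelihood centers, weighted by their precisions, which is the basic mechanism by which the multi-precision sources are assimilated.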

  2. Preparation and Integration of ALHAT Precision Landing Technology for Morpheus Flight Testing

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Robertson, Edward A.; Pierrottet, Diego F.; Roback, Vincent E.; Trawny, Nikolas; Devolites, Jennifer L.; Hart, Jeremy J.; Estes, Jay N.; Gaddis, Gregory S.

    2014-01-01

    The Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project has developed a suite of prototype sensors for enabling autonomous and safe precision landing of robotic or crewed vehicles on solid solar bodies under varying terrain lighting conditions. The sensors include a Lidar-based Hazard Detection System (HDS), a multipurpose Navigation Doppler Lidar (NDL), and a long-range Laser Altimeter (LAlt). Preparation for terrestrial flight testing of ALHAT onboard the Morpheus free-flying, rocket-propelled flight test vehicle has been in progress since 2012, with flight tests over a lunar-like terrain field occurring in Spring 2014. Significant work efforts within both the ALHAT and Morpheus projects have been required in the preparation of the sensors, vehicle, and test facilities for interfacing, integrating and verifying overall system performance to ensure readiness for flight testing. The ALHAT sensors have undergone numerous stand-alone sensor tests, simulations, and calibrations, along with integrated-system tests in specialized gantries, trucks, helicopters and fixed-wing aircraft. A lunar-like terrain environment was constructed for ALHAT system testing during Morpheus flights, and vibration and thermal testing of the ALHAT sensors was performed based on Morpheus flights prior to ALHAT integration. High-fidelity simulations were implemented to gain insight into integrated ALHAT sensors and Morpheus GN&C system performance, and command and telemetry interfacing and functional testing was conducted once the ALHAT sensors and electronics were integrated onto Morpheus. This paper captures some of the details and lessons learned in the planning, preparation and integration of the individual ALHAT sensors, the vehicle, and the test environment that led up to the joint flight tests.

  3. Precision shock tuning on the national ignition facility.

    PubMed

    Robey, H F; Celliers, P M; Kline, J L; Mackinnon, A J; Boehly, T R; Landen, O L; Eggert, J H; Hicks, D; Le Pape, S; Farley, D R; Bowers, M W; Krauter, K G; Munro, D H; Jones, O S; Milovich, J L; Clark, D; Spears, B K; Town, R P J; Haan, S W; Dixit, S; Schneider, M B; Dewald, E L; Widmann, K; Moody, J D; Döppner, T D; Radousky, H B; Nikroo, A; Kroll, J J; Hamza, A V; Horner, J B; Bhandarkar, S D; Dzenitis, E; Alger, E; Giraldez, E; Castro, C; Moreno, K; Haynam, C; LaFortune, K N; Widmayer, C; Shaw, M; Jancaitis, K; Parham, T; Holunga, D M; Walters, C F; Haid, B; Malsbury, T; Trummer, D; Coffee, K R; Burr, B; Berzins, L V; Choate, C; Brereton, S J; Azevedo, S; Chandrasekaran, H; Glenzer, S; Caggiano, J A; Knauer, J P; Frenje, J A; Casey, D T; Johnson, M Gatu; Séguin, F H; Young, B K; Edwards, M J; Van Wonterghem, B M; Kilkenny, J; MacGowan, B J; Atherton, J; Lindl, J D; Meyerhofer, D D; Moses, E

    2012-05-25

    Ignition implosions on the National Ignition Facility [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)] are underway with the goal of compressing deuterium-tritium fuel to a sufficiently high areal density (ρR) to sustain a self-propagating burn wave required for fusion power gain greater than unity. These implosions are driven with a very carefully tailored sequence of four shock waves that must be timed to very high precision to keep the fuel entropy and adiabat low and ρR high. The first series of precision tuning experiments on the National Ignition Facility, which use optical diagnostics to directly measure the strength and timing of all four shocks inside a hohlraum-driven, cryogenic liquid-deuterium-filled capsule interior have now been performed. The results of these experiments are presented demonstrating a significant decrease in adiabat over previously untuned implosions. The impact of the improved shock timing is confirmed in related deuterium-tritium layered capsule implosions, which show the highest fuel compression (ρR ≈ 1.0 g/cm²) measured to date, exceeding the previous record [V. Goncharov et al., Phys. Rev. Lett. 104, 165001 (2010)] by more than a factor of 3. The experiments also clearly reveal an issue with the 4th shock velocity, which is observed to be 20% slower than predictions from numerical simulation.

  4. Optimization of finite difference forward modeling for elastic waves based on optimum combined window functions

    NASA Astrophysics Data System (ADS)

    Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang

    2017-03-01

    Full waveform inversion and reverse time migration are active research areas in seismic exploration. Forward modeling in the time domain determines the precision of the results, and finite-difference numerical solutions have been widely adopted as an important mathematical tool for forward modeling. In this article, an optimal combination of window functions was designed for the finite difference operator, based on a truncated approximation of the spatial convolution series in pseudo-spectrum space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the individual window functions, providing better truncation results, but also allow the truncation error of the finite difference operator to be controlled manually and visually by adjusting the combinations and analyzing the characteristics of the main and side lobes of the amplitude response. The error level and elastic forward modeling under the proposed combined scheme were compared with the outcomes of conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed compared with both the modified binomial window and conventional finite differences, and numerical simulation verifies the reliability of the proposed method.
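    The windowing idea can be sketched on the classic centered first-derivative stencil: the exact (pseudo-spectral) stencil on an infinite grid has coefficients proportional to (-1)^(k+1)/k, and simply truncating it causes large oscillatory error, which tapering the coefficients with a window suppresses. A plain Hann taper is used below as a stand-in for the paper's combined windows; the stencil length and test wave are illustrative.

    ```python
    # Truncated pseudo-spectral derivative stencil, plain vs windowed.
    import math

    M = 8            # half-width of the truncated stencil
    h = 0.1          # grid spacing

    def coeffs(window):
        """Windowed stencil coefficients c_k = w(k) * (-1)**(k+1) / (k*h)."""
        return [window(k) * (-1) ** (k + 1) / (k * h) for k in range(1, M + 1)]

    boxcar = coeffs(lambda k: 1.0)                                        # plain truncation
    hann = coeffs(lambda k: 0.5 * (1 + math.cos(math.pi * k / (M + 1))))  # tapered

    def deriv(c, f, x0):
        """Apply the antisymmetric stencil to a function f at x0."""
        return sum(ck * (f(x0 + k * h) - f(x0 - k * h)) for k, ck in enumerate(c, 1))

    # Derivative of sin at 0 (exact value 1)
    err_box = abs(deriv(boxcar, math.sin, 0.0) - 1.0)
    err_hann = abs(deriv(hann, math.sin, 0.0) - 1.0)
    ```

    The tapered stencil is orders of magnitude more accurate here, illustrating why the shape of the window, which the paper optimizes by combining several windows, controls the truncation error of the operator.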

  5. High precision NC lathe feeding system rigid-flexible coupling model reduction technology

    NASA Astrophysics Data System (ADS)

    Xuan, He; Hua, Qingsong; Cheng, Lianjun; Zhang, Hongxin; Zhao, Qinghai; Mao, Xinkai

    2017-08-01

    This paper proposes a dynamic substructure method of model-order reduction to achieve an effective reduction of the rigid-flexible coupling model of a high-precision NC lathe feeding system. ADAMS is used to establish the rigid-flexible coupling simulation model of the high-precision NC lathe, and vibration simulation shows that using the FD 3D damper is very effective for reducing the multi-degree-of-freedom model of the feed system's bolted connections. With the reduced model, the vibration simulation is both more accurate and faster.
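    The substructure idea can be illustrated with static (Guyan) condensation, one of the simplest members of the model-order-reduction family the paper draws on: interior ("slave") DOFs are eliminated, leaving a reduced stiffness matrix on the retained ("master") DOFs. The 3-DOF spring chain below is illustrative, not the lathe model, and this is not the authors' ADAMS workflow.

    ```python
    # Guyan condensation of a stiffness matrix onto master DOFs.
    import numpy as np

    def guyan_reduce(K, masters):
        """Return K_reduced = Kmm - Kms * Kss^-1 * Ksm."""
        n = K.shape[0]
        slaves = [i for i in range(n) if i not in masters]
        Kmm = K[np.ix_(masters, masters)]
        Kms = K[np.ix_(masters, slaves)]
        Ksm = K[np.ix_(slaves, masters)]
        Kss = K[np.ix_(slaves, slaves)]
        return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

    # Chain of springs with grounded ends; the middle node is condensed out
    k = 1.0e6  # spring stiffness, N/m (illustrative)
    K = np.array([[2 * k,    -k,   0.0],
                  [   -k, 2 * k,    -k],
                  [  0.0,    -k, 2 * k]])
    Kr = guyan_reduce(K, masters=[0, 2])
    ```

    The condensed matrix remains symmetric and reproduces the static response at the master DOFs exactly; dynamic substructure methods extend this idea with interface and modal coordinates so that the reduced model also captures the vibration behavior.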

  6. Performance enhancement of fin attached ice-on-coil type thermal storage tank for different fin orientations using constrained and unconstrained simulations

    NASA Astrophysics Data System (ADS)

    Kim, M. H.; Duong, X. Q.; Chung, J. D.

    2017-03-01

    One of the drawbacks of latent thermal energy storage systems is the slow charging and discharging time due to the low thermal conductivity of phase change materials (PCM). This study numerically investigated the PCM melting process inside a finned tube to determine how heat transfer performance can be enhanced. The influences of fin length and fin number were investigated. Two fin orientations, vertical and horizontal, were also examined using two different simulation methods, constrained and unconstrained. The unconstrained simulation, which considers the density difference between the solid and liquid PCM, showed an approximately 40 % faster melting rate than the constrained simulation; for a precise estimation of discharging performance, unconstrained simulation is therefore essential. Thermal instability was found in the liquid layer below the solid PCM, contrary to linear stability theory, due to the strong convection driven by heat flux from the coil wall. As the fin length increases, the area affected by the fin becomes larger and the discharging time becomes shorter. The discharging performance also increased with the fin number, but the enhancement from more than two fins was not discernible. The horizontal orientation shortened the complete melting time by approximately 10 % compared to the vertical one.

  7. WRF model for precipitation simulation and its application in real-time flood forecasting in the Jinshajiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Zhou, Jianzhong; Zhang, Hairong; Zhang, Jianyun; Zeng, Xiaofan; Ye, Lei; Liu, Yi; Tayyab, Muhammad; Chen, Yufan

    2017-07-01

    Accurate flood forecasting with a long lead time can be of great value for flood prevention and utilization. This paper develops a one-way coupled hydro-meteorological modeling system, consisting of the mesoscale numerical Weather Research and Forecasting (WRF) model and the Chinese Xinanjiang hydrological model, to extend the flood forecasting lead time in the Jinshajiang River Basin, the largest hydropower base in China. Focusing on four typical precipitation events, the combinations and mode structures of WRF parameterization schemes suitable for simulating precipitation in the Jinshajiang River Basin were investigated first. The Xinanjiang model was then calibrated and validated to complete the hydro-meteorological system. It was found that the selection of the cloud microphysics scheme and the boundary layer scheme has a great impact on precipitation simulation, that only a proper combination of the two schemes yields accurate simulations for the Jinshajiang River Basin, and that the hydro-meteorological system can provide instructive flood forecasts with long lead times. On the whole, the one-way coupled hydro-meteorological model can be used for precipitation simulation and flood prediction in the Jinshajiang River Basin because of its relatively high precision and long lead time.

  8. Inorganic Chlorine Partitioning in the Summer Lower Stratosphere: Modeled and Measured [ClONO2/HCl] During POLARIS

    NASA Technical Reports Server (NTRS)

    Voss, P. B.; Stimpfle, R. M.; Cohen, R. C.; Hanisco, T. F.; Bonne, G. P.; Perkins, K. K.; Lanzendorf, E. J.; Anderson, J. G.; Salawitch, R. J.

    2001-01-01

    We examine inorganic chlorine (Cly) partitioning in the summer lower stratosphere using in situ ER-2 aircraft observations made during the Photochemistry of Ozone Loss in the Arctic Region in Summer (POLARIS) campaign. New steady state and numerical models estimate [ClONO2]/[HCl] using currently accepted photochemistry. These models are tightly constrained by observations, with OH (parameterized as a function of solar zenith angle) substituting for modeled HO2 chemistry. We find that inorganic chlorine photochemistry alone overestimates observed [ClONO2]/[HCl] by approximately 55-60% at mid and high latitudes. On the basis of POLARIS studies of the inorganic chlorine budget, [ClO]/[ClONO2], and an intercomparison with balloon observations, the most direct explanation for the model-measurement discrepancy in Cly partitioning is an error in the reactions, rate constants, and measured species concentrations linking HCl and ClO (simulated [ClO]/[HCl] too high), in combination with a possible systematic error in the ER-2 ClONO2 measurement (too low). The high precision of our simulation (±15%, 1σ, for the modeled [ClONO2]/[HCl] compared with observations) increases confidence in the observations, photolysis calculations, and laboratory rate constants. These results, along with other findings, should lead to improvements in both the accuracy and precision of stratospheric photochemical models.

  9. Optimized efficient liver T1ρ mapping using limited spin lock times

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Zhao, Feng; Griffith, James F.; Chan, Queenie; Wang, Yi-Xiang J.

    2012-03-01

    T1ρ relaxation has recently been found to be sensitive to liver fibrosis and has potential to be used for early detection of liver fibrosis and grading. Liver T1ρ imaging and accurate mapping are challenging because of the long scan time, respiration motion and high specific absorption rate. Reduction and optimization of spin lock times (TSLs) are an efficient way to reduce scan time and radiofrequency energy deposition of T1ρ imaging, but maintain the near-optimal precision of T1ρ mapping. This work analyzes the precision in T1ρ estimation with limited, in particular two, spin lock times, and explores the feasibility of using two specific operator-selected TSLs for efficient and accurate liver T1ρ mapping. Two optimized TSLs were derived by theoretical analysis and numerical simulations first, and tested experimentally by in vivo rat liver T1ρ imaging at 3 T. The simulation showed that the TSLs of 1 and 50 ms gave optimal T1ρ estimation in a range of 10-100 ms. In the experiment, no significant statistical difference was found between the T1ρ maps generated using the optimized two-TSL combination and the maps generated using the six TSLs of [1, 10, 20, 30, 40, 50] ms according to one-way ANOVA analysis (p = 0.1364 for liver and p = 0.8708 for muscle).
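    With a mono-exponential signal model S(TSL) = S0·exp(-TSL/T1ρ), two spin-lock times suffice for a closed-form T1ρ estimate, which is the premise of the two-TSL approach above. The TSLs of 1 and 50 ms below follow the paper's optimized pair; the signal values are synthetic, generated from an assumed ground-truth T1ρ.

    ```python
    # Closed-form two-point T1rho estimate from a mono-exponential decay.
    import math

    def t1rho_two_point(s1, s2, tsl1_ms, tsl2_ms):
        """T1rho (ms) from two spin-lock measurements s1 = S(tsl1), s2 = S(tsl2)."""
        return (tsl2_ms - tsl1_ms) / math.log(s1 / s2)

    # Synthetic noiseless data for a tissue with T1rho = 40 ms, S0 = 100
    s0, t1rho_true = 100.0, 40.0
    s1 = s0 * math.exp(-1.0 / t1rho_true)
    s2 = s0 * math.exp(-50.0 / t1rho_true)
    t1rho_est = t1rho_two_point(s1, s2, 1.0, 50.0)
    ```

    With noisy data the precision of this estimator depends on where the two TSLs sit relative to the true T1ρ, which is exactly what the paper's optimization of the TSL pair addresses.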

  10. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.

    PubMed

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
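    The "standard" approach criticized above can be sketched with an Euler-Maruyama discretization: trajectories of dX = a(X)dt + b(X)dW that land in the forbidden region are simply pushed back to the boundary. The drift, diffusion and parameters below are illustrative; the paper's corrected scheme replaces this reset step so that an exact property of the diffusion is respected and no spurious force is introduced.

    ```python
    # Euler-Maruyama with the naive boundary reset at x = 0 (the "standard"
    # approach the paper corrects). All model parameters are illustrative.
    import math, random

    def euler_with_reset(x0, a, b, dt, n_steps, rng):
        """Simulate one trajectory, resetting to the boundary on crossing."""
        x = x0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            x = x + a(x) * dt + b(x) * dw
            if x < 0.0:      # trajectory entered the forbidden region
                x = 0.0      # naive reset -- the source of the spurious force
        return x

    rng = random.Random(1)
    # Mean-reverting drift toward 0.5, constant diffusion (illustrative)
    end = [euler_with_reset(0.5, lambda x: -(x - 0.5), lambda x: 0.3,
                            1e-2, 500, rng) for _ in range(200)]
    mean_end = sum(end) / len(end)
    ```

    Trajectories that spend time near x = 0 are the ones biased by the reset; the paper's corrected scheme modifies exactly this step, so that the same per-step cost yields substantially more accurate statistics.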

  12. Numerical simulation of thermal disposition with induction heating used for oncological hyperthermic treatment.

    PubMed

    Dughiero, F; Corazza, S

    2005-01-01

    Hyperthermia plays an important role in oncological therapies, most often being used in combination with radiotherapy, chemotherapy and immunotherapy. The success of this therapy depends strongly on the precision and control of thermal deposition. Hyperthermia based on induction heating, with thermally self-regulating thermoseeds inserted into the tumorous mass, is used for interstitial treatment. This technique was the subject of the numerical study presented in the paper. The analysis was carried out using coupled electromagnetic heating and thermo-fluid-dynamic FEM simulations. During thermal deposition by induction heating of the inserted seeds, the simulations estimated the thermal field inside and outside the tumour, as well as the sensitivity of the thermal field to variations in seed temperature, configuration and proximity to vessels. The method, for which accurate anatomical patient information is essential, is suitable for providing useful qualitative and quantitative information about thermal transients and power density distribution for hyperthermic treatment. Several grid steps were analysed and compared; a 1 cm seed grid resulted in good homogeneity and effectiveness of the thermal deposition. The cold-spot effect caused by large vessels was demonstrated and quantified. Simulations of the heating of a tumorous mass in the liver showed that an induction generator operating at 200 kHz frequency and 500 A current, producing a pulsating magnetic field of H = 60 A cm⁻¹, was adequate for the treatment. Among the seeds tested (NiCu (28% Cu), PdNi (27.2% Ni), PdCo (6.15% Co) and ferrite core), the best performers were the PdNi seeds (1 mm radius, 10 mm length), as they have a low Curie temperature (52 °C), closest to the desired treatment temperature, which reduces the risk of hot spots.

  13. Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime

    NASA Astrophysics Data System (ADS)

    Cowan, B. M.; Kalmykov, S. Y.; Beck, A.; Davoine, X.; Bunkers, K.; Lifschitz, A. F.; Lefebvre, E.; Bruhwiler, D. L.; Shadwick, B. A.; Umstadter, D. P.

    2012-08-01

    Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100-terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, 3D particle-in-cell modelling are examined. First, the Cartesian code vorpal (Nieter, C. and Cary, J. R. 2004 VORPAL: a versatile plasma simulation code. J. Comput. Phys. 196, 538) using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code calder-circ (Lifschitz, A. F. et al. 2009 Particle-in-cell modelling of laser-plasma interaction using Fourier decomposition. J. Comput. Phys. 228(5), 1803-1814) uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two modes, reducing the computational load to roughly that of a planar Cartesian simulation while preserving the 3D nature of the interaction. This significant economy of resources allows using fine resolution in the direction of propagation and a small time step, making numerical dispersion vanishingly small, together with a large number of particles per cell, enabling good particle statistics. Quantitative agreement of two simulations indicates that these are free of numerical artefacts. Both approaches thus retrieve the physically correct evolution of the plasma bubble, recovering the intrinsic connection of electron self-injection to the nonlinear optical evolution of the driver.

  14. Less precise representation of numerical magnitude in high math-anxious individuals: an ERP study of the size and distance effects.

    PubMed

    Núñez-Peña, M Isabel; Suárez-Pellicioni, Macarena

    2014-12-01

    Numerical comparison tasks are widely used to study the mental representation of numerical magnitude. In this study, event-related brain potentials (ERPs) were recorded while 26 high math-anxious (HMA) and 27 low math-anxious (LMA) individuals were presented with pairs of single-digit Arabic numbers and were asked to decide which one had the larger numerical magnitude. The size of the numbers and the distance between them were manipulated in order to study the size and distance effects. The behavioral results showed that both the distance and size effects were larger for the HMA group. As for the ERPs, the component indexing the distance effect showed larger amplitudes for both the size and distance effects in the HMA group than among their LMA counterparts. Since this component has been taken as a marker of the processing of numerical magnitude, this result suggests that HMA individuals have a less precise representation of numerical magnitude. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Constraining the Intergalactic and Circumgalactic Media with Lyman-Alpha Absorption

    NASA Astrophysics Data System (ADS)

    Sorini, Daniele; Onorbe, Jose; Hennawi, Joseph F.; Lukic, Zarija

    2018-01-01

    Lyman-alpha (Ly-a) absorption features detected in quasar spectra probe the diffuse gas of the intergalactic and circumgalactic media around galaxies. On scales >2 Mpc, the simulations asymptotically match the observations, because the ΛCDM model successfully describes the ambient IGM. This represents a critical advantage of studying the mean absorption profile. However, significant differences between the simulations, and between simulations and observations, are present on scales of 20 kpc-2 Mpc, illustrating the challenges of accurately modeling and resolving galaxy formation physics. It is noteworthy that these differences are observed as far out as ~2 Mpc, indicating that the `sphere-of-influence' of galaxies could extend to approximately ~20 times the halo virial radius (~100 kpc). Current observations are very precise on these scales and can thus strongly discriminate between different galaxy formation models. I demonstrate that the Ly-a absorption profile is primarily sensitive to the underlying temperature-density relationship of diffuse gas around galaxies, and argue that it thus provides a fundamental test of galaxy formation models. With near-future high-precision observations of Ly-a absorption, the tools developed in my thesis set the stage for even stronger constraints on models of galaxy formation and cosmology.

  16. Analysis of form deviation in non-isothermal glass molding

    NASA Astrophysics Data System (ADS)

    Kreilkamp, H.; Grunwald, T.; Dambon, O.; Klocke, F.

    2018-02-01

    Especially in the market of sensors, LED lighting and medical technologies, there is a growing demand for precise yet low-cost glass optics. This demand poses a major challenge for glass manufacturers, who are confronted with the trend towards ever-higher levels of precision combined with immense pressure on market prices. Since current manufacturing technologies, especially grinding and polishing as well as Precision Glass Molding (PGM), are not able to achieve the desired production costs, glass manufacturers are looking for alternative technologies. Non-isothermal Glass Molding (NGM) has been shown to have great potential for low-cost mass manufacturing of complex glass optics. However, the biggest drawback of this technology at the moment is the limited accuracy of the manufactured glass optics. This research addresses the specific challenges of non-isothermal glass molding with respect to the form deviation of molded glass optics. Based on empirical models, the influencing factors on form deviation, in particular form accuracy, waviness and surface roughness, are discussed. A comparison with the traditional isothermal glass molding process (PGM) points out the specific challenges of non-isothermal process conditions. Furthermore, the underlying physical principles leading to the formation of form deviations are analyzed in detail with the help of numerical simulation. In this way, this research contributes to a better understanding of form deviations in non-isothermal glass molding and is an important step towards new applications demanding precise yet low-cost glass optics.

  17. Energy conserving numerical methods for the computation of complex vortical flows

    NASA Astrophysics Data System (ADS)

    Allaneau, Yves

    One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some argue that as their size becomes smaller and smaller, it will become increasingly difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists have taken inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, biomimetic design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study precisely the motion of its wings. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to be able to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p = 0, when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to an adequate numerical scheme, a high-fidelity solution requires many degrees of freedom to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogorov scale. Capturing these eddies requires a mesh with on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable.
However our simulations still required meshes containing tens of millions of degrees of freedom. Such computations can only be done in reasonable amounts of time by spreading the work on multiple CPUs via domain decomposition. Further speed-up efforts were made by implementing a version of the code for GPUs using Nvidia's CUDA programming language. Finally we searched for optimal wing motions by coupling our computational fluid dynamics code with the optimization package SNOPT. The wing motion was parameterized by a few angles describing the local curvature and the twisting of the wing. These were expressed in terms of truncated Fourier series, the Fourier coefficients being our optimization parameters. With this approach we were able to obtain propulsive efficiencies of around 50% (thrust power/power input).
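    As a sketch of the wing-motion parameterization described above, the local curvature and twist angles can be written as truncated Fourier series in time, with the coefficients serving as the optimization parameters. Function and variable names here are hypothetical, not taken from the thesis code:

```python
import math

def wing_angle(t, period, a0, coeffs):
    """Evaluate a truncated Fourier series
    theta(t) = a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)],
    where coeffs = [(a1, b1), (a2, b2), ...] are the optimization parameters."""
    w = 2.0 * math.pi / period
    theta = a0
    for k, (a_k, b_k) in enumerate(coeffs, start=1):
        theta += a_k * math.cos(k * w * t) + b_k * math.sin(k * w * t)
    return theta

# Example: pure first-harmonic flapping with 30-degree amplitude,
# evaluated a quarter of the way through the stroke period
theta = wing_angle(t=0.25, period=1.0, a0=0.0, coeffs=[(0.0, math.radians(30))])
```

    An optimizer such as SNOPT then varies `a0` and the `(a_k, b_k)` pairs to maximize propulsive efficiency.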

  18. Isotropic three-dimensional T2 mapping of knee cartilage: Development and validation.

    PubMed

    Colotti, Roberto; Omoumi, Patrick; Bonanno, Gabriele; Ledoux, Jean-Baptiste; van Heeswijk, Ruud B

    2018-02-01

    1) To implement a higher-resolution isotropic 3D T2 mapping technique that uses sequential T2-prepared segmented gradient-recalled echo (Iso3DGRE) images for knee cartilage evaluation, and 2) to validate it both in vitro and in vivo in healthy volunteers and patients with knee osteoarthritis. The Iso3DGRE sequence with an isotropic 0.6 mm spatial resolution was developed on a clinical 3T MR scanner. Numerical simulations were performed to optimize the pulse sequence parameters. A phantom study was performed to validate the T2 estimation accuracy. The repeatability of the sequence was assessed in healthy volunteers (n = 7). T2 values were compared with those from a clinical standard 2D multislice multiecho (MSME) T2 mapping sequence in knees of healthy volunteers (n = 13) and in patients with knee osteoarthritis (OA, n = 5). The numerical simulations resulted in 100 excitations per segment and an optimal radiofrequency (RF) excitation angle of 15°. The phantom study demonstrated a good correlation of the technique with the reference standard (slope 0.9 ± 0.05, intercept 0.2 ± 1.7 msec, R² ≥ 0.99). Repeated measurements of cartilage T2 values in healthy volunteers showed a coefficient of variation of 5.6%. Both Iso3DGRE and MSME techniques found significantly higher cartilage T2 values (P < 0.03) in OA patients. Iso3DGRE precision was equal to that of the MSME T2 mapping in healthy volunteers, and significantly higher in OA (P = 0.01). This study successfully demonstrated that high-resolution isotropic 3D T2 mapping for knee cartilage characterization is feasible, accurate, repeatable, and precise. The technique allows for multiplanar reformatting and thus T2 quantification in any plane of interest. Level of Evidence: 1. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:362-371. © 2017 International Society for Magnetic Resonance in Medicine.
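    The T2 values reported above come from fitting a mono-exponential decay S(TE) = S0·exp(-TE/T2) to signals acquired at several echo times. A minimal log-linear least-squares sketch (illustrative only, not the study's actual fitting pipeline):

```python
import math

def fit_t2(echo_times_ms, signals):
    """Estimate T2 from a mono-exponential decay S = S0 * exp(-TE/T2)
    via least squares on the linearized form ln(S) = ln(S0) - TE/T2."""
    n = len(echo_times_ms)
    xs, ys = echo_times_ms, [math.log(s) for s in signals]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope  # T2, in the units of the echo times

# Synthetic noiseless decay with T2 = 40 ms
tes = [10, 20, 30, 40, 50]
sig = [math.exp(-te / 40.0) for te in tes]
t2 = fit_t2(tes, sig)  # recovers ~40 ms
```

    In practice the fit is applied voxel-by-voxel, and noise handling (e.g. discarding low-SNR echoes) matters for the precision figures quoted in the abstract.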

  19. Technologies for Future Precision Strike Missile Systems (les Technologies des futurs systemes de missiles pour frappe de precision)

    DTIC Science & Technology

    2001-07-01

    Hardware-in-loop (HWL) simulation is also developed. [Figure residue: missile test program elements, including firings/engine tests, structure tests, hardware-in-loop simulation, and subsystem lab tests of seekers, actuators, sensors, and electronics, together with propulsion and aero models.]

  20. A highly precise frequency-based method for estimating the tension of an inclined cable with unknown boundary conditions

    NASA Astrophysics Data System (ADS)

    Ma, Lin

    2017-11-01

    This paper develops a method for precisely determining the tension of an inclined cable with unknown boundary conditions. First, the nonlinear motion equation of an inclined cable is derived, and a numerical model of the motion of the cable is proposed using the finite difference method. The proposed numerical model includes the sag-extensibility, flexural stiffness, inclination angle and rotational stiffness at two ends of the cable. Second, the influence of the dynamic parameters of the cable on its frequencies is discussed in detail, and a method for precisely determining the tension of an inclined cable is proposed based on the derivatives of the eigenvalues of the matrices. Finally, a multiparameter identification method is developed that can simultaneously identify multiple parameters, including the rotational stiffness at two ends. This scheme is applicable to inclined cables with varying sag, varying flexural stiffness and unknown boundary conditions. Numerical examples indicate that the method provides good precision. Because the parameters of cables other than tension (e.g., the flexural stiffness and rotational stiffness at the ends) are not accurately known in practical engineering, the multiparameter identification method could further improve the accuracy of cable tension measurements.
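    For intuition, in the idealized limit that the paper generalizes (a taut string with no sag, no flexural stiffness, and pinned ends), tension follows directly from the fundamental frequency via f1 = (1/2L)·√(T/m). A sketch under those simplifying assumptions:

```python
def string_tension(f1_hz, length_m, mass_per_m):
    """Invert the taut-string fundamental frequency f1 = (1/2L)*sqrt(T/m)
    for the tension T. Ignores sag-extensibility, flexural stiffness, and
    end rotational stiffness, which the paper's finite-difference model
    accounts for."""
    return mass_per_m * (2.0 * length_m * f1_hz) ** 2

# A 100 m cable of 50 kg/m vibrating at 1 Hz fundamental
T = string_tension(f1_hz=1.0, length_m=100.0, mass_per_m=50.0)  # 2.0e6 N
```

    The paper's contribution is precisely that this simple inversion is inaccurate for real inclined cables, so the unknown boundary stiffnesses must be identified simultaneously with the tension.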

  1. Computing Generalized Matrix Inverse on Spiking Neural Substrate.

    PubMed

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
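    The central difficulty the paper formalizes, namely that limited weight precision perturbs the solution of a linear system, can be illustrated by uniformly quantizing a coefficient matrix and comparing solutions. This is a generic sketch; TrueNorth's actual weight representation and the paper's normalization procedure differ:

```python
def quantize(x, step):
    """Uniform quantization to a fixed step, mimicking a substrate that
    limits weight precision (hypothetical; not TrueNorth's format)."""
    return round(x / step) * step

def solve2(a, b):
    """Solve a 2x2 linear system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

A = [[2.1, 0.7], [0.7, 3.3]]
b = [5.0, 10.0]
exact = solve2(A, b)
Aq = [[quantize(v, 1.0 / 16) for v in row] for row in A]  # 4 fractional bits
approx = solve2(Aq, b)
err = max(abs(e, ) if False else abs(e - a) for e, a in zip(exact, approx))
```

    The framework in the paper bounds this kind of error analytically, so that a solver deployed on the constrained substrate remains provably correct.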

  2. Effects of Boron and Graphite Uncertainty in Fuel for TREAT Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughn, Kyle; Mausolff, Zander; Gonzalez, Esteban

    Advanced modeling techniques and current computational capacity make full-core TREAT simulations possible; the goal of such simulations is to understand the pre-test core and minimize the number of required calibrations. However, in order to simulate TREAT with a high degree of precision, the reactor materials and geometry must also be modeled with a high degree of precision. This paper examines how uncertainty in the reported values of boron and graphite affects simulations of TREAT.

  3. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running-time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. 
In Chapter 3, the random bond Ising ferromagnet is studied, which is especially useful since it serves as a prototype for more complicated disordered systems such as the random field Ising model and spin glasses. We investigate the effect that changing boundary spins has on the locations of domain walls in the interior of the random ferromagnet system. We provide an analytic proof that ground state domain walls in the two dimensional system are decomposable, and we map these domain walls to a shortest paths problem. By implementing a multiple-source shortest paths algorithm developed by Philip Klein, we are able to efficiently probe domain wall locations for all possible configurations of boundary spins. We consider lattices with uncorrelated disorder, as well as disorder that is spatially correlated according to a power law. We present numerical results for the scaling exponent governing the probability that a domain wall can be induced that passes through a particular location in the system's interior, and we compare these results to previous results on the directed polymer problem.
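    Once ground-state domain walls are mapped to shortest paths, standard graph algorithms apply. A minimal single-source Dijkstra over a weighted grid, where cell weights stand in for local bond energies (illustrative only, not the multiple-source algorithm of Klein used in the chapter):

```python
import heapq

def shortest_path_cost(weights, start, goal):
    """Dijkstra on a 2D grid of non-negative weights, where entering a
    cell costs that cell's weight (a stand-in for local bond energies).
    The start cell's own weight is included in the path cost."""
    rows, cols = len(weights), len(weights[0])
    dist = {start: weights[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + weights[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# The cheapest "wall" detours around the high-energy (9) column
grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
cost = shortest_path_cost(grid, (0, 0), (0, 2))  # 7, via the bottom row
```

    Klein's multiple-source variant amortizes this computation over all boundary configurations at once, which is what makes the exhaustive probing in the chapter feasible.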

  4. Analyses of internal tides generation and propagation over a Gaussian ridge in laboratory and numerical experiments

    NASA Astrophysics Data System (ADS)

    Dossmann, Yvan; Paci, Alexandre; Auclair, Francis; Floor, Jochem

    2010-05-01

    Internal tides are suggested to play a major role in sustaining the global oceanic circulation [1][5]. Although the exact origin of the energy conversions occurring in stratified fluids is questioned [2], it is clear that the diapycnal energy transfers provided by the energy cascade of internal gravity waves generated at tidal frequencies in regions of steep bathymetry are strongly linked to the general circulation energy balance. Therefore a precise quantification of the energy supplied by internal waves is a crucial step in forecasting climate, since it improves our understanding of the underlying physical processes. We focus on an academic case of internal waves generated over an oceanic ridge in a linearly stratified fluid. In order to accurately quantify the diapycnal energy transfers caused by internal wave dynamics, we adopt a complementary approach involving both laboratory and numerical experiments. The laboratory experiments are conducted in a 4 m long tank of the CNRM-GAME fluid mechanics laboratory, well known for its large stratified water flume (e.g. Knigge et al. [3]). The horizontal oscillation, at a precisely controlled frequency, of a Gaussian ridge immersed in a linearly stratified fluid generates internal gravity waves. The ridge, of e-folding width 3.6 cm, is 10 cm high and spans 50 cm. We use PIV and Synthetic Schlieren measurement techniques to retrieve the high-resolution velocity and stratification anomaly fields in the 2D vertical plane across the ridge. These experiments give us access to real and exhaustive measurements of a wide range of internal wave regimes by varying the precisely controlled experimental parameters. To complete this work, we carry out direct numerical simulations with the same parameters (forcing amplitude and frequency, initial stratification, boundary conditions) as the laboratory experiments. The model used is a non-hydrostatic version of the numerical model Symphonie [4]. 
Our purpose is not only to test the dynamics and energetics of the numerical model, but also to advance the analysis based on combined wavelet and empirical orthogonal function. In particular, we focus on the study of the transient regime of internal wave generation near the ridge. Our analyses of the experimental fields show that, for fixed background stratification and topography, the evolution of the stratification anomaly strongly depends on the forcing frequency. The duration of the transient regime, as well as the amplitude reached in the stationary state vary significantly with the parameter ω/N (where ω is the forcing frequency, and N is the background Brunt-Väisälä frequency). We also observe that, for particular forcing frequencies, for which the ridge slope matches the critical slope of the first harmonic mode, internal waves are excited both at the fundamental and the first harmonic frequency. Associated energy transfers are finally evaluated both experimentally and numerically, enabling us to highlight the similarities and discrepancies between the laboratory experiments and the numerical simulations. References [1] Munk W. and C. Wunsch (1998): Abyssal recipes II: energetics of tidal and wind mixing Deep-Sea Res. 45, 1977-2010 [2] Tailleux R. (2009): On the energetics of stratified turbulent mixing, irreversible thermodynamics, Boussinesq models and the ocean heat engine controversy, J. Fluid Mech. 638, 339-382 [3] Knigge C., D. Etling, A. Paci and O. Eiff (2010): Laboratory experiments on mountain-induced rotors, Quarterly Journal of the Royal Meteorological Society, in press. [4] Auclair F., C. Estournel, J. Floor, C. N'Guyen and P. Marsaleix, (2009): A non-hydrostatic, energy conserving algorithm for regional ocean modelling. Under revision. [5] Wunsch, C. & R. Ferrari (2004): Vertical mixing, energy and the general circulation of the oceans. Annu. Rev. Fluid Mech., 36:281-314.
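    The strong dependence on ω/N noted above follows from the linear internal-wave dispersion relation ω = N·sin θ, which fixes the angle of the wave beams from the horizontal. A sketch:

```python
import math

def beam_angle_deg(omega, N):
    """Angle of internal-wave beams from the horizontal, from the linear
    dispersion relation omega = N * sin(theta). Propagating waves require
    0 < omega < N; beams steepen as the forcing frequency approaches N."""
    if not 0.0 < omega < N:
        raise ValueError("propagating internal waves require 0 < omega < N")
    return math.degrees(math.asin(omega / N))

# Forcing at half the buoyancy frequency gives 30-degree beams
angle = beam_angle_deg(omega=0.5, N=1.0)
```

    Critical-slope effects, such as the first-harmonic excitation described above, occur when the topographic slope matches this beam angle for the relevant frequency.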

  5. 2006 - 2016: Ten Years Of Tsunami In French Polynesia

    NASA Astrophysics Data System (ADS)

    Reymond, D.; Jamelot, A.; Hyvernaud, O.

    2016-12-01

    Located in the South central Pacific, and despite its far-field situation, French Polynesia is very much affected by tsunamis generated along the major subduction zones around the Pacific. At the time of writing, 10 tsunamis have been generated in the Pacific Ocean since 2006; all of these events, recorded in French Polynesia, produced different levels of warning, ranging from a simple seismic warning with an information bulletin up to an effective tsunami warning with evacuation of the coastal zone. These tsunamigenic events represent an invaluable opportunity for the evolution and testing of the tsunami warning system developed in French Polynesia: during the last ten years, the warning rules have evolved from a simple magnitude criterion to the computation of the main seismic source parameters (location, slowness determinant (Newman & Okal, 1998) and focal geometry) using two independent methods: the first uses an inversion of W-phases (Kanamori & Rivera, 2012) and the second performs an inversion of long-period surface waves (Clément & Reymond, 2014); the source parameters thus estimated allow the expected distributions of tsunami heights to be computed in near real time (with the help of a supercomputer and parallelized numerical simulation codes). Furthermore, two kinds of numerical modeling are used: the first, very rapid (about 5 minutes of computation time), is based on Green's law (Jamelot & Reymond, 2015), while a more detailed and precise one uses classical numerical simulations through nested grids (about 45 minutes of computation time). Consequently, the criteria for tsunami warning are presently based on the expected tsunami heights in the different archipelagos and islands of French Polynesia. This major evolution allows different levels of warning to be used for the different archipelagos, working in tandem with the Civil Defense. 
We present a comparison of the historically observed tsunami heights (instrumental records, including deep-ocean measurements provided by DART buoys and measured tsunami run-ups) with the computed ones. In addition, the sites known for their amplification and resonance effects are well reproduced by the numerical simulations.

  6. cuSwift --- a suite of numerical integration methods for modelling planetary systems implemented in C/CUDA

    NASA Astrophysics Data System (ADS)

    Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.

    2014-07-01

    Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. 
We use these methods to study the influence of the Yarkovsky effect on resonant asteroids. We present first results and compare them with integrations done with the original algorithms implemented in SWIFT in order to assess the numerical precision of cuSwift and to demonstrate the speed-up we achieved using the GPU.
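    The symplectic property that makes schemes like WHM and RMVS suitable for long integrations can be illustrated with the simplest kick-drift-kick (leapfrog) step, whose energy error stays bounded over long times instead of drifting. This is a generic sketch, not cuSwift's actual Wisdom-Holman splitting:

```python
def leapfrog_step(pos, vel, accel, dt):
    """One kick-drift-kick (velocity Verlet) step for a 1D system with
    acceleration function accel(x). Symplectic: the energy error remains
    bounded rather than accumulating secularly."""
    v_half = vel + 0.5 * dt * accel(pos)
    pos_new = pos + dt * v_half
    vel_new = v_half + 0.5 * dt * accel(pos_new)
    return pos_new, vel_new

# Harmonic oscillator x'' = -x integrated for roughly one period
x, v, dt = 1.0, 0.0, 0.01
for _ in range(628):  # ~2*pi / dt steps
    x, v = leapfrog_step(x, v, lambda q: -q, dt)
energy = 0.5 * v * v + 0.5 * x * x  # stays close to the initial 0.5
```

    Wisdom-Holman mappings apply the same splitting idea to the Kepler-plus-perturbation Hamiltonian, which is what allows the large time steps mentioned above.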

  7. Numerical Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An overview of historical and current numerical aerodynamic simulation (NAS) is given. The capabilities and goals of the Numerical Aerodynamic Simulation Facility are outlined. Emphasis is given to numerical flow visualization and its applications to structural analysis of aircraft and spacecraft bodies. The uses of NAS in computational chemistry, engine design, and galactic evolution are mentioned.

  8. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
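    The millisecond STDP windows mentioned above force sub-millisecond timesteps, so the inner update loop dominates run time. A textbook forward-Euler leaky integrate-and-fire neuron illustrates the kind of loop whose throughput these simulators optimize (generic model with hypothetical parameter values, not Auryn's implementation):

```python
def simulate_lif(i_ext, t_ms, dt_ms=0.1, tau_ms=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Forward-Euler leaky integrate-and-fire neuron:
    dv/dt = (v_rest - v)/tau + i_ext, with spike-and-reset at v_thresh.
    Returns the spike times in milliseconds."""
    v, spikes = v_rest, []
    steps = round(t_ms / dt_ms)
    for step in range(steps):
        v += dt_ms * ((v_rest - v) / tau_ms + i_ext)
        if v >= v_thresh:
            spikes.append(step * dt_ms)
            v = v_reset
    return spikes

# Constant suprathreshold drive for 100 ms of simulated time
spikes = simulate_lif(i_ext=0.1, t_ms=100.0)
```

    A network of several thousand such neurons repeats this update, plus synaptic and STDP updates, every 0.1 ms of simulated time, which is where the communication and memory-bandwidth limits discussed in the abstract bite.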

  9. Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images

    DTIC Science & Technology

    2009-12-01

    Validation of the electromagnetic code FACETS for numerical simulation of radar target images. S. Wong, DRDC Ottawa. …Validation of the electromagnetic code FACETS for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design

  10. Rapid inundation estimates using coastal amplification laws in the western Mediterranean basin

    NASA Astrophysics Data System (ADS)

    Gailler, Audrey; Loevenbruck, Anne; Hébert, Hélène

    2014-05-01

    Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where tsunami waves are most amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is exacerbated when detailed grids are required for precise modeling of the coastline response of an individual harbor. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami warnings at the scale of the western Mediterranean and NE Atlantic basins. We present here preliminary work that performs quick estimates of the inundation at individual harbors from these high-sea forecasting tsunami simulations. The method involves an empirical correction based on theoretical amplification laws (either Green's or Synolakis' laws). The main limitation is that its application to a given coastal area would require a large database of previous observations in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gage records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, we use a set of synthetic mareograms, calculated both for hypothetical events and for well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids of increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). 
Nonlinear shallow-water tsunami modeling performed on a single 2' coarse bathymetric grid is compared to the values given by time-consuming nested-grid simulations (and observations when available), in order to check to what extent the simple approach based on the amplification laws can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (using Green's law), or the mean bathymetric slope to consider near the studied coast (when using the Synolakis law).
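    Green's law, one of the two amplification laws used for the empirical correction, scales wave amplitude with the fourth root of the depth ratio. A sketch under linear, energy-conserving shoaling assumptions:

```python
def greens_law_amplitude(a_offshore, depth_offshore_m, depth_coastal_m):
    """Green's law shoaling: A2 = A1 * (h1 / h2) ** 0.25.
    Assumes energy-conserving linear shoaling with no reflection,
    dissipation, or local resonance effects."""
    return a_offshore * (depth_offshore_m / depth_coastal_m) ** 0.25

# A 0.2 m offshore amplitude at 4000 m depth, extrapolated to 10 m depth
a_coast = greens_law_amplitude(0.2, 4000.0, 10.0)  # ~0.89 m
```

    The empirical correction described above effectively tunes the depth at which this extrapolation is anchored, since real harbor response deviates from the idealized law.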

  11. Ion Thruster Discharge Performance Per Magnetic Field Topography

    NASA Technical Reports Server (NTRS)

    Wirz, Richard E.; Goebel, Dan

    2006-01-01

    DC-ION is a detailed computational model for predicting the plasma characteristics of ring-cusp ion thrusters. The advanced magnetic field meshing algorithm used by DC-ION allows precise treatment of the secondary electron flow. This capability allows self-consistent estimates of plasma potential that improve the overall consistency of the results of the discharge model described in Reference [refJPC05mod1]. Plasma potential estimates allow the model to predict the onset of plasma instabilities, an important shortcoming of the previous model for optimizing the design of discharge chambers. A magnetic field mesh simplifies the plasma flow calculations for both the ions and the secondary electrons, and significantly reduces the numerical diffusion that can occur with meshes not aligned with the magnetic field. Comparing the results of this model to experimental data shows that the behavior of the primary electrons, and the precise manner of their confinement, dictates the fundamental efficiency of ring-cusp thrusters. This correlation is evident in simulations of the conventionally sized NSTAR thruster (30 cm diameter) and the miniature MiXI thruster (3 cm diameter).

  12. Precise starshade stationkeeping and pointing with a Zernike wavefront sensor

    NASA Astrophysics Data System (ADS)

    Bottom, Michael; Martin, Stefan; Seubert, Carl; Cady, Eric; Zareh, Shannon Kian; Shaklan, Stuart

    2017-09-01

    Starshades, large occulters positioned tens of thousands of kilometers in front of space telescopes, offer one of the few paths to imaging and characterizing Earth-like extrasolar planets. However, for a starshade to generate a sufficiently dark shadow on the telescope, the two must be coaligned to just 1 meter laterally, even at these large separations. The principal challenge to achieving this level of control is in determining the position of the starshade with respect to the space telescope. In this paper, we present numerical simulations and laboratory results demonstrating that a Zernike wavefront sensor coupled to a WFIRST-type telescope is able to deliver the stationkeeping precision required, by measuring light outside of the science wavelengths. The sensor can determine the starshade lateral position to centimeter level in seconds of open shutter time for stars brighter than eighth magnitude, with a capture range of 10 meters. We discuss the potential for fast (ms) tip/tilt pointing control at the milli-arcsecond level by illuminating the sensor with a laser mounted on the starshade. Finally, we present early laboratory results.

  13. Precise MS light-quark masses from lattice QCD in the regularization invariant symmetric momentum-subtraction scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorbahn, Martin; Jaeger, Sebastian; Department of Physics and Astronomy, University of Sussex, Falmer, Brighton BN1 9QH

    2010-12-01

    We compute the conversion factors needed to obtain the MS and renormalization-group-invariant (RGI) up, down, and strange quark masses at next-to-next-to-leading order from the corresponding parameters renormalized in the recently proposed RI/SMOM and RI/SMOM_γμ renormalization schemes. This is important for obtaining the MS masses with the best possible precision from numerical lattice QCD simulations, because the customary RI(')/MOM scheme is afflicted with large irreducible uncertainties both on the lattice and in perturbation theory. We find that the smallness of the known one-loop matching coefficients is accompanied by even smaller two-loop contributions. From a study of residual scale dependences, we estimate the resulting perturbative uncertainty on the light-quark masses to be about 2% in the RI/SMOM scheme and about 3% in the RI/SMOM_γμ scheme. Our conversion factors are given in fully analytic form, for general covariant gauge and renormalization point. We provide expressions for the associated anomalous dimensions.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Shaohua; School of Automation, Chongqing University, Chongqing 400044; Sun, Quanping

    This paper addresses chaos control of the micro-electro-mechanical resonator by using adaptive dynamic surface technology with an extended state observer. To reveal the mechanism of the micro-electro-mechanical resonator, phase diagrams and corresponding time histories are given to study the nonlinear dynamics and chaotic behavior, and homoclinic and heteroclinic chaos, which relate closely to the appearance of chaos, are presented based on the potential function. To eliminate the effect of chaos, an adaptive dynamic surface control scheme with an extended state observer is designed to convert random motion into regular motion without precise system model parameters and measured variables. Putting a tracking differentiator into the chaos controller solves the 'explosion of complexity' of backstepping and the poor precision of first-order filters. Meanwhile, to obtain high performance, a neural network with an adaptive law is employed to approximate the unknown nonlinear function in the controller design process. The boundedness of all signals of the closed-loop system is proved in the theoretical analysis. Finally, numerical simulations are executed, and extensive results illustrate the effectiveness and robustness of the proposed scheme.

  15. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic…

  17. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps improving, so the maintenance demand for LRFs is also rising. Following the dominant idea of using time to simulate spatial distance in simulated tests for pulsed range finders, the key to distance-simulation precision lies in an adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing its counting frequency. A high-precision controllable delay circuit is designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay is improved from +/- 0.75 m to +/- 0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated tests for high-precision pulsed range finders.
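The time-for-distance relation underlying such simulators is simply the two-way time of flight. A small sketch (the helper names are illustrative, not from the paper) converting between simulated distance and the required circuit delay:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_for_distance(d_m):
    """Round-trip delay (s) a simulator must generate to mimic a
    target at distance d_m for a pulsed laser range finder."""
    return 2.0 * d_m / C

def distance_error_from_delay_error(dt_s):
    """Simulated-distance error corresponding to a delay error dt_s."""
    return C * dt_s / 2.0

# A +/- 0.15 m simulated-distance accuracy requires delay control at
# roughly the nanosecond level:
dt = delay_for_distance(0.15)  # about 1.0e-9 s
```

This is why the paper's improvement from +/- 0.75 m to +/- 0.15 m amounts to tightening the delay error from about 5 ns to about 1 ns.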

  18. Precise Determination of the Baseline Between the TerraSAR-X and TanDEM-X Satellites

    NASA Astrophysics Data System (ADS)

    Koenig, Rolf; Rothacher, Markus; Michalak, Grzegorz; Moon, Yongjin

    TerraSAR-X, launched on June 15, 2007, and TanDEM-X, to be launched in September 2009, both carry the Tracking, Occultation and Ranging (TOR) category A payload instrument package. The TOR consists of a high-precision dual-frequency GPS receiver, called the Integrated GPS Occultation Receiver (IGOR), for precise orbit determination and atmospheric sounding, and a laser retro-reflector (LRR) serving as a target for the global Satellite Laser Ranging (SLR) ground station network. The TOR is supplied by the GeoForschungsZentrum Potsdam (GFZ), Germany, and the Center for Space Research (CSR), Austin, Texas. The objective of the German/US collaboration is twofold: provision of atmospheric profiles from the occultation data for use in numerical weather prediction and climate studies, and precision SAR data processing based on precise orbits and atmospheric products. For the scientific objectives of the TanDEM-X mission, i.e., bi-static SAR together with TerraSAR-X, the dual-frequency GPS receiver is of vital importance for the millimeter-level determination of the baseline, or distance, between the two spacecraft. The paper discusses the feasibility of generating millimeter baselines by the example of GRACE, where, for validation, the distance between the two GRACE satellites is directly available from the micrometer-level intersatellite link measurements. The distance between the GRACE satellites is some 200 km; the distance in the TerraSAR-X/TanDEM-X formation will be some 200 meters. The proposed approach is therefore subjected to a simulation of the foreseen TerraSAR-X/TanDEM-X formation. The effects of varying space environmental conditions, of possible phase center variations, of multipath, and of the varying centers of mass of the spacecraft are evaluated and discussed.

  19. Fabrication of micro-lens array on convex surface by means of micro-milling

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Du, Yunlong; Wang, Bo; Shan, Debin

    2014-08-01

    In order to develop applications of micro-milling technology and to fabricate ultra-precision optical surfaces with complex microstructures, this paper presents primary experimental research on micro-milling of a complex microstructure array. A complex microstructure array surface with varying parameters is designed, and a mathematical model of the surface is set up and simulated. For the fabrication of the designed microstructure array surface, a micro three-axis ultra-precision milling machine tool is developed: an aerostatic guideway driven directly by a linear motor is adopted to guarantee sufficient stiffness of the machine, and a novel numerical control strategy, with linear encoders of 5 nm resolution used as the feedback of the control system, is employed to ensure extremely high motion-control accuracy. With the help of CAD/CAM technology, convex micro-lens arrays on convex spherical surfaces with different scales are fabricated on polyvinyl chloride (PVC) and pure copper using a micro tungsten-carbide ball-end milling tool on the ultra-precision micro-milling machine. Excellent nanometer-level micro-movement performance of the axes is demonstrated by motion-control experiments. The fabricated surface closely matches the design: the characteristic scale of the microstructure is less than 200 μm and the accuracy is better than 1 μm. This proves that ultra-precision micro-milling based on a micro ultra-precision machine tool is a suitable method for the micro-manufacture of microstructure array surfaces on different kinds of materials, and with the development of micro milling cutters, ultra-precision micro-milling of complex microstructure surfaces will be achieved in the future.

  20. Tensor network simulation of QED on infinite lattices: Learning from (1+1)d, and prospects for (2+1)d

    NASA Astrophysics Data System (ADS)

    Zapp, Kai; Orús, Román

    2017-06-01

    The simulation of lattice gauge theories with tensor network (TN) methods is becoming increasingly fruitful. The vision is that such methods will, eventually, be used to simulate theories in (3+1) dimensions in regimes difficult for other methods. So far, however, TN methods have mostly simulated lattice gauge theories in (1+1) dimensions. The aim of this paper is to explore the simulation of quantum electrodynamics (QED) on infinite lattices with TNs, i.e., fermionic matter fields coupled to a U(1) gauge field, directly in the thermodynamic limit. With this idea in mind we first consider a gauge-invariant infinite density matrix renormalization group simulation of the Schwinger model, i.e., QED in (1+1)d. After giving a precise description of the numerical method, we benchmark our simulations by computing the subtracted chiral condensate in the continuum, in good agreement with other approaches. Our simulations of the Schwinger model allow us to build intuition about how a simulation should proceed in (2+1) dimensions. Based on this, we propose a variational ansatz using infinite projected entangled pair states (PEPS) to describe the ground state of (2+1)d QED. The ansatz includes U(1) gauge symmetry at the level of the tensors, as well as fermionic (matter) and bosonic (gauge) degrees of freedom both at the physical and virtual levels. We argue that all the necessary ingredients for the simulation of (2+1)d QED are, a priori, already in place, paving the way for future upcoming results.

  1. The evolution of hyperboloidal data with the dual foliation formalism: mathematical analysis and wave equation tests

    NASA Astrophysics Data System (ADS)

    Hilditch, David; Harms, Enno; Bugner, Marcus; Rüter, Hannes; Brügmann, Bernd

    2018-03-01

    A long-standing problem in numerical relativity is the satisfactory treatment of future null-infinity. We propose an approach for the evolution of hyperboloidal initial data in which the outer boundary of the computational domain is placed at infinity. The main idea is to apply the ‘dual foliation’ formalism in combination with hyperboloidal coordinates and the generalized harmonic gauge formulation. The strength of the present approach is that, following the ideas of Zenginoğlu, a hyperboloidal layer can be naturally attached to a central region using standard coordinates of numerical relativity applications. Employing a generalization of the standard hyperboloidal slices, developed by Calabrese et al, we find that all formally singular terms take a trivial limit as we head to null-infinity. A byproduct is a numerical approach for hyperboloidal evolution of nonlinear wave equations violating the null-condition. The height-function method, used often for fixed background spacetimes, is generalized in such a way that the slices can be dynamically ‘waggled’ to maintain the desired outgoing coordinate lightspeed precisely. This is achieved by dynamically solving the eikonal equation. As a first numerical test of the new approach we solve the 3D flat space scalar wave equation. The simulations, performed with the pseudospectral bamps code, show that outgoing waves are cleanly absorbed at null-infinity and that errors converge away rapidly as resolution is increased.

  2. A contrastive study on the influences of radial and three-dimensional satellite gravity gradiometry on the accuracy of the Earth's gravitational field recovery

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Hou-Tse; Zhong, Min; Yun, Mei-Juan

    2012-10-01

    The accuracy of the Earth's gravitational field measured from the gravity field and steady-state ocean circulation explorer (GOCE), up to degree 250, as influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij from satellite gravity gradiometry (SGG), is demonstrated contrastively based on an analytical error model and numerical simulation, respectively. Firstly, new analytical error models of the cumulative geoid height, influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij, are established. Up to degree 250, the GOCE cumulative geoid height error measured by the radial gravity gradient Vzz is about 2½ times higher than that measured by the three-dimensional gravity gradient Vij. Secondly, the Earth's gravitational field from GOCE, complete up to degree 250, is recovered using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij by numerical simulation. The results show that when the measurement error of the gravity gradient is 3 × 10⁻¹²/s², the cumulative geoid height errors using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij are 12.319 cm and 9.295 cm at degree 250, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient Vij is improved by 30%-40% on average compared with that using the radial gravity gradient Vzz up to degree 250. Finally, by mutual verification of the analytical error model and the numerical simulation, the orders of magnitude of the accuracies of the Earth's gravitational field recovery show no substantial differences between the radial and three-dimensional gravity gradients. Therefore, it is feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10⁻¹³/s²-10⁻¹⁵/s² for precisely producing the next-generation GOCE Follow-On Earth gravity field model with high spatial resolution.

  3. Injection molding lens metrology using software configurable optical test system

    NASA Astrophysics Data System (ADS)

    Zhan, Cheng; Cheng, Dewen; Wang, Shanshan; Wang, Yongtian

    2016-10-01

    Optical plastic lenses produced by injection molding machines possess numerous advantages: low weight, impact resistance, low cost, etc. The measuring methods in the optical shop are mainly interferometry and profilometry. However, these instruments are not only expensive but also difficult to align. The software configurable optical test system (SCOTS) is based on the geometry of fringe reflection and the phase measuring deflectometry (PMD) method, and can be used to measure large-diameter mirrors, aspheric surfaces, and freeform surfaces rapidly, robustly, and accurately. In addition to the conventional phase-shifting method, we propose another data-collection method, called dot-matrix projection. We also use Zernike polynomials to correct the camera distortion. This polynomial-fitting distortion-mapping method is not only simple to operate but also achieves high conversion precision. We simulate this test system measuring a concave surface using CODE V and MATLAB. The simulation results show that the dot-matrix projection method has high accuracy and that SCOTS has important significance for on-line inspection in the optical shop.
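Zernike-based distortion correction of the kind mentioned above reduces to a least-squares fit of polynomial basis terms. The sketch below uses only the first four Zernike terms (piston, two tilts, defocus) and synthetic data; the paper's actual polynomial order and calibration data are not specified here.

```python
import numpy as np

def zernike_basis(x, y):
    """First few Zernike polynomials on the unit disk in Cartesian
    form: piston, x-tilt, y-tilt, defocus. Enough to illustrate the fit."""
    r2 = x**2 + y**2
    return np.column_stack([np.ones_like(x), x, y, 2.0 * r2 - 1.0])

def fit_distortion(x, y, measured):
    """Least-squares Zernike fit of a measured distortion map."""
    A = zernike_basis(x, y)
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
x = rng.uniform(-0.7, 0.7, 500)
y = rng.uniform(-0.7, 0.7, 500)
# synthetic distortion: tilt + defocus (illustrative, not real camera data)
measured = 0.02 * x - 0.01 * y + 0.005 * (2.0 * (x**2 + y**2) - 1.0)
c = fit_distortion(x, y, measured)
# c recovers the coefficients [0, 0.02, -0.01, 0.005]
```

In a real calibration, `measured` would be the deviation between detected and ideal fringe (or dot-grid) positions, and higher-order Zernike terms would be added as needed.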

  4. Error Analysis of Magnetohydrodynamic Angular Rate Sensor Combined with Coriolis Effect at Low Frequency.

    PubMed

    Ji, Yue; Xu, Mengjie; Li, Xingfei; Wu, Tengfei; Tuo, Weixiao; Wu, Jun; Dong, Jiuzhi

    2018-06-13

    The magnetohydrodynamic (MHD) angular rate sensor (ARS), with a low noise level over an ultra-wide bandwidth, has been developed for lasing and imaging applications, especially line-of-sight (LOS) systems. A modified MHD ARS combined with the Coriolis effect is studied in this paper to expand the sensor's bandwidth at low frequency (<1 Hz), which is essential for precision LOS pointing and wide-bandwidth LOS jitter suppression. The model and the simulation method were constructed, and a comprehensive solving method based on the magnetic and electric interaction methods was proposed. The numerical results on the Coriolis effect and the frequency response of the modified MHD ARS are detailed. In addition, as the experimental results of the designed sensor are consistent with the simulation results, an analysis of the model errors is discussed. Our study provides an error-analysis method for an MHD ARS combined with the Coriolis effect and offers a framework for future studies to minimize the error.

  5. The small-scale turbulent dynamo in smoothed particle magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Tricco, T. S.; Price, D. J.; Federrath, C.

    2016-05-01

    Supersonic turbulence is believed to be at the heart of star formation. We have performed smoothed particle magnetohydrodynamics (SPMHD) simulations of the small-scale dynamo amplification of magnetic fields in supersonic turbulence. The calculations use isothermal gas driven at an rms velocity of Mach 10, so that conditions are representative of star-forming molecular clouds in the Milky Way. The growth of magnetic energy is followed for 10 orders of magnitude until it reaches saturation, at a few per cent of the kinetic energy. The results of our dynamo calculations are compared with results from grid-based methods, finding excellent agreement in their statistics and qualitative behaviour. The simulations utilise our latest algorithmic developments, in particular a new divergence cleaning approach to maintain the solenoidal constraint on the magnetic field and a method to reduce the numerical dissipation of the magnetic shock-capturing scheme. We demonstrate that our divergence cleaning method may be used to achieve ∇ • B = 0 to machine precision, albeit at significant computational expense.
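The paper's divergence cleaning is specific to SPMHD, but the ∇ • B = 0 target itself can be illustrated on a periodic grid with a simpler spectral Helmholtz projection. This is an assumed stand-in for demonstration, not the authors' method:

```python
import numpy as np

def clean_divergence(Bx, By):
    """Project a periodic 2D field onto its solenoidal part via FFT
    (Helmholtz projection): remove the component of each Fourier mode
    parallel to its wavevector, driving div(B) to machine precision."""
    n = Bx.shape[0]
    k = np.fft.fftfreq(n) * n            # integer wavenumbers (2*pi domain)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid 0/0 on the mean mode
    bx, by = np.fft.fft2(Bx), np.fft.fft2(By)
    div = kx * bx + ky * by              # k . B_hat (up to a factor of i)
    bx -= kx * div / k2
    by -= ky * div / k2
    return np.fft.ifft2(bx).real, np.fft.ifft2(by).real

def divergence(Bx, By):
    """Spectral divergence of a periodic 2D field."""
    n = Bx.shape[0]
    k = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    d = 1j * (kx * np.fft.fft2(Bx) + ky * np.fft.fft2(By))
    return np.fft.ifft2(d).real

n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
Bx = np.sin(X) * np.cos(Y) + 0.3 * np.sin(X)  # has nonzero divergence
By = -np.cos(X) * np.sin(Y)
Bxc, Byc = clean_divergence(Bx, By)
```

In SPMHD the same goal is pursued with constrained hyperbolic/parabolic cleaning on particles, since a global FFT projection is not available there.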

  6. Enhanced Photon Extraction from a Nanowire Quantum Dot Using a Bottom-Up Photonic Shell

    NASA Astrophysics Data System (ADS)

    Jeannin, Mathieu; Cremel, Thibault; Häyrynen, Teppo; Gregersen, Niels; Bellet-Amalric, Edith; Nogues, Gilles; Kheng, Kuntheak

    2017-11-01

    Semiconductor nanowires offer the possibility to grow high-quality quantum-dot heterostructures, and, in particular, CdSe quantum dots inserted in ZnSe nanowires have demonstrated the ability to emit single photons up to room temperature. In this paper, we demonstrate a bottom-up approach to fabricate a photonic fiberlike structure around such nanowire quantum dots by depositing an oxide shell using atomic-layer deposition. Simulations suggest that the intensity collected in our NA = 0.6 microscope objective can be increased by a factor of 7 with respect to the bare nanowire case. Combining microphotoluminescence, decay time measurements, and numerical simulations, we obtain a fourfold increase in the collected photoluminescence from the quantum dot. We show that this improvement is due to an increase of the quantum-dot emission rate and a redirection of the emitted light. Our ex situ fabrication technique allows precise and reproducible fabrication on a large scale. Its improved extraction efficiency is compared to state-of-the-art top-down devices.

  7. Investigating fold structures of 2D materials by quantitative transmission electron microscopy.

    PubMed

    Wang, Zhiwei; Zhang, Zengming; Liu, Wei; Wang, Zhong Lin

    2017-04-01

    We report an approach developed for deriving 3D structural information of 2D membrane folds, based on recently established quantitative transmission electron microscopy (TEM) in combination with density functional theory (DFT) calculations. Systematic multislice simulations reveal that membrane folding leads to sufficiently strong electron scattering to enable a precise determination of the bending radius. The image contrast also depends on the folding angles of 2D materials, owing to the variation of projected potentials, but this exerts a much smaller effect than the bending radii. DFT calculations show that folded edges are typically characteristic of (fractional) nanotubes, with the same curvature retained after energy optimization. Owing to the exclusion of the Stobbs-factor issue, numerical simulations were compared directly with the experimental measurements on an absolute contrast scale, which resulted in a successful determination of the bending radius of folded monolayer MoS₂ films. The method should be applicable to characterizing all 2D membranes with 3D folding features. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. The Investigation of Ghost Fluid Method for Simulating the Compressible Two-Medium Flow

    NASA Astrophysics Data System (ADS)

    Lu, Hai Tian; Zhao, Ning; Wang, Donghong

    2016-06-01

    In this paper, we investigate the conservation error of two-dimensional compressible two-medium flow simulated by the front tracking method. As improved versions of the original ghost fluid method, the modified ghost fluid method and the real ghost fluid method are selected to define the interface boundary conditions, respectively, to show their different effects on the conservation error. A Riemann problem is constructed along the normal direction of the interface in the front tracking method, with the goal of obtaining an efficient procedure to track the explicit sharp interface precisely. The corresponding Riemann solutions are also used directly in these improved ghost fluid methods. Extensive numerical examples, including the Sod shock tube and the shock-bubble interaction, are tested to calculate the conservation error. It is found that the two ghost fluid methods perform distinctively differently for different initial conditions of the flow field, and related conclusions are drawn to suggest the best choice of combination.

  9. Computational fluid dynamics simulation of sound propagation through a blade row.

    PubMed

    Zhao, Lei; Qiao, Weiyang; Ji, Liang

    2012-10-01

    The propagation of sound waves through a blade row is investigated numerically. A wave-splitting method in a two-dimensional duct with arbitrary mean flow is presented, based on which the pressure amplitude of different wave modes can be extracted at an axial plane. The propagation of a sound wave through a flat-plate blade row has been simulated by solving the unsteady Reynolds-averaged Navier-Stokes (URANS) equations. The transmission and reflection coefficients obtained by computational fluid dynamics (CFD) are compared with semi-analytical results. The comparison indicates that a low-order URANS scheme causes large errors if the sound pressure level is lower than -100 dB (with the product of density, mean flow velocity, and speed of sound as the reference pressure). The CFD code has sufficient precision when solving the interaction of a sound wave with a blade row, provided that boundary reflections have no substantial influence. Finally, the effects of flow Mach number, blade thickness, and blade turning angle on sound propagation are studied.
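The nonstandard reference pressure quoted above (ρ·U·c rather than the usual 20 µPa) makes the -100 dB threshold easy to state explicitly. A small sketch with assumed flow values:

```python
import math

def spl_db(p, rho, u, c):
    """Sound pressure level in dB using rho * U * c as the reference
    pressure, the convention quoted in the abstract."""
    p_ref = rho * u * c  # density * mean flow velocity * speed of sound
    return 20.0 * math.log10(abs(p) / p_ref)

# assumed illustrative values: rho = 1.2 kg/m^3, U = 100 m/s, c = 340 m/s
# gives p_ref = 40 800 Pa, so a 0.408 Pa amplitude sits at -100 dB
level = spl_db(0.408, 1.2, 100.0, 340.0)  # -100.0 dB
```

The point of the abstract is that wave amplitudes this far below the mean-flow pressure scale are swamped by numerical dissipation in low-order URANS schemes.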

  10. Upgrade of the gas flow control system of the resistive current leads of the LHC inner triplet magnets: Simulation and experimental validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perin, A.; Casas-Cubillos, J.; Pezzetti, M.

    2014-01-29

    The 600 A and 120 A circuits of the inner triplet magnets of the Large Hadron Collider are powered by resistive, gas-cooled current leads. The current solution for controlling the gas flow of these leads has shown severe operability limitations. In order to allow more precise and more reliable control of the cooling gas flow, new flowmeters will be installed during the first long shutdown of the LHC. Because of the high level of radiation in the area next to the current leads, the flowmeters will be installed in shielded areas located up to 50 m away from the current leads. Since the control valves remain located next to the current leads, this configuration leads to long piping between the valves and the flowmeters. In order to determine its dynamic behaviour, the proposed system was simulated with a numerical model and validated with experimental measurements performed on a dedicated test bench.

  11. Correction of beam-beam effects in luminosity measurement in the forward region at CLIC

    NASA Astrophysics Data System (ADS)

    Lukić, S.; Božović-Jelisavčić, I.; Pandurović, M.; Smiljanić, I.

    2013-05-01

    Procedures for correcting the beam-beam effects in luminosity measurements at CLIC at 3 TeV center-of-mass energy are described and tested using Monte Carlo simulations. The angular counting loss due to the combined Beamstrahlung and initial-state radiation effects is corrected based on the reconstructed velocity of the collision frame of the Bhabha scattering. The distortion of the luminosity spectrum due to initial-state radiation is corrected by deconvolution. Finally, the counting bias due to the finite calorimeter energy resolution is numerically corrected. To test the procedures, the BHLUMI Bhabha event generator and the Guinea-Pig beam-beam simulation were used to generate the outgoing momenta of Bhabha particles in bunch collisions at CLIC. The systematic effects of the beam-beam interaction on the luminosity measurement are corrected with a precision of 1.4 per mille in the upper 5% of the energy, and 2.7 per mille in the range between 80% and 90% of the nominal center-of-mass energy.

  12. Generalised optical differentiation wavefront sensor: a sensitive high dynamic range wavefront sensor.

    PubMed

    Haffert, S Y

    2016-08-22

    Current wavefront sensors for high-resolution imaging have either a large dynamic range or a high sensitivity. A new kind of wavefront sensor is developed which can have both: the Generalised Optical Differentiation wavefront sensor. This new wavefront sensor is based on the principles of optical differentiation by amplitude filters. We have extended the theory behind linear optical differentiation and generalised it to nonlinear filters. We used numerical simulations and laboratory experiments to investigate the properties of the generalised wavefront sensor. With this we created a new filter that can decouple the dynamic range from the sensitivity. These properties make it suitable for adaptive optics systems where a large range of phase aberrations has to be measured with high precision.

  13. Shock Hugoniot and equations of states of water, castor oil, and aqueous solutions of sodium chloride, sucrose and gelatin

    NASA Astrophysics Data System (ADS)

    Gojani, A. B.; Ohtani, K.; Takayama, K.; Hosseini, S. H. R.

    2016-01-01

    This paper reports experiments for the determination of reliable shock Hugoniot curves of liquids, in particular in the relatively low-pressure region, which are needed to perform precise numerical simulations of shock wave/tissue interaction prior to the development of shock-wave-related therapeutic devices. Underwater shock waves were generated by explosions of laser-ignited 10 mg silver azide pellets, which were temporally and spatially well controlled. By measuring the temporal variation of shock velocities and over-pressures in castor oil and in aqueous solutions of sodium chloride, sucrose, and gelatin at various concentrations, we succeeded in determining shock Hugoniot curves for these liquids and hence the parameters of Tait-type equations of state.
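A Tait-type equation of state of the kind fitted in such work has the closed form below. The coefficients here are commonly quoted water-like textbook values, not the paper's fits, and the linear shock-velocity relation u_s = c0 + s·u_p is likewise only illustrative:

```python
def tait_pressure(rho, rho0=998.0, B=3.0e8, n=7.15, p0=1.01325e5):
    """Tait-type equation of state:
        p = B * ((rho / rho0)**n - 1) + p0
    Parameter values are illustrative water-like constants (assumed),
    not the coefficients fitted in the paper."""
    return B * ((rho / rho0) ** n - 1.0) + p0

def hugoniot_us(up, c0=1483.0, s=1.75):
    """Linear shock Hugoniot u_s = c0 + s * u_p (assumed constants):
    shock speed as a function of particle speed."""
    return c0 + s * up
```

Hugoniot measurements of (u_s, u_p) pairs pin down c0 and s; the Tait coefficients then follow from matching pressure-density states along the curve.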

  14. Comparison of PASCAL and FORTRAN for solving problems in the physical sciences

    NASA Technical Reports Server (NTRS)

    Watson, V. R.

    1981-01-01

    The paper compares PASCAL and FORTRAN for problem solving in the physical sciences, prompted by requests NASA has received to make PASCAL available on the Numerical Aerodynamic Simulator (scheduled to be operational in 1986). PASCAL's disadvantages include the lack of scientific utility procedures equivalent to the IBM Scientific Subroutine Package or the IMSL package, which are available in FORTRAN. Its advantages include well-organized code that is easy to read and maintain, range checking to prevent errors, and a broad selection of data types. It is concluded that FORTRAN may be the better language, although ADA (patterned after PASCAL) may surpass FORTRAN owing to its ability to add complex and vector math and to specify the precision and range of variables.

  15. Thermodynamic functions, freezing transition, and phase diagram of dense carbon-oxygen mixtures in white dwarfs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iyetomi, H.; Ogata, S.; Ichimaru, S.

    1989-07-01

    Equations of state for dense carbon-oxygen (C-O) binary-ionic mixtures (BIMs) appropriate to the interiors of white dwarfs are investigated through Monte Carlo simulations, by solution of relevant integral equations, and by variational calculations in the density-functional formalism. It is thereby shown that the internal energies of the C-O BIM solids and fluids both obey precisely the linear mixing formulas. We then present an accurate calculation of the phase diagram associated with freezing transitions in such BIM materials, resulting in a novel prediction of an azeotropic diagram. Discontinuities of the mass density across the azeotropic phase boundaries are evaluated numerically for application to a study of white-dwarf evolution.

  16. A tomographic technique for the simultaneous imaging of temperature, chemical species, and pressure in reactive flows using absorption spectroscopy with frequency-agile lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Weiwei; Kaminski, Clemens F., E-mail: cfk23@cam.ac.uk

    2014-01-20

    This paper proposes a technique that can simultaneously retrieve distributions of temperature, concentration of chemical species, and pressure based on broad bandwidth, frequency-agile tomographic absorption spectroscopy. The technique holds particular promise for the study of dynamic combusting flows. A proof-of-concept numerical demonstration is presented, using representative phantoms to model conditions typically prevailing in near-atmospheric or high pressure flames. The simulations reveal both the feasibility of the proposed technique and its robustness. Our calculations indicate precisions of ∼70 K at flame temperatures and ∼0.05 bars at high pressure from reconstructions featuring as much as 5% Gaussian noise in the projections.

  17. Numerical test of the Edwards conjecture shows that all packings are equally probable at jamming

    NASA Astrophysics Data System (ADS)

    Martiniani, Stefano; Schrenk, K. Julian; Ramola, Kabir; Chakraborty, Bulbul; Frenkel, Daan

    2017-09-01

    In the late 1980s, Sam Edwards proposed a possible statistical-mechanical framework to describe the properties of disordered granular materials. A key assumption underlying the theory was that all jammed packings are equally likely. In the intervening years it has never been possible to test this bold hypothesis directly. Here we present simulations that provide direct evidence that at the unjamming point, all packings of soft repulsive particles are equally likely, even though generically, jammed packings are not. Typically, jammed granular systems are observed precisely at the unjamming point since grains are not very compressible. Our results therefore support Edwards’ original conjecture. We also present evidence that at unjamming the configurational entropy of the system is maximal.

  18. Numerical analysis of multicomponent responses of surface-hole transient electromagnetic method

    NASA Astrophysics Data System (ADS)

    Meng, Qing-Xin; Hu, Xiang-Yun; Pan, He-Ping; Zhou, Feng

    2017-03-01

    We calculate the multicomponent responses of the surface-hole transient electromagnetic method. Conventional methods and models, which are based on regular local targets, are unsuitable as geoelectric models with conductive surrounding rocks. We therefore propose a calculation and analysis scheme based on numerical simulations of the subsurface transient electromagnetic fields. In the modeling of the electromagnetic fields, the forward simulations are performed using the finite-difference time-domain method and the discrete image method, which combines the Gaver-Stehfest inverse Laplace transform with the Prony method to solve for the initial electromagnetic fields. Precision in the iterative computations is ensured by using transmission boundary conditions. For the response analysis, we construct geoelectric models consisting of near-borehole targets and conductive wall rocks and carry out forward simulations. The observed electric fields are converted into induced-electromotive-force responses using multicomponent observation devices. By comparing the transient electric fields and multicomponent responses under different conditions, we suggest that the multicomponent induced-electromotive-force responses are related to the horizontal and vertical gradient variations of the transient electric field at different times. The characteristics of the response are determined by the variation of the subsurface transient electromagnetic fields, i.e., diffusion, attenuation, and distortion, under different conditions, as well as by the electromagnetic fields at the observation positions. The proposed calculation and analysis scheme considers both the surrounding rocks and the anomalous field of the local targets, and can therefore account for the geological data better than conventional transient-field response analysis of local targets.
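The Gaver-Stehfest inverse Laplace transform used in the modeling step above is a standard algorithm. A minimal sketch follows (the paper combines it with the Prony method, which is omitted here; the test transforms are illustrative):

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k for even N (Stehfest's 1970 formula)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Numerical inverse Laplace transform:
        f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t)
    Accurate for smooth f; N is even, typically 10-16 in double precision."""
    ln2 = math.log(2.0)
    V = stehfest_coefficients(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))
```

For example, inverting F(s) = 1/(s+1) at t = 1 reproduces exp(-1) to several digits, which is the kind of smooth early-time field behavior the discrete image method relies on.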

  19. Global numerical simulations of the rise of vortex-mediated pulsar glitches in full general relativity

    NASA Astrophysics Data System (ADS)

    Sourie, A.; Chamel, N.; Novak, J.; Oertel, M.

    2017-02-01

    In this paper, we study in detail the role of general relativity in the global dynamics of giant pulsar glitches, as exemplified by Vela. For this purpose, we carry out numerical simulations of the spin-up triggered by the sudden unpinning of superfluid vortices. In particular, we compute the exchange of angular momentum between the core neutron superfluid and the rest of the star within a two-fluid model including both (non-dissipative) entrainment effects and (dissipative) mutual friction forces. Our simulations are based on a quasi-stationary approach using realistic equations of state (EoSs). We show that the evolution of the angular velocities of both fluids can be accurately described by an exponential law. The associated characteristic rise time τr, which can be precisely computed from stationary configurations only, has a form similar to that obtained in the Newtonian limit. However, general relativity changes the structure of the star and leads to additional couplings between the fluids due to frame-dragging effects. As a consequence, general relativity can have a large impact on the actual value of τr: the errors incurred by using Newtonian gravity are found to be as large as ~40 per cent for the models considered. Values of the rise time are calculated for Vela and compared with current observational limits. Finally, we study the amount of gravitational waves emitted during a glitch. Simple expressions are obtained for the corresponding characteristic amplitudes and frequencies. The detectability of glitches through gravitational wave observatories is briefly discussed.
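    The exponential spin-up law described above can be illustrated with a toy fit: given a rise of the form Ω_c(t) = Ω_eq - ΔΩ·exp(-t/τ_r), the rise time is recovered from a log-linear fit. All numbers below are illustrative, not Vela parameters:

```python
import numpy as np

# Illustrative values only (not Vela parameters): equilibrium angular
# velocity, glitch amplitude, and rise time.
omega_eq, d_omega, tau_r = 70.34, 1.0e-4, 5.0   # rad/s, rad/s, s

t = np.linspace(0.0, 30.0, 300)
omega_c = omega_eq - d_omega * np.exp(-t / tau_r)   # exponential spin-up of the crust

# Recover the rise time from the simulated rise by a log-linear fit.
slope, _ = np.polyfit(t, np.log(omega_eq - omega_c), 1)
print(round(-1.0 / slope, 3))  # 5.0
```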

  20. Scheduling Mission-Critical Flows in Congested and Contested Airborne Network Environments

    DTIC Science & Technology

    2018-03-01

    precision agriculture [64–71]. However, designing, implementing, and testing UAV networks poses numerous interdisciplinary challenges because the... applications including search and rescue, disaster relief, precision agriculture, environmental monitoring, and surveillance. Many of these applications... monitoring enabling precision agriculture,” in Automation Science and Engineering (CASE), 2015 IEEE International Conference on. IEEE, 2015, pp. 462–469. [65

  1. Temporal and spatial temperature measurement in insulator-based dielectrophoretic devices.

    PubMed

    Nakano, Asuka; Luo, Jinghui; Ros, Alexandra

    2014-07-01

    Insulator-based dielectrophoresis is a relatively new analytical technique with large potential for a number of applications, such as sorting, separation, purification, fractionation, and preconcentration. The application of insulator-based dielectrophoresis (iDEP) to biological samples, however, requires precise control of the microenvironment with temporal and spatial resolution. Temperature variations during an iDEP experiment are a critical aspect, since Joule heating can lead to various detrimental effects hampering reproducibility. Additionally, Joule heating can induce thermal flow and, more importantly, can degrade biomolecules and other biological species. Here, we investigate temperature variations in iDEP devices experimentally, employing the thermosensitive dye Rhodamine B (RhB), and compare the measured results with numerical simulations. We performed the temperature measurement experiments over the buffer conductivity range commonly used for iDEP applications, under applied electric potentials. To this aim, we employed an in-channel measurement method and an alternative method employing a thin film located slightly below the iDEP channel. We found that the temperature does not deviate significantly from room temperature at 100 μS/cm up to 3000 V applied, such as in protein iDEP experiments. At a conductivity of 300 μS/cm, as previously used for mitochondria iDEP experiments at 3000 V, the temperature never exceeds 34 °C. This observation suggests that temperature effects for iDEP of proteins and mitochondria under these conditions are marginal. However, at larger conductivities (1 mS/cm) and only at 3000 V applied, temperature increases were significant, reaching a regime in which degradation is likely to occur. Moreover, the thin-film method yielded lower temperature increases, which was also confirmed with numerical simulations.
We thus conclude that the thin-film method is preferable, as it provides closer agreement with numerical simulations and does not depend on the iDEP channel material. Overall, our study provides a thorough comparison of two experimental techniques for direct temperature measurement, which can be adapted to a variety of iDEP applications in the future. The good agreement between simulation and experiment will also allow one to assess temperature variations for iDEP devices prior to experiments.
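    The role of buffer conductivity in Joule heating follows from a back-of-the-envelope estimate: the volumetric heating rate is σE². The numbers below (channel length, properties of water) are assumptions for illustration, not the geometry of the devices studied:

```python
# Order-of-magnitude Joule-heating estimate (assumed geometry and properties).
sigma = 0.1          # buffer conductivity, S/m (= 1 mS/cm, the highest case above)
E = 3000.0 / 0.01    # 3000 V across an assumed 1 cm channel, V/m
rho, c_p = 1000.0, 4184.0   # density (kg/m^3) and heat capacity (J/kg/K) of water

q = sigma * E**2        # volumetric heating, W/m^3
dTdt = q / (rho * c_p)  # adiabatic heating rate, K/s
print(f"{q:.1e} W/m^3, {dTdt:.0f} K/s")
```

    The adiabatic rate vastly overstates the actual temperature rise, since conduction through the channel walls removes most of the heat; it nevertheless shows why heating scales linearly with conductivity and quadratically with applied voltage.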

  2. 3D conformal MRI-controlled transurethral ultrasound prostate therapy: validation of numerical simulations and demonstration in tissue-mimicking gel phantoms.

    PubMed

    Burtnyk, Mathieu; N'Djin, William Apoutou; Kobelevskiy, Ilya; Bronskill, Michael; Chopra, Rajiv

    2010-11-21

    MRI-controlled transurethral ultrasound therapy uses a linear array of transducer elements and active temperature feedback to create volumes of thermal coagulation shaped to predefined prostate geometries in 3D. The specific aims of this work were to demonstrate the accuracy and repeatability of producing large volumes of thermal coagulation (>10 cc) that conform to 3D human prostate shapes in a tissue-mimicking gel phantom, and to evaluate quantitatively the accuracy with which numerical simulations predict these 3D heating volumes under carefully controlled conditions. Eleven conformal 3D experiments were performed in a tissue-mimicking phantom within a 1.5 T MR imager to obtain non-invasive temperature measurements during heating. Temperature feedback was used to control the rotation rate and ultrasound power of transurethral devices with up to five 3.5 × 5 mm active transducer elements. Heating patterns shaped to human prostate geometries were generated using devices operating at 4.7 or 8.0 MHz with surface acoustic intensities of up to 10 W cm(-2). Simulations were informed by transducer surface velocity measurements acquired with a scanning laser vibrometer, enabling improved calculations of the acoustic pressure distribution in a gel phantom. Temperature dynamics were determined according to a finite-difference time-domain (FDTD) solution of Pennes' bioheat transfer equation (BHTE). The 3D heating patterns produced in vitro were shaped very accurately to the prostate target volumes, within the spatial resolution of the MRI thermometry images. The volume of the treatment difference falling outside ± 1 mm of the target boundary was, on average, 0.21 cc or 1.5% of the prostate volume. The numerical simulations predicted the extent and shape of the coagulation boundary produced in gel to within (mean ± stdev [min, max]): 0.5 ± 0.4 [-1.0, 2.1] and -0.05 ± 0.4 [-1.2, 1.4] mm for the treatments at 4.7 and 8.0 MHz, respectively.
The temperatures across all MRI thermometry images were predicted within -0.3 ± 1.6 °C and 0.1 ± 0.6 °C, inside and outside the prostate respectively, and the treatment time to within 6.8 min. The simulations also showed excellent agreement in regions of sharp temperature gradients near the transurethral and endorectal cooling devices. Conformal 3D volumes of thermal coagulation can be precisely matched to prostate shapes with transurethral ultrasound devices and active MRI temperature feedback. The accuracy of numerical simulations for MRI-controlled transurethral ultrasound prostate therapy was validated experimentally, reinforcing their utility as an effective treatment planning tool.
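    Pennes' bioheat equation adds a perfusion sink to ordinary heat conduction: ρc ∂T/∂t = k∇²T + w_b ρ_b c_b (T_a - T) + Q. A 1D explicit finite-difference sketch is shown below with generic tissue parameters; these are illustrative values, not the treatment-planning model of the paper:

```python
import numpy as np

# 1D explicit finite-difference sketch of Pennes' bioheat equation
# (illustrative tissue parameters, not the authors' values).
k, rho, c = 0.5, 1050.0, 3600.0        # W/m/K, kg/m^3, J/kg/K
w_b, rho_b, c_b, T_a = 0.005, 1000.0, 4000.0, 37.0   # perfusion term
nx, dx, dt = 101, 1e-3, 0.05           # grid; stable since dt << dx^2*rho*c/(2k)

T = np.full(nx, 37.0)
Q = np.zeros(nx); Q[45:56] = 2.0e5     # localized acoustic heating, W/m^3
for _ in range(2000):                  # 100 s of heating
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    lap[0] = lap[-1] = 0.0             # fixed-temperature boundaries
    T += dt * (k * lap + w_b * rho_b * c_b * (T_a - T) + Q) / (rho * c)
print(round(T.max(), 1))               # peak temperature after 100 s
```

    Conduction and perfusion together limit the peak rise; treatment planning couples such a solver to the acoustic field and a thermal-dose model.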

  3. The elimination of influence of disturbing bodies' coordinates and derivatives discontinuity on the accuracy of asteroid motion simulation

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.; Votchel, I. A.

    2013-12-01

    The problem of asteroid motion simulation has been considered. At present, this simulation is performed by means of numerical integration, taking into account the perturbations from the planets and the Moon using planetary ephemerides (DE405, DE422, etc.). All these ephemerides contain coefficients of Chebyshev polynomials for a large number of equal interpolation intervals. However, the ephemerides have been constructed to preserve, at the junctions of adjacent intervals, continuity only of the coordinates and their first derivatives (and only in 16-digit decimal format, corresponding to 64-bit floating-point numbers). The second- and higher-order derivatives have breaks at these junctions. These breaks, if they fall within an integration step, decrease the accuracy of the numerical integration. In 34-digit format (128-bit floating-point numbers), the coordinates and their first derivatives also have breaks (at the 15th-16th decimal digit) at the interpolation-interval junctions. Two ways of eliminating the influence of such breaks have been considered. The first is a "smoothing" of the ephemerides so that the planets' coordinates and their derivatives up to some order are continuous at the junctions. The smoothing algorithm is based on conditional least-squares fitting of the Chebyshev polynomial coefficients; the conditions are the equality of the coordinates and derivatives up to some order "from the left" and "from the right" at each junction. The algorithm has been applied to smooth the DE430 ephemerides up to the first-order derivatives. The second way is a correction of the integration step so that junctions do not lie within a step but always coincide with its end. This way can be applied only at 16-digit decimal precision, because it assumes continuity of the planets' coordinates and their first derivatives.
Both ways were applied in forward and backward numerical integration of the asteroids Apophis and 2012 DA14 by means of the 15th- and 31st-order Everhart methods at 16- and 34-digit decimal precision, respectively. The DE430 ephemerides (in original and smoothed form) were used for the calculation of the perturbations. The results indicate that the integration-step correction increases the numerical integration accuracy by 3-4 orders of magnitude. If, in addition, the original ephemerides are replaced by the smoothed ones, the accuracy increases by approximately 10 orders of magnitude.
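    The ephemeris interpolation discussed above evaluates, on each interval, a Chebyshev series for a coordinate and its analytically differentiated series for the velocity (with a chain-rule factor from the interval mapping). A minimal sketch with toy coefficients, not DE430 data:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Evaluate a coordinate and its derivative from per-interval Chebyshev
# coefficients, as in DE-series ephemerides (toy coefficients, not DE430).
def eval_interval(coeffs, t, t0, dt):
    tau = 2.0 * (t - t0) / dt - 1.0                      # map [t0, t0+dt] -> [-1, 1]
    pos = C.chebval(tau, coeffs)
    vel = C.chebval(tau, C.chebder(coeffs)) * 2.0 / dt   # chain rule
    return pos, vel

# Toy check: fit cos(t) on [0, 4] and compare with the analytic derivative.
t0, dt = 0.0, 4.0
ts = np.linspace(t0, t0 + dt, 50)
coeffs = C.chebfit(2.0 * (ts - t0) / dt - 1.0, np.cos(ts), 12)
p, v = eval_interval(coeffs, 1.0, t0, dt)
print(abs(p - np.cos(1.0)) < 1e-8, abs(v + np.sin(1.0)) < 1e-6)
```

    Independent per-interval fits of this kind are exactly what produces the junction discontinuities in the higher derivatives that the smoothing procedure removes.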

  4. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    NASA Astrophysics Data System (ADS)

    Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2017-10-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; McPherson, Brian J.; Grigg, Reid B.

    Numerical simulation is an invaluable analytical tool for scientists and engineers in making predictions about the fate of carbon dioxide injected into deep geologic formations for long-term storage. Current numerical simulators for assessing storage in deep saline formations have capabilities for modeling strongly coupled processes involving multifluid flow, heat transfer, chemistry, and rock mechanics in geologic media. Except for moderate pressure conditions, numerical simulators for deep saline formations only require the tracking of two immiscible phases and a limited number of phase components, beyond those comprising the geochemical reactive system. The requirements for numerically simulating the utilization and storage of carbon dioxide in partially depleted petroleum reservoirs are more numerous than those for deep saline formations. The minimum number of immiscible phases increases to three, the number of phase components may easily increase fourfold, and the coupled processes of heat transfer, geochemistry, and geomechanics remain. Public and scientific confidence in the ability of numerical simulators used for carbon dioxide sequestration in deep saline formations has advanced via a natural progression of the simulators being proven against benchmark problems, code comparisons, laboratory-scale experiments, pilot-scale injections, and commercial-scale injections. This paper describes a new numerical simulator for the scientific investigation of carbon dioxide utilization and storage in partially depleted petroleum reservoirs, with an emphasis on its unique features for scientific investigations; it documents the numerical simulation of the utilization of carbon dioxide for enhanced oil recovery in the western section of the Farnsworth Unit and represents an early stage in the progression of numerical simulators for carbon utilization and storage in depleted oil reservoirs.

  6. Analysis of 440 GeV proton beam-matter interaction experiments at the High Radiation Materials test facility at CERN

    NASA Astrophysics Data System (ADS)

    Burkart, F.; Schmidt, R.; Raginel, V.; Wollmann, D.; Tahir, N. A.; Shutov, A.; Piriz, A. R.

    2015-08-01

    In a previous paper [Schmidt et al., Phys. Plasmas 21, 080701 (2014)], we presented the first results of beam-matter interaction experiments carried out at the High Radiation Materials test facility at CERN. In these experiments, extended cylindrical targets of solid copper were irradiated with a beam of 440 GeV protons delivered by the Super Proton Synchrotron (SPS). The beam comprised a large number of high-intensity proton bunches, each bunch having a length of 0.5 ns with a 50 ns gap between two neighboring bunches, while the length of the entire bunch train was about 7 μs. These experiments established the existence of the hydrodynamic tunneling phenomenon for the first time. Detailed numerical simulations of these experiments were also carried out, which were reported in detail in another paper [Tahir et al., Phys. Rev. E 90, 063112 (2014)]. Excellent agreement was found between the experimental measurements and the simulation results, which validates our previous simulations done using the Large Hadron Collider (LHC) beam of 7 TeV protons [Tahir et al., Phys. Rev. Spec. Top.--Accel. Beams 15, 051003 (2012)]. According to these simulations, the range of the full LHC proton beam and the hadronic shower can be increased by more than an order of magnitude due to hydrodynamic tunneling, compared to that of a single proton. This effect is of considerable importance for the design of machine protection systems for hadron accelerators such as the SPS, LHC, and Future Circular Collider. Recently, using metal cutting technology, the targets used in these experiments were dissected into finer pieces for visual and microscopic inspection in order to establish the precise penetration depth of the protons and the corresponding hadronic shower. This, we believe, will be helpful in studying the very important phenomenon of hydrodynamic tunneling in a more quantitative manner.
The details of this experimental work together with a comparison with the numerical simulations are presented in this paper.

  7. Modeling the October 2005 lahars at Panabaj (Guatemala)

    NASA Astrophysics Data System (ADS)

    Charbonnier, S. J.; Connor, C. B.; Connor, L. J.; Sheridan, M. F.; Oliva Hernández, J. P.; Richardson, J. A.

    2018-01-01

    An extreme rainfall event in October of 2005 triggered two deadly lahars on the flanks of Tolimán volcano (Guatemala) that caused many fatalities in the village of Panabaj. We mapped the deposits of these lahars, then developed computer simulations of the lahars using the geologic data and compared the simulated area inundated by the flows to the mapped inundated area. Computer simulation of the two lahars was dramatically improved after calibration with geological data. Specifically, detailed field measurements of flow inundation area, flow thickness, flow direction, and velocity estimates, collected after lahar emplacement, were used to calibrate the rheological input parameters for the models, including deposit volume, yield strength, sediment and water concentrations, and Manning roughness coefficients. Simulations of the two lahars, with volumes of 240,200 ± 55,400 and 126,000 ± 29,000 m3, using the FLO-2D computer program produced models of lahar runout within 3% of measured runouts and produced reasonable estimates of flow thickness and velocity along the lengths of the simulated flows. We compare areas inundated using the Jaccard fit, model sensitivity, and model precision metrics, all related to Bayes' theorem. These metrics show that false negatives (areas inundated by the observed lahar but not by the simulation) and false positives (areas not inundated by the observed lahar but inundated by the simulation) are reduced using a model calibrated by rheology. The metrics offer a procedure for tuning model performance that will enhance model accuracy and make numerical models a more robust tool for natural hazard reduction.
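    The inundation-comparison metrics named above reduce to cell counts of true/false positives and negatives on a raster grid. A toy example with hypothetical masks (not the Panabaj data):

```python
import numpy as np

# Compare simulated vs. observed inundation on a raster grid (toy masks).
obs = np.zeros((4, 5), dtype=bool); obs[1:3, 1:4] = True   # observed lahar
sim = np.zeros((4, 5), dtype=bool); sim[1:3, 2:5] = True   # simulated lahar

tp = np.sum(obs & sim)    # true positives
fp = np.sum(~obs & sim)   # false positives (over-prediction)
fn = np.sum(obs & ~sim)   # false negatives (missed inundation)

jaccard = tp / (tp + fp + fn)    # Jaccard fit of the two footprints
sensitivity = tp / (tp + fn)     # fraction of the observed area captured
precision = tp / (tp + fp)       # fraction of the simulated area correct
print(jaccard, sensitivity, precision)
```

    Calibration that raises the Jaccard fit simultaneously drives down both error types, which is the behavior reported for the rheology-calibrated model.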

  8. A New Numerical Simulation technology of Multistage Fracturing in Horizontal Well

    NASA Astrophysics Data System (ADS)

    Cheng, Ning; Kang, Kaifeng; Li, Jianming; Liu, Tao; Ding, Kun

    2017-11-01

    Horizontal multi-stage fracturing is recognized as an effective development technology for unconventional oil resources. Geomechanics occupies a very important position in the numerical simulation of hydraulic fracturing: by accounting for the influence of geological mechanics, the new approach improves on conventional numerical simulation technology. The new numerical simulation of hydraulic fracturing can more effectively optimize the fracturing design and evaluate post-fracturing production. This study is based on a three-dimensional stress and rock-physics parameter model, using the latest fluid-solid coupling numerical simulation technology to trace the fracture propagation process and describe the change of the stress field during fracturing, and finally to predict production.

  9. A new shock-capturing numerical scheme for ideal hydrodynamics

    NASA Astrophysics Data System (ADS)

    Fecková, Z.; Tomášik, B.

    2015-05-01

    We present a new algorithm for solving ideal relativistic hydrodynamics based on the Godunov method with an exact solution of the Riemann problem for an arbitrary equation of state. Standard numerical tests are executed, such as sound wave propagation and the shock tube problem. Low numerical viscosity and high precision are attained with proper discretization.
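    The ingredients of such a scheme can be seen in miniature on the inviscid Burgers equation, where the interface Riemann problem also has a simple exact solution (a first-order sketch only; the paper's solver treats relativistic hydrodynamics with a general equation of state):

```python
import numpy as np

# First-order Godunov scheme for the inviscid Burgers equation, using the
# exact Riemann solution at each cell interface.
def godunov_flux(uL, uR):
    f = lambda u: 0.5 * u * u
    if uL >= uR:                              # shock, speed (uL + uR)/2
        return f(uL) if (uL + uR) > 0.0 else f(uR)
    if uL > 0.0:                              # right-moving rarefaction
        return f(uL)
    if uR < 0.0:                              # left-moving rarefaction
        return f(uR)
    return 0.0                                # transonic rarefaction

nx, dt_dx = 200, 0.4                          # CFL = 0.4 for |u| <= 1
u = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)   # shock-tube-like initial data
for _ in range(100):
    F = np.array([godunov_flux(u[i], u[i + 1]) for i in range(nx - 1)])
    u[1:-1] -= dt_dx * (F[1:] - F[:-1])
print(round(u.min(), 6), round(u.max(), 6))   # stays in [0, 1]: no oscillations
```

    The monotone Godunov flux is what captures the shock without spurious oscillations, at the price of first-order numerical viscosity.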

  10. Impact of detector simulation in particle physics collider experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elvira, V. Daniel

    Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determining factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.

  11. Impact of detector simulation in particle physics collider experiments

    DOE PAGES

    Elvira, V. Daniel

    2017-06-01

    Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determining factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.

  12. Impact of detector simulation in particle physics collider experiments

    NASA Astrophysics Data System (ADS)

    Daniel Elvira, V.

    2017-06-01

    Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determining factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, taxing heavily the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand of computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion on the potential solutions that are being considered, based on leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.

  13. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    PubMed Central

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
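    The Moore-Penrose inverse can indeed be obtained by a purely multiplicative iteration of the kind that maps naturally onto neural hardware. Below is a Newton-Schulz sketch in plain NumPy (a generic scheme, not the TrueNorth implementation, and without the weight quantization the paper analyzes):

```python
import numpy as np

# Newton-Schulz iteration for the Moore-Penrose pseudoinverse: only matrix
# multiplications, which suits hardware-constrained substrates.
def pinv_newton_schulz(A, iters=50):
    # Safe initialization: ||A||_2^2 <= ||A||_1 * ||A||_inf guarantees convergence.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        X = X @ (2.0 * np.eye(A.shape[0]) - A @ X)
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
X = pinv_newton_schulz(A)
print(np.allclose(X, np.linalg.pinv(A), atol=1e-8))  # True
```

    On a substrate with limited range and precision, the inputs and the intermediate products of exactly such multiply-accumulate chains are what the paper's normalization and quantization framework must keep within representable bounds.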

  14. Advanced Coupled Simulation of Borehole Thermal Energy Storage Systems and Above Ground Installations

    NASA Astrophysics Data System (ADS)

    Welsch, Bastian; Rühaak, Wolfram; Schulte, Daniel O.; Bär, Kristian; Sass, Ingo

    2016-04-01

    Seasonal thermal energy storage in borehole heat exchanger arrays is a promising technology to reduce primary energy consumption and carbon dioxide emissions. These systems usually consist of several subsystems such as the heat source (e.g. solar thermal collectors or a combined heat and power plant), the heat consumer (e.g. a heating system), diurnal storages (i.e. water tanks), the borehole thermal energy storage, additional heat sources for peak load coverage (e.g. a heat pump or a gas boiler), and the distribution network. For the design of an integrated system, numerical simulations of all subsystems are imperative. A separate simulation of the borehole energy storage is well established but represents a simplification. In reality, the subsystems interact with each other: the fluid temperatures of the heat generation system, the heating system, and the underground storage are interdependent and affect the performance of each subsystem. To take these interdependencies into account, we coupled a software package for the simulation of the above-ground facilities with a finite element software package for the modeling of heat flow in the subsurface and the borehole heat exchangers. This allows for a more realistic view of the entire system. Consequently, a finer adjustment of the system components and a more precise prognosis of the system's performance can be ensured.
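    The interdependence of the fluid temperatures can be pictured as a fixed-point problem solved at each time step of the coupled simulation. The toy linear response functions below are stand-ins for the real plant and borehole models, with assumed source and ground temperatures:

```python
# Minimal sketch of iterative coupling between an above-ground plant model and
# a borehole storage model (toy linear responses, not the real simulators).
def plant_outlet_temp(T_return):
    # Charging loop: injected heat pulls the fluid toward a 60 degC source.
    return 0.7 * T_return + 0.3 * 60.0

def storage_return_temp(T_inject):
    # Borehole field: the fluid relaxes toward the 12 degC ground.
    return 0.8 * T_inject + 0.2 * 12.0

T_return = 12.0
for _ in range(100):                     # fixed-point iteration within a time step
    T_inject = plant_outlet_temp(T_return)
    T_return = storage_return_temp(T_inject)
print(round(T_inject, 2), round(T_return, 2))  # 44.73 38.18
```

    Iterating both models to a consistent interface temperature, rather than feeding one a prescribed value, is what makes the coupled prognosis more realistic.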

  15. Effects of Initial Particle Distribution on an Energetic Dispersal of Particles

    NASA Astrophysics Data System (ADS)

    Rollin, Bertrand; Ouellet, Frederick; Koneru, Rahul; Garno, Joshua; Durant, Bradford

    2017-11-01

    Accurate prediction of the late-time solid particle cloud distribution following an explosive dispersal of particles is an extremely challenging problem for compressible multiphase flow simulations. The source of this difficulty is twofold: (i) the complex sequence of events taking place. Indeed, as the blast wave crosses the surrounding layer of particles, compaction occurs shortly before the particles disperse radially at high speed. Then, during the dispersion phase, complex multiphase interactions occur between particles and detonation products. (ii) Precise characterization of the explosive and particle distribution is virtually impossible. In this numerical experiment, we focus on the sensitivity of late-time particle cloud distributions to carefully designed initial distributions, assuming the explosive is well described. Using point-particle simulations, we study the case of a bed of glass particles surrounding an explosive. Constraining our simulations to relatively low initial volume fractions to prevent reaching the close-packing limit, we seek to describe qualitatively and quantitatively the late-time dependence of a solid particle cloud on its distribution before the energy release of the explosive. This work was supported by the U.S. DoE, NNSA, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  16. A novel model for simulating the racing effect in capillary-driven underfill process in flip chip

    NASA Astrophysics Data System (ADS)

    Zhu, Wenhui; Wang, Kanglun; Wang, Yan

    2018-04-01

    Underfill is typically applied in flip chips to increase the reliability of electronic packages. In this paper, the evolution of the melt-front shape of the capillary-driven underfill flow is studied through 3D numerical analysis. Two different models, the prevailing surface force model and the capillary model based on the wetted-wall boundary condition, are introduced to test their applicability; the level set method is used to track the interface of the two-phase flow. The comparison between the simulation results and experimental data indicates that the surface force model produces better predictions of the melt-front shape, especially in the central area of the flip chip. Nevertheless, the two models above cannot properly simulate the racing effect phenomenon that appears during underfill encapsulation. A novel ‘dynamic pressure boundary condition’ method is proposed based on the validated surface force model. Using this approach, the racing effect phenomenon is simulated with high precision. In addition, a linear relationship is derived from this model between the flow-front location at the edge of the flip chip and the filling time. Using the proposed approach, the impact of the underfill-dispensing length on the melt-front shape is also studied.
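    For orientation, the classical 1D estimate for a capillary-driven front between parallel plates is the Washburn law, x(t) = sqrt(h γ cosθ t / (3 μ)); the 3D models above go well beyond it, but it sets the scales. The material values below are assumptions, not the paper's experimental parameters:

```python
import math

# Classical Washburn estimate of the capillary flow front between parallel
# plates (assumed material values, not the paper's parameters).
gamma, theta, mu, h = 0.035, math.radians(30), 3.0, 75e-6
# surface tension (N/m), contact angle, viscosity (Pa.s), plate gap (m)

def front_position(t):
    """Flow-front position (m) after time t (s), Washburn square-root law."""
    return math.sqrt(h * gamma * math.cos(theta) * t / (3.0 * mu))

for t in (10.0, 40.0, 90.0):   # seconds
    print(round(front_position(t) * 1e3, 2), "mm")
```

    Note the square-root-of-time scaling (the front at 40 s is exactly twice as far as at 10 s); deviations from it, such as the racing effect along the chip edges, are what the proposed dynamic pressure boundary condition captures.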

  17. Computational fluid dynamic (CFD) investigation of thermal uniformity in a thermal cycling based calibration chamber for MEMS

    NASA Astrophysics Data System (ADS)

    Gui, Xulong; Luo, Xiaobing; Wang, Xiaoping; Liu, Sheng

    2015-12-01

    Micro-electro-mechanical systems (MEMS) have become important for many industries such as automotive, home appliances, and portable electronics, especially with the emergence of the Internet of Things. Volume testing with temperature compensation has been essential in order to provide MEMS-based sensors with repeatability, consistency, reliability, and durability at low cost. In particular, in the temperature calibration test, the temperature uniformity of the thermal-cycling-based calibration chamber becomes more important for obtaining precision sensors, as each sensor is different before calibration. When sensor samples are loaded into the chamber, we usually open the door of the chamber, place fixtures into the chamber, and mount the samples on the fixtures. These operations may affect the temperature uniformity in the chamber. In order to study the influence of sample loading on the temperature uniformity in the chamber during calibration testing, numerical simulation work was conducted first. The temperature field and flow field were simulated in an empty chamber, a chamber with an open door, a chamber with samples, and a chamber with fixtures, respectively. The simulations showed that opening the chamber door, the sample size, and the number of fixture layers all affect the flow field and temperature field. Experimental validation found the measured temperature values consistent with the simulated values.

  18. Fourier Transform Fringe-Pattern Analysis of an Absolute Distance Michelson Interferometer for Space-Based Laser Metrology.

    NASA Astrophysics Data System (ADS)

    Talamonti, James Joseph

    1995-01-01

    Future NASA proposals include the placement of optical interferometer systems in space for a wide variety of astrophysical studies, including a vastly improved deflection test of general relativity, a precise and direct calibration of the Cepheid distance scale, and the determination of stellar masses (Reasenberg et al., 1988). There are also plans for placing large array telescopes on the Moon with the ultimate objective of being able to measure angular separations of less than 10 μ-arcseconds (Burns, 1990). These and other future projects will require interferometric measurement of the (baseline) distance between the optical elements comprising the systems. Eventually, space-qualifiable interferometers capable of picometer (10^-12 m) relative precision and nanometer (10^-9 m) absolute precision will be required. A numerical model was developed to emulate the capabilities of systems performing interferometric noncontact absolute distance measurements. The model incorporates known methods to minimize signal-processing and digital-sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer using a frequency-scanned laser. By processing computer-simulated data through our model, the ultimate precision is projected for ideal data and for data containing AM/FM noise. The precision is shown to be limited by nonlinearities in the laser scan. A laboratory system was developed by implementing ultra-stable external-cavity diode lasers into existing interferometric measuring techniques. The capabilities of the system were evaluated and increased by using the computer modeling results as guidelines for the data analysis. Experimental results measured 1-3 meter baselines with <20 micron precision. Comparison of the laboratory and modeling results showed that the laboratory precisions obtained were of the same order of magnitude as those predicted for computer-generated results under similar conditions. We believe that our model can be implemented as a tool in the design of new metrology systems capable of meeting the precisions required by space-based interferometers.
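
The spectral-peak-isolation step can be sketched in a few lines: window the fringe signal, take the FFT, and refine the peak bin by parabolic interpolation of the log magnitude. This is a generic sketch with made-up numbers, not the dissertation's actual pipeline:

```python
import numpy as np

# A pure fringe tone at a non-integer bin frequency (hypothetical value).
n = 4096
t = np.arange(n)
f_true = 123.4567 / n                      # cycles per sample
signal = np.cos(2 * np.pi * f_true * t)

# Windowing trades main-lobe width against sidelobe leakage; the window
# choice (Hann here; Blackman or Gaussian likewise) limits how cleanly the
# peak can be isolated from its neighbours.
spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
k = int(np.argmax(spectrum))

# Parabolic interpolation on the log magnitude refines the peak location
# to a small fraction of a bin.
a, b_, c = np.log(spectrum[k - 1 : k + 2])
f_est = (k + 0.5 * (a - c) / (a - 2 * b_ + c)) / n
```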

  19. The Nature of the Nodes, Weights and Degree of Precision in Gaussian Quadrature Rules

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2011-01-01

    We present a comprehensive proof of the theorem that relates the weights and nodes of a Gaussian quadrature rule to its degree of precision. This level of detail is often absent in modern texts on numerical analysis. We show that the degree of precision is maximal, and that the approximation error in Gaussian quadrature is minimal, in a…
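
The theorem is easy to check numerically: an n-point Gauss-Legendre rule is exact for polynomials through degree 2n-1 and only approximate beyond that. A quick sketch using NumPy's built-in nodes and weights:

```python
import numpy as np

# 3-point Gauss-Legendre rule on [-1, 1]: degree of precision 2n-1 = 5.
n = 3
nodes, weights = np.polynomial.legendre.leggauss(n)

def quad(f):
    return float(np.sum(weights * f(nodes)))

exact_x4 = 2.0 / 5.0                      # integral of x^4 over [-1, 1]
approx_x4 = quad(lambda x: x**4)          # degree 4 <= 5: reproduced exactly
exact_x6 = 2.0 / 7.0                      # integral of x^6 over [-1, 1]
approx_x6 = quad(lambda x: x**6)          # degree 6 > 5: only approximate
```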

  20. Aerial imaging with manned aircraft for precision agriculture

    USDA-ARS?s Scientific Manuscript database

    Over the last two decades, numerous commercial and custom-built airborne imaging systems have been developed and deployed for diverse remote sensing applications, including precision agriculture. More recently, unmanned aircraft systems (UAS) have emerged as a versatile and cost-effective platform f...

  1. Airborne and satellite remote sensors for precision agriculture

    USDA-ARS?s Scientific Manuscript database

    Remote sensing provides an important source of information to characterize soil and crop variability for both within-season and after-season management despite the availability of numerous ground-based soil and crop sensors. Remote sensing applications in precision agriculture have been steadily inc...

  2. Localization algorithms for micro-channel x-ray telescope on board SVOM space mission

    NASA Astrophysics Data System (ADS)

    Gosset, L.; Götz, D.; Osborne, J.; Willingale, R.

    2016-07-01

    SVOM is a French-Chinese space mission to be launched in 2021, whose goal is the study of gamma-ray bursts, the most powerful stellar explosions in the Universe. The Micro-channel X-ray Telescope (MXT) is an X-ray focusing telescope on board SVOM, with a field of view of 1 degree (working in the 0.2-10 keV energy band), dedicated to the rapid follow-up of gamma-ray burst counterparts and to their precise localization (smaller than 2 arcminutes). In order to reduce the optics mass while keeping an angular resolution of a few arcminutes, a "lobster-eye" configuration has been chosen. Using a numerical model of the MXT point spread function (PSF), we simulated MXT observations of point sources in order to develop and test different localization algorithms to be implemented on board MXT, including preliminary estimates of the instrumental and sky background. The on-board algorithms have to combine speed and precision (the brightest sources are expected to be localized to better than 10 arcseconds in the MXT reference frame). We present a comparison between different methods, such as the barycentre and PSF fitting in one or two dimensions. The temporal performance of the algorithms is being tested using the X-ray afterglow database of the XRT telescope on board the NASA Swift satellite.
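
Of the candidate on-board methods, the barycentre is the simplest. A minimal sketch on a toy Gaussian spot (the real MXT PSF is the cross-shaped lobster-eye pattern, and all numbers here are made up):

```python
import numpy as np

def barycentre(image):
    """Centroid (barycentre) localization: intensity-weighted mean pixel
    position, the cheapest of the methods compared in the text."""
    ny, nx = image.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    total = image.sum()
    return (xs * image).sum() / total, (ys * image).sum() / total

# Toy Gaussian spot centred at (12.3, 7.8) on a 32x24 pixel grid.
ny, nx = 24, 32
ys, xs = np.mgrid[0:ny, 0:nx]
psf = np.exp(-((xs - 12.3) ** 2 + (ys - 7.8) ** 2) / (2 * 2.0 ** 2))
cx, cy = barycentre(psf)
```

In practice the barycentre is fast but biased by background and the PSF wings, which is why it is compared against 1D and 2D PSF fitting.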

  3. Improving Functional Magnetic Resonance Imaging Motor Studies Through Simultaneous Electromyography Recordings

    PubMed Central

    MacIntosh, Bradley J.; Baker, S. Nicole; Mraz, Richard; Ives, John R.; Martel, Anne L.; McIlroy, William E.; Graham, Simon J.

    2016-01-01

    Specially designed optoelectronic and data postprocessing methods are described that permit electromyography (EMG) of muscle activity simultaneous with functional MRI (fMRI). Hardware characterization and validation included simultaneous EMG and event-related fMRI in 17 healthy participants during either ankle (n = 12), index finger (n = 3), or wrist (n = 2) contractions cued by visual stimuli. Principal component analysis (PCA) and independent component analysis (ICA) were evaluated for their ability to remove residual fMRI gradient-induced signal contamination in EMG data. Contractions of ankle tibialis anterior and index finger abductor were clearly distinguishable, although observing contractions from the wrist flexors proved more challenging. To demonstrate the potential utility of simultaneous EMG and fMRI, data from the ankle experiments were analyzed using two approaches: 1) assuming contractions coincided precisely with visual cues, and 2) using EMG to time the onset and offset of muscle contraction precisely for each participant. Both methods produced complementary activation maps, although the EMG-guided approach recovered more active brain voxels and revealed activity better in the basal ganglia and cerebellum. Furthermore, numerical simulations confirmed that precise knowledge of behavioral responses, such as those provided by EMG, are much more important for event-related experimental designs compared to block designs. This simultaneous EMG and fMRI methodology has important applications where the amplitude or timing of motor output is impaired, such as after stroke. PMID:17133382

  4. Improving functional magnetic resonance imaging motor studies through simultaneous electromyography recordings.

    PubMed

    MacIntosh, Bradley J; Baker, S Nicole; Mraz, Richard; Ives, John R; Martel, Anne L; McIlroy, William E; Graham, Simon J

    2007-09-01

    Specially designed optoelectronic and data postprocessing methods are described that permit electromyography (EMG) of muscle activity simultaneous with functional MRI (fMRI). Hardware characterization and validation included simultaneous EMG and event-related fMRI in 17 healthy participants during either ankle (n = 12), index finger (n = 3), or wrist (n = 2) contractions cued by visual stimuli. Principal component analysis (PCA) and independent component analysis (ICA) were evaluated for their ability to remove residual fMRI gradient-induced signal contamination in EMG data. Contractions of ankle tibialis anterior and index finger abductor were clearly distinguishable, although observing contractions from the wrist flexors proved more challenging. To demonstrate the potential utility of simultaneous EMG and fMRI, data from the ankle experiments were analyzed using two approaches: 1) assuming contractions coincided precisely with visual cues, and 2) using EMG to time the onset and offset of muscle contraction precisely for each participant. Both methods produced complementary activation maps, although the EMG-guided approach recovered more active brain voxels and revealed activity better in the basal ganglia and cerebellum. Furthermore, numerical simulations confirmed that precise knowledge of behavioral responses, such as those provided by EMG, are much more important for event-related experimental designs compared to block designs. This simultaneous EMG and fMRI methodology has important applications where the amplitude or timing of motor output is impaired, such as after stroke. (c) 2006 Wiley-Liss, Inc.
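
The PCA cleanup step can be illustrated on synthetic data: when a gradient-induced artifact is shared across channels with different gains, the dominant principal component captures it and can be projected out. This is a toy sketch, not the authors' processing chain:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 10 * t) * (t > 0.5)          # toy EMG burst
artifact = 20 * np.sign(np.sin(2 * np.pi * 500 * t))    # toy gradient artifact

# Four channels share the artifact with different gains (hypothetical setup);
# channel 0 also carries the muscle signal.
gains = np.array([1.0, 0.8, 1.2, 0.9])
channels = np.outer(gains, artifact) + 0.5 * rng.standard_normal((4, t.size))
channels[0] += clean

# PCA via SVD: the dominant component is the shared artifact, since its
# power dwarfs everything else; subtracting its projection recovers the EMG.
X = channels - channels.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
cleaned = X - np.outer(U[:, 0] * s[0], Vt[0])
```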

  5. Optimization of Borehole Thermal Energy Storage System Design Using Comprehensive Coupled Simulation Models

    NASA Astrophysics Data System (ADS)

    Welsch, Bastian; Rühaak, Wolfram; Schulte, Daniel O.; Formhals, Julian; Bär, Kristian; Sass, Ingo

    2017-04-01

    Large-scale borehole thermal energy storage (BTES) is a promising technology in the development of sustainable, renewable and low-emission district heating concepts. Such systems consist of several components and assemblies, such as the borehole heat exchangers (BHE), other heat sources (e.g. solar thermal collectors, combined heat and power plants, peak-load boilers, heat pumps), distribution networks and heating installations. The complexity of these systems necessitates numerical simulations in the design and planning phase. Generally, the subsurface components are simulated separately from the above-ground components of the district heating system. However, as fluid and heat are exchanged, the subsystems interact with each other and thereby mutually affect their performance. For a proper design of the overall system, it is therefore imperative to take the interdependencies of the subsystems into account. Based on TCP/IP communication, we have developed an interface for coupling a simulation package for heating installations with a finite element software package that models the heat flow in the subsurface and the underground installations. This allows a co-simulation of all system components in which the interaction of the different subsystems is considered. Furthermore, the concept allows a mathematical optimization of the components and the operational parameters. Consequently, a finer adjustment of the system can be ensured and a more precise prognosis of the system's performance can be realized.
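
The TCP/IP coupling idea reduces to exchanging a few state variables per coupling time step. A minimal sketch follows; the message format, the field names, and the stand-in ground response are all assumptions for illustration, not the authors' interface:

```python
import socket
import struct

# Both "processes" live in one script via socketpair(); in a real co-simulation
# the heating-installation simulator and the subsurface FE model would sit at
# the two ends of a TCP connection.
plant, ground = socket.socketpair()

def exchange(t_inlet_K, flow_kgs):
    # Plant side: send BHE inlet temperature and mass flow (two float64s).
    plant.sendall(struct.pack("!dd", t_inlet_K, flow_kgs))
    # Ground side: receive, run the subsurface solve (stand-in model here:
    # outlet relaxes part-way toward an undisturbed ground temperature).
    t_in, q = struct.unpack("!dd", ground.recv(16))
    t_out = t_in + 0.3 * (283.15 - t_in)
    ground.sendall(struct.pack("!d", t_out))
    # Plant side: receive the outlet temperature for the next step.
    (result,) = struct.unpack("!d", plant.recv(8))
    return result

t_out = exchange(313.15, 1.5)   # 40 degC inlet, 1.5 kg/s (illustrative values)
```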

  6. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
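
The assimilation step, a weighted average of two unbiased predictors, reduces error variance whenever the weights reflect the error statistics. A minimal inverse-variance sketch with synthetic errors (the variances are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.zeros(10_000)

# Two unbiased predictors with different error variances, standing in for
# the parameterized runup model and the numerical simulation.
param_model = truth + rng.normal(0, 1.0, truth.size)   # error variance 1
numeric_sim = truth + rng.normal(0, 2.0, truth.size)   # error variance 4

# Inverse-variance weights minimize the variance of the combination:
# combined variance = 1 / (1/1 + 1/4) = 0.8, better than either input.
w1, w2 = 1 / 1.0**2, 1 / 2.0**2
assimilated = (w1 * param_model + w2 * numeric_sim) / (w1 + w2)
```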

  7. Can APEX Represent In-Field Spatial Variability and Simulate Its Effects On Crop Yields?

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture, from variable rate nitrogen application to precision irrigation, promises improved management of resources by considering the spatial variability of topography and soil properties. Hydrologic models need to simulate the effects of this variability if they are to inform about t...

  8. Mixed Single/Double Precision in OpenIFS: A Detailed Study of Energy Savings, Scaling Effects, Architectural Effects, and Compilation Effects

    NASA Astrophysics Data System (ADS)

    Fagan, Mike; Dueben, Peter; Palem, Krishna; Carver, Glenn; Chantry, Matthew; Palmer, Tim; Schlacter, Jeremy

    2017-04-01

    It has been shown that a mixed precision approach that judiciously replaces double precision with single precision calculations can speed up global simulations. In particular, a mixed precision variation of the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) showed virtually the same quality of model results as the standard double precision version (Vana et al., Single precision in weather forecasting models: An evaluation with the IFS, Monthly Weather Review, in print). In this study, we perform detailed measurements of savings in computing time and energy using a mixed precision variation of the OpenIFS model, analogous to the IFS variation used in Vana et al. We (1) present results of energy measurements for simulations in single and double precision using Intel's RAPL technology, (2) conduct a scaling study to quantify the effects that increasing model resolution has on both energy dissipation and computing cycles, (3) analyze the differences between single-core and multicore processing, and (4) compare the effects of different compiler technologies on the mixed precision OpenIFS code. In particular, we compare Intel icc/ifort with GNU gcc/gfortran.
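
The underlying trade-off is easy to demonstrate: single precision halves storage and memory traffic (the source of the speed and energy savings) but keeps only about 7 decimal digits, so small increments to large accumulators are silently lost:

```python
import numpy as np

# Past 2**24, float32 can no longer represent consecutive integers, so a
# unit increment to a large accumulator vanishes; float64 still resolves it.
big32 = np.float32(2 ** 24)
big64 = np.float64(2 ** 24)
lost = bool(big32 + np.float32(1.0) == big32)   # True: update is rounded away
kept = bool(big64 + np.float64(1.0) != big64)   # True: double keeps the update

# Halved storage per value is what reduces memory traffic and energy.
bytes32 = np.zeros(1000, dtype=np.float32).nbytes
bytes64 = np.zeros(1000, dtype=np.float64).nbytes
```

This is why the replacement must be *judicious*: accumulations and other precision-sensitive kernels stay in double while the bulk of the arithmetic drops to single.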

  9. Accounting for baryonic effects in cosmic shear tomography: Determining a minimal set of nuisance parameters using PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eifler, Tim; Krause, Elisabeth; Dodelson, Scott

    2014-05-28

    Systematic uncertainties that have been subdominant in past large-scale structure (LSS) surveys are likely to exceed statistical uncertainties of current and future LSS data sets, potentially limiting the extraction of cosmological information. Here we present a general framework (PCA marginalization) to consistently incorporate systematic effects into a likelihood analysis. This technique naturally accounts for degeneracies between nuisance parameters and can substantially reduce the dimension of the parameter space that needs to be sampled. As a practical application, we apply PCA marginalization to account for baryonic physics as an uncertainty in cosmic shear tomography. Specifically, we use CosmoLike to run simulated likelihood analyses on three independent sets of numerical simulations, each covering a wide range of baryonic scenarios differing in cooling, star formation, and feedback mechanisms. We simulate a Stage III (Dark Energy Survey) and Stage IV (Large Synoptic Survey Telescope/Euclid) survey and find a substantial bias in cosmological constraints if baryonic physics is not accounted for. We then show that PCA marginalization (employing at most 3 to 4 nuisance parameters) removes this bias. Our study demonstrates that it is possible to obtain robust, precise constraints on the dark energy equation of state even in the presence of large levels of systematic uncertainty in astrophysical processes. We conclude that the PCA marginalization technique is a powerful, general tool for addressing many of the challenges facing the precision cosmology program.
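
The core of PCA marginalization can be sketched with synthetic data vectors: differences between baryonic scenarios and a baseline are decomposed by SVD, and a handful of leading components spans nearly all the variation, so only their amplitudes need to be marginalized. Toy numbers below, not CosmoLike output:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, n_scenarios = 50, 12

# Synthetic stand-in: each "baryonic scenario" perturbs the shear data
# vector by a mix of a few smooth modes plus small scatter (the real
# analysis builds these differences from hydrodynamic simulations).
modes = np.array([np.linspace(0, 1, n_bins) ** k for k in (1, 2, 3)])
coeffs = rng.normal(size=(n_scenarios, 3))
deltas = coeffs @ modes + 0.01 * rng.standard_normal((n_scenarios, n_bins))

# PCA of the scenario differences: the leading principal components span
# the baryonic variation, so marginalizing over 3-4 PC amplitudes replaces
# a nuisance space with one parameter per data bin per scenario.
U, s, Vt = np.linalg.svd(deltas - deltas.mean(axis=0), full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
```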

  10. Observability analysis of DVL/PS aided INS for a maneuvering AUV.

    PubMed

    Klein, Itzik; Diamant, Roee

    2015-10-22

    Recently, ocean exploration has increased considerably through the use of autonomous underwater vehicles (AUV). A key enabling technology is the precision of the AUV navigation capability. In this paper, we focus on understanding the limitation of the AUV navigation system. That is, what are the observable error-states for different maneuvering types of the AUV? Since analyzing the performance of an underwater navigation system is highly complex, to answer the above question, current approaches use simulations. This, of course, limits the conclusions to the emulated type of vehicle used and to the simulation setup. For this reason, we take a different approach and analyze the system observability for different types of vehicle dynamics by finding the set of observable and unobservable states. To that end, we apply the observability Gramian approach, previously used only for terrestrial applications. We demonstrate our analysis for an underwater inertial navigation system aided by a Doppler velocity logger or by a pressure sensor. The result is a first prediction of the performance of an AUV standing, rotating at a position and turning at a constant speed. Our conclusions of the observable and unobservable navigation error states for different dynamics are supported by extensive numerical simulation.
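
For a linear(ized) system, the observable subspace can be read off from the rank of the observability matrix, the algebraic counterpart of the observability Gramian used in the paper. The state model below is a made-up 1D toy, far simpler than a full INS error model:

```python
import numpy as np

def observable_rank(A, C):
    """Rank of the observability matrix [C; CA; CA^2; ...]: full rank n
    means every error-state can be inferred from the measurements."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return int(np.linalg.matrix_rank(np.vstack(blocks)))

# Toy 1D state: [position, velocity, velocity-sensor bias].
dt = 0.1
A = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

C_vel = np.array([[0.0, 1.0, 1.0]])     # DVL-like aiding: biased velocity only
C_both = np.array([[1.0, 0.0, 0.0],     # add PS-like position/depth aiding
                   [0.0, 1.0, 1.0]])

rank_vel = observable_rank(A, C_vel)    # 1: position and bias stay hidden
rank_both = observable_rank(A, C_both)  # 3: all error-states observable
```

The same rank computation, repeated for each maneuver-dependent linearization, is how the sets of observable and unobservable states change with vehicle dynamics.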

  11. Observability Analysis of DVL/PS Aided INS for a Maneuvering AUV

    PubMed Central

    Klein, Itzik; Diamant, Roee

    2015-01-01

    Recently, ocean exploration has increased considerably through the use of autonomous underwater vehicles (AUV). A key enabling technology is the precision of the AUV navigation capability. In this paper, we focus on understanding the limitation of the AUV navigation system. That is, what are the observable error-states for different maneuvering types of the AUV? Since analyzing the performance of an underwater navigation system is highly complex, to answer the above question, current approaches use simulations. This, of course, limits the conclusions to the emulated type of vehicle used and to the simulation setup. For this reason, we take a different approach and analyze the system observability for different types of vehicle dynamics by finding the set of observable and unobservable states. To that end, we apply the observability Gramian approach, previously used only for terrestrial applications. We demonstrate our analysis for an underwater inertial navigation system aided by a Doppler velocity logger or by a pressure sensor. The result is a first prediction of the performance of an AUV standing, rotating at a position and turning at a constant speed. Our conclusions of the observable and unobservable navigation error states for different dynamics are supported by extensive numerical simulation. PMID:26506356

  12. Thermostatted δf

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krommes, J.A.

    1999-05-01

    The δf simulation method is revisited. Statistical coarse graining is used to rigorously derive the equation for the fluctuation δf in the particle distribution. It is argued that completely collisionless simulation is incompatible with the achievement of true statistically steady states with nonzero turbulent fluxes because the variance W of the particle weights w grows with time. To ensure such steady states, it is shown that for dynamically collisionless situations a generalized thermostat or "W-stat" may be used in lieu of a full collision operator to absorb the flow of entropy to unresolved fine scales in velocity space. The simplest W-stat can be implemented as a self-consistently determined, time-dependent damping applied to w. A precise kinematic analogy to thermostatted nonequilibrium molecular dynamics is pointed out, and the justification of W-stats for simulations of turbulence is discussed. An extrapolation procedure is proposed such that the long-time, steady-state, collisionless flux can be deduced from several short W-statted runs with large effective collisionality, and a numerical demonstration is given. © 1999 American Institute of Physics.

  13. Genetic drift and selection in many-allele range expansions.

    PubMed

    Weinstein, Bryan T; Lavrentovich, Maxim O; Möbius, Wolfram; Murray, Andrew W; Nelson, David R

    2017-12-01

    We experimentally and numerically investigate the evolutionary dynamics of four competing strains of E. coli with differing expansion velocities in radially expanding colonies. We compare experimental measurements of the average fraction, correlation functions between strains, and the relative rates of genetic domain wall annihilations and coalescences to simulations modeling the population as a one-dimensional ring of annihilating and coalescing random walkers with deterministic biases due to selection. The simulations reveal that the evolutionary dynamics can be collapsed onto master curves governed by three essential parameters: (1) an expansion length beyond which selection dominates over genetic drift; (2) a characteristic angular correlation describing the size of genetic domains; and (3) a dimensionless constant quantifying the interplay between a colony's curvature at the frontier and its selection length scale. We measure these parameters with a new technique that precisely measures small selective differences between spatially competing strains and show that our simulations accurately predict the dynamics without additional fitting. Our results suggest that the random walk model can act as a useful predictive tool for describing the evolutionary dynamics of range expansions composed of an arbitrary number of genotypes with different fitnesses.
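
The neutral (no-selection) limit of the random-walk picture can be sketched as a voter-type model on a ring: domain walls between strain labels wander, coalesce when like domains meet, and annihilate otherwise, so the number of genetic domains coarsens over time. Parameters below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
L, q, steps = 200, 4, 20_000   # ring sites, number of strains, update steps

# Random initial strain labels around the ring (one label per frontier site).
ring = rng.integers(0, q, L)

def n_walls(r):
    """Number of genetic domain walls: sites whose left neighbour differs."""
    return int(np.sum(r != np.roll(r, 1)))

walls_start = n_walls(ring)
for _ in range(steps):
    i = int(rng.integers(L))
    j = (i + rng.choice((-1, 1))) % L
    ring[i] = ring[j]          # copy a random neighbour: walls diffuse,
                               # annihilate, and coalesce
walls_end = n_walls(ring)
```

Adding a deterministic bias to the copy direction models selection; the paper's master curves emerge once selection, drift, and front curvature are combined.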

  14. Control of Warm Compression Stations Using Model Predictive Control: Simulation and Experimental Results

    NASA Astrophysics Data System (ADS)

    Bonne, F.; Alamir, M.; Bonnay, P.

    2017-02-01

    This paper deals with multivariable constrained model predictive control for Warm Compression Stations (WCS). WCSs are subject to numerous constraints (limits on pressures and actuators) that need to be satisfied using appropriate algorithms. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or compressor stop, a key aspect for large-scale cryogenic refrigeration. The proposed control scheme can be used to achieve precise control of pressures in normal operation or to avoid reaching stopping criteria (such as excessive pressures) under high disturbances (such as the pulsed heat loads expected in future fusion reactors, e.g. in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA)). The paper details the simulator used to validate this new control scheme and the associated simulation results on the SBT's WCS. This work is partially supported through the French National Research Agency (ANR), task agreement ANR-13-SEED-0005.

  15. Evaluation of near-surface stress distributions in dissimilar welded joint by scanning acoustic microscopy.

    PubMed

    Kwak, Dong Ryul; Yoshida, Sanichiro; Sasaki, Tomohiro; Todd, Judith A; Park, Ik Keun

    2016-04-01

    This paper presents the results from a set of experiments designed to ultrasonically measure the near-surface stresses distributed within a dissimilar metal welded plate. A scanning acoustic microscope (SAM), with a tone-burst ultrasonic wave frequency of 200 MHz, was used to measure the near-surface stresses in a plate welded between 304 stainless steel and low-carbon steel. For quantitative data acquisition, such as leaky surface acoustic wave (leaky SAW) velocity measurement, a 200 MHz point-focus acoustic lens was used and the leaky SAW velocities within the specimen were precisely measured. The distributions of the surface acoustic wave velocities change according to the near-surface stresses within the joint. A three-dimensional (3D) finite element simulation was carried out to numerically predict the stress distributions and compare them with the experimental results. The experimental and FE simulation results for the dissimilar welded plate showed good agreement. This research demonstrates that a combination of FE simulation and ultrasonic stress measurement using SAW velocity distributions appears promising for determining welding residual stresses in dissimilar material joints. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Far Sidelobe Effects from Panel Gaps of the Atacama Cosmology Telescope

    NASA Technical Reports Server (NTRS)

    Fluxa, Pedro R.; Duenner, Rolando; Maurin, Loïc; Choi, Steve K.; Devlin, Mark J.; Gallardo, Patricio A.; Ho, Shuay-Pwu P.; Koopman, Brian J.; Louis, Thibaut; Wollack, Edward J.

    2016-01-01

    The Atacama Cosmology Telescope (ACT) is a 6 meter diameter CMB telescope located at 5200 meters in the Chilean desert. ACT has made arcminute-scale maps of the sky at 90 and 150 GHz, which have led to precise measurements of the fine angular power spectrum of the CMB fluctuations in temperature and polarization. One of the goals of ACT is to search for the B-mode polarization signal from primordial gravitational waves, which requires extending ACT's data analysis to larger angular scales. This goal introduces new challenges in the control of systematic effects, including a better understanding of far sidelobe effects that might enter the power spectrum at degree angular scales. Here we study the effects of the gaps between panels of the ACT primary and secondary reflectors in the worst-case scenario in which the gaps remain open. We produced numerical simulations of the optics using GRASP up to 8 degrees away from the main beam and simulated timestreams for observations with this beam using real pointing information from ACT data. Maps from these simulated timestreams showed leakage from the sidelobes, indicating that this effect must be taken into consideration at large angular scales.

  17. ASIS v1.0: an adaptive solver for the simulation of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Cariolle, Daniel; Moinat, Philippe; Teyssèdre, Hubert; Giraud, Luc; Josse, Béatrice; Lefèvre, Franck

    2017-04-01

    This article reports on the development and tests of the adaptive semi-implicit scheme (ASIS) solver for the simulation of atmospheric chemistry. To solve the ordinary differential equation systems associated with the time evolution of species concentrations, ASIS adopts a one-step linearized implicit scheme with specific treatment of the Jacobian of the chemical fluxes. It conserves mass and has a time-stepping module to control the accuracy of the numerical solution. In idealized box-model simulations, ASIS gives results similar to the higher-order implicit schemes derived from the Rosenbrock and Gear methods and requires less computation and run time at the moderate precision required for atmospheric applications. When implemented in the MOCAGE chemical transport model and the Laboratoire de Météorologie Dynamique Mars general circulation model, the ASIS solver performs well and reveals weaknesses and limitations of the original semi-implicit solvers used by these two models. ASIS can be easily adapted to various chemical schemes, and further developments are foreseen to increase its computational efficiency and to include the computation of species concentrations in the aqueous phase in addition to gas-phase chemistry.
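
The heart of a one-step linearized implicit scheme is the update y_{n+1} = y_n + h (I - hJ)^{-1} f(y_n). A minimal sketch on a stiff toy reaction follows; ASIS itself adds mass conservation, its specific Jacobian treatment, and adaptive error control:

```python
import numpy as np

def linearized_implicit_step(f, jac, y, h):
    """One linearized implicit step: y + h * (I - h*J)^(-1) f(y).
    Unconditionally stable for stiff linear decay, unlike explicit Euler."""
    n = y.size
    J = jac(y)
    return y + h * np.linalg.solve(np.eye(n) - h * J, f(y))

# Stiff toy chemistry: fast loss (rate 1e3 /s) balancing slow production,
# so the steady state is y* = p/k = 1e-3 (illustrative rates, not MOCAGE's).
k, p = 1.0e3, 1.0
f = lambda y: np.array([p - k * y[0]])
jac = lambda y: np.array([[-k]])

y = np.array([1.0])
for _ in range(50):
    y = linearized_implicit_step(f, jac, y, h=0.1)   # h*k = 100 >> 1: stable
```

With h*k = 100, explicit Euler would diverge immediately; the linearized implicit step instead contracts toward the chemical steady state at every iteration.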

  18. Numerical Simulation of Temperature Sensor Self-Heating Effects in Gaseous and Liquid Hydrogen Under Cryogenic Conditions

    NASA Astrophysics Data System (ADS)

    Langebach, R.; Haberstroh, Ch.

    2010-04-01

    In this paper a numerical investigation is presented that characterizes the free convective flow field and the resulting heat transfer mechanisms for a resistance temperature sensor in liquid and gaseous hydrogen at various cryogenic conditions. The motivation is the detection of stratification effects, e.g. inside a liquid hydrogen storage vessel. In this case, local temperature measurement in a still fluid requires a very high standard of precision despite extremely poor thermal anchoring of the sensor. Due to electrical power dissipation, a certain amount of heat has to be transferred from sensor to fluid. This can cause relevant measurement errors due to a slightly elevated sensor temperature. A commercial CFD code was employed to calculate the heat and mass transfer around the typical sensor geometry. The results were compared with existing heat transfer correlations from the literature. The magnitudes of the averaged heat transfer coefficients and of the sensor over-heating as a function of power dissipation are given in figures. From the numerical results, a new correlation for the averaged Nusselt number is presented that represents very low Rayleigh number flows. The correlation can be used to estimate sensor self-heating effects in similar situations.
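
The size of the effect can be estimated from any averaged Nusselt-number correlation via delta_T = P / (h A). The Nusselt value and fluid properties below are rough illustrative assumptions, not the paper's fitted correlation:

```python
def self_heating(P, L, area, k_fluid, Nu):
    """Sensor over-temperature from dissipated power P:
    h = Nu * k_fluid / L (convective coefficient), delta_T = P / (h * area)."""
    h = Nu * k_fluid / L
    return P / (h * area)

# Illustrative numbers: 10 uW dissipated in a small resistance sensor
# immersed in liquid hydrogen (k ~ 0.1 W/(m K)), with an assumed Nu = 2
# typical of the very low Rayleigh number regime.
dT = self_heating(P=10e-6, L=3e-3, area=2e-5, k_fluid=0.1, Nu=2.0)
```

Even a few millikelvin of self-heating matters here, since stratification detection requires resolving comparably small temperature differences.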

  19. Metriplectic Gyrokinetics and Discretization Methods for the Landau Collision Integral

    NASA Astrophysics Data System (ADS)

    Hirvijoki, Eero; Burby, Joshua W.; Kraus, Michael

    2017-10-01

    We present two important results for the kinetic theory and numerical simulation of warm plasmas: 1) We provide a metriplectic formulation of collisional electrostatic gyrokinetics that is fully consistent with the First and Second Laws of Thermodynamics. 2) We provide a metriplectic temporal and velocity-space discretization for the particle phase-space Landau collision integral that satisfies the conservation of energy, momentum, and particle densities to machine precision, as well as guarantees the existence of numerical H-theorem. The properties are demonstrated algebraically. These two result have important implications: 1) Numerical methods addressing the Vlasov-Maxwell-Landau system of equations, or its reduced gyrokinetic versions, should start from a metriplectic formulation to preserve the fundamental physical principles also at the discrete level. 2) The plasma physics community should search for a metriplectic reduction theory that would serve a similar purpose as the existing Lagrangian and Hamiltonian reduction theories do in gyrokinetics. The discovery of metriplectic formulation of collisional electrostatic gyrokinetics is strong evidence in favor of such theory and, if uncovered, the theory would be invaluable in constructing reduced plasma models. Supported by U.S. DOE Contract Nos. DE-AC02-09-CH11466 (EH) and DE-AC05-06OR23100 (JWB) and by European Union's Horizon 2020 research and innovation Grant No. 708124 (MK).

  20. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    PubMed

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as a widely used approach to modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers on ever-improving graphics processing units (GPUs) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved, given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
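    The Jacobi-preconditioned CG iteration that performed best in this study can be sketched in a few lines. The following NumPy version is a minimal CPU illustration of the algorithm only, not the authors' CUDA implementation:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner M = diag(A)."""
    m_inv = 1.0 / np.diag(A)          # applying M^-1 is just elementwise scaling
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = m_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # new search direction
        rz = rz_new
    return x

# Small symmetric positive-definite test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi_pcg(A, b)
```

For the finite-difference PBE systems discussed above, A is large, sparse, and banded, so in practice the matrix-vector product `A @ p` is where the GPU and the matrix storage format (e.g. the diagonal format) matter.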

  1. Numerical simulation of the processes in the normal incidence tube for high acoustic pressure levels

    NASA Astrophysics Data System (ADS)

    Fedotov, E. S.; Khramtsov, I. V.; Kustov, O. Yu.

    2016-10-01

    Numerical simulation of the acoustic processes in an impedance tube at high acoustic pressure levels is one way to address the problem of noise suppression by liners. These studies used a liner specimen consisting of a single cylindrical Helmholtz resonator. The real and imaginary parts of the liner acoustic impedance and the sound absorption coefficient were evaluated for sound pressure levels of 130, 140 and 150 dB. The numerical simulation used experimental data obtained on an impedance tube with normally incident waves. In the first stage of the numerical simulation, the linearized Navier-Stokes equations were used; these describe the imaginary part of the liner impedance well regardless of the sound pressure level. The equations were solved by the finite element method in the COMSOL Multiphysics program in an axisymmetric formulation. In the second stage, the complete Navier-Stokes equations were solved by direct numerical simulation in ANSYS CFX in an axisymmetric formulation. As a result, acceptable agreement between numerical simulation and experiment was obtained.

  2. Black Holes, Gravitational Waves, and LISA

    NASA Technical Reports Server (NTRS)

    Baker, John

    2009-01-01

    Binary black hole mergers are central to many key science objectives of the Laser Interferometer Space Antenna (LISA). For many systems the strongest part of the signal is only understood by numerical simulations. Gravitational wave emissions are understood by simulations of vacuum General Relativity (GR). I discuss numerical simulation results from the perspective of LISA's needs, with indications of work that remains to be done. Some exciting scientific opportunities associated with LISA observations would be greatly enhanced if a prompt electromagnetic signature could be associated with the merger. I discuss simulations to explore this possibility. Numerical simulations are important now for clarifying LISA's science potential and planning the mission. We also consider how numerical simulations might be applied at the time of LISA's operation.

  3. Simulation of the infiltration process of a ceramic open-pore body with a metal alloy in semi-solid state to design the manufacturing of interpenetrating phase composites

    NASA Astrophysics Data System (ADS)

    Schomer, Laura; Liewald, Mathias; Riedmüller, Kim Rouven

    2018-05-01

    Metal-ceramic Interpenetrating Phase Composites (IPC) belong to a special subcategory of composite materials and reveal enhanced properties compared to conventional composite materials. Currently, IPC are produced by infiltration of a ceramic open-pore body with liquid metal, applying high pressure and/or high temperature to avoid residual porosity. However, these IPC are not able to reach their full potential because of structural damage and interface reactions occurring during the manufacturing process. In comparison, manufacturing IPC using semi-solid forming technology offers great prospects due to relatively low processing temperatures and reduced mechanical pressure. In this context, this paper focuses on numerical investigations conducted with the FLOW-3D software to gain a deeper understanding of the infiltration of open-pore bodies with semi-solid materials. For the flow simulation analysis, a geometric model and different porous media drag models have been used; they were adjusted and compared to obtain a precise description of the infiltration process. Based on these fundamental numerical investigations, this paper also presents numerical investigations used for the basic design of a semi-solid forming tool. Here, the development of the flow front and of the pressure during infiltration forms the basis of the evaluation. The use of an open and a closed tool cavity, combined with various geometries of the upper die, yields different results with respect to these evaluation criteria. Furthermore, different overflows were designed and their effects on the pressure at the end of the infiltration process were investigated. Thus, this paper provides a general guideline for the design of tools for manufacturing metal-ceramic IPC using semi-solid forming.

  4. Comparative Laboratory and Numerical Simulations of Shearing Granular Fault Gouge: Micromechanical Processes

    NASA Astrophysics Data System (ADS)

    Morgan, J. K.; Marone, C. J.; Guo, Y.; Anthony, J. L.; Knuth, M. W.

    2004-12-01

    Laboratory studies of granular shear zones have provided significant insight into fault zone processes and the mechanics of earthquakes. The micromechanisms of granular deformation are more difficult to ascertain, but have been hypothesized based on known variations in boundary conditions, particle properties and geometries, and mechanical behavior. Numerical simulations using particle dynamics methods (PDM) can offer unique views into deforming granular shear zones, revealing the precise details of granular microstructures, particle interactions, and packings, which can be correlated with macroscopic mechanical behavior. Here, we describe a collaborative program of comparative laboratory and numerical experiments of granular shear using idealized materials, i.e., glass beads, glass rods or pasta, and angular sand. Both sets of experiments are carried out under similar initial and boundary conditions in a non-fracturing stress regime. Phenomenologically, the results of the two sets of experiments are very similar. Peak friction values vary as a function of particle dimensionality (1-D vs. 2-D vs. 3-D), particle angularity, particle size and size distributions, boundary roughness, and shear zone thickness. Fluctuations in shear strength during an experiment, i.e., stick-slip events, can be correlated with distinct changes in the nature, geometries, and durability of grain bridges that support the shear zone walls. Inclined grain bridges are observed to form, and to support increasing loads, during gradual increases in assemblage strength. Collapse of an individual grain bridge leads to distinct localization of strain, generating a rapidly propagating shear surface that cuts across multiple grain bridges, accounting for the sudden drop in strength. The distribution of particle sizes within an assemblage, along with boundary roughness and its periodicity, influence the rate of formation and dissipation of grain bridges, thereby controlling friction variations during shear.

  5. Heads-Up Display with Virtual Precision Approach Path Indicator as Implemented in a Real-Time Piloted Lifting-Body Simulation

    NASA Technical Reports Server (NTRS)

    Neuhaus, Jason R.

    2018-01-01

    This document describes the heads-up display (HUD) used in a piloted lifting-body entry, approach and landing simulation developed for the simulator facilities of the Simulation Development and Analysis Branch (SDAB) at NASA Langley Research Center. The HUD symbology originated with the piloted simulation evaluations of the HL-20 lifting body concept conducted in 1989 at NASA Langley. The original symbology was roughly based on Shuttle HUD symbology, as interpreted by Langley researchers. This document focuses on the addition of the precision approach path indicator (PAPI) lights to the HUD overlay.

  6. High-numerical-aperture cryogenic light microscopy for increased precision of superresolution reconstructions

    PubMed Central

    Nahmani, Marc; Lanahan, Conor; DeRosier, David; Turrigiano, Gina G.

    2017-01-01

    Superresolution microscopy has fundamentally altered our ability to resolve subcellular proteins, but improving on these techniques to study dense structures composed of single-molecule-sized elements has been a challenge. One possible approach to enhance superresolution precision is to use cryogenic fluorescent imaging, reported to reduce fluorescent protein bleaching rates, thereby increasing the precision of superresolution imaging. Here, we describe an approach to cryogenic photoactivated localization microscopy (cPALM) that permits the use of a room-temperature high-numerical-aperture objective lens to image frozen samples in their native state. We find that cPALM increases photon yields and show that this approach can be used to enhance the effective resolution of two photoactivatable/switchable fluorophore-labeled structures in the same frozen sample. This higher resolution, two-color extension of the cPALM technique will expand the accessibility of this approach to a range of laboratories interested in more precise reconstructions of complex subcellular targets. PMID:28348224

  7. Numerical Analysis of Constrained Dynamical Systems, with Applications to Dynamic Contact of Solids, Nonlinear Elastodynamics and Fluid-Structure Interactions

    DTIC Science & Technology

    2000-12-01

    [Abstract not available; the record contains only table-of-contents fragments: Numerical Simulations; Impact of a rod on a rigid wall; Impact of two...; dissipative properties of the proposed scheme; Representative Numerical Simulations; Forging of...; Model Problem II: a Simplified Model of Thin Beams.]

  8. Quantum analogue computing.

    PubMed

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.

  9. Spectral/hp element methods: Recent developments, applications, and perspectives

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.

    2018-02-01

    The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
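    The exponential error reduction with polynomial order p can be illustrated in one dimension with a Legendre fit of a smooth function. This is a minimal sketch of p-refinement for an analytic function, not the spectral/hp element method itself:

```python
import numpy as np
from numpy.polynomial import legendre

f = np.exp  # a smooth (analytic) test function on [-1, 1]

def legendre_fit_error(p, n_test=200):
    """Max error of a degree-p Legendre fit of f on [-1, 1]."""
    # Sample at p+1 Gauss-Legendre nodes (enough to determine a degree-p fit)
    x, _ = legendre.leggauss(p + 1)
    coef = legendre.legfit(x, f(x), p)
    xt = np.linspace(-1.0, 1.0, n_test)
    return np.max(np.abs(legendre.legval(xt, coef) - f(xt)))

# For an analytic function the error decays exponentially in p:
# each increase in polynomial degree shrinks the error dramatically.
errors = [legendre_fit_error(p) for p in (2, 4, 8)]
```

Doubling the degree here reduces the error by several orders of magnitude, which is the "exponential reduction in approximation error" the abstract refers to; h-type refinement, by contrast, only gives algebraic convergence.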

  10. Exact and approximate solutions for transient squeezing flow

    NASA Astrophysics Data System (ADS)

    Lang, Ji; Santhanam, Sridhar; Wu, Qianhong

    2017-10-01

    In this paper, we report two novel theoretical approaches to examine a fast-developing flow in a thin fluid gap, which is widely observed in industrial applications and biological systems. The problem is characterized by very small Reynolds and Strouhal numbers, making the convective acceleration of the fluid negligible while its local acceleration is not. We have developed an exact solution for this problem which shows that the flow starts with an inviscid limit, when the viscous effect has no time to appear, and is followed by a subsequent developing flow, in which the viscous effect continues to penetrate into the entire fluid gap. An approximate solution is also developed using a boundary layer integral method. This solution precisely captures the general behavior of the transient fluid flow process and agrees very well with the exact solution. We also performed numerical simulation using Ansys-CFX. Excellent agreement between the analytical and the numerical solutions is obtained, indicating the validity of the analytical approaches. The study presented herein fills a gap in the literature and will have a broad impact on industrial and biomedical applications.

  11. A General-applications Direct Global Matrix Algorithm for Rapid Seismo-acoustic Wavefield Computations

    NASA Technical Reports Server (NTRS)

    Schmidt, H.; Tango, G. J.; Werby, M. F.

    1985-01-01

    A new matrix method for rapid wave propagation modeling in generalized stratified media, which has recently been applied to numerical simulations in diverse areas of underwater acoustics, solid earth seismology, and nondestructive ultrasonic scattering, is explained and illustrated. A portion of recent efforts jointly undertaken at the NATO SACLANT and NORDA numerical modeling groups in developing, implementing, and testing a new fast general-applications wave propagation algorithm, SAFARI, formulated at SACLANT, is summarized. The present general-applications SAFARI program uses a Direct Global Matrix Approach to multilayer Green's function calculation. A rapid and unconditionally stable solution is readily obtained via simple Gaussian elimination on the resulting sparsely banded block system, precisely analogous to that arising in the Finite Element Method. The resulting gains in accuracy and computational speed allow consideration of much larger multilayered air/ocean/Earth/engineering material media models, for many more source-receiver configurations than previously possible. The validity and versatility of the SAFARI-DGM method is demonstrated by reviewing three practical examples of engineering interest, drawn from ocean acoustics, engineering seismology and ultrasonic scattering.

  12. A variational algebraic method used to study the full vibrational spectra and dissociation energies of some specific diatomic systems.

    PubMed

    Zhang, Yi; Sun, Weiguo; Fu, Jia; Fan, Qunchao; Ma, Jie; Xiao, Liantuan; Jia, Suotang; Feng, Hao; Li, Huidong

    2014-01-03

    The algebraic method (AM) proposed by Sun et al. is improved to a variational AM (VAM) to offset possible experimental errors and to adapt to the individual energy expansion nature of different molecular systems. The VAM is used to study the full vibrational spectra {Eυ} and the dissociation energies De of the (4)HeH(+)-X(1)Σ(+), (7)Li2-1(3)Δg, Na2-C(1)Πu, NaK-7(1)Π, Cs2-B(1)Πu and (79)Br2-β1g((3)P2) diatomic electronic states. The results not only precisely reproduce all known experimental vibrational energies, but also predict correct dissociation energies and all unknown high-lying levels that may not be given by the original AM or by other numerical or experimental methods. The analyses and techniques suggested here might be useful for other numerical simulations and theoretical fittings using known data that may carry inevitable errors. Copyright © 2013. Published by Elsevier B.V.

  13. Scaling analysis of Anderson localizing optical fibers

    NASA Astrophysics Data System (ADS)

    Abaie, Behnam; Mafi, Arash

    2017-02-01

    Anderson localizing optical fibers (ALOF) enable a novel optical waveguiding mechanism; if a narrow beam is scanned across the input facet of the disordered fiber, the output beam follows the transverse position of the incoming wave. Strong transverse disorder induces several localized modes uniformly spread across the transverse structure of the fiber. Each localized mode acts like a transmission channel which carries a narrow input beam along the fiber without transverse expansion. Here, we investigate the scaling of the transverse size of the localized modes of ALOF with respect to the transverse dimensions of the fiber. The probability density function (PDF) of the mode area is employed, and it is shown that the PDF converges to a terminal shape at transverse dimensions considerably smaller than previous experimental implementations. Our analysis turns the formidable numerical task of ALOF simulations into a much simpler problem, because the convergence of the mode-area PDF to a terminal shape indicates that a much smaller disordered fiber, compared to previous numerical and experimental implementations, provides all the statistical information required for the precise analysis of the fiber.

  14. Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms.

    PubMed

    Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan

    2015-08-14

    High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.
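    For intuition, the basic rigid-body relation underlying such angular-acceleration calculations can be sketched with two sensors (a toy example with hypothetical values, not the configuration or formulas proposed in the paper):

```python
def angular_acceleration(a1, a2, d):
    """Angular acceleration of a rigid platform from two linear accelerometers.

    a1, a2 : tangential accelerations (m/s^2) measured at two points on the
             platform separated by distance d (m). For a rigid body,
             a2 - a1 = alpha * d, so alpha = (a2 - a1) / d.
    Any common translational acceleration cancels in the difference, which is
    why multi-accelerometer configurations can separate angular motion.
    """
    return (a2 - a1) / d

# Hypothetical readings 0.5 m apart on the platform
alpha = angular_acceleration(0.10, 0.25, 0.5)
```

Because the angular term is obtained from a difference of sensor outputs, installation position errors and sensor noise propagate directly into it, which is why the paper optimizes the configuration geometry to minimize the resulting angular acceleration noise.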

  16. Precision estimate for Odin-OSIRIS limb scatter retrievals

    NASA Astrophysics Data System (ADS)

    Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.

    2012-02-01

    The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which are available for download, are performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state is estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for the derivation of a numerical estimate of the covariance matrix for the retrieved state using the MART inversion that is sufficiently efficient to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere where the natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision in the retrieved profiles.
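    The MART update itself is a simple multiplicative relaxation. The following toy sketch (not the OSIRIS retrieval code, and with an illustrative per-row exponent) shows the basic iteration for a small consistent system y = A x with positive entries:

```python
import numpy as np

def mart(A, y, n_iter=200, relax=1.0):
    """Toy multiplicative algebraic reconstruction technique (MART).

    Iteratively rescales a positive state vector x so that the forward
    model A @ x matches the measurements y (all entries positive).
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):       # cycle over measurements
            ratio = y[i] / (A[i] @ x)     # measured / modeled
            # Multiplicative update; exponent weights each state element
            # by its contribution to measurement i (illustrative choice).
            x *= ratio ** (relax * A[i] / A[i].max())
    return x

A = np.array([[1.0, 0.5], [0.25, 1.0]])   # toy forward model
x_true = np.array([2.0, 3.0])
y = A @ x_true                            # synthetic, noise-free measurements
x_hat = mart(A, y)
```

Because the update is multiplicative rather than linear, the covariance of the retrieved state has no closed form, which is why the abstract's numerical covariance estimate is needed.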

  17. A positive and entropy-satisfying finite volume scheme for the Baer-Nunziato model

    NASA Astrophysics Data System (ADS)

    Coquel, Frédéric; Hérard, Jean-Marc; Saleh, Khaled

    2017-02-01

    We present a relaxation scheme for approximating the entropy dissipating weak solutions of the Baer-Nunziato two-phase flow model. This relaxation scheme is straightforwardly obtained as an extension of the relaxation scheme designed in [16] for the isentropic Baer-Nunziato model and consequently inherits its main properties. To our knowledge, this is the only existing scheme for which the approximated phase fractions, phase densities and phase internal energies are proven to remain positive without any restrictive condition other than a classical fully computable CFL condition. For ideal gas and stiffened gas equations of state, real values of the phasic speeds of sound are also proven to be maintained by the numerical scheme. It is also the only scheme for which a discrete entropy inequality is proven, under a CFL condition derived from the natural sub-characteristic condition associated with the relaxation approximation. This last property, which ensures the non-linear stability of the numerical method, is satisfied for any admissible equation of state. We provide a numerical study for the convergence of the approximate solutions towards some exact Riemann solutions. The numerical simulations show that the relaxation scheme compares well with two of the most popular existing schemes available for the Baer-Nunziato model, namely Schwendeman-Wahle-Kapila's Godunov-type scheme [39] and Tokareva-Toro's HLLC scheme [44]. The relaxation scheme also shows a higher precision and a lower computational cost (for comparable accuracy) than a standard numerical scheme used in the nuclear industry, namely Rusanov's scheme. Finally, we assess the good behavior of the scheme when approximating vanishing phase solutions.

  18. A positive and entropy-satisfying finite volume scheme for the Baer–Nunziato model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coquel, Frédéric, E-mail: frederic.coquel@cmap.polytechnique.fr; Hérard, Jean-Marc, E-mail: jean-marc.herard@edf.fr; Saleh, Khaled, E-mail: saleh@math.univ-lyon1.fr

    We present a relaxation scheme for approximating the entropy dissipating weak solutions of the Baer–Nunziato two-phase flow model. This relaxation scheme is straightforwardly obtained as an extension of the relaxation scheme designed in [16] for the isentropic Baer–Nunziato model and consequently inherits its main properties. To our knowledge, this is the only existing scheme for which the approximated phase fractions, phase densities and phase internal energies are proven to remain positive without any restrictive condition other than a classical fully computable CFL condition. For ideal gas and stiffened gas equations of state, real values of the phasic speeds of sound are also proven to be maintained by the numerical scheme. It is also the only scheme for which a discrete entropy inequality is proven, under a CFL condition derived from the natural sub-characteristic condition associated with the relaxation approximation. This last property, which ensures the non-linear stability of the numerical method, is satisfied for any admissible equation of state. We provide a numerical study for the convergence of the approximate solutions towards some exact Riemann solutions. The numerical simulations show that the relaxation scheme compares well with two of the most popular existing schemes available for the Baer–Nunziato model, namely Schwendeman–Wahle–Kapila's Godunov-type scheme [39] and Tokareva–Toro's HLLC scheme [44]. The relaxation scheme also shows a higher precision and a lower computational cost (for comparable accuracy) than a standard numerical scheme used in the nuclear industry, namely Rusanov's scheme. Finally, we assess the good behavior of the scheme when approximating vanishing phase solutions.

  19. Comparison of theory and direct numerical simulations of drag reduction by rodlike polymers in turbulent channel flows.

    PubMed

    Benzi, Roberto; Ching, Emily S C; De Angelis, Elisabetta; Procaccia, Itamar

    2008-04-01

    Numerical simulations of turbulent channel flows, with or without additives, are limited in the extent of the Reynolds number (Re) and Deborah number (De). The comparison of such simulations to theories of drag reduction, which are usually derived for asymptotically high Re and De, calls for some care. In this paper we present a study of drag reduction by rodlike polymers in a turbulent channel flow using direct numerical simulation and illustrate how these numerical results should be related to the recently developed theory.

  20. Accurate computation of gravitational field of a tesseroid

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2018-02-01

    We developed an accurate method to compute the gravitational field of a tesseroid. The method numerically integrates a surface integral representation of the gravitational potential of the tesseroid by conditionally splitting its line integration intervals and by using the double exponential quadrature rule. Then, it evaluates the gravitational acceleration vector and the gravity gradient tensor by numerically differentiating the numerically integrated potential. The numerical differentiation is conducted by appropriately switching between the central and the single-sided second-order difference formulas with a suitable choice of the test argument displacement. If necessary, the new method extends to the case of a general tesseroid with a variable density profile, variable surface height functions, and/or variable intervals in longitude or in latitude. The new method is capable of computing the gravitational field of the tesseroid independently of the location of the evaluation point, namely whether it lies outside, near the surface of, on the surface of, or inside the tesseroid. The achievable precision is 14-15 digits for the potential, 9-11 digits for the acceleration vector, and 6-8 digits for the gradient tensor in the double precision environment. The correct digits are roughly doubled when quadruple precision computation is employed. The new method provides a reliable procedure to compute the topographic gravitational field, especially near, on, and below the surface. Also, it could potentially serve as a reference to complement and elaborate the existing approaches using the Gauss-Legendre quadrature or other standard methods of numerical integration.
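    The switching between central and single-sided second-order difference formulas can be sketched as follows (a generic one-dimensional illustration, not the paper's implementation):

```python
import numpy as np

def deriv_second_order(f, x, h=1e-5, side=0):
    """Second-order finite-difference derivative of f at x.

    side = 0  -> central difference (preferred away from boundaries)
    side = +1 -> forward single-sided difference (e.g. when x - h would
                 cross a surface where f is not smooth)
    side = -1 -> backward single-sided difference
    """
    if side == 0:
        return (f(x + h) - f(x - h)) / (2.0 * h)
    s = float(side)
    # Second-order one-sided formula: (-3 f(x) + 4 f(x+sh) - f(x+2sh)) / (2 s h)
    return s * (-3.0 * f(x) + 4.0 * f(x + s * h) - f(x + 2.0 * s * h)) / (2.0 * h)

# Check both variants against d/dx sin(x) = cos(x)
x0 = 0.7
approx_c = deriv_second_order(np.sin, x0)           # central
approx_f = deriv_second_order(np.sin, x0, side=+1)  # single-sided
```

Both formulas have O(h²) truncation error; the displacement h must balance truncation error against floating-point cancellation, which is the "suitable choice of the test argument displacement" mentioned in the abstract.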

  1. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity I: Method

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei

    2016-03-01

    In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.

  2. A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.

    PubMed

    Ling, Hong; Luo, Ercang; Dai, Wei

    2006-12-22

    Thermoacoustic prime movers can generate pressure oscillation without any moving parts, based on the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in the paper. First, a four-port network method is used to build the transcendental equation of complex frequency as a criterion to judge whether the temperature distribution of the whole thermoacoustic system is correct for a given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. The simulation code is shown to run robustly and to produce the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE). The numerical simulation agrees with the experimental results with acceptable accuracy.

  3. Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.

    PubMed

    Jaspersen, Johannes G; Montibeller, Gilberto

    2015-07-01

    Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for Environment, Food and Rural Affairs): the prioritization of animal health threats. © 2015 Society for Risk Analysis.
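
    As a concrete illustration of turning a pure ranking into approximate probabilities, the sketch below uses rank-order centroid weights, a standard rank-based surrogate from multicriteria decision analysis; the maximum-entropy algorithm described in the abstract may differ in detail.

```python
def rank_order_centroid(n):
    """Approximate probabilities for n events ranked from most to least
    likely, using rank-order centroid weights:
        p_k = (1/n) * sum_{i=k}^{n} 1/i
    A standard rank-based surrogate, not necessarily the paper's
    maximum-entropy rule.
    """
    return [sum(1.0 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

probs = rank_order_centroid(4)
# decreasing and summing to one; e.g. the least likely of 4 events
# gets p_4 = (1/4)/4 = 0.0625
```

    The weights are strictly decreasing with rank and always sum to one, which is the minimal consistency an expert's ordinal judgment requires.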

  4. Capability for Fine Tuning of the Refractive Index Sensing Properties of Long-Period Gratings by Atomic Layer Deposited Al2O3 Overlays

    PubMed Central

    Śmietana, Mateusz; Myśliwiec, Marcin; Mikulic, Predrag; Witkowski, Bartłomiej S.; Bock, Wojtek J.

    2013-01-01

    This work presents an application of thin aluminum oxide (Al2O3) films obtained using atomic layer deposition (ALD) for fine tuning the spectral response and refractive-index (RI) sensitivity of long-period gratings (LPGs) induced in optical fibers. The technique allows for efficient and well-controlled deposition at the monolayer level (resolution ∼0.12 nm) of the excellent-quality nano-films required for optical sensors. The effect of Al2O3 deposition on the spectral properties of the LPGs is demonstrated experimentally and numerically. We correlated both the increase in Al2O3 thickness and changes in optical properties of the film with the shift of the LPG resonance wavelength, and proved that similar films are deposited on fibers and oxidized silicon reference samples in the same process run. Since the thin overlay effectively changes the distribution of the cladding modes and thus also tunes the device's RI sensitivity, the tuning can be realized simply by varying the number of ALD cycles, to which the thickness of the high-refractive-index (n > 1.6 in the infrared spectral range) Al2O3 film is proportional. The advantage of this approach is the precision with which the film properties, and hence the RI sensitivity of the LPGs, can be determined. To the best of our knowledge, this is the first time that an ultra-precise method for overlay deposition has been applied to LPGs for RI tuning purposes and the results have been compared with numerical simulations based on LP mode approximation.

  5. Simulation of automatic precision departures and missed approaches using the microwave landing system

    NASA Technical Reports Server (NTRS)

    Feather, J. B.

    1987-01-01

    Results of simulated precision departures and missed approaches using MLS guidance concepts are presented. The study was conducted under the Terminal Configured Vehicle (TCV) Program, and is an extension of previous work by DAC under the Advanced Transport Operating System (ATOPS) Technology Studies Program. The study model included simulation of an MD-80 aircraft, an autopilot, and an MLS guidance computer that provided lateral and vertical steering commands. Precision departures were evaluated using a noise abatement procedure. Several curved path departures were simulated with MLS noise and under various environmental conditions. Missed approaches were considered for the same runway, where lateral MLS guidance maintained the aircraft along the extended runway centerline. In both the departure and the missed approach cases, pitch autopilot takeoff and go-around modes of operation were used in conjunction with MLS lateral guidance.

  6. Representations of numerical and non-numerical magnitude both contribute to mathematical competence in children.

    PubMed

    Lourenco, Stella F; Bonny, Justin W

    2017-07-01

    A growing body of evidence suggests that non-symbolic representations of number, which humans share with nonhuman animals, are functionally related to uniquely human mathematical thought. Other research suggesting that numerical and non-numerical magnitudes not only share analog format but also form part of a general magnitude system raises questions about whether the non-symbolic basis of mathematical thinking is unique to numerical magnitude. Here we examined this issue in 5- and 6-year-old children using comparison tasks of non-symbolic number arrays and cumulative area as well as standardized tests of math competence. One set of findings revealed that scores on both magnitude comparison tasks were modulated by ratio, consistent with shared analog format. Moreover, scores on these tasks were moderately correlated, suggesting overlap in the precision of numerical and non-numerical magnitudes, as expected under a general magnitude system. Another set of findings revealed that the precision of both types of magnitude contributed shared and unique variance to the same math measures (e.g. calculation and geometry), after accounting for age and verbal competence. These findings argue against an exclusive role for non-symbolic number in supporting early mathematical understanding. Moreover, they suggest that mathematical understanding may be rooted in a general system of magnitude representation that is not specific to numerical magnitude but that also encompasses non-numerical magnitude. © 2016 John Wiley & Sons Ltd.

  7. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  8. Threshold and Jet Radius Joint Resummation for Single-Inclusive Jet Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaohui; Moch, Sven -Olaf; Ringer, Felix

    Here, we present the first threshold and jet radius jointly resummed cross section for single-inclusive hadronic jet production. We work at next-to-leading logarithmic accuracy and our framework allows for a systematic extension beyond the currently achieved precision. Long-standing numerical issues are overcome by performing the resummation directly in momentum space within soft collinear effective theory. We present the first numerical results for the LHC and observe an improved description of the available data. Our results are of immediate relevance for LHC precision phenomenology including the extraction of parton distribution functions and the QCD strong coupling constant.

  9. Optimal moving grids for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Wathen, A. J.

    1989-01-01

    Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of partial differential equation solutions in the least-squares norm are reported.

  10. Optimal moving grids for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Wathen, A. J.

    1992-01-01

    Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of PDE solutions in the least-squares norm are reported.
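
    The idea of an optimal moving grid can be illustrated by equidistributing a monitor function: for piecewise linear approximation in the least-squares norm, a classical choice is to place nodes so that each cell carries an equal share of M(x) = sqrt(|f''(x)|). The sketch below is a generic illustration, not the paper's algorithm; the steep-front test function mimics a viscous Burgers' profile.

```python
import numpy as np

def equidistributed_grid(fpp, a, b, n, eps=1e-3, samples=4001):
    """Place n+1 nodes on [a, b] so that each cell carries an equal share
    of the monitor M(x) = sqrt(|f''(x)|) + eps (eps keeps M positive)."""
    x = np.linspace(a, b, samples)
    M = np.sqrt(np.abs(fpp(x))) + eps
    # cumulative monitor mass via the trapezoidal rule
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, cum[-1], n + 1)   # equal mass per cell
    return np.interp(targets, cum, x)            # invert the cumulative map

# steep front at x = 0, mimicking a Burgers' profile f(x) = tanh(20 x):
fpp = lambda x: -800.0 * np.tanh(20.0 * x) / np.cosh(20.0 * x) ** 2
grid = equidistributed_grid(fpp, -1.0, 1.0, 10)  # nodes cluster near x = 0
```

    The resulting grid concentrates nodes where |f''| is large, which is exactly where a piecewise linear interpolant would otherwise incur its largest least-squares error.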

  11. Threshold and Jet Radius Joint Resummation for Single-Inclusive Jet Production

    DOE PAGES

    Liu, Xiaohui; Moch, Sven -Olaf; Ringer, Felix

    2017-11-20

    Here, we present the first threshold and jet radius jointly resummed cross section for single-inclusive hadronic jet production. We work at next-to-leading logarithmic accuracy and our framework allows for a systematic extension beyond the currently achieved precision. Long-standing numerical issues are overcome by performing the resummation directly in momentum space within soft collinear effective theory. We present the first numerical results for the LHC and observe an improved description of the available data. Our results are of immediate relevance for LHC precision phenomenology including the extraction of parton distribution functions and the QCD strong coupling constant.

  12. On the physics of waves in the solar atmosphere: Wave heating and wind acceleration

    NASA Technical Reports Server (NTRS)

    Musielak, Z. E.

    1994-01-01

    New calculations of the acoustic wave energy fluxes generated in the solar convective zone have been performed. The treatment of convective turbulence in the Sun and solar-like stars, in particular the precise nature of the turbulent power spectrum, has been recognized as one of the most important issues in the wave generation problem. Several different functional forms for spatial and temporal spectra have been considered in the literature, and differences between the energy fluxes obtained for different forms often exceed two orders of magnitude. The basic criterion for choosing the appropriate spectrum was the maximal efficiency of the wave generation. We have used a different approach based on physical and empirical arguments as well as on some results from numerical simulation of turbulent convection.

  13. Adaptive backstepping control of train systems with traction/braking dynamics and uncertain resistive forces

    NASA Astrophysics Data System (ADS)

    Song, Qi; Song, Y. D.; Cai, Wenchuan

    2011-09-01

    Although the backstepping control design approach has been widely utilised in many practical systems, little effort has been made in applying this useful method to train systems. The main purpose of this paper is to apply this popular control design technique to speed and position tracking control of high-speed trains. By integrating adaptive control with backstepping control, we develop a control scheme that is able to address not only the traction and braking dynamics ignored in most existing methods, but also the uncertain friction and aerodynamic drag forces arising from uncertain resistance coefficients. As such, the resultant control algorithms are able to achieve high-precision position and speed tracking under varying railway operating conditions, as validated by theoretical analysis and numerical simulations.
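
    A minimal adaptive speed-tracking loop conveys the flavor of combining feedback with online estimation of resistance coefficients; this is a one-step gradient adaptive law for a point-mass train with Davis-type resistance, not the paper's full backstepping design, and all coefficients are illustrative.

```python
import numpy as np

# Point-mass train model: m * dv/dt = u - (c0 + c1*v + c2*v**2),
# with unknown Davis-type resistance coefficients c = (c0, c1, c2).
m, c_true = 1.0, np.array([0.3, 0.05, 0.01])
c_hat = np.zeros(3)                      # online parameter estimates
k, gamma, dt = 5.0, 0.02, 1e-3           # feedback gain, adaptation gain, step
v, v_ref = 0.0, 10.0                     # current and desired speed (m/s)

for _ in range(40000):                   # 40 s of simulated time
    e = v - v_ref                        # speed tracking error
    phi = np.array([1.0, v, v ** 2])     # resistance regressor
    u = phi @ c_hat - k * e              # adaptive feedforward + feedback
    v += dt / m * (u - phi @ c_true)     # plant step (explicit Euler)
    c_hat -= dt * gamma * phi * e        # gradient parameter update
```

    A Lyapunov argument for the continuous-time loop (V = m e²/2 + |c̃|²/(2γ)) gives V̇ = -k e², so the tracking error decays even though the individual coefficient estimates need not converge to their true values.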

  14. Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks.

    PubMed

    Arunkumar, A; Sakthivel, R; Mathiyalagan, K; Park, Ju H

    2014-07-01

    This paper focuses on the issue of robust stochastic stability for a class of uncertain fuzzy Markovian jumping discrete-time neural networks (FMJDNNs) with various activation functions and mixed time delay. By employing the Lyapunov technique and the linear matrix inequality (LMI) approach, a new set of delay-dependent sufficient conditions is established for the robust stochastic stability of uncertain FMJDNNs. More precisely, the parameter uncertainties are assumed to be time varying, unknown, and norm bounded. The obtained stability conditions are established in terms of LMIs, which can be easily checked using the efficient MATLAB LMI toolbox. Finally, numerical examples with simulation results are provided to illustrate the effectiveness and reduced conservativeness of the obtained results. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Symmetry breaking in linear multipole traps

    NASA Astrophysics Data System (ADS)

    Pedregosa-Gutierrez, J.; Champenois, C.; Kamsap, M. R.; Hagel, G.; Houssin, M.; Knoop, M.

    2018-03-01

    Radiofrequency multipole traps have been used for some decades in cold collision experiments and are gaining interest for precision spectroscopy due to their low micromotion contribution and the predicted unusual cold-ion structures. However, the experimental realisation is not yet fully controlled, and open questions in the operation of these devices remain. We present experimental observations of symmetry breaking of the trapping potential in a macroscopic octupole trap with laser-cooled ions. Numerical simulations have been performed in order to explain the appearance of additional local potential minima and, in a next step, to control them. We characterise these additional potential minima, in particular with respect to their position, their potential depth, and their probability of population as a function of the radial and angular displacement of the trapping rods.

  16. Raman lidar for hydrogen gas concentration monitoring and future radioactive waste management.

    PubMed

    Liméry, Anasthase; Cézard, Nicolas; Fleury, Didier; Goular, Didier; Planchat, Christophe; Bertrand, Johan; Hauchecorne, Alain

    2017-11-27

    A multi-channel Raman lidar has been developed, allowing for the first time simultaneous and high-resolution profiling of hydrogen gas and water vapor. The lidar measures vibrational Raman scattering in the UV (355 nm) domain. It works in a high-bandwidth photon counting regime using fast SiPM detectors and takes into account the spectral overlap between the hydrogen and water vapor Raman spectra. Measurements of concentration profiles of H2 and H2O are demonstrated along a 5-meter-long open gas cell with 1-meter resolution at a range of 85 meters. The instrument precision is investigated by numerical simulation to anticipate the potential performance at longer range. This lidar could find applications in the French project Cigéo for monitoring radioactive waste disposal cells.

  17. Lunar surface structural concepts and construction studies

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin

    1991-01-01

    The topics are presented in viewgraph form and include the following: lunar surface structures construction research areas; lunar crane related disciplines; shortcomings of typical mobile crane in lunar base applications; candidate crane cable suspension systems; NIST six-cable suspension crane; numerical example of natural frequency; the incorporation of two new features for improved performance of the counter-balanced actively-controlled lunar crane; lunar crane pendulum mechanics; simulation results; 1/6 scale lunar crane testbed using GE robot for global manipulation; basic deployable truss approaches; bi-pantograph elevator platform; comparison of elevator platforms; perspective of bi-pantograph beam; bi-pantograph synchronously deployable tower/beam; lunar module off-loading concept; module off-loader concept packaged; starburst deployable precision reflector; 3-ring reflector deployment scheme; cross-section of packaged starburst reflector; and focal point and thickness packaging considerations.

  18. Holography of Wi-fi Radiation.

    PubMed

    Holl, Philipp M; Reinhard, Friedemann

    2017-05-05

    Wireless data transmission systems such as wi-fi or Bluetooth emit coherent light: electromagnetic waves with a precisely known amplitude and phase. Propagating in space, this radiation forms a hologram: a two-dimensional wave front encoding a three-dimensional view of all objects traversed by the light beam. Here we demonstrate a scheme to record this hologram in a phase-coherent fashion across a meter-sized imaging region. We recover three-dimensional views of objects and emitters by feeding the resulting data into digital reconstruction algorithms. Employing a digital implementation of dark-field propagation to suppress multipath reflection, we significantly enhance the quality of the resulting images. We numerically simulate the hologram of a 10-m-sized building, finding that both localization of emitters and 3D tomography of absorptive objects could be feasible by this technique.

  19. Wlan-Based Indoor Localization Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Saleem, Fasiha; Wyne, Shurjeel

    2016-07-01

    Wireless indoor localization has generated recent research interest due to its numerous applications. This work investigates Wi-Fi based indoor localization using two variants of the fingerprinting approach. Specifically, we study the application of an artificial neural network (ANN) for implementing the fingerprinting approach and compare its localization performance with a probabilistic fingerprinting method that is based on maximum likelihood estimation (MLE) of the user location. We incorporate spatial correlation of fading into our investigations, which is often neglected in simulation studies and leads to erroneous location estimates. The localization performance is quantified in terms of accuracy, precision, robustness, and complexity. Multiple methods for handling the case of missing APs in the online stage are investigated. Our results indicate that ANN-based fingerprinting outperforms the probabilistic approach for all performance metrics considered in this work.
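
    The probabilistic (MLE) baseline mentioned above can be sketched in a few lines: offline, each reference location stores the mean received signal strength (RSS) per access point; online, the location maximizing the Gaussian log-likelihood of the observed RSS vector is chosen. All numbers below are synthetic, and the independent-Gaussian fading model is a simplification.

```python
import numpy as np

rng = np.random.default_rng(1)
# offline radio map: mean RSS (dBm) per access point at each reference location
means = np.array([[-40.0, -60.0, -70.0],   # location 0
                  [-65.0, -45.0, -55.0],   # location 1
                  [-70.0, -72.0, -42.0]])  # location 2
std = 4.0                                  # shadow-fading standard deviation

# online: a user standing at location 1 observes a noisy RSS vector
observed = means[1] + rng.normal(0.0, std, size=3)
# Gaussian log-likelihood of the observation at each candidate location
loglik = -0.5 * np.sum((observed - means) ** 2, axis=1) / std ** 2
estimate = int(np.argmax(loglik))          # most likely location index
```

    An ANN-based fingerprinting scheme replaces this closed-form likelihood with a learned mapping from RSS vectors to coordinates, which is what allows it to absorb effects such as spatially correlated fading.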

  20. Holography of Wi-fi Radiation

    NASA Astrophysics Data System (ADS)

    Holl, Philipp M.; Reinhard, Friedemann

    2017-05-01

    Wireless data transmission systems such as wi-fi or Bluetooth emit coherent light—electromagnetic waves with a precisely known amplitude and phase. Propagating in space, this radiation forms a hologram—a two-dimensional wave front encoding a three-dimensional view of all objects traversed by the light beam. Here we demonstrate a scheme to record this hologram in a phase-coherent fashion across a meter-sized imaging region. We recover three-dimensional views of objects and emitters by feeding the resulting data into digital reconstruction algorithms. Employing a digital implementation of dark-field propagation to suppress multipath reflection, we significantly enhance the quality of the resulting images. We numerically simulate the hologram of a 10-m-sized building, finding that both localization of emitters and 3D tomography of absorptive objects could be feasible by this technique.
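
    Digital reconstruction of a recorded wave front, as used above, typically relies on angular-spectrum propagation; the sketch below back-propagates a sampled 2-D complex field by a distance z. The wavelength and grid spacing are illustrative (wi-fi at 2.4 GHz has a wavelength of about 0.125 m), and this is a generic digital-holography routine rather than the authors' pipeline.

```python
import numpy as np

def back_propagate(field, wavelength, dx, z):
    """Refocus a sampled 2-D complex wave front onto a plane at distance z
    via the angular-spectrum method (evanescent components are truncated)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)                    # spatial frequencies
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    kz2 = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(kz2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(-1j * kz * z))

# consistency check: propagating forward then backward recovers the field
rng = np.random.default_rng(0)
f0 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
f1 = back_propagate(back_propagate(f0, 0.125, 0.05, -2.0), 0.125, 0.05, 2.0)
```

    In a holographic imaging pipeline the same routine is evaluated over a sweep of distances z, and emitters or absorbers appear at the depths where they come into focus.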
