Sample records for rigorous numerical simulation

  1. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, a mathematical exercise that assesses whether the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
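
    The solution-verification step mentioned above can be illustrated with a minimal, self-contained sketch: given a scalar output computed on three systematically refined grids, Richardson extrapolation yields an observed order of accuracy and a discretization-error estimate. The function name and test values below are illustrative only and are not taken from GBS.

    ```python
    import numpy as np

    def observed_order_and_error(f_coarse, f_medium, f_fine, r):
        """Richardson-extrapolation estimates from solutions on three grids
        with a constant refinement ratio r (h_coarse = r*h_medium = r**2*h_fine)."""
        # Observed order of accuracy from the ratio of successive differences
        p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
        # Extrapolated value and discretization-error estimate on the fine grid
        f_extrap = f_fine + (f_fine - f_medium) / (r**p - 1.0)
        return p, f_extrap, abs(f_extrap - f_fine)

    # Toy usage: a quantity converging as f(h) = 1 + 0.5*h**2 on grids h = 0.4, 0.2, 0.1
    f = lambda h: 1.0 + 0.5 * h**2
    p, f_star, err = observed_order_and_error(f(0.4), f(0.2), f(0.1), r=2.0)
    print(p, f_star, err)  # ~2.0, ~1.0, ~5e-3
    ```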

  2. Rigorous analysis of an electric-field-driven liquid crystal lens for 3D displays

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik; Lee, Seung-Chul; Park, Woo-Sang

    2014-08-01

    We numerically analyzed the optical performance of an electric-field-driven liquid crystal (ELC) lens adopted for 3-dimensional liquid crystal displays (3D-LCDs) through rigorous ray tracing. For the calculation, we first obtain the director distribution profile of the liquid crystals by using the Ericksen-Leslie equations of motion; then, we calculate the transmission of light through the ELC lens by using the extended Jones matrix method. The simulation was carried out for a nine-view 3D-LCD with a diagonal of 17.1 inches, where the ELC lens was slanted to achieve natural stereoscopic images. The results show that each view exists separately according to the viewing position at an optimum viewing distance of 80 cm. In addition, our simulation results provide a quantitative explanation for the ghost or blurred images between views observed from a 3D-LCD with an ELC lens. The numerical simulations are also shown to be in good agreement with the experimental results. The present simulation method is expected to provide optimum design conditions for obtaining natural 3D images by rigorously analyzing the optical functionalities of an ELC lens.

  3. Rigorous numerical modeling of scattering-type scanning near-field optical microscopy and spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun

    2017-11-01

    Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and material research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modeling, on the other hand, is mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation which is based on realistic experimental parameters and signal extraction procedures. By comparing directly with experiments as well as other simulation efforts, our methods offer a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.

  4. Rigorous simulations of a helical core fiber by the use of transformation optics formalism.

    PubMed

    Napiorkowski, Maciej; Urbanczyk, Waclaw

    2014-09-22

    We report for the first time on rigorous numerical simulations of a helical-core fiber by using a full vectorial method based on the transformation optics formalism. We modeled the dependence of circular birefringence of the fundamental mode on the helix pitch and analyzed the effect of a birefringence increase caused by the mode displacement induced by a core twist. Furthermore, we analyzed the complex field evolution versus the helix pitch in the first order modes, including polarization and intensity distribution. Finally, we show that the use of the rigorous vectorial method allows the confinement loss of the guided modes to be predicted more accurately than with approximate methods based on equivalent in-plane bending models.

  5. Numerical Simulation of Partially-Coherent Broadband Optical Imaging Using the FDTD Method

    PubMed Central

    Çapoğlu, İlker R.; White, Craig A.; Rogers, Jeremy D.; Subramanian, Hariharan; Taflove, Allen; Backman, Vadim

    2012-01-01

    Rigorous numerical modeling of optical systems has attracted interest in diverse research areas ranging from biophotonics to photolithography. We report the full-vector electromagnetic numerical simulation of a broadband optical imaging system with partially-coherent and unpolarized illumination. The scattering of light from the sample is calculated using the finite-difference time-domain (FDTD) numerical method. Geometrical optics principles are applied to the scattered light to obtain the intensity distribution at the image plane. Multilayered object spaces are also supported by our algorithm. For the first time, numerical FDTD calculations are directly compared to and shown to agree well with broadband experimental microscopy results. PMID:21540939
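    At the heart of any such solver is the leapfrog Yee update. The following minimal one-dimensional sketch (vacuum region, hard Gaussian source, illustrative grid parameters) shows that core update only; it is not the authors' full partially-coherent imaging pipeline.

    ```python
    import numpy as np

    # Minimal 1-D FDTD (Yee) leapfrog update in vacuum with a hard Gaussian source.
    nz, nt = 400, 1000
    c0, dz = 3e8, 1e-6
    dt = dz / (2 * c0)        # Courant number 0.5 for stability
    coef = c0 * dt / dz
    Ex = np.zeros(nz)
    Hy = np.zeros(nz - 1)

    for n in range(nt):
        Hy += coef * np.diff(Ex)                   # update H from the curl of E
        Ex[1:-1] += coef * np.diff(Hy)             # update E from the curl of H
        Ex[50] += np.exp(-((n - 60) / 15.0) ** 2)  # Gaussian pulse source

    print(np.max(np.abs(Ex)))
    ```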

  6. Interface-Resolving Simulation of Collision Efficiency of Cloud Droplets

    NASA Astrophysics Data System (ADS)

    Wang, Lian-Ping; Peng, Cheng; Rosa, Bogdan; Onishi, Ryo

    2017-11-01

    Small-scale air turbulence could enhance the geometric collision rate of cloud droplets while large-scale air turbulence could augment the diffusional growth of cloud droplets. Air turbulence could also enhance the collision efficiency of cloud droplets. Accurate simulation of collision efficiency, however, requires capture of the multi-scale droplet-turbulence and droplet-droplet interactions, which has only been partially achieved in the recent past using the hybrid direct numerical simulation (HDNS) approach, in which a Stokes disturbance flow is assumed. The HDNS approach has two major drawbacks: (1) the short-range droplet-droplet interaction is not treated rigorously; (2) the finite-Reynolds-number correction to the collision efficiency is not included. In this talk, using two independent numerical methods, we will develop an interface-resolved simulation approach in which the disturbance flows are directly resolved numerically, combined with a rigorous lubrication correction model for near-field droplet-droplet interaction. This multi-scale approach is first used to study the effect of finite flow Reynolds numbers on the droplet collision efficiency in still air. Our simulation results show a significant finite-Re effect on collision efficiency when the droplets are of similar sizes. Preliminary results on integrating this approach in a turbulent flow laden with droplets will also be presented. This work is partially supported by the National Science Foundation.

  7. On the Modeling of Shells in Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Bauchau, Olivier A.; Choi, Jou-Young; Bottasso, Carlo L.

    2000-01-01

    Energy preserving/decaying schemes are presented for the simulation of nonlinear multibody systems involving shell components. The proposed schemes are designed to meet four specific requirements: unconditional nonlinear stability of the scheme, a rigorous treatment of both geometric and material nonlinearities, exact satisfaction of the constraints, and the presence of high frequency numerical dissipation. The kinematic nonlinearities associated with arbitrarily large displacements and rotations of shells are treated in a rigorous manner, and the material nonlinearities can be handled when the constitutive laws stem from the existence of a strain energy density function. The efficiency and robustness of the proposed approach is illustrated with specific numerical examples that also demonstrate the need for integration schemes possessing high frequency numerical dissipation.

  8. Numerical Simulations of Turbulent Trapping in the Weak Beam-Plasma Instability

    DTIC Science & Technology

    1986-06-05

    [OCR-garbled excerpt: a fragment around Eq. (33) for H(ξ, u), noting that Eq. (33) is in fact rigorous only under certain conditions, followed by Table 1, Simulation Parameters, listing two simulations with grids of 2048 × 200 and 1024 × 250 and their Δz, Δv, Δt values.]

  9. Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale

    PubMed Central

    Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv

    2015-01-01

    X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner. PMID:26169570

  10. Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale

    NASA Astrophysics Data System (ADS)

    Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv

    2015-07-01

    X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner.

  11. On analyticity of linear waves scattered by a layered medium

    NASA Astrophysics Data System (ADS)

    Nicholls, David P.

    2017-10-01

    The scattering of linear waves by periodic structures is a crucial phenomenon in many branches of applied physics and engineering. In this paper we establish rigorous analytic results necessary for the proper numerical analysis of a class of High-Order Perturbation of Surfaces methods for simulating such waves. More specifically, we prove a theorem on existence and uniqueness of solutions to a system of partial differential equations which model the interaction of linear waves with a multiply layered periodic structure in three dimensions. This result provides hypotheses under which a rigorous numerical analysis could be conducted for recent generalizations to the methods of Operator Expansions, Field Expansions, and Transformed Field Expansions.

  12. On generic obstructions to recovering correct statistics from climate simulations: Homogenization for deterministic maps and multiplicative noise

    NASA Astrophysics Data System (ADS)

    Gottwald, Georg; Melbourne, Ian

    2013-04-01

    Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results on the convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly, the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type in the case of maps. It is shown that the limit system of a numerical discretisation is different from that of the associated continuous time system. This has important consequences when interpreting the statistics of long time simulations of multi-scale systems - they may be very different from those of the original continuous time system which we set out to study.

  13. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because these kinds of measurements are rare in the detail necessary to be useful in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  14. Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.

    PubMed

    Yuan, Lijun; Lu, Ya Yan

    2013-05-20

    Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.
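
    The bistability phenomenon itself can be previewed with a toy normalized Kerr-cavity model (this is a generic textbook-style model, not the authors' boundary-condition method): the steady-state intracavity intensity I obeys a cubic, I[1 + (δ − I)²] = P_in, whose real roots can be found directly; three coexisting roots indicate bistability. All symbols and numerical values below are illustrative.

    ```python
    import numpy as np

    def intracavity_intensities(p_in, delta):
        """Real, non-negative roots of I*(1 + (delta - I)**2) = p_in for a normalized
        Kerr-nonlinear cavity; up to three coexisting solutions indicate bistability."""
        # Expanded cubic: I**3 - 2*delta*I**2 + (1 + delta**2)*I - p_in = 0
        roots = np.roots([1.0, -2.0 * delta, 1.0 + delta**2, -p_in])
        return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)

    delta = 2.5  # detuning larger than sqrt(3), so bistability is possible
    for p in (1.0, 3.0, 5.0):
        print(p, intracavity_intensities(p, delta))  # p = 3.0 yields three branches
    ```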

  15. Developments in optical modeling methods for metrology

    NASA Astrophysics Data System (ADS)

    Davidson, Mark P.

    1999-06-01

    Despite the fact that in recent years the scanning electron microscope has come to dominate the linewidth measurement application for wafer manufacturing, there are still many applications for optical metrology and alignment. These include mask metrology, stepper alignment, and overlay metrology. Most advanced non-optical lithographic technologies are also considering using optics for alignment. In addition, there have been a number of in-situ technologies proposed which use optical measurements to control one aspect or another of the semiconductor process. So optics is definitely not dying out in the semiconductor industry. In this paper a description of recent advances in optical metrology and alignment modeling is presented. The theory of high numerical aperture image simulation for partially coherent illumination is discussed. The implications of telecentric optics for image simulation are also presented. Reciprocity tests are proposed as an important measure of numerical accuracy. Diffraction efficiencies for chrome gratings on reticles are one good way to test Kirchoff's approximation against rigorous calculations. We find significant differences between the predictions of Kirchoff's approximation and rigorous methods. The methods for simulating brightfield, confocal, and coherence probe microscope images are outlined, as are methods for describing aberrations such as coma, spherical aberration, and illumination aperture decentering.

  16. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal as well as in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, also allows large-size problems to be solved approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N³) to O(N log N), enabling a simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.

  17. Numerical and Experimental Study on Hydrodynamic Performance of A Novel Semi-Submersible Concept

    NASA Astrophysics Data System (ADS)

    Gao, Song; Tao, Long-bin; Kou, Yu-feng; Lu, Chao; Sun, Jiang-long

    2018-04-01

    The Multiple Column Platform (MCP) semi-submersible is a newly proposed concept, which differs from conventional semi-submersibles in featuring a centre column and a middle pontoon. It is paramount to ensure its structural reliability and safe operation at sea, and a rigorous investigation is conducted to examine the hydrodynamic and structural performance of the novel structure concept. In this paper, numerical and experimental studies on the hydrodynamic performance of the MCP are performed. Numerical simulations are conducted in both the frequency and time domains based on 3D potential theory. The numerical models are validated by experimental measurements obtained from extensive sets of model tests under both regular wave and irregular wave conditions. Moreover, a comparative study of the MCP and two conventional semi-submersibles is carried out using numerical simulation. Specifically, the hydrodynamic characteristics, including hydrodynamic coefficients, natural periods, motion response amplitude operators (RAOs), and mooring line tension, are fully examined. The present study proves the feasibility of the novel MCP and demonstrates the potential for optimization in future studies.

  18. Rigorous vector wave propagation for arbitrary flat media

    NASA Astrophysics Data System (ADS)

    Bos, Steven P.; Haffert, Sebastiaan Y.; Keller, Christoph U.

    2017-08-01

    Precise modelling of the (off-axis) point spread function (PSF) to identify geometrical and polarization aberrations is important for many optical systems. In order to characterise the PSF of the system in all Stokes parameters, an end-to-end simulation of the system has to be performed in which Maxwell's equations are rigorously solved. We present the first results of a python code that we are developing to perform multiscale end-to-end wave propagation simulations that include all relevant physics. Currently we can handle plane-parallel near- and far-field vector diffraction effects of propagating waves in homogeneous isotropic and anisotropic materials, refraction and reflection of flat parallel surfaces, interference effects in thin films and unpolarized light. We show that the code has a numerical precision on the order of 10⁻¹⁶ for non-absorbing isotropic and anisotropic materials. For absorbing materials the precision is on the order of 10⁻⁸. The capabilities of the code are demonstrated by simulating a converging beam reflecting from a flat aluminium mirror at normal incidence.

  19. Numerical simulation of a shear-thinning fluid through packed spheres

    NASA Astrophysics Data System (ADS)

    Liu, Hai Long; Moon, Jong Sin; Hwang, Wook Ryol

    2012-12-01

    Flow behaviors of a non-Newtonian fluid in spherical microstructures have been studied by direct numerical simulation. A shear-thinning (power-law) fluid through both regular and randomly packed spheres has been numerically investigated in a representative unit cell with the tri-periodic boundary condition, employing a rigorous three-dimensional finite-element scheme combined with fictitious-domain mortar-element methods. The present scheme has been validated against results in the literature for classical spherical packing problems. The flow mobility of regular packing structures, including simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC), as well as randomly packed spheres, has been investigated quantitatively by considering the amount of shear-thinning, the pressure gradient and the porosity as parameters. Furthermore, the mechanism leading to the main flow path in a highly shear-thinning fluid through randomly packed spheres has been discussed.

  20. Approaching the investigation of plasma turbulence through a rigorous verification and validation procedure: A practical example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.

    In the present work, a Verification and Validation procedure is presented and applied showing, through a practical example, how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular, the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.

  1. The evolution of stable magnetic fields in stars: an analytical approach

    NASA Astrophysics Data System (ADS)

    Mestel, Leon; Moss, David

    2010-07-01

    The absence of a rigorous proof of the existence of dynamically stable, large-scale magnetic fields in radiative stars has been for many years a missing element in the fossil field theory for the magnetic Ap/Bp stars. Recent numerical simulations, by Braithwaite & Spruit and Braithwaite & Nordlund, have largely filled this gap, demonstrating convincingly that coherent global scale fields can survive for times of the order of the main-sequence lifetimes of A stars. These dynamically stable configurations take the form of magnetic tori, with linked poloidal and toroidal fields, that slowly rise towards the stellar surface. This paper studies a simple analytical model of such a torus, designed to elucidate the physical processes that govern its evolution. It is found that one-dimensional numerical calculations reproduce some key features of the numerical simulations, with radiative heat transfer, Archimedes' principle, Lorentz force and Ohmic decay all playing significant roles.

  2. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis, easily applicable to high order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.
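
    A minimal sketch of the underlying idea, accumulating a Fisher information matrix from output sensitivities along a simulated experiment, is given below for a hypothetical one-state toy model with assumed i.i.d. Gaussian measurement noise. The model, function names and parameter values are illustrative; this is not the ASM No. 1 application described in the paper.

    ```python
    import numpy as np

    def simulate(theta, n_steps, dt=0.1, x0=1.0):
        """Toy discrete-time state-space model: x_{k+1} = x_k + dt*(-theta0*x_k + theta1)."""
        x, out = x0, []
        for _ in range(n_steps):
            x = x + dt * (-theta[0] * x + theta[1])
            out.append(x)
        return np.array(out)

    def information_matrix(theta, n_steps, sigma=0.05, eps=1e-6):
        """Fisher information accumulated over a simulated calibration experiment,
        from finite-difference output sensitivities and Gaussian noise of std sigma."""
        y0 = simulate(theta, n_steps)
        sens = []
        for i in range(len(theta)):
            tp = np.array(theta, float)
            tp[i] += eps
            sens.append((simulate(tp, n_steps) - y0) / eps)  # dy/dtheta_i
        S = np.array(sens)                                    # (n_par, n_steps)
        return S @ S.T / sigma**2

    theta = [0.8, 0.3]
    FIM = information_matrix(theta, n_steps=50)
    cov = np.linalg.inv(FIM)       # Cramér-Rao lower bound on the parameter covariance
    print(np.sqrt(np.diag(cov)))   # large values flag poorly identifiable parameters
    ```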

  3. A Study of the Behavior and Micromechanical Modelling of Granular Soil. Volume 3. A Numerical Investigation of the Behavior of Granular Media Using Nonlinear Discrete Element Simulation

    DTIC Science & Technology

    1991-05-22

    plasticity, including those of DiMaggio and Sandler (1971), Baladi and Rohani (1979), Lade (1977), Prevost (1978, 1985), and Dafalias and Herrmann (1982). ... distribution can be achieved only if the behavior at the contact is fully understood and rigorously modelled.

  4. Numerical Modeling of Sub-Wavelength Anti-Reflective Structures for Solar Module Applications

    PubMed Central

    Han, Katherine; Chang, Chih-Hung

    2014-01-01

    This paper reviews the current progress in mathematical modeling of anti-reflective subwavelength structures. Methods covered include effective medium theory (EMT), finite-difference time-domain (FDTD), transfer matrix method (TMM), the Fourier modal method (FMM)/rigorous coupled-wave analysis (RCWA) and the finite element method (FEM). Time-based solutions to Maxwell’s equations, such as FDTD, have the benefits of calculating reflectance for multiple wavelengths of light per simulation, but are computationally intensive. Space-discretized methods such as FDTD and FEM output field strength results over the whole geometry and are capable of modeling arbitrary shapes. Frequency-based solutions such as RCWA/FMM and FEM model one wavelength per simulation and are thus able to handle dispersion for regular geometries. Analytical approaches such as TMM are appropriate for very simple thin films. Initial disadvantages such as neglect of dispersion (FDTD), inaccuracy in TM polarization (RCWA), inability to model aperiodic gratings (RCWA), and inaccuracy with metallic materials (FDTD) have been overcome by most modern software. All rigorous numerical methods have accurately predicted the broadband reflection of ideal, graded-index anti-reflective subwavelength structures; ideal structures are tapered nanostructures with periods smaller than the wavelengths of light of interest and lengths that are at least a large portion of the wavelengths considered. PMID:28348287
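
    As a concrete example of the simplest of the reviewed methods, a transfer-matrix (characteristic-matrix) calculation of normal-incidence reflectance for a single homogeneous layer on a substrate might look as follows. The refractive indices and thicknesses are illustrative and dispersion is ignored; this is a sketch of the general TMM idea, not code from the review.

    ```python
    import numpy as np

    def tmm_reflectance(n_layer, d, wavelengths, n_in=1.0, n_sub=3.5):
        """Normal-incidence reflectance of one homogeneous layer (index n_layer,
        thickness d) on a substrate, via the 2x2 characteristic (transfer) matrix."""
        R = []
        for lam in wavelengths:
            delta = 2 * np.pi * n_layer * d / lam   # phase thickness of the layer
            M = np.array([[np.cos(delta), 1j * np.sin(delta) / n_layer],
                          [1j * n_layer * np.sin(delta), np.cos(delta)]])
            # Reflection coefficient from the characteristic-matrix formalism
            num = n_in * M[0, 0] + n_in * n_sub * M[0, 1] - M[1, 0] - n_sub * M[1, 1]
            den = n_in * M[0, 0] + n_in * n_sub * M[0, 1] + M[1, 0] + n_sub * M[1, 1]
            R.append(abs(num / den) ** 2)
        return np.array(R)

    # Quarter-wave AR coating on a silicon-like substrate: R dips to ~0 near 550 nm
    wl = np.linspace(400e-9, 800e-9, 5)
    print(tmm_reflectance(np.sqrt(3.5), 550e-9 / (4 * np.sqrt(3.5)), wl))
    ```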

  5. Simulation of Plasma Jet Merger and Liner Formation within the PLX-α Project

    NASA Astrophysics Data System (ADS)

    Samulyak, Roman; Chen, Hsin-Chiang; Shih, Wen; Hsu, Scott

    2015-11-01

    Detailed numerical studies of the propagation and merger of high Mach number argon plasma jets and the formation of plasma liners have been performed using the newly developed method of Lagrangian particles (LP). The LP method significantly improves the accuracy and mathematical rigor of common particle-based numerical methods such as smooth particle hydrodynamics while preserving their main advantages compared to grid-based methods. A brief overview of the LP method will be presented. The Lagrangian particle code implements the main relevant physics models, such as an equation of state for argon undergoing atomic physics transformations, radiation losses in the optically thin limit, and heat conduction. Simulations of the merger of two plasma jets are compared with experimental data from past PLX experiments. Simulations quantify the effect of oblique shock waves, ionization, and radiation processes on the jet merger process. Results of preliminary simulations of future PLX-alpha experiments involving the ~π/2-solid-angle plasma-liner configuration with 9 guns will also be presented. Partially supported by ARPA-E's ALPHA program.

  6. Verifying and Validating Simulation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M.

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  7. Hypothesis testing of scientific Monte Carlo calculations.

    PubMed

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitate rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
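
    A minimal sketch of the idea, treating a Monte Carlo estimate together with its standard error as a test statistic against a known reference value, could look like the following. The π-estimation example and the interpretation threshold are illustrative and are not the paper's test suite.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(42)

    def mc_estimate_pi(n):
        """Plain Monte Carlo estimate of pi and its standard error."""
        hits = rng.random(n) ** 2 + rng.random(n) ** 2 < 1.0
        p_hat = hits.mean()
        se = 4.0 * math.sqrt(p_hat * (1.0 - p_hat) / n)  # standard error of 4*p_hat
        return 4.0 * p_hat, se

    est, se = mc_estimate_pi(1_000_000)
    z = (est - math.pi) / se                        # test statistic under H0: unbiased estimator
    p_value = math.erfc(abs(z) / math.sqrt(2.0))    # two-sided p-value, normal approximation
    print(f"estimate={est:.5f}  z={z:+.2f}  p={p_value:.3f}")
    # A very small p-value (after accounting for multiple tests) would flag a bug or bias.
    ```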

  8. Hypothesis testing of scientific Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitate rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.

  9. Comment on “Symplectic integration of magnetic systems”: A proof that the Boris algorithm is not variational

    DOE PAGES

    Ellison, C. L.; Burby, J. W.; Qin, H.

    2015-11-01

    One popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1], [2], [3] and [4] is the Boris algorithm. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1] and [5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Moreover, without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6], [7], [8], [9], [10] and [11].
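
    For reference, a non-relativistic sketch of the Boris update (half electric kick, magnetic rotation, half electric kick) is shown below; field values and step size are illustrative, and the relativistic form discussed in the cited literature is not included.

    ```python
    import numpy as np

    def boris_push(x, v, E, B, q_m, dt):
        """One step of the (non-relativistic) Boris algorithm with charge-to-mass ratio q_m."""
        v_minus = v + 0.5 * q_m * dt * E            # first half electric kick
        t = 0.5 * q_m * dt * B
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)    # magnetic rotation, part 1
        v_plus = v_minus + np.cross(v_prime, s)     # magnetic rotation, part 2
        v_new = v_plus + 0.5 * q_m * dt * E         # second half electric kick
        return x + dt * v_new, v_new

    # Gyration in a uniform B field: the speed (hence energy) stays constant to round-off
    x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
    E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
    for _ in range(10_000):
        x, v = boris_push(x, v, E, B, q_m=1.0, dt=0.05)
    print(np.linalg.norm(v))  # ~1.0
    ```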

  10. Numerical simulation of crevice corrosion of titanium: Effect of the bold surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evitts, R.W.; Postlethwaite, J.; Watson, M.K.

    1996-12-01

    A rigorous crevice corrosion model has been developed that accounts for the bold metal surfaces exterior to the crevice. The model predicts the time change in concentration of all specified chemical species in the crevice and bulk solution, and has the ability to predict active corrosion. It is applied to the crevice corrosion of a small titanium crevice in both oxygenated and anaerobic sodium chloride solutions. The numerical predictions confirm that oxygen is the driving force for crevice corrosion. During the simulations where oxygen is initially present in both the crevice and bulk solution an acidic chloride solution is developed; this is the precursor required for crevice corrosion. The anaerobic case displays no tendency to form such a solution. It is also confirmed that those areas in the crevice that are deoxygenated become anodic and the bold metal surface becomes cathodic. As expected, active corrosion is not attained as the simulations are based on electrochemical and chemical parameters at 25 C.

  11. SBEACH: Numerical Model for Simulating Storm-Induced Beach Change. Report 1. Empirical Foundation and Model Development

    DTIC Science & Technology

    1989-07-01

    such as the complex fluid motion over an irregular bottom and absence of rigorous descriptions of broken waves and sediment-sediment interaction, also...prototype-scale conditions. The tests were carried out with both monochromatic and irregular waves for a dunelike foreshore with and without a...significant surf zone. For one case starting from a beach without "foreshore," monochromatic waves produced a bar, whereas irregular waves of significant

  12. Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions

    NASA Astrophysics Data System (ADS)

    De Risi, Raffaele; Goda, Katsuichiro

    2017-08-01

    Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is, the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
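
    A minimal sketch of the empirical (non-Bayesian) part of such an analysis, converting simulated event intensities into a mean-annual-rate-of-exceedance curve, is given below with synthetic lognormal inundation depths and an assumed event rate; it does not reproduce the robust Bayesian fitting described in the paper.

    ```python
    import numpy as np

    def empirical_hazard_curve(intensities, rate_per_year, thresholds):
        """Mean annual rate of exceedance at each threshold, from simulated event
        intensities (e.g. inundation depths) generated at rate_per_year events/yr."""
        intensities = np.asarray(intensities)
        exceed_prob = [(intensities > h).mean() for h in thresholds]
        return rate_per_year * np.array(exceed_prob)

    rng = np.random.default_rng(0)
    depths = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)  # synthetic depths [m]
    h_grid = np.array([0.5, 1.0, 2.0, 4.0])
    rates = empirical_hazard_curve(depths, rate_per_year=0.02, thresholds=h_grid)
    # Probability of at least one exceedance in a 50-year window (Poisson assumption)
    print(1.0 - np.exp(-rates * 50.0))
    ```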

  13. Many particle approximation of the Aw-Rascle-Zhang second order model for vehicular traffic.

    PubMed

    Francesco, Marco Di; Fagioli, Simone; Rosini, Massimiliano D

    2017-02-01

    We consider the follow-the-leader approximation of the Aw-Rascle-Zhang (ARZ) model for traffic flow in a multi population formulation. We prove rigorous convergence to weak solutions of the ARZ system in the many particle limit in the presence of vacuum. The result is based on uniform BV estimates on the discrete particle velocity. We complement our result with numerical simulations of the particle method compared with some exact solutions to the Riemann problem of the ARZ system.
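
    A heavily simplified follow-the-leader sketch in the spirit of this approximation is shown below. The explicit Euler time stepping, the assumed pressure closure p(ρ) = ρ^γ, and a leader travelling at its free-flow speed are all illustrative choices, and the sketch makes no claim to the convergence properties proved in the paper.

    ```python
    import numpy as np

    def ftl_arz_step(x, w, ell, dt, gamma=2.0):
        """One explicit Euler step of a follow-the-leader scheme for the ARZ model.
        x: sorted vehicle positions, w: per-vehicle Lagrangian property w = v + p(rho),
        ell: effective vehicle length, with the assumed closure p(rho) = rho**gamma."""
        rho = ell / np.diff(x)              # discrete density seen by each follower
        v = np.empty_like(x)
        v[:-1] = w[:-1] - rho**gamma        # ARZ velocity of the followers
        v[-1] = w[-1]                       # leader drives at its free-flow speed
        return x + dt * np.maximum(v, 0.0)

    # 50 vehicles starting congested behind a leader; headways relax as the platoon spreads
    x = np.linspace(0.0, 10.0, 50)
    w = np.full(50, 1.5)
    for _ in range(2000):
        x = ftl_arz_step(x, w, ell=0.15, dt=0.01)
    print(np.diff(x).min())
    ```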

  14. Equilibrium E × B Flows in Nonlinear Gyrofluid Flux-Tube Simulations

    NASA Astrophysics Data System (ADS)

    Beer, M. A.; Hammett, G. W.

    2000-10-01

    Comparisons of theory with experiment often indicate levels of sheared E × B flow large enough to significantly suppress turbulence, especially when local transport barriers are formed. We extend our previous simulations by including equilibrium scale sheared E × B flow directly, by introducing a coordinate transformation which shears the simulation domain with the equilibrium E × B flow, while preserving smooth statistical periodicity across the radial domain. This method was used linearly in our previous comparisons with JET [Beer, Budny, Challis, et al., EPS (1999)] and is now applied to nonlinear simulations. This method makes use of some tricks suggested for this problem by Dimits [Int. Conf. on Numerical Simulation of Plasmas (1994)] based on special properties of discrete Fourier transforms. A similar coordinate transformation was previously used successfully by Waltz, et al. [Phys. Plasmas 5, 1784 (1998)], and we confirm their finding that the turbulence is suppressed when the shearing rate, ω_E, is comparable to the maximum linear growth rate in the absence of sheared flow, γ_lin. This is often significantly different from the threshold for linear suppression. With this extension, our simulations are able to address transport barriers from a more rigorous footing. Of particular interest will be the investigation of the expansion or propagation of barriers, where E × B shear suppression is by definition at the marginal point. In addition, our formulation uses general magnetic geometry, so we can rigorously investigate various geometrical effects (e.g. ŝ, Δ', κ) on the threshold for suppression.

  15. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    NASA Astrophysics Data System (ADS)

    Meyers, M. D.; Huang, C.-K.; Zeng, Y.; Yi, S. A.; Albright, B. J.

    2015-09-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.

  16. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyers, M.D., E-mail: mdmeyers@physics.ucla.edu; Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095; Huang, C.-K., E-mail: huangck@lanl.gov

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.

  17. Numerical and experimental investigation of light trapping effect of nanostructured diatom frustules

    NASA Astrophysics Data System (ADS)

    Chen, Xiangfan; Wang, Chen; Baker, Evan; Sun, Cheng

    2015-07-01

    Recent advances in nanophotonic light-trapping technologies offer promising solutions in developing high-efficiency thin-film solar cells. However, the cost-effective scalable manufacturing of those rationally designed nanophotonic structures remains a critical challenge. In contrast, diatoms, the most common type of phytoplankton found in nature, may offer a very attractive solution. Diatoms exhibit high solar energy harvesting efficiency due to their frustules (i.e., hard porous cell walls made of silica) possessing remarkable hierarchical micro-/nano-scaled features optimized for the photosynthetic process through millions of years of evolution. Here we report numerical and experimental studies investigating the light-trapping characteristics of diatom frustules. Rigorous coupled wave analysis (RCWA) and finite-difference time-domain (FDTD) methods are employed to investigate the light-trapping characteristics of the diatom frustules. In simulation, placing the diatom frustules on the surface of the light-absorption material is found to strongly enhance the optical absorption over the visible spectrum. The absorption spectra are also measured experimentally and the results are in good agreement with numerical simulations.

  18. Numerical Approach for Goaf-Side Entry Layout and Yield Pillar Design in Fractured Ground Conditions

    NASA Astrophysics Data System (ADS)

    Jiang, Lishuai; Zhang, Peipeng; Chen, Lianjun; Hao, Zhen; Sainoki, Atsushi; Mitri, Hani S.; Wang, Qingbiao

    2017-11-01

    Entry driven along goaf-side (EDG), which is the development of an entry of the next longwall panel along the goaf-side and the isolation of the entry from the goaf with a small-width yield pillar, has been widely employed in China over the past several decades. The width of such a yield pillar has a crucial effect on EDG layout in terms of the ground control, isolation effect and resource recovery rate. Based on a case study, this paper presents an approach for evaluating, designing and optimizing EDG and yield pillar by considering the results from numerical simulations and field practice. To rigorously analyze the ground stability, the numerical study begins with the simulation of goaf-side stress and ground conditions. Four global models with identical conditions, except for the width of the yield pillar, are built, and the effect of pillar width on ground stability is investigated by comparing aspects of stress distribution, failure propagation, and displacement evolution during the entire service life of the entry. Based on simulation results, the isolation effect of the pillar acquired from field practice is also considered. The suggested optimal yield pillar design is validated using a field test in the same mine. Thus, the presented numerical approach provides references and can be utilized for the evaluation, design and optimization of EDG and yield pillars under similar geological and geotechnical circumstances.

  19. On the possibility of observing bound soliton pairs in a wave-breaking-free mode-locked fiber laser

    NASA Astrophysics Data System (ADS)

    Martel, G.; Chédot, C.; Réglier, V.; Hideur, A.; Ortaç, B.; Grelu, Ph.

    2007-02-01

    On the basis of numerical simulations, we explain the formation of the stable bound soliton pairs that were experimentally reported in a high-power mode-locked ytterbium fiber laser [Opt. Express 14, 6075 (2006)], in a regime where wave-breaking-free operation is expected. A fully vectorial model allows one to rigorously reproduce the nonmonotonic nature of the nonlinear polarization effect that generally limits the power scalability of a single-pulse self-similar regime. Simulations show that a self-similar regime is not fully obtained, although positive linear chirps and parabolic spectra are always reported. As a consequence, nonvanishing pulse tails allow distant stable binding of highly-chirped pulses.

  20. Rigorous Numerical Study of Low-Period Windows for the Quadratic Map

    NASA Astrophysics Data System (ADS)

    Galias, Zbigniew

    An efficient method to find all low-period windows for the quadratic map is proposed. The method is used to obtain very accurate rigorous bounds on the positions of all periodic windows with periods p ≤ 32. The contribution of period-doubling windows to the total width of periodic windows is discussed. Properties of periodic windows are studied numerically.
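
    A non-rigorous, floating-point illustration of where such windows sit (in contrast to the interval-arithmetic bounds of the paper) is to scan the Lyapunov exponent of the quadratic map x → x² + c; negative values inside the chaotic range flag periodic windows, e.g. the period-3 window near c ≈ −1.75. The normal form, parameter range and iteration counts below are assumptions made for illustration.

    ```python
    import numpy as np

    def lyapunov(c, n_transient=1000, n_iter=4000):
        """Floating-point Lyapunov exponent of the quadratic map x -> x*x + c."""
        x, s = 0.0, 0.0                       # start at the critical point
        for i in range(n_transient + n_iter):
            x = x * x + c
            if i >= n_transient:
                s += np.log(abs(2.0 * x) + 1e-300)   # log of |map derivative|
        return s / n_iter

    # Negative exponents inside the chaotic range flag periodic windows,
    # e.g. the period-3 window near c = -1.75.
    for c in np.linspace(-1.80, -1.70, 11):
        print(f"c={c:+.3f}  lambda={lyapunov(c):+.3f}")
    ```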

  1. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE PAGES

    Liu, Jianfeng; Laird, Carl Damon

    2017-09-22

    Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored solution approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem and the encouraging numerical results indicate that our solution framework is promising in solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.

  2. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jianfeng; Laird, Carl Damon

    Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored solution approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem and the encouraging numerical results indicate that our solution framework is promising in solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.

  3. Fully vectorial laser resonator modeling of continuous-wave solid-state lasers including rate equations, thermal lensing and stress-induced birefringence.

    PubMed

    Asoubar, Daniel; Wyrowski, Frank

    2015-07-27

    The computer-aided design of high-quality mono-mode, continuous-wave solid-state lasers requires fast, flexible and accurate simulation algorithms. Therefore in this work a model for the calculation of the transversal dominant mode structure is introduced. It is based on the generalization of the scalar Fox and Li algorithm to a fully-vectorial light representation. To provide a flexible modeling concept for different resonator geometries containing various optical elements, rigorous and approximative solutions of Maxwell's equations are combined in different subdomains of the resonator. This approach allows the simulation of a wide variety of passive intracavity components as well as active media. For the numerically efficient simulation of nonlinear gain, thermal lensing and stress-induced birefringence effects in solid-state active crystals, a semi-analytical vectorial beam propagation method is discussed in detail. As a numerical example, the beam quality and output power of a flash-lamp-pumped Nd:YAG laser are improved. To that end we compensate the influence of stress-induced birefringence and thermal lensing by an aspherical mirror and a 90° quartz polarization rotator.
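
    The scalar Fox-Li iteration that serves as the starting point above can be sketched as a power iteration of a discretized round-trip Fresnel kernel for a flat-flat strip resonator. Geometry, sampling and sign conventions below are illustrative, and the calculation is scalar rather than the fully vectorial generalization of the paper.

    ```python
    import numpy as np

    # Scalar Fox-Li iteration: repeatedly propagate the field one round trip between
    # two finite strip mirrors and renormalize; the iterate converges to the dominant
    # transverse mode and the eigenvalue magnitude gives its round-trip loss.
    lam, L, a, n = 1.0e-6, 1.0, 5.0e-3, 512   # wavelength, mirror spacing, half-aperture, samples
    x = np.linspace(-a, a, n)
    dx = x[1] - x[0]
    # Paraxial (Fresnel) propagation kernel between the two mirrors
    K = np.sqrt(1j / (lam * L)) * np.exp(-1j * np.pi * (x[:, None] - x[None, :])**2 / (lam * L)) * dx

    u = np.ones(n, dtype=complex)             # arbitrary starting field
    for _ in range(300):
        u = K @ (K @ u)                       # one round trip in a flat-flat cavity
        gamma = np.max(np.abs(u))             # converges to |dominant eigenvalue|
        u = u / gamma

    print(1.0 - gamma**2)                     # fractional power loss per round trip
    ```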

  4. Characterization of the geometry and topology of DNA pictured as a discrete collection of atoms

    PubMed Central

    Olson, Wilma K.

    2014-01-01

    The structural and physical properties of DNA are closely related to its geometry and topology. The classical mathematical treatment of DNA geometry and topology in terms of ideal smooth space curves was not designed to characterize the spatial arrangements of atoms found in high-resolution and simulated double-helical structures. We present here new and rigorous numerical methods for the rapid and accurate assessment of the geometry and topology of double-helical DNA structures in terms of the constituent atoms. These methods are well designed for large DNA datasets obtained in detailed numerical simulations or determined experimentally at high resolution. We illustrate the usefulness of our methodology by applying it to the analysis of three canonical double-helical DNA chains, a 65-bp minicircle obtained in recent molecular dynamics simulations, and a crystallographic array of protein-bound DNA duplexes. Although we focus on fully base-paired DNA structures, our methods can be extended to treat the geometry and topology of melted DNA structures as well as to characterize the folding of arbitrary molecules such as RNA and cyclic peptides. PMID:24791158
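
    One representative ingredient of such atom-level geometric analysis is the writhe of a closed discrete curve. A simple midpoint-rule discretization of the Gauss double integral is sketched below; it is an approximation for illustration only, not the exact segment-pair formulas a rigorous treatment would use.

    ```python
    import numpy as np

    def writhe_estimate(points):
        """Approximate writhe of a closed polygonal curve (N x 3 array of vertices)
        via a direct midpoint discretization of the Gauss double integral."""
        p = np.asarray(points, float)
        t = np.roll(p, -1, axis=0) - p              # segment (chord) vectors
        mid = p + 0.5 * t                           # segment midpoints
        n = len(p)
        wr = 0.0
        for i in range(n):
            for j in range(n):
                if abs(i - j) in (0, 1, n - 1):     # skip self and adjacent segments
                    continue
                r = mid[i] - mid[j]
                wr += np.dot(np.cross(t[i], t[j]), r) / np.linalg.norm(r) ** 3
        return wr / (4.0 * np.pi)

    # Planar circle: writhe should be ~0; a supercoiled curve would give a nonzero value.
    theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    circle = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
    print(writhe_estimate(circle))
    ```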

  5. Quantum key distribution with an unknown and untrusted source

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2008-05-01

    The security of a standard bidirectional “plug-and-play” quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is equivalently controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we solve this question directly by presenting a quantitative security analysis of a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard Bennett-Brassard 1984 protocol, the weak+vacuum decoy state protocol, and the one-decoy state protocol, each with an unknown and untrusted source, is rigorously proved. We derive rigorous lower bounds on the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source.

  6. Accuracy and performance of 3D mask models in optical projection lithography

    NASA Astrophysics Data System (ADS)

    Agudelo, Viviana; Evanschitzky, Peter; Erdmann, Andreas; Fühner, Tim; Shao, Feng; Limmer, Steffen; Fey, Dietmar

    2011-04-01

    Different mask models have been compared: rigorous electromagnetic field (EMF) modeling, rigorous EMF modeling with decomposition techniques, and the thin-mask approach (Kirchhoff approach), to simulate optical diffraction from different mask patterns in projection systems for lithography. In addition, each rigorous model was tested with two different formulations of partially coherent imaging: the Hopkins assumption and the rigorous simulation of mask diffraction orders for multiple illumination angles. The aim of this work is to closely approximate results of the rigorous EMF method with the thin-mask model enhanced with pupil filtering techniques. The validity of this approach for different feature sizes, shapes, and illumination conditions is investigated.

  7. Cymatics for the cloaking of flexural vibrations in a structured plate

    PubMed Central

    Misseroni, D.; Colquitt, D. J.; Movchan, A. B.; Movchan, N. V.; Jones, I. S.

    2016-01-01

    Based on rigorous theoretical findings, we present a proof-of-concept design for a structured square cloak enclosing a void in an elastic lattice. We implement high-precision fabrication and experimental testing of an elastic invisibility cloak for flexural waves in a mechanical lattice. This is accompanied by verifications and numerical modelling performed through finite element simulations. The primary advantage of our square lattice cloak, over other designs, is the straightforward implementation and the ease of construction. The elastic lattice cloak, implemented experimentally, shows high efficiency. PMID:27068339

  8. Spontaneous oscillations in microfluidic networks

    NASA Astrophysics Data System (ADS)

    Case, Daniel; Angilella, Jean-Regis; Motter, Adilson

    2017-11-01

    Precisely controlling flows within microfluidic systems is often difficult, which typically results in systems being heavily reliant on numerous external pumps and computers. Here, I present a simple microfluidic network that exhibits flow-rate switching, bistability, and spontaneous oscillations controlled by a single pressure. That is, by changing only the driving pressure, it is possible to switch between an oscillating and a steady flow state. Such functionality does not rely on external hardware and may even serve as an on-chip memory or timing mechanism. I use an analytic model and rigorous fluid dynamics simulations to show these results.

  9. Rigor or mortis: best practices for preclinical research in neuroscience.

    PubMed

    Steward, Oswald; Balice-Gordon, Rita

    2014-11-05

    Numerous recent reports document a lack of reproducibility of preclinical studies, raising concerns about potential lack of rigor. Examples of lack of rigor have been extensively documented and proposals for practices to improve rigor are appearing. Here, we discuss some of the details and implications of previously proposed best practices and consider some new ones, focusing on preclinical studies relevant to human neurological and psychiatric disorders. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Electrodynamic multiple-scattering method for the simulation of optical trapping atop periodic metamaterials

    NASA Astrophysics Data System (ADS)

    Yannopapas, Vassilios; Paspalakis, Emmanuel

    2018-07-01

    We present a new theoretical tool for simulating optical trapping of nanoparticles in the presence of an arbitrary metamaterial design. The method is based on rigorously solving Maxwell's equations for the metamaterial via a hybrid discrete-dipole approximation/multiple-scattering technique and direct calculation of the optical force exerted on the nanoparticle by means of the Maxwell stress tensor. We apply the method to the case of a spherical polystyrene probe trapped within the optical landscape created by illuminating a plasmonic metamaterial consisting of periodically arranged tapered metallic nanopyramids. The developed technique is ideally suited for general optomechanical calculations involving metamaterial designs and can compete with purely numerical methods such as finite-difference or finite-element schemes.
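
    The force calculation described here integrates the Maxwell stress tensor over a closed surface surrounding the probe. The sketch below shows that final quadrature step for time-harmonic fields, assuming the complex E and H phasors on the surface have already been produced by a full-wave solver; the function name and input layout are illustrative assumptions.

        import numpy as np

        EPS0, MU0 = 8.8541878128e-12, 4e-7 * np.pi

        def mst_force(E, H, normals, areas):
            """Time-averaged optical force from the Maxwell stress tensor.

            E, H    : complex phasor fields sampled on a closed surface, shape (N, 3)
            normals : outward unit normals at the sample points, shape (N, 3)
            areas   : quadrature weights (patch areas), shape (N,)
            The fields must come from a full-wave solver (e.g. the hybrid
            DDA/multiple-scattering method of the record); they are inputs here.
            """
            I3 = np.eye(3)
            F = np.zeros(3)
            for e, h, n, dA in zip(E, H, normals, areas):
                # <T> = 1/2 Re[eps0 E⊗E* + mu0 H⊗H*] - 1/4 (eps0|E|^2 + mu0|H|^2) I
                T = 0.5 * np.real(EPS0 * np.outer(e, np.conj(e))
                                  + MU0 * np.outer(h, np.conj(h)))
                T -= 0.25 * (EPS0 * np.vdot(e, e).real + MU0 * np.vdot(h, h).real) * I3
                F += T @ n * dA                   # traction times patch area
            return F                               # net force on whatever the surface encloses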

  11. Survey of computer programs for prediction of crash response and of its experimental validation

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1976-01-01

    The author seeks to critically assess the potential of the mathematical and hybrid simulators that predict the post-impact response of transportation vehicles. A strictly rigorous numerical analysis of a phenomenon as complex as a crash may leave a lot to be desired with regard to the fidelity of the mathematical simulation. Hybrid simulations, on the other hand, which exploit experimentally observed features of deformation, appear to hold considerable promise. MARC, ANSYS, NONSAP, DYCAST, ACTION, WHAM II, and KRASH are among the simulators examined for their capabilities with regard to predicting the post-impact response of vehicles. A review of these simulators reveals that much more analysis capability may be desirable than is currently available. NASA's crashworthiness testing program, in conjunction with similar programs of various other agencies, besides generating a large database, will be equally useful in the validation of new mathematical concepts of nonlinear analysis and in the successful extension of other techniques in crashworthiness.

  12. Highly efficient all-dielectric optical tensor impedance metasurfaces for chiral polarization control.

    PubMed

    Kim, Minseok; Eleftheriades, George V

    2016-10-15

    We propose a highly efficient (nearly lossless and impedance-matched) all-dielectric optical tensor impedance metasurface that mimics chiral effects at optical wavelengths. By cascading an array of rotated crossed silicon nanoblocks, we realize chiral optical tensor impedance metasurfaces that operate as circular polarization selective surfaces. Their efficiencies are maximized through a nonlinear numerical optimization process in which the tensor impedance metasurfaces are modeled via multi-conductor transmission line theory. From rigorous full-wave simulations that include all material losses, we show field transmission efficiencies of 94% for right- and left-handed circular polarization selective surfaces at 800 nm.

  13. Reduced and Validated Kinetic Mechanisms for Hydrogen-CO-Air Combustion in Gas Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yiguang Ju; Frederick Dryer

    2009-02-07

    Rigorous experimental, theoretical, and numerical investigation of various issues relevant to the development of reduced, validated kinetic mechanisms for synthetic gas combustion in gas turbines was carried out - including the construction of new radiation models for combusting flows, improvement of flame speed measurement techniques, measurements and chemical kinetic analysis of H2/CO/CO2/O2/diluent mixtures, revision of the H2/O2 kinetic model to improve flame speed prediction capabilities, and development of a multi-time scale algorithm to improve computational efficiency in reacting flow simulations.

  14. Toroidal gyrofluid equations for simulations of tokamak turbulence

    NASA Astrophysics Data System (ADS)

    Beer, M. A.; Hammett, G. W.

    1996-11-01

    A set of nonlinear gyrofluid equations for simulations of tokamak turbulence is derived by taking moments of the nonlinear toroidal gyrokinetic equation. The moment hierarchy is closed with approximations that model the kinetic effects of parallel Landau damping, toroidal drift resonances, and finite Larmor radius effects. These equations generalize the work of Dorland and Hammett [Phys. Fluids B 5, 812 (1993)] to toroidal geometry by including essential toroidal effects. The closures for phase mixing from toroidal ∇B and curvature drifts take the basic form presented in Waltz et al. [Phys. Fluids B 4, 3138 (1992)], but here a more rigorous procedure is used, including an extension to higher moments, which provides significantly improved accuracy. In addition, trapped ion effects and collisions are incorporated. This reduced set of nonlinear equations accurately models most of the physics considered important for ion dynamics in core tokamak turbulence, and is simple enough to be used in high resolution direct numerical simulations.

  15. Shear-induced opening of the coronal magnetic field

    NASA Technical Reports Server (NTRS)

    Wolfson, Richard

    1995-01-01

    This work describes the evolution of a model solar corona in response to motions of the footpoints of its magnetic field. The mathematics involved is semianalytic, with the only numerical solution being that of an ordinary differential equation. This approach, while lacking the flexibility and physical details of full MHD simulations, allows for very rapid computation along with complete and rigorous exploration of the model's implications. We find that the model coronal field bulges upward, at first slowly and then more dramatically, in response to footpoint displacements. The energy in the field rises monotonically from that of the initial potential state, and the field configuration and energy asymptotically approach those of a fully open field. Concurrently, electric currents develop and concentrate into a current sheet as the limiting case of the open field is approached. Examination of the equations shows rigorously that in the asymptotic limit of the fully open field, the current layer becomes a true ideal MHD singularity.

  16. Quantum key distribution with an unknown and untrusted source

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2009-03-01

    The security of a standard bi-directional ``plug & play'' quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is effectively controlled by an eavesdropper, which means the source is unknown and untrusted. Only qualitative discussions of this subject had been given previously. In this paper, we present the first quantitative security analysis of a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard BB84 protocol, the weak+vacuum decoy-state protocol, and the one-decoy-state protocol, each with an unknown and untrusted source, is rigorously proved. We derive rigorous lower bounds on the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source. Our work is published in [1]. [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A 77, 052327 (2008).

  17. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  18. Higher-order compositional modeling of three-phase flow in 3D fractured porous media based on cross-flow equilibrium

    NASA Astrophysics Data System (ADS)

    Moortgat, Joachim; Firoozabadi, Abbas

    2013-10-01

    Numerical simulation of multiphase compositional flow in fractured porous media, when all the species can transfer between the phases, is a real challenge. Despite the broad applications in hydrocarbon reservoir engineering and hydrology, a compositional numerical simulator for three-phase flow in fractured media has not appeared in the literature, to the best of our knowledge. In this work, we present a three-phase fully compositional simulator for fractured media, based on higher-order finite element methods. To achieve computational efficiency, we invoke the cross-flow equilibrium (CFE) concept between discrete fractures and a small neighborhood in the matrix blocks. We adopt the mixed hybrid finite element (MHFE) method to approximate convective Darcy fluxes and the pressure equation. This approach is the most natural choice for flow in fractured media. The mass balance equations are discretized by the discontinuous Galerkin (DG) method, which is perhaps the most efficient approach to capture physical discontinuities in phase properties at the matrix-fracture interfaces and at phase boundaries. In this work, we account for gravity and Fickian diffusion. The modeling of capillary effects is discussed in a separate paper. We present the mathematical framework, using the implicit-pressure-explicit-composition (IMPEC) scheme, which facilitates rigorous thermodynamic stability analyses and the computation of phase behavior effects to account for transfer of species between the phases. A deceptively simple CFL condition is implemented to improve numerical stability and accuracy. We provide six numerical examples at both small and larger scales and in two and three dimensions, to demonstrate powerful features of the formulation.
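
    The role of a CFL-type restriction in an IMPEC scheme is to keep the explicit composition update stable by bounding how much fluid a cell may exchange per time step. The snippet below is a generic sketch of that idea with invented argument names; the paper's actual condition involves phase saturations and compositions and is not reproduced here.

        def cfl_time_step(fluxes, pore_volumes, cfl=0.5):
            """Generic CFL-style limit on the explicit composition update of an
            IMPEC-type scheme: restrict dt so no cell exchanges more than a fraction
            `cfl` of its pore volume per step.  A sketch of the idea only.

            fluxes       : dict mapping cell index -> total outgoing volumetric flux
            pore_volumes : dict mapping cell index -> pore volume of that cell
            """
            dt = float("inf")
            for cell, q_out in fluxes.items():
                if q_out > 0.0:
                    dt = min(dt, cfl * pore_volumes[cell] / q_out)
            return dt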

  19. A mass-energy preserving Galerkin FEM for the coupled nonlinear fractional Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Zhang, Guoyu; Huang, Chengming; Li, Meng

    2018-04-01

    We consider the numerical simulation of the coupled nonlinear space-fractional Schrödinger equations. Based on the Galerkin finite element method in space and the Crank-Nicolson (CN) difference method in time, a fully discrete scheme is constructed. First, we focus on a rigorous analysis of conservation laws for the discrete system; the definitions of discrete mass and energy correspond to the original ones in physics. Then, we prove that the fully discrete system is uniquely solvable. Moreover, we establish unconditional convergence, that is, error estimates that hold without any mesh-ratio restriction: L²-norm error estimates for the nonlinear equations and L∞-norm error estimates for the linear equations. Finally, some numerical experiments are included showing results in agreement with the theoretical predictions.
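
    The discrete mass conservation proved in this record can be illustrated on the much simpler, non-fractional analogue: a Crank-Nicolson step for the ordinary 1D linear Schrödinger equation conserves the discrete L² mass to machine precision. The sketch below checks this numerically; it uses finite differences rather than the paper's Galerkin finite elements, and all parameters are placeholders.

        import numpy as np

        # Crank-Nicolson stepping for i u_t = -u_xx on a periodic interval,
        # used only to illustrate discrete mass conservation.
        N, L_dom, dt = 256, 20.0, 1e-3
        dx = L_dom / N
        x = -L_dom / 2 + dx * np.arange(N)

        # Second-derivative matrix with periodic boundaries.
        D2 = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
              + np.diag(np.ones(N - 1), -1))
        D2[0, -1] = D2[-1, 0] = 1.0
        D2 /= dx**2

        A = np.eye(N) - 0.5j * dt * D2        # (I - i dt/2 D2) u^{n+1}
        B = np.eye(N) + 0.5j * dt * D2        # (I + i dt/2 D2) u^{n}
        u = np.exp(-x**2) * np.exp(1j * x)    # Gaussian wave packet

        mass0 = dx * np.sum(np.abs(u) ** 2)
        for _ in range(1000):
            u = np.linalg.solve(A, B @ u)
        print("relative mass drift:", abs(dx * np.sum(np.abs(u) ** 2) - mass0) / mass0)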

  20. Meso-beta scale numerical simulation studies of terrain-induced jet streak mass/momentum perturbations

    NASA Technical Reports Server (NTRS)

    Lin, Yuh-Lang; Kaplan, Michael L.

    1994-01-01

    An in-depth analysis of observed gravity waves and their relationship to precipitation bands over the Montana mesonetwork during the 1981 CCOPE case study indicates that there were two episodes of coherent internal gravity waves. One of the fundamental unanswered questions from this research, however, concerns the dynamical processes which generated the observed waves, all of which originated from the region encompassing the borders of Montana, Idaho, and Wyoming. While geostrophic adjustment, shearing instability, and terrain were all implicated, separately or in concert, as possible wave generation mechanisms, the lack of upper-air data within the wave genesis region made it difficult to rigorously define the genesis processes from observations alone. In this report we employ a mesoscale numerical model to help diagnose the intricate early wave generation mechanisms during the first observed wave episode.

  1. Internal field distribution of a radially inhomogeneous droplet illuminated by an arbitrary shaped beam

    NASA Astrophysics Data System (ADS)

    Wang, Jia Jie; Wriedt, Thomas; Han, Yi Ping; Mädler, Lutz; Jiao, Yong Chang

    2018-05-01

    Light scattering by a radially inhomogeneous droplet, modeled as a multilayered sphere, is investigated within the framework of the Generalized Lorenz-Mie Theory (GLMT), with particular attention devoted to the analysis of the internal field distribution under shaped-beam illumination. To circumvent numerical difficulties in the computation of the internal field for an absorbing or non-absorbing droplet with a very large size parameter, a recursive algorithm is proposed by reformulating the equations for the expansion coefficients. Two approaches are proposed for predicting the internal field distribution, namely a rigorous method and an approximate method. The developed computer code is shown to be stable over a wide range of size parameters. Numerical computations are carried out to simulate the internal field distributions of a radially inhomogeneous droplet illuminated by a focused Gaussian beam.

  2. Numerical Simulations of Blood Flows in the Left Atrium

    NASA Astrophysics Data System (ADS)

    Zhang, Lucy

    2008-11-01

    A novel numerical technique for solving complex fluid-structure interactions in biomedical applications is introduced. The method is validated through rigorous convergence and accuracy tests. In this study, the technique is specifically used to study blood flow in the left atrium, one of the four chambers of the heart. Stable solutions are obtained at physiologic Reynolds numbers by applying pulmonary venous inflow, mitral valve outflow, and appropriate constitutive equations to closely mimic the behaviors of biomaterials. Atrial contraction is also implemented as a time-dependent boundary condition to realistically describe the atrial wall muscle movements, thus producing accurate interactions with the surrounding blood. From our study, the transmitral velocity, the filling/emptying velocity ratio, and the durations and strengths of vortices are captured numerically for sinus rhythm (healthy heart beat), and they compare quite well with reported clinical studies. The solution technique can be further used to study heart diseases such as atrial fibrillation and thrombus formation in the chamber, and their corresponding effects on blood flow.

  3. Development of the T+M coupled flow–geomechanical simulator to describe fracture propagation and coupled flow–thermal–geomechanical processes in tight/shale gas systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jihoon; Moridis, George J.

    2013-10-01

    We developed a hydraulic fracturing simulator, the T+M simulator, by coupling a flow simulator to a geomechanics code. Modeling of the vertical fracture development involves continuous updating of the boundary conditions and of the data connectivity, based on the finite element method for geomechanics. The T+M simulator can model the initial fracture development during hydraulic fracturing operations, after which the domain description changes from a single continuum to a double or multiple continuum in order to rigorously model both flow and geomechanics for fracture-rock matrix systems. The simulator provides two-way coupling between fluid-heat flow and geomechanics, accounts for thermoporomechanics, treats nonlinear permeability and geomechanical moduli explicitly, and dynamically tracks changes in the fracture(s) and in the pore volume. It also fully accounts for leak-off in all directions during hydraulic fracturing. We first validate the T+M simulator by matching numerical solutions with analytical solutions for poromechanical effects, static fractures, and fracture propagation. Then, numerical simulations of various planar fracture propagation cases show that shear failure can limit the vertical propagation of tensile fractures because of leak-off into the reservoir. Slow injection causes more leak-off than fast injection when the same amount of fluid is injected. Changes in the initial total stress and the contributions of shear effective stress to tensile failure can also affect the formation of the fractured areas, and the geomechanical responses remain well-posed.

  4. Direct Numerical Simulations of Autoignition in Stratified Dimethyl-ether (DME)/Air Turbulent Mixtures

    DOE PAGES

    Bansal, Gaurav; Mascarenhas, Ajith; Chen, Jacqueline H.

    2014-10-01

    In our paper, two- and three-dimensional direct numerical simulations (DNS) of autoignition phenomena in stratified dimethyl-ether (DME)/air turbulent mixtures are performed. A reduced DME oxidation mechanism, obtained using a rigorous mathematical reduction and stiffness-removal procedure from a detailed DME mechanism with 55 species, is used in the present DNS. The reduced DME mechanism consists of 30 chemical species. This study investigates fundamental aspects of the turbulence-mixing-autoignition interaction occurring in homogeneous charge compression ignition (HCCI) engine environments. A homogeneous isotropic turbulence spectrum is used to initialize the velocity field in the domain. Moreover, the computational configuration corresponds to a constant-volume combustion vessel with inert mass source terms added to the governing equations to mimic the pressure rise due to piston motion, as present in practical engines. DME autoignition is found to be a complex three-staged process; each stage corresponds to a distinct chemical kinetic pathway. The distinct roles of turbulence and reaction in generating scalar gradients, and hence promoting molecular transport processes, are investigated. Then, by applying numerical diagnostic techniques, the different heat release modes present in the igniting mixture are identified. In particular, the contributions of homogeneous autoignition, spontaneous ignition front propagation, and premixed deflagration to the total heat release are quantified.

  5. Numerical Modelling of Ground Penetrating Radar Antennas

    NASA Astrophysics Data System (ADS)

    Giannakis, Iraklis; Giannopoulos, Antonios; Pajewski, Lara

    2014-05-01

    Numerical methods are needed in order to solve Maxwell's equations in complicated and realistic problems, and over the years a number of numerical methods have been developed to do so. Amongst the most popular are the finite element method, implicit finite-difference techniques, frequency-domain solution of the Helmholtz equation, the method of moments, and the transmission-line matrix method. However, the finite-difference time-domain (FDTD) method is considered one of the most attractive choices, basically because of its simplicity, speed, and accuracy. FDTD was first introduced in 1966 by Kane Yee. Since then, FDTD has been established and developed into a very rigorous and well-defined numerical method for solving Maxwell's equations; its order characteristics, accuracy, and limitations are rigorously and mathematically defined, which makes FDTD reliable and easy to use. Numerical modelling of Ground Penetrating Radar (GPR) is a very useful tool which can give us insight into the scattering mechanisms and can also be used as an alternative approach to aid data interpretation. Numerical modelling has been used in a wide range of GPR applications including archaeology, geophysics, forensics, landmine detection, etc. In engineering, applications of numerical modelling include estimating the effectiveness of GPR for detecting voids in bridges, detecting metal bars in concrete, estimating shielding effectiveness, and so on. The main challenges in numerical modelling of GPR for engineering applications are (A) the implementation of the dielectric properties of the media (soils, concrete, etc.) in a realistic way, (B) the implementation of the geometry of the media (soil inhomogeneities, rough surfaces, vegetation, concrete features like fractures and rock fragments, etc.), and (C) the detailed modelling of the antenna units. The main focus of this work (which is part of the COST Action TU1208) is the accurate and realistic implementation of GPR antenna units into the FDTD model. Accurate models based on the general characteristics of the commercial antennas GSSI 1.5 GHz and MALA 1.2 GHz have already been incorporated into GprMax, a free software package which solves Maxwell's equations using an FDTD algorithm that is second order in both space and time. This work presents the implementation of horn antennas with different parameters, as well as ridged horn antennas, into this FDTD model, and their effectiveness is tested in realistically modelled situations. Accurate models of soils and concrete are used to test and compare different antenna units. Stochastic methods are used in order to realistically simulate the geometrical characteristics of the medium. Regarding the dielectric properties, Debye approximations are incorporated in order to realistically simulate the dielectric properties of the medium over the frequency range of interest.
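
    For readers unfamiliar with the FDTD scheme underlying GprMax, the following sketch shows the bare leapfrog E/H update of a 1D Yee grid in vacuum with a soft Gaussian source. It is a textbook illustration with assumed grid parameters, not GprMax code, and omits the antenna, soil, and Debye machinery discussed above.

        import numpy as np

        # Bare-bones 1D Yee FDTD update in vacuum: illustrates the leapfrog
        # E/H staggering that GPR simulators such as GprMax build on.
        c0 = 299792458.0
        eps0, mu0 = 8.8541878128e-12, 4e-7 * np.pi
        nz, nsteps = 400, 800
        dz = 1e-3
        dt = 0.5 * dz / c0                     # Courant-stable time step

        Ex = np.zeros(nz)                      # E nodes
        Hy = np.zeros(nz - 1)                  # H nodes, staggered by half a cell

        for n in range(nsteps):
            Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])          # H update (half step)
            Ex[1:-1] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])   # E update (full step)
            Ex[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)      # soft Gaussian source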

  6. Necromechanics: Death-induced changes in the mechanical properties of human tissues.

    PubMed

    Martins, Pedro A L S; Ferreira, Francisca; Natal Jorge, Renato; Parente, Marco; Santos, Agostinho

    2015-05-01

    After death, the development of rigor mortis, characterized by stiffening of the body, is one of the most evident changes that occur. In this work, the development of rigor mortis was assessed using a skinfold caliper in human cadavers and in live people to measure the deformation of the biceps brachii muscle in response to the force applied by the device. Additionally, to simulate the measurements with the finite element method, a two-dimensional model of an arm section was used. As a result of the experimental procedure, a decrease in deformation with increasing postmortem time was observed, which corresponds to an increase in rigidity. As expected, the deformations for the live subjects were higher. The finite element analysis showed a correlation with the c1 parameter of the neo-Hookean model in the 4- to 8-h postmortem interval; this was accomplished by adjusting the c1 material parameter so as to reproduce the measured experimental displacement. Despite being a preliminary study, the obtained results show that combining the proposed experimental procedure with a numerical technique can be very useful in the study of the postmortem mechanical modifications of human tissues. Moreover, the use of data from living subjects allows us to estimate the time of death, paving the way to establishing this process as an alternative to existing techniques. This solution constitutes a portable, non-invasive method of estimating the postmortem interval with direct quantitative measurements using a skinfold caliper. The tools and methods described can be used to investigate the subject and to gain epidemiologic knowledge of the rigor mortis phenomenon. © IMechE 2015.

  7. From classical to quantum and back: Hamiltonian adaptive resolution path integral, ring polymer, and centroid molecular dynamics

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.

    2017-12-01

    Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.

  8. The space-dependent model and output characteristics of intra-cavity pumped dual-wavelength lasers

    NASA Astrophysics Data System (ADS)

    He, Jin-Qi; Dong, Yuan; Zhang, Feng-Dong; Yu, Yong-Ji; Jin, Guang-Yong; Liu, Li-Da

    2016-01-01

    The intra-cavity pumping scheme used to generate dual-wavelength lasers simultaneously was previously proposed and published by us, and a space-independent model of quasi-three-level and four-level intra-cavity pumped dual-wavelength lasers was constructed based on this scheme. In this paper, to make the previous study more rigorous, a space-dependent model is adopted. As an example, the output characteristics of 946 nm and 1064 nm dual-wavelength lasers under different output-mirror transmittances are numerically simulated using the derived formulas, and the results are nearly identical to those previously reported.

  9. Generalized Ordinary Differential Equation Models

    PubMed Central

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-01-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787

  10. Deffuant model of opinion formation in one-dimensional multiplex networks

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2015-10-01

    Complex systems in the real world often operate through multiple kinds of links connecting their constituents. In this paper we propose an opinion formation model under bounded confidence over multiplex networks, consisting of edges at different topological and temporal scales. We determine rigorously the critical confidence threshold by exploiting probability theory and network science when the nodes are arranged on the integers, ℤ, evolving in continuous time. It is found that the existence of ‘multiplexity’ impedes the convergence, and that working with the aggregated or summarized simplex network is inaccurate since it misses vital information. Analytical calculations are confirmed by extensive numerical simulations.
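
    The underlying bounded-confidence update is easy to state in code. The sketch below runs a single-layer, discrete-time Deffuant model on a ring of agents as a simplified stand-in for the continuous-time multiplex model on ℤ analysed in the record; all parameter values are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Single-layer Deffuant dynamics on a ring of n agents.
        n, d, mu, steps = 500, 0.3, 0.5, 200000   # agents, confidence bound, step size, updates
        x = rng.uniform(0.0, 1.0, n)              # initial opinions

        for _ in range(steps):
            i = rng.integers(n)
            j = (i + 1) % n                       # nearest neighbour on the ring
            if abs(x[i] - x[j]) < d:              # bounded-confidence rule
                shift = mu * (x[j] - x[i])
                x[i] += shift
                x[j] -= shift

        xs = np.sort(x)
        clusters = 1 + int(np.sum(np.diff(xs) > d))
        print("surviving opinion clusters:", clusters)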

  11. Generalized Ordinary Differential Equation Models.

    PubMed

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-10-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.

  12. Implicit filtered PN for high-energy density thermal radiation transport using discontinuous Galerkin finite elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laboure, Vincent M., E-mail: vincent.laboure@tamu.edu; McClarren, Ryan G., E-mail: rgm@tamu.edu; Hauck, Cory D., E-mail: hauckc@ornl.gov

    2016-09-15

    In this work, we provide a fully-implicit implementation of the time-dependent, filtered spherical harmonics (FPN) equations for non-linear, thermal radiative transfer. We investigate local filtering strategies and analyze the effect of the filter on the conditioning of the system, showing in particular that the filter improves the convergence properties of the iterative solver. We also investigate numerically the rigorous error estimates derived in the linear setting, to determine whether they hold also for the non-linear case. Finally, we simulate a standard test problem on an unstructured mesh and make comparisons with implicit Monte Carlo (IMC) calculations.

  13. High-order FDTD methods for transverse electromagnetic systems in dispersive inhomogeneous media.

    PubMed

    Zhao, Shan

    2011-08-15

    This Letter introduces a novel finite-difference time-domain (FDTD) formulation for solving transverse electromagnetic systems in dispersive media. Based on the auxiliary differential equation approach, the Debye dispersion model is coupled with Maxwell's equations to derive a supplementary ordinary differential equation for describing the regularity changes in electromagnetic fields at the dispersive interface. The resulting time-dependent jump conditions are rigorously enforced in the FDTD discretization by means of the matched interface and boundary scheme. High-order convergence is numerically achieved for the first time in the literature in FDTD simulations of dispersive inhomogeneous media. © 2011 Optical Society of America
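
    The auxiliary-differential-equation idea mentioned here can be illustrated for a single-pole Debye medium, where the polarization obeys tau dP/dt + P = eps0 * delta_eps * E. The function below is a generic semi-implicit update of that equation, with assumed argument names; it is not the Letter's matched interface and boundary discretization.

        def debye_ade_step(P, E_half, dt, tau, delta_eps, eps0=8.8541878128e-12):
            """One ADE update of a single-pole Debye polarization,
            tau dP/dt + P = eps0 * delta_eps * E, discretized semi-implicitly
            about the half time step (generic textbook form)."""
            a = (2.0 * tau - dt) / (2.0 * tau + dt)
            b = 2.0 * eps0 * delta_eps * dt / (2.0 * tau + dt)
            return a * P + b * E_half

    In a full FDTD loop this update is interleaved with the usual E and H updates, with P feeding back into the E update through the displacement field.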

  14. Optical near-field analysis of spherical metals: Application of the FDTD method combined with the ADE method.

    PubMed

    Yamaguchi, Takashi; Hinata, Takashi

    2007-09-03

    The time-averaged energy density of the optical near-field generated around a metallic sphere is computed using the finite-difference time-domain method. To check the accuracy, the numerical results are compared with the rigorous solutions from Mie theory. The Lorentz-Drude model, which is coupled with Maxwell's equations via the equations of motion of an electron, is applied to simulate the dispersion relation of metallic materials. The distributions of the optical near-field generated around a metallic hemisphere and a metallic spheroid are also computed, and strong optical near-fields are obtained at their rims.

  15. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  16. Fabrication of SiC membrane HCG blue reflector using nanoimprint lithography

    NASA Astrophysics Data System (ADS)

    Lai, Ying-Yu; Matsutani, Akihiro; Lu, Tien-Chang; Wang, Shing-Chung; Koyama, Fumio

    2015-02-01

    We designed and fabricated suspended SiC-based membrane high-contrast grating (HCG) reflectors. Rigorous coupled-wave analysis (RCWA) was employed to determine the structural parameters, including the grating period, grating height, filling factor, and air-gap height. According to the optimized simulation results, the designed SiC-based membrane HCG has a wide reflection stopband (reflectivity R > 90%) of 135 nm for TE polarization, centered at 480 nm. The suspended SiC-based membrane HCG reflectors were fabricated by nanoimprint lithography and a two-step etching technique. The corresponding reflectivity was measured using a micro-reflectivity spectrometer. The experimental results show a high reflectivity (R > 90%), in good agreement with the simulation results. This achievement should have an impact on numerous III-N based photonic devices operating in the blue or even the ultraviolet region.

  17. Model of dissolution in the framework of tissue engineering and drug delivery.

    PubMed

    Sanz-Herrera, J A; Soria, L; Reina-Romo, E; Torres, Y; Boccaccini, A R

    2018-05-22

    Dissolution phenomena are ubiquitously present in biomaterials across many different fields. Despite the advantages of simulation-based design of biomaterials for medical applications, additional efforts are needed to derive reliable models that describe the process of dissolution. A phenomenologically based model for the simulation of dissolution in biomaterials is introduced in this paper. The model reduces to a set of reaction-diffusion equations implemented in a finite element numerical framework. First, a parametric analysis is conducted in order to explore the role of the model parameters in the overall dissolution process. Then, the model is calibrated and validated against a straightforward but rigorous experimental setup. Results show that the mathematical model macroscopically reproduces the main physicochemical phenomena that take place in the tests, corroborating its usefulness for the design of biomaterials in the tissue engineering and drug delivery research areas.

  18. Exploring a multi-scale method for molecular simulation in continuum solvent model: Explicit simulation of continuum solvent as an incompressible fluid.

    PubMed

    Xiao, Li; Luo, Ray

    2017-12-07

    We explored a multi-scale algorithm for the Poisson-Boltzmann continuum solvent model for more robust simulations of biomolecules. In this method, the continuum solvent/solute interface is explicitly simulated with a numerical fluid dynamics procedure, which is tightly coupled to the solute molecular dynamics simulation. There are multiple benefits to adopting such a strategy, as presented below. At this stage of the development, only nonelectrostatic interactions, i.e., van der Waals and hydrophobic interactions, are included in the algorithm to assess the quality of the solvent-solute interface generated by the new method. Nevertheless, numerical challenges exist in accurately interpolating the highly nonlinear van der Waals term when solving the finite-difference fluid dynamics equations. We were able to bypass the challenge rigorously by merging the van der Waals potential and pressure together when solving the fluid dynamics equations and by considering its contribution in the free-boundary condition analytically. The multi-scale simulation method was first validated by reproducing the solute-solvent interface of a single atom with an analytical solution. Next, we performed the relaxation simulation of a restrained symmetrical monomer and observed a symmetrical solvent interface at equilibrium with detailed surface features resembling those found on the solvent-excluded surface. Four typical small molecular complexes were then tested, with both volume and force balancing analyses showing that these simple complexes can reach equilibrium within the simulation time window. Finally, we studied the quality of the multi-scale solute-solvent interfaces for the four tested dimer complexes and found that they agree well with the boundaries as sampled in the explicit water simulations.

  19. Impact of topographic mask models on scanner matching solutions

    NASA Astrophysics Data System (ADS)

    Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.

    2014-03-01

    Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions and, as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, simplifying the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for by numerically solving Maxwell's Equations. The simulators used to predict the image formation in hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute time tradeoffs. An additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly non-linear responses to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on the first principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not very well understood. To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's Equations. In both sets of trials, we used sets of large numbers of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin-mask models vs. the topographic OPEM solutions. We present various examples representative of scanner image matching for patterns representative of the current generation of IC designs.

  20. Relativistic interpretation of Newtonian simulations for cosmic structure formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fidler, Christian; Tram, Thomas; Crittenden, Robert

    2016-09-01

    The standard numerical tools for studying non-linear collapse of matter are Newtonian N-body simulations. Previous work has shown that these simulations are in accordance with General Relativity (GR) up to first order in perturbation theory, provided that the effects from radiation can be neglected. In this paper we show that the present day matter density receives more than 1% corrections from radiation on large scales if Newtonian simulations are initialised before z =50. We provide a relativistic framework in which unmodified Newtonian simulations are compatible with linear GR even in the presence of radiation. Our idea is to use GR perturbation theory to keep track of the evolution of relativistic species and the relativistic space-time consistent with the Newtonian trajectories computed in N-body simulations. If metric potentials are sufficiently small, they can be computed using a first-order Einstein–Boltzmann code such as CLASS. We make this idea rigorous by defining a class of GR gauges, the Newtonian motion gauges, which are defined such that matter particles follow Newtonian trajectories. We construct a simple example of a relativistic space-time within which unmodified Newtonian simulations can be interpreted.

  1. Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models

    NASA Astrophysics Data System (ADS)

    Candy, Adam S.; Pietrzak, Julie D.

    2018-01-01

    The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes, where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to describe carefully, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural-language-based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated against expected discrete characteristics and metrics. Library code, verification tests, and examples are available in the repository at https://github.com/shingleproject/Shingle. Further details of the project are presented at http://shingleproject.org.

  2. Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is the comparison of neutronics results to a set of continuous-energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI, developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero-power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

  3. Simulations of NLC formation using a microphysical model driven by three-dimensional dynamics

    NASA Astrophysics Data System (ADS)

    Kirsch, Annekatrin; Becker, Erich; Rapp, Markus; Megner, Linda; Wilms, Henrike

    2014-05-01

    Noctilucent clouds (NLCs) represent an optical phenomenon occurring in the polar summer mesopause region. These clouds have been known since the late 19th century. Current physical understanding of NLCs is based on numerous observational and theoretical studies, in recent years especially observations from satellites and by lidars from the ground. Theoretical studies based on numerical models that simulate NLCs together with the underlying microphysical processes are uncommon. To date, no three-dimensional numerical simulations of NLCs exist that take all relevant dynamical scales into account, i.e., from the planetary scale down to gravity waves and turbulence. Rather, modeling is usually restricted to certain flow regimes. In this study we make a more rigorous attempt and simulate NLC formation in the environment of the general circulation of the mesopause region by explicitly including gravity-wave motions. For this purpose we couple the Community Aerosol and Radiation Model for Atmosphere (CARMA) to gravity-wave resolving dynamical fields simulated beforehand with the Kuehlungsborn Mechanistic Circulation Model (KMCM). In our case, the KMCM is run with a horizontal resolution of T120, which corresponds to a minimum horizontal wavelength of 350 km. This restriction causes the resolved gravity waves to be somewhat biased to larger scales. The simulated general circulation is dynamically controlled by these waves in a self-consistent fashion and provides realistic temperatures and wind fields for July conditions. Assuming a water vapor mixing ratio profile in agreement with current observations results in reasonable supersaturations of up to 100. In a first step, CARMA is applied to a horizontal section covering the Northern Hemisphere. The vertical resolution is 120 levels ranging from 72 to 101 km. In this paper we will present initial results of this coupled dynamical-microphysical model, focusing on the interaction of waves and turbulent diffusion with NLC microphysics.

  4. Towards a Credibility Assessment of Models and Simulations

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Green, Lawrence L.; Luckring, James M.; Morrison, Joseph H.; Tripathi, Ram K.; Zang, Thomas A.

    2008-01-01

    A scale is presented to evaluate the rigor of modeling and simulation (M&S) practices for the purpose of supporting a credibility assessment of the M&S results. The scale distinguishes required and achieved levels of rigor for a set of M&S elements that contribute to credibility including both technical and process measures. The work has its origins in an interest within NASA to include a Credibility Assessment Scale in development of a NASA standard for models and simulations.

  5. On making cuts for magnetic scalar potentials in multiply connected regions

    NASA Astrophysics Data System (ADS)

    Kotiuga, P. R.

    1987-04-01

    The problem of making cuts is of importance to scalar potential formulations of three-dimensional eddy current problems. Its heuristic solution has been known for a century [J. C. Maxwell, A Treatise on Electricity and Magnetism, 3rd ed. (Clarendon, Oxford, 1981), Chap. 1, Article 20] and in the last decade, with the use of finite element methods, a restricted combinatorial variant has been proposed and solved [M. L. Brown, Int. J. Numer. Methods Eng. 20, 665 (1984)]. This problem, in its full generality, has never received a rigorous mathematical formulation. This paper presents such a formulation and outlines a rigorous proof of existence. The technique used in the proof exposes the incredible intricacy of the general problem and the restrictive assumptions of Brown [Int. J. Numer. Methods Eng. 20, 665 (1984)]. Finally, the results make rigorous Kotiuga's (Ph.D. Thesis, McGill University, Montreal, 1984) heuristic interpretation of cuts and duality theorems via intersection matrices.

  6. A new flux-conserving numerical scheme for the steady, incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    1994-01-01

    This paper is concerned with the continued development of a new numerical method, the space-time solution element (STS) method, for solving conservation laws. The present work focuses on the two-dimensional, steady, incompressible Navier-Stokes equations. Using first an integral approach, and then a differential approach, the discrete flux conservation equations presented in a recent paper are rederived. Here a simpler method for determining the flux expressions at cell interfaces is given; a systematic and rigorous derivation of the conditions used to simulate the differential form of the governing conservation law(s) is provided; necessary and sufficient conditions for a discrete approximation to satisfy a conservation law in E2 are derived; and an estimate of the local truncation error is given. A specific scheme is then constructed for the solution of the thin airfoil boundary layer problem. Numerical results are presented which demonstrate the ability of the scheme to accurately resolve the developing boundary layer and wake regions using grids which are much coarser than those employed by other numerical methods. It is shown that ten cells in the cross-stream direction are sufficient to accurately resolve the developing airfoil boundary layer.

  7. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data

    PubMed Central

    He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei

    2017-01-01

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and the phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0 mode wave package. The model parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties in numerical modeling, geometry, material, and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions. PMID:28902148
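
    A minimal sketch of the two-stage idea, under heavy simplifying assumptions: a linear response surface linking the two damage-sensitive features to crack length is fitted to a handful of invented "simulation" points, and its coefficients are then updated by Metropolis sampling against one invented measurement. The data, noise level, prior width, and linear model form are all placeholders, not the paper's actual response surface.

        import numpy as np

        rng = np.random.default_rng(1)

        # Stage 1: baseline response-surface fit from hypothetical FE simulation data.
        # Features: normalized amplitude A and phase change dphi; response: crack length (mm).
        A_sim   = np.array([0.95, 0.90, 0.84, 0.77, 0.69])
        phi_sim = np.array([0.02, 0.05, 0.09, 0.14, 0.20])
        a_sim   = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        X = np.c_[np.ones_like(A_sim), A_sim, phi_sim]
        theta0, *_ = np.linalg.lstsq(X, a_sim, rcond=None)       # least-squares baseline

        # Stage 2: Bayesian (Metropolis) update of theta with a few measurements.
        A_meas, phi_meas, a_meas = np.array([0.88]), np.array([0.07]), np.array([2.6])
        Xm = np.c_[np.ones_like(A_meas), A_meas, phi_meas]
        sigma = 0.3                                               # assumed measurement noise (mm)

        def log_post(theta):
            prior = -0.5 * np.sum((theta - theta0) ** 2) / 1.0**2  # prior centred on baseline fit
            lik = -0.5 * np.sum((a_meas - Xm @ theta) ** 2) / sigma**2
            return prior + lik

        theta, samples = theta0.copy(), []
        for _ in range(5000):
            prop = theta + 0.05 * rng.standard_normal(3)
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop
            samples.append(theta)
        print("posterior-mean coefficients:", np.mean(samples, axis=0))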

  8. The SCEC/USGS dynamic earthquake rupture code verification exercise

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.

    2009-01-01

    Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed, a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called “spontaneous rupture”) solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), just the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elasto-dynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.

  9. Stabilized linear semi-implicit schemes for the nonlocal Cahn-Hilliard equation

    NASA Astrophysics Data System (ADS)

    Du, Qiang; Ju, Lili; Li, Xiao; Qiao, Zhonghua

    2018-06-01

    Compared with the well-known classic Cahn-Hilliard equation, the nonlocal Cahn-Hilliard equation is equipped with a nonlocal diffusion operator and can describe more practical phenomena for modeling phase transitions of microstructures in materials. On the other hand, it brings substantially higher computational cost in numerical simulations, so efficient and accurate time integration schemes are highly desirable. In this paper, we propose two energy-stable linear semi-implicit methods, with first- and second-order temporal accuracy respectively, for solving the nonlocal Cahn-Hilliard equation. The temporal discretization is done by using the stabilization technique with the nonlocal diffusion term treated implicitly, while the spatial discretization is carried out by the Fourier collocation method with FFT-based fast implementations. The energy stabilities are rigorously established for both methods in the fully discrete sense. Numerical experiments are conducted for a typical case involving Gaussian kernels. We test the temporal convergence rates of the proposed schemes and compare the nonlocal phase transition process with the corresponding local one. In addition, long-time simulations of the coarsening dynamics are performed to predict the power law of the energy decay.
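
    A minimal sketch of the kind of stabilized, linear semi-implicit step described above is given below, using the classic (local) Cahn-Hilliard equation as a stand-in for the nonlocal operator and a Fourier collocation discretization with FFTs. The stabilization constant, interface width, and grid are illustrative choices, not those of the paper.

```python
import numpy as np

# First-order stabilized semi-implicit scheme for u_t = Laplace(u^3 - u - eps^2 Laplace u)
# on a periodic square, discretized by Fourier collocation.
N, L = 128, 2 * np.pi
eps, S, dt, nsteps = 0.05, 2.0, 1e-3, 2000

k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2

rng = np.random.default_rng(0)
u = 0.05 * rng.standard_normal((N, N))

def energy(u):
    """Discrete Ginzburg-Landau free energy (should decay during coarsening)."""
    gradx = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u)))
    grady = np.real(np.fft.ifft2(1j * ky * np.fft.fft2(u)))
    return np.sum(0.25 * (u**2 - 1) ** 2 + 0.5 * eps**2 * (gradx**2 + grady**2)) * (L / N) ** 2

for n in range(nsteps):
    f_hat = np.fft.fft2(u**3 - u)          # explicit nonlinear term
    u_hat = np.fft.fft2(u)
    # (1 + dt*S*k^2 + dt*eps^2*k^4) u^{n+1} = (1 + dt*S*k^2) u^n - dt*k^2*f^n
    u_hat = ((1 + dt * S * k2) * u_hat - dt * k2 * f_hat) / (1 + dt * S * k2 + dt * eps**2 * k2**2)
    u = np.real(np.fft.ifft2(u_hat))
    if n % 500 == 0:
        print(f"step {n:5d}: energy = {energy(u):.4f}")
```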

  10. Non-Maxwellian fast particle effects in gyrokinetic GENE simulations

    NASA Astrophysics Data System (ADS)

    Di Siena, A.; Görler, T.; Doerk, H.; Bilato, R.; Citrin, J.; Johnson, T.; Schneider, M.; Poli, E.; JET Contributors

    2018-04-01

    Fast ions have recently been found to significantly impact and partially suppress plasma turbulence both in experimental and numerical studies in a number of scenarios. Understanding the underlying physics and identifying the range of their beneficial effect is an essential task for future fusion reactors, where highly energetic ions are generated through fusion reactions and external heating schemes. However, in many of the gyrokinetic codes fast ions are, for simplicity, treated as equivalent-Maxwellian-distributed particle species, although it is well known that to rigorously model highly non-thermalised particles, a non-Maxwellian background distribution function is needed. To study the impact of this assumption, the gyrokinetic code GENE has recently been extended to support arbitrary background distribution functions which might be either analytical, e.g., slowing down and bi-Maxwellian, or obtained from numerical fast ion models. A particular JET plasma with strong fast-ion related turbulence suppression is revisited with these new code capabilities in both linear and nonlinear gyrokinetic simulations. It appears that the fast-ion stabilization tends to be less strong but still substantial with more realistic distributions, and this improves the quantitative power balance agreement with experiments.

  11. Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo

    DOE PAGES

    White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; ...

    2015-07-07

    Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet as efficient as is needed to be used for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.

  12. Numerical simulation of separated flows. Ph.D. Thesis - Stanford Univ., Calif.

    NASA Technical Reports Server (NTRS)

    Spalart, P. R.; Leonard, A.; Baganoff, D.

    1983-01-01

    A new numerical method, based on the Vortex Method, for the simulation of two-dimensional separated flows, was developed and tested on a wide range of cases. The fluid is incompressible and the Reynolds number is high. A rigorous analytical basis for the representation of the Navier-Stokes equation in terms of the vorticity is used. An equation for the control of circulation around each body is included. An inviscid outer flow (computed by the Vortex Method) was coupled with a viscous boundary layer flow (computed by an Eulerian method). This version of the Vortex Method treats bodies of arbitrary shape, and accurately computes the pressure and shear stress at the solid boundary. These two quantities reflect the structure of the boundary layer. Several versions of the method are presented and applied to various problems, most of which have massive separation. Comparison of its results with other results, generally experimental, demonstrates the reliability and the general accuracy of the new method, with little dependence on empirical parameters. Many of the complex features of the flow past a circular cylinder, over a wide range of Reynolds numbers, are correctly reproduced.
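
    The Lagrangian vortex idea behind such methods can be sketched in a few lines: discrete vortex blobs are advected by the velocity they induce on one another through a regularized Biot-Savart kernel. The example below omits the solid bodies, the circulation-control equation, and the viscous boundary-layer coupling described in the thesis; it only shows the basic inviscid building block, with illustrative parameters.

```python
import numpy as np

# Minimal 2D vortex-blob demonstration: a co-rotating vortex pair advected by
# the regularized Biot-Savart law (no bodies, no viscous boundary layer).
def induced_velocity(pos, gamma, delta=0.05):
    """Velocity induced at every vortex position by all other vortices."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx**2 + dy**2 + delta**2            # blob regularization
    u = -np.sum(gamma[None, :] * dy / (2 * np.pi * r2), axis=1)
    v = np.sum(gamma[None, :] * dx / (2 * np.pi * r2), axis=1)
    return np.column_stack([u, v])

pos = np.array([[-0.5, 0.0], [0.5, 0.0]])    # two like-signed vortices
gamma = np.array([1.0, 1.0])                 # circulations

dt, nsteps = 0.01, 1000
for n in range(nsteps):
    # second-order Runge-Kutta (midpoint) time stepping
    k1 = induced_velocity(pos, gamma)
    k2 = induced_velocity(pos + 0.5 * dt * k1, gamma)
    pos = pos + dt * k2

print("final positions:\n", pos)
```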

  13. Arbitrarily high-order time-stepping schemes based on the operator spectrum theory for high-dimensional nonlinear Klein-Gordon equations

    NASA Astrophysics Data System (ADS)

    Liu, Changying; Wu, Xinyuan

    2017-07-01

    In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on the operator spectrum theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix under some suitable smoothness assumptions. With regard to two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with discrete Fast Sine/Cosine Transformations can be applied to simulate the two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one-dimensional and two-dimensional cases, although the schemes to be analysed lend themselves equally well to the higher-dimensional case. Numerical simulations are implemented, and the results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with the existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
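
    As a much simpler stand-in for the high-order collocation schemes discussed above, the sketch below integrates a one-dimensional nonlinear Klein-Gordon equation with periodic boundary conditions using Fourier spectral differentiation in space and a second-order Stormer-Verlet step in time. The equation form, domain, and initial data are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

# 1D nonlinear Klein-Gordon  u_tt = u_xx - u - u^3  on a periodic interval,
# Fourier spectral in space, second-order Stormer-Verlet (leapfrog) in time.
N, L = 256, 40.0
dt, nsteps = 1e-3, 20000

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def u_xx(u):
    return np.real(np.fft.ifft(-(k**2) * np.fft.fft(u)))

def rhs(u):
    return u_xx(u) - u - u**3

u_old = 0.5 * np.exp(-x**2)                  # initial displacement
u = u_old + 0.5 * dt**2 * rhs(u_old)         # start-up step (zero initial velocity)

for n in range(nsteps):
    u_new = 2 * u - u_old + dt**2 * rhs(u)
    u_old, u = u, u_new

# a (nearly) conserved discrete energy gives a rough accuracy indication
ut = (u - u_old) / dt
ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
energy = np.sum(0.5 * ut**2 + 0.5 * ux**2 + 0.5 * u**2 + 0.25 * u**4) * (L / N)
print(f"discrete energy after {nsteps} steps: {energy:.6f}")
```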

  14. MI-Sim: A MATLAB package for the numerical analysis of microbial ecological interactions.

    PubMed

    Wade, Matthew J; Oakley, Jordan; Harbisher, Sophie; Parker, Nicholas G; Dolfing, Jan

    2017-01-01

    Food webs and other classes of ecological network motifs are a means of describing feeding relationships between consumers and producers in an ecosystem. They have application across scales where they differ only in the underlying characteristics of the organisms and substrates describing the system. Mathematical modelling, using mechanistic approaches to describe the dynamic behaviour and properties of the system through sets of ordinary differential equations, has been used extensively in ecology. Models allow simulation of the dynamics of the various motifs, and their numerical analysis provides a greater understanding of the interplay between the system components and their intrinsic properties. We have developed the MI-Sim software for use with MATLAB to allow a rigorous and rapid numerical analysis of several common ecological motifs. MI-Sim contains a series of the most commonly used motifs such as cooperation, competition and predation. It does not require detailed knowledge of mathematical analytical techniques and is offered as a single graphical user interface containing all input and output options. The tools available in the current version of MI-Sim include model simulation, steady-state existence and stability analysis, and basin of attraction analysis. The software includes seven ecological interaction motifs and seven growth function models. Unlike other system analysis tools, MI-Sim is designed as a simple and user-friendly tool specific to ecological population-type models, allowing for rapid assessment of their dynamical and behavioural properties.

  15. An Integrated Development Environment for Adiabatic Quantum Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Bennink, Ryan S

    2014-01-01

    Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.

  16. Transonic flow about a thick circular-arc airfoil

    NASA Technical Reports Server (NTRS)

    Mcdevitt, J. B.; Levy, L. L., Jr.; Deiwert, G. S.

    1975-01-01

    An experimental and theoretical study of transonic flow over a thick airfoil, prompted by a need for adequately documented experiments that could provide rigorous verification of viscous flow simulation computer codes, is reported. Special attention is given to the shock-induced separation phenomenon in the turbulent regime. Measurements presented include surface pressures, streamline and flow separation patterns, and shadowgraphs. For a limited range of free-stream Mach numbers the airfoil flow field is found to be unsteady. Dynamic pressure measurements and high-speed shadowgraph movies were taken to investigate this phenomenon. Comparisons of experimentally determined and numerically simulated steady flows using a new viscous-turbulent code are also included. The comparisons show the importance of including an accurate turbulence model. When the shock-boundary layer interaction is weak the turbulence model employed appears adequate, but when the interaction is strong, and extensive regions of separation are present, the model is inadequate and needs further development.

  17. Non-periodic high-index contrast gratings reflector with large-angle beam forming ability

    NASA Astrophysics Data System (ADS)

    Fang, Wenjing; Huang, Yongqing; Duan, Xiaofeng; Fei, Jiarui; Ren, Xiaomin; Mao, Min

    2016-05-01

    A non-periodic high-index-contrast grating (HCG) reflector on an SOI wafer with large-angle beam-forming ability has been proposed and fabricated. The proposed reflector was designed using rigorous coupled-wave analysis (RCWA) and the finite element method (FEM). A deflection angle of 17.35° and a high reflectivity of 92.31% are achieved under transverse magnetic (TM) polarized light in numerical simulation. Experimental results show that the reflected power peaked at 17.2° under 1550 nm incident light, which is in good agreement with the simulation results. Moreover, the reflected power spectrum was also measured. Under different incident wavelengths around 1550 nm, the reflected powers all peaked at 17.2°. The results show that the proposed non-periodic HCG reflector has good reflection and beam-forming ability in a wavelength range as wide as 40 nm around 1550 nm.

  18. Simulations of Scatterometry Down to 22 nm Structure Sizes and Beyond with Special Emphasis on LER

    NASA Astrophysics Data System (ADS)

    Osten, W.; Ferreras Paz, V.; Frenner, K.; Schuster, T.; Bloess, H.

    2009-09-01

    In recent years, scatterometry has become one of the most commonly used methods for CD metrology. With decreasing structure size for future technology nodes, the search for optimized scatterometry measurement configurations becomes more important in order to exploit maximum sensitivity. As widespread industrial scatterometry tools mainly still use a pre-set measurement configuration, there are still free parameters available to improve sensitivity. Our current work uses a simulation-based approach to predict and optimize the sensitivity of future technology nodes. Since line edge roughness becomes important for such small structures, these imperfections of the periodic continuation cannot be neglected. Using Fourier methods such as the rigorous coupled-wave analysis (RCWA) for the diffraction calculation, non-periodic features are hard to treat. We show that in this field certain types of field-stitching methods show good numerical behaviour and lead to useful results.

  19. Extension of the Viscous Collision Limiting Direct Simulation Monte Carlo Technique to Multiple Species

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Burt, Jonathan M.

    2016-01-01

    There are many flow fields that span a wide range of length scales where regions of both rarefied and continuum flow exist and neither direct simulation Monte Carlo (DSMC) nor computational fluid dynamics (CFD) provides the appropriate solution everywhere. Recently, a new viscous collision limited (VCL) DSMC technique was proposed to incorporate effects of physical diffusion into collision limiter calculations to make the low Knudsen number regime normally limited to CFD more tractable for an all-particle technique. This original work had been derived for a single-species gas. The current work extends the VCL-DSMC technique to gases with multiple species. Similar derivations were performed to equate numerical and physical transport coefficients. However, a more rigorous treatment of determining the mixture viscosity is applied. In the original work, consideration was given to internal energy non-equilibrium, and this is also extended in the current work to chemical non-equilibrium.
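
    The abstract does not spell out which mixture-viscosity treatment is used; one standard semi-empirical choice for multi-species gases is Wilke's mixing rule, sketched below with illustrative property values for an N2/O2 mixture.

```python
import numpy as np

def wilke_mixture_viscosity(x, mu, M):
    """Wilke's semi-empirical mixing rule for the viscosity of a gas mixture.

    x  : mole fractions of the species
    mu : pure-species viscosities [Pa s]
    M  : molar masses [kg/mol]
    """
    x, mu, M = map(np.asarray, (x, mu, M))
    phi = (1 + np.sqrt(mu[:, None] / mu[None, :]) * (M[None, :] / M[:, None]) ** 0.25) ** 2
    phi /= np.sqrt(8 * (1 + M[:, None] / M[None, :]))
    return np.sum(x * mu / (phi @ x))

# Example: N2/O2 mixture near room temperature (illustrative property values)
x = [0.79, 0.21]
mu = [17.9e-6, 20.7e-6]          # Pa s
M = [28.0e-3, 32.0e-3]           # kg/mol
print(f"mixture viscosity: {wilke_mixture_viscosity(x, mu, M):.3e} Pa s")
```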

  20. Sonoporation at Small and Large Length Scales: Effect of Cavitation Bubble Collapse on Membranes.

    PubMed

    Fu, Haohao; Comer, Jeffrey; Cai, Wensheng; Chipot, Christophe

    2015-02-05

    Ultrasound has emerged as a promising means to effect controlled delivery of therapeutic agents through cell membranes. One possible mechanism that explains the enhanced permeability of lipid bilayers is the fast contraction of cavitation bubbles produced on the membrane surface, thereby generating large impulses, which, in turn, enhance the permeability of the bilayer to small molecules. In the present contribution, we investigate the collapse of bubbles of different diameters, using atomistic and coarse-grained molecular dynamics simulations to calculate the force exerted on the membrane. The total impulse can be computed rigorously in numerical simulations, revealing a superlinear dependence of the impulse on the radius of the bubble. The collapse affects the structure of a nearby immobilized membrane, and leads to partial membrane invagination and increased water permeation. The results of the present study are envisioned to help optimize the use of ultrasound, notably for the delivery of drugs.

  1. Morphing continuum theory for turbulence: Theory, computation, and visualization.

    PubMed

    Chen, James

    2017-10-01

    A high-order morphing continuum theory (MCT) is introduced to model highly compressible turbulence. The theory is formulated under the rigorous framework of rational continuum mechanics. A set of linear constitutive equations and balance laws are deduced and presented from the Coleman-Noll procedure and Onsager's reciprocal relations. The governing equations are then arranged in conservation form and solved through the finite volume method with a second-order Lax-Friedrichs scheme for shock preservation. A numerical example of transonic flow over a three-dimensional bump is presented using MCT and the finite volume method. The comparison shows that, when compared with experiments, MCT-based direct numerical simulation (DNS) provides a better prediction than Navier-Stokes (NS)-based DNS while using less than 10% of the mesh points. An MCT-based, frame-indifferent Q criterion is also derived to show the coherent eddy structure of the downstream turbulence in the numerical example. It should be emphasized that unlike the NS-based Q criterion, the MCT-based Q criterion is objective without the limitation of Galilean invariance.
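
    The MCT governing equations are beyond a short example, but the conservative finite-volume update with a Lax-Friedrichs-type flux that the paper uses for shock preservation can be illustrated on the 1D inviscid Burgers equation. The flux variant (local Lax-Friedrichs/Rusanov), grid, and initial data below are illustrative choices, not the paper's setup.

```python
import numpy as np

# Finite-volume solution of u_t + (u^2/2)_x = 0 with a local Lax-Friedrichs flux.
N, L, cfl, t_end = 400, 2.0, 0.4, 0.5
dx = L / N
x = (np.arange(N) + 0.5) * dx
u = np.where(x < 1.0, 1.0, 0.0)              # right-moving shock initial data

def flux(u):
    return 0.5 * u**2

t = 0.0
while t < t_end:
    dt = cfl * dx / max(np.max(np.abs(u)), 1e-12)
    dt = min(dt, t_end - t)
    u_l, u_r = u, np.roll(u, -1)              # periodic neighbours
    a = np.maximum(np.abs(u_l), np.abs(u_r))  # local wave speed
    f_iface = 0.5 * (flux(u_l) + flux(u_r)) - 0.5 * a * (u_r - u_l)
    u = u - dt / dx * (f_iface - np.roll(f_iface, 1))
    t += dt

# exact shock speed is 0.5, so the front should sit near x = 1.25 at t = 0.5
print("shock location (expected near x = 1.25):", x[np.where(u > 0.5)[0][-1]])
```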

  2. High-order asynchrony-tolerant finite difference schemes for partial differential equations

    NASA Astrophysics Data System (ADS)

    Aditya, Konduri; Donzis, Diego A.

    2017-12-01

    Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
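
    The accuracy loss caused by asynchrony can be demonstrated without any parallel machinery: the sketch below evaluates a standard second-order central difference of an analytic function, once with synchronous neighbour values and once with the right neighbour taken from an older time level, mimicking a stale halo exchange. It illustrates the problem the AT schemes address, not the AT schemes themselves; the test function and step sizes are arbitrary.

```python
import numpy as np

# Second derivative of u(x,t) = sin(x - t), approximated with synchronous and
# asynchronous (delayed) neighbour data.
def u(x, t):
    return np.sin(x - t)

def d2u_exact(x, t):
    return -np.sin(x - t)

x0, t0, dt = 1.0, 0.0, 1e-3
for dx in [0.1, 0.05, 0.025, 0.0125]:
    sync = (u(x0 + dx, t0) - 2 * u(x0, t0) + u(x0 - dx, t0)) / dx**2
    # right neighbour delayed by one time step (stale halo data)
    async_ = (u(x0 + dx, t0 - dt) - 2 * u(x0, t0) + u(x0 - dx, t0)) / dx**2
    exact = d2u_exact(x0, t0)
    print(f"dx={dx:7.4f}  sync err={abs(sync - exact):.2e}  "
          f"async err={abs(async_ - exact):.2e}")
```

    The synchronous error shrinks as dx**2, while the asynchronous error grows like dt/dx**2 under grid refinement, which is the degradation the asynchrony-tolerant schemes are designed to remove.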

  3. Advanced computational techniques for incompressible/compressible fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod

    2005-07-01

    Fluid-Structure Interaction (FSI) problems are of great importance to many fields of engineering and pose tremendous challenges to numerical analysts. This thesis addresses some of the hurdles faced for both 2D and 3D real-life time-dependent FSI problems with particular emphasis on parachute systems. The techniques developed here would help improve the design of parachutes and are of direct relevance to several other FSI problems. The fluid system is solved using the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) finite element formulation for the Navier-Stokes equations of incompressible and compressible flows. The structural dynamics solver is based on a total Lagrangian finite element formulation. The Newton-Raphson method is employed to linearize the otherwise nonlinear system resulting from the fluid and structure formulations. The fluid and structural systems are solved in a decoupled fashion at each nonlinear iteration. While rigorous coupling methods are desirable for FSI simulations, the decoupled solution techniques provide sufficient convergence in the time-dependent problems considered here. In this thesis, common problems in the FSI simulations of parachutes are discussed and possible remedies for a few of them are presented. Further, the effects of the porosity model on the aerodynamic forces of round parachutes are analyzed. Techniques for solving compressible FSI problems are also discussed. Subsequently, a better stabilization technique is proposed to efficiently capture and accurately predict the shocks in supersonic flows. The numerical examples simulated here require high performance computing. Therefore, numerical tools using distributed memory supercomputers with message passing interface (MPI) libraries were developed.

  4. Local and global approaches to the problem of Poincaré recurrences. Applications in nonlinear dynamics

    NASA Astrophysics Data System (ADS)

    Anishchenko, V. S.; Boev, Ya. I.; Semenova, N. I.; Strelkova, G. I.

    2015-07-01

    We review rigorous and numerical results on the statistics of Poincaré recurrences which are related to the modern development of the Poincaré recurrence problem. We analyze and describe the rigorous results which are achieved both in the classical (local) approach and in the recently developed global approach. These results are illustrated by numerical simulation data for simple chaotic and ergodic systems. It is shown that the basic theoretical laws can be applied to noisy systems if the probability measure is ergodic and stationary. Poincaré recurrences are studied numerically in nonautonomous systems. Statistical characteristics of recurrences are analyzed in the framework of the global approach for the cases of positive and zero topological entropy. We show that for positive entropy, there is a relationship between the Afraimovich-Pesin dimension, Lyapunov exponents and the Kolmogorov-Sinai entropy both without and in the presence of external noise. The case of zero topological entropy is exemplified by numerical results for the Poincaré recurrence statistics in the circle map. We show and prove that the dependence of minimal recurrence times on the return region size demonstrates universal properties for the golden and the silver ratio. The behavior of Poincaré recurrences is analyzed at the critical point of Feigenbaum attractor birth. We explore Poincaré recurrences for an ergodic set which is generated in the stroboscopic section of a nonautonomous oscillator and is similar to a circle shift. Based on the obtained results we show how the Poincaré recurrence statistics can be applied for solving a number of nonlinear dynamics issues. We propose and illustrate alternative methods for diagnosing effects of external and mutual synchronization of chaotic systems in the context of the local and global approaches. The properties of the recurrence time probability density can be used to detect the stochastic resonance phenomenon. We also discuss how the fractal dimension of chaotic attractors can be estimated using the Poincaré recurrence statistics.
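
    A minimal numerical experiment in the spirit of the circle-map results is sketched below: for the rotation by the golden mean, the minimal Poincaré return time to an interval of size eps is computed for decreasing eps, and the returned values follow the Fibonacci sequence of best rational approximations. This is only an illustration of the universal staircase behaviour, not the paper's analysis.

```python
import numpy as np

# Minimal Poincare recurrences for the circle rotation x -> x + rho (mod 1)
# with rho the golden mean: the minimal return time to an interval of size eps
# takes Fibonacci values.
rho = (np.sqrt(5) - 1) / 2          # golden-mean rotation number

def minimal_return_time(eps, n_max=100000):
    """Smallest n >= 1 with distance(n*rho mod 1, 0) < eps."""
    x = 0.0
    for n in range(1, n_max):
        x = (x + rho) % 1.0
        if min(x, 1.0 - x) < eps:
            return n
    return None

for eps in [0.3, 0.1, 0.03, 0.01, 0.003, 0.001]:
    print(f"eps = {eps:7.4f}  minimal return time = {minimal_return_time(eps)}")
```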

  5. Van Driest transformation and compressible wall-bounded flows

    NASA Technical Reports Server (NTRS)

    Huang, P. G.; Coleman, G. N.

    1994-01-01

    The validity of the Van Driest transformation was investigated using data from direct numerical simulations (DNS) of supersonic channel flow with isothermal cold walls. The DNS results cover a wide range of parameters and are suitable for examining the generality of the Van Driest transformation. The Van Driest law of the wall can be obtained from inner-layer similarity arguments. It was demonstrated that the Van Driest transformation cannot collapse the sublayer and log-layer velocity profiles simultaneously. Velocity and temperature predictions according to a composite mixing-length model were presented. Despite satisfactory agreement with the DNS data, the model should be regarded as an engineering guide rather than a rigorous analysis.
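
    The transformation itself is compact enough to state in code: the Van Driest velocity is the integral of sqrt(rho/rho_wall) with respect to the velocity along the wall-normal profile. The sketch below applies it to a synthetic, illustrative profile; the numbers are not from the DNS discussed above.

```python
import numpy as np

def van_driest_transform(u_plus, rho, rho_wall):
    """Van Driest transformed velocity  u_vd = int_0^u sqrt(rho/rho_wall) du,
    evaluated with the trapezoidal rule along the wall-normal profile."""
    integrand = np.sqrt(rho / rho_wall)
    u_vd = np.zeros_like(u_plus)
    u_vd[1:] = np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u_plus))
    return u_vd

# Synthetic illustrative profile: log-law-like velocity and a density that
# decreases away from a cold isothermal wall.
y_plus = np.logspace(0, 3, 60)
u_plus = np.where(y_plus < 11, y_plus, (1 / 0.41) * np.log(y_plus) + 5.2)
rho = 1.0 / (1.0 + 0.6 * (u_plus / u_plus[-1]) ** 2)   # density ratio rho/rho_wall
u_vd = van_driest_transform(u_plus, rho, rho_wall=1.0)

for i in range(0, len(y_plus), 10):
    print(f"y+ = {y_plus[i]:8.1f}   u+ = {u_plus[i]:6.2f}   u_vd+ = {u_vd[i]:6.2f}")
```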

  6. Efficient kinetic Monte Carlo method for reaction-diffusion problems with spatially varying annihilation rates

    NASA Astrophysics Data System (ADS)

    Schwarz, Karsten; Rieger, Heiko

    2013-03-01

    We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.

  7. A Coupled Multiphysics Approach for Simulating Induced Seismicity, Ground Acceleration and Structural Damage

    NASA Astrophysics Data System (ADS)

    Podgorney, Robert; Coleman, Justin; Wilkins, Andrew; Huang, Hai; Veeraraghavan, Swetha; Xia, Yidong; Permann, Cody

    2017-04-01

    Numerical modeling has played an important role in understanding the behavior of coupled subsurface thermal-hydro-mechanical (THM) processes associated with a number of energy and environmental applications since as early as the 1970s. While the ability to rigorously describe all key tightly coupled controlling physics still remains a challenge, there have been significant advances in recent decades. These advances are related primarily to the exponential growth of computational power, the development of more accurate equations of state, improvements in the ability to represent heterogeneity and reservoir geometry, and more robust nonlinear solution schemes. The work described in this paper documents the development and linkage of several fully-coupled and fully-implicit modeling tools. These tools simulate: (1) the dynamics of fluid flow, heat transport, and quasi-static rock mechanics; (2) seismic wave propagation from the sources of energy release through heterogeneous material; and (3) the soil-structural damage resulting from ground acceleration. These tools are developed in Idaho National Laboratory's parallel Multiphysics Object Oriented Simulation Environment, and are integrated together using a global implicit approach. The governing equations are presented, the numerical approach for simultaneously solving and coupling the three physics tools is discussed, and the data input and output methodology is outlined. An example is presented to demonstrate the capabilities of the coupled multiphysics approach. The example involves simulating a system conceptually similar to the geothermal development in Basel, Switzerland, and the resultant induced seismicity, ground motion and structural damage are predicted.

  8. Vector scattering analysis of TPF coronagraph pupil masks

    NASA Astrophysics Data System (ADS)

    Ceperley, Daniel P.; Neureuther, Andrew R.; Lieber, Michael D.; Kasdin, N. Jeremy; Shih, Ta-Ming

    2004-10-01

    Rigorous finite-difference time-domain electromagnetic simulation is used to simulate the scattering from prototypical pupil mask cross-section geometries and to quantify the differences from the normally assumed ideal on-off behavior. Shaped pupil plane masks are a promising technology for the TPF coronagraph mission. However, the stringent requirements placed on the optics require that the detailed behavior of the edge-effects of these masks be examined carefully. End-to-end optical system simulation is essential and an important aspect is the polarization and cross-section dependent edge-effects which are the subject of this paper. Pupil plane masks are similar in many respects to photomasks used in the integrated circuit industry. Simulation capabilities such as the FDTD simulator, TEMPEST, developed for analyzing polarization and intensity imbalance effects in nonplanar phase-shifting photomasks, offer a leg-up in analyzing coronagraph masks. However, the accuracy in magnitude and phase required for modeling a coronagraph system is extremely demanding and previously inconsequential errors may be of the same order of magnitude as the physical phenomena under study. In this paper, effects of thick masks, finite conductivity metals, and various cross-section geometries on the transmission of pupil-plane masks are illustrated. Undercutting the edge shape of Cr masks improves the effective opening width to within λ/5 of the actual opening but TE and TM polarizations require opposite compensations. The deviation from ideal is examined at the reference plane of the mask opening. Numerical errors in TEMPEST, such as numerical dispersion, perfectly matched layer reflections, and source haze are also discussed along with techniques for mitigating their impacts.
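
    TEMPEST is a full three-dimensional vector FDTD solver; the staggered leapfrog update at its core can nevertheless be illustrated with the standard one-dimensional Yee scheme below. The grid size, wavelength, and source placement are illustrative assumptions, and no mask geometry is modelled.

```python
import numpy as np

# Standard 1D Yee FDTD update (free space, Ez/Hy), illustrating the staggered
# leapfrog field advance underlying full-vector simulators such as TEMPEST.
c0 = 299792458.0
nz, nsteps = 400, 800
dz = 10e-9                      # 10 nm cells
dt = 0.5 * dz / c0              # Courant number 0.5
imp0 = 376.730313668            # free-space impedance

ez = np.zeros(nz)
hy = np.zeros(nz - 1)

src_pos, lam = 50, 500e-9       # source location and wavelength
for n in range(nsteps):
    # update H from the curl of E (staggered half a cell to the right)
    hy += (dt * c0 / dz) * (ez[1:] - ez[:-1]) / imp0
    # update E from the curl of H (interior nodes only; PEC ends)
    ez[1:-1] += (dt * c0 / dz) * (hy[1:] - hy[:-1]) * imp0
    # soft sinusoidal source
    ez[src_pos] += np.sin(2 * np.pi * c0 * n * dt / lam)

print("peak |Ez| on the grid:", np.max(np.abs(ez)))
```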

  9. Comparative ELM study between the observation by ECEI and linear/nonlinear simulation in the KSTAR plasmas

    NASA Astrophysics Data System (ADS)

    Kim, Minwoo; Park, Hyeon K.; Yun, Gunsu; Lee, Jaehyun; Lee, Jieun; Lee, Woochang; Jardin, Stephen; Xu, X. Q.; Kstar Team

    2015-11-01

    The modeling of the edge-localized mode (ELM) should be rigorously pursued for reliable and robust ELM control for steady-state long-pulse H-mode operation in ITER as well as DEMO. In KSTAR discharge #7328, the linear stability of the ELMs is investigated using the M3D-C1 and BOUT++ codes. This is achieved by linear simulation for the n = 8 mode structure of the ELM observed by the KSTAR electron cyclotron emission imaging (ECEI) systems. In the process of analysis, variations of the ELM growth rate due to the plasma equilibrium profiles and transport coefficients are investigated, and simulation results with the two codes are compared. The numerical simulations are extended to the nonlinear phase of the ELM dynamics, which includes saturation and crash of the modes. Preliminary results of the nonlinear simulations are compared with the measured images, especially from the saturation to the crash. This work is supported by NRF of Korea under contract no. NRF-2014M1A7A1A03029865, US DoE by LLNL under contract DE-AC52-07NA27344 and US DoE by PPPL under contract DE-AC02-09CH11466.

  10. Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases

    NASA Astrophysics Data System (ADS)

    Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.

    2018-01-01

    We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of the charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with Monte Carlo type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have been benchmarked with each other in some cases. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as this model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via including specific surface reaction coefficients (electron yields, sticking coefficients, etc). In order to test the models rigorously, comparison with experimental ‘benchmark data’ is necessary. Examples will be given regarding the studies of electron power absorption modes in O2, and CF4-Ar discharges, as well as on the effect of modifications of the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.
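
    One ingredient of the particle-in-cell/Monte Carlo collision codes discussed above is the null-collision sampling of free-flight times; a minimal sketch is given below with constant, illustrative collision frequencies (in a real code they depend on the electron energy and the cross sections of the gas mixture).

```python
import numpy as np

rng = np.random.default_rng(1)

# Null-collision free-flight sampling as used in PIC/MCC electron transport:
# flight times are drawn with a constant trial frequency nu_max, and a fraction
# of the trial events are rejected as "null" collisions.
nu_elastic = 2.0e8          # [1/s], illustrative
nu_ionisation = 0.5e8       # [1/s], illustrative
nu_max = 3.0e8              # trial frequency >= sum of the real frequencies

counts = {"elastic": 0, "ionisation": 0, "null": 0}
total_time = 0.0
for _ in range(200000):
    # exponential free flight with the constant trial frequency
    total_time += -np.log(rng.random()) / nu_max
    r = rng.random() * nu_max
    if r < nu_elastic:
        counts["elastic"] += 1
    elif r < nu_elastic + nu_ionisation:
        counts["ionisation"] += 1
    else:
        counts["null"] += 1          # rejected trial event: nothing happens

print(counts)
real = counts["elastic"] + counts["ionisation"]
print(f"effective real-collision rate: {real / total_time:.3e} 1/s "
      f"(expected {nu_elastic + nu_ionisation:.3e})")
```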

  11. 3D Staggered-Grid Finite-Difference Simulation of Acoustic Waves in Turbulent Moving Media

    NASA Astrophysics Data System (ADS)

    Symons, N. P.; Aldridge, D. F.; Marlin, D.; Wilson, D. K.; Sullivan, P.; Ostashev, V.

    2003-12-01

    Acoustic wave propagation in a three-dimensional heterogeneous moving atmosphere is accurately simulated with a numerical algorithm recently developed under the DOD Common High Performance Computing Software Support Initiative (CHSSI). Sound waves within such a dynamic environment are mathematically described by a set of four, coupled, first-order partial differential equations governing small-amplitude fluctuations in pressure and particle velocity. The system is rigorously derived from fundamental principles of continuum mechanics, ideal-fluid constitutive relations, and reasonable assumptions that the ambient atmospheric motion is adiabatic and divergence-free. An explicit, time-domain, finite-difference (FD) numerical scheme is used to solve the system for both pressure and particle velocity wavefields. The atmosphere is characterized by 3D gridded models of sound speed, mass density, and the three components of the wind velocity vector. Dependent variables are stored on staggered spatial and temporal grids, and centered FD operators possess 2nd-order and 4th-order space/time accuracy. Accurate sound wave simulation is achieved provided grid intervals are chosen appropriately. The gridding must be fine enough to reduce numerical dispersion artifacts to an acceptable level and maintain stability. The algorithm is designed to execute on parallel computational platforms by utilizing a spatial domain-decomposition strategy. Currently, the algorithm has been validated on four different computational platforms, and parallel scalability of approximately 85% has been demonstrated. Comparisons with analytic solutions for uniform and vertically stratified wind models indicate that the FD algorithm generates accurate results with either a vanishing pressure or vanishing vertical-particle velocity boundary condition. Simulations are performed using a kinematic turbulence wind profile developed with the quasi-wavelet method. In addition, preliminary results are presented using high-resolution 3D dynamic turbulent flowfields generated by a large-eddy simulation model of a stably stratified planetary boundary layer. Sandia National Laboratories is operated by Sandia Corporation, a Lockheed Martin Company, for the USDOE under contract 94-AL85000.

  12. Estimation of the breaking of rigor mortis by myotonometry.

    PubMed

    Vain, A; Kauppila, R; Vuori, E

    1996-05-31

    Myotonometry was used to detect breaking of rigor mortis. The myotonometer is a new instrument which measures the decaying oscillations of a muscle after a brief mechanical impact. The method gives two numerical parameters for rigor mortis, namely the period and decrement of the oscillations, both of which depend on the time elapsed after death. In the case of breaking the rigor mortis by muscle lengthening, both the oscillation period and decrement decreased, whereas shortening the muscle caused the opposite changes. Fourteen hours after breaking, the stiffness characteristics of the right and left m. biceps brachii, i.e. the oscillation periods, had become similar. However, the values for the decrement of the muscle, reflecting the dissipation of mechanical energy, maintained their differences.

  13. Simulations and model of the nonlinear Richtmyer–Meshkov instability

    DOE PAGES

    Dimonte, Guy; Ramaprabhu, P.

    2010-01-21

    The nonlinear evolution of the Richtmyer-Meshkov (RM) instability is investigated using numerical simulations with the FLASH code in two dimensions (2D). The purpose of the simulations is to develop an empirical nonlinear model of the RM instability that is applicable to inertial confinement fusion (ICF) and ejecta formation, namely, at large Atwood number A and scaled initial amplitude kh₀ (k ≡ wavenumber) of the perturbation. The FLASH code is first validated with a variety of RM experiments that evolve well into the nonlinear regime. They reveal that bubbles stagnate when they grow by an increment of 2/k and that spikes accelerate for A > 0.5 due to higher harmonics that focus them. These results are then compared with a variety of nonlinear models that are based on potential flow. We find that the models agree with simulations for moderate values of A < 0.9 and kh₀ < 1, but not for the larger values that characterize ICF and ejecta formation. We thus develop a new nonlinear empirical model that captures the simulation results consistent with potential flow for a broader range of A and kh₀. Our hope is that such empirical models concisely capture the RM simulations and inspire more rigorous solutions.

  14. Rigorous mathematical modelling for a Fast Corrector Power Supply in TPS

    NASA Astrophysics Data System (ADS)

    Liu, K.-B.; Liu, C.-Y.; Chien, Y.-C.; Wang, B.-S.; Wong, Y. S.

    2017-04-01

    To enhance the stability of the beam orbit, a Fast Orbit Feedback System (FOFB) eliminating undesired disturbances was installed and tested in the 3rd-generation synchrotron light source of the Taiwan Photon Source (TPS) of the National Synchrotron Radiation Research Center (NSRRC). The effectiveness of the FOFB greatly depends on the output performance of the Fast Corrector Power Supply (FCPS); therefore, the design and implementation of an accurate FCPS is essential. A rigorous mathematical model is very useful for shortening the design time and improving the performance of an FCPS. A rigorous mathematical model of a full-bridge FCPS in the FOFB of TPS, derived by the state-space averaging method, is therefore proposed in this paper. The MATLAB/SIMULINK software is used to construct the proposed mathematical model and to conduct simulations of the FCPS. The effects of different ADC resolutions on the output accuracy of the FCPS are investigated through simulation. An FCPS prototype is realized to demonstrate the effectiveness of the proposed rigorous mathematical model for the FCPS. Simulation and experimental results show that the proposed mathematical model is helpful for selecting the appropriate components to meet the accuracy requirements of an FCPS.

  15. Numerical sensitivity analysis of a variational data assimilation procedure for cardiac conductivities

    NASA Astrophysics Data System (ADS)

    Barone, Alessandro; Fenton, Flavio; Veneziani, Alessandro

    2017-09-01

    An accurate estimation of cardiac conductivities is critical in computational electro-cardiology, yet experimental results in the literature significantly disagree on the values and ratios between longitudinal and tangential coefficients. These are known to have a strong impact on the propagation of potential particularly during defibrillation shocks. Data assimilation is a procedure for merging experimental data and numerical simulations in a rigorous way. In particular, variational data assimilation relies on the least-square minimization of the misfit between simulations and experiments, constrained by the underlying mathematical model, which in this study is represented by the classical Bidomain system, or its common simplification given by the Monodomain problem. Operating on the conductivity tensors as control variables of the minimization, we obtain a parameter estimation procedure. As the theory of this approach currently provides only an existence proof and it is not informative for practical experiments, we present here an extensive numerical simulation campaign to assess practical critical issues such as the size and the location of the measurement sites needed for in silico test cases of potential experimental and realistic settings. This will be finalized with a real validation of the variational data assimilation procedure. Results indicate the presence of lower and upper bounds for the number of sites which guarantee an accurate and minimally redundant parameter estimation, the location of sites being generally non critical for properly designed experiments. An effective combination of parameter estimation based on the Monodomain and Bidomain models is tested for the sake of computational efficiency. Parameter estimation based on the Monodomain equation potentially leads to the accurate computation of the transmembrane potential in real settings.
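
    The variational idea, minimizing the least-squares misfit between simulated and measured signals over the conductivity, can be sketched with a one-dimensional diffusion surrogate in place of the Bidomain or Monodomain model. The domain, conductivity value, measurement sites, and optimizer below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Variational estimation of a scalar conductivity in a 1D diffusion surrogate:
# minimise the least-squares misfit between simulated and "measured" values at
# a few recording sites.  All numbers are illustrative.
N, L, T, nsteps = 100, 1.0, 0.2, 2000
dx, dt = L / N, T / 2000
x = np.linspace(0.0, L, N)
u0 = np.exp(-((x - 0.3) / 0.05) ** 2)        # initial excitation

def forward(sigma):
    """Explicit finite-difference solution of u_t = sigma * u_xx."""
    u = u0.copy()
    for _ in range(nsteps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]     # crude zero-flux ends
        u = u + dt * sigma * lap
    return u

sites = [20, 40, 60, 80]                      # measurement locations (indices)
sigma_true = 0.02
data = forward(sigma_true)[sites]             # synthetic "experimental" data

def misfit(sigma):
    return np.sum((forward(sigma)[sites] - data) ** 2)

result = minimize_scalar(misfit, bounds=(0.005, 0.1), method="bounded")
print(f"true sigma = {sigma_true:.3f}, estimated sigma = {result.x:.3f}")
```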

  16. Core-collapse supernovae as supercomputing science: A status report toward six-dimensional simulations with exact Boltzmann neutrino transport in full general relativity

    NASA Astrophysics Data System (ADS)

    Kotake, Kei; Sumiyoshi, Kohsuke; Yamada, Shoichi; Takiwaki, Tomoya; Kuroda, Takami; Suwa, Yudai; Nagakura, Hiroki

    2012-08-01

    This is a status report on our endeavor to reveal the mechanism of core-collapse supernovae (CCSNe) by large-scale numerical simulations. Multi-dimensionality of the supernova engine, general relativistic magnetohydrodynamics, energy and lepton number transport by neutrinos emitted from the forming neutron star, as well as nuclear interactions there, are all believed to play crucial roles in repelling infalling matter and producing energetic explosions. These ingredients are non-linearly coupled with one another in the dynamics of core collapse, bounce, and shock expansion. Serious quantitative studies of CCSNe hence make extensive numerical computations mandatory. Since neutrinos are neither in thermal nor in chemical equilibrium in general, their distributions in the phase space should be computed. This is a six-dimensional (6D) neutrino transport problem and quite a challenge, even for those with access to the most advanced numerical resources such as the "K computer". To tackle this problem, we have embarked on efforts on multiple fronts. In particular, we report in this paper our recent progresses in the treatment of multidimensional (multi-D) radiation hydrodynamics. We are currently proceeding on two different paths to the ultimate goal. In one approach, we employ an approximate but highly efficient scheme for neutrino transport and treat 3D hydrodynamics and/or general relativity rigorously; some neutrino-driven explosions will be presented and quantitative comparisons will be made between 2D and 3D models. In the second approach, on the other hand, exact, but so far Newtonian, Boltzmann equations are solved in two and three spatial dimensions; we will show some example test simulations. We will also address the perspectives of exascale computations on the next generation supercomputers.

  17. Prospect of Using Numerical Dynamo Model for Prediction of Geomagnetic Secular Variation

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2003-01-01

    Modeling of the Earth's core has reached a level of maturity where the incorporation of observations into the simulations through data assimilation has become feasible. Data assimilation is a method by which observations of a system are combined with a model output (or forecast) to obtain a best guess of the state of the system, called the analysis. The analysis is then used as an initial condition for the next forecast. By doing assimilation, not only shall we be able to partially predict the secular variation of the core field, but we could also use observations to further our understanding of dynamical states in the Earth's core. One of the first steps in the development of an assimilation system is a comparison between the observations and the model solution. The highly turbulent nature of core dynamics, along with the absence of any regular external forcing and constraint (which occurs in atmospheric dynamics, for example) means that short time comparisons (approx. 1000 years) cannot be made between model and observations. In order to make sensible comparisons, a direct insertion assimilation method has been implemented. In this approach, magnetic field observations at the Earth's surface have been substituted into the numerical model, such that the ratio of the multipole components to the dipole component from observation is adjusted at the core-mantle boundary and extended to the interior of the core, while the total magnetic energy remains unchanged. This adjusted magnetic field is then used as the initial field for a new simulation. In this way, a time-tagged simulation is created which can then be compared directly with observations. We present numerical solutions with and without data insertion and discuss their implications for the development of a more rigorous assimilation system.

  18. Numerical investigation of exact coherent structures in turbulent small-aspect-ratio Taylor-Couette flow

    NASA Astrophysics Data System (ADS)

    Krygier, Michael; Crowley, Christopher J.; Schatz, Michael F.; Grigoriev, Roman O.

    2017-11-01

    As suggested by recent theoretical and experimental studies, fluid turbulence can be described as a walk between neighborhoods of unstable nonchaotic solutions of the Navier-Stokes equation known as exact coherent structures (ECS). Finding ECS in an experimentally-accessible setting is the first step toward rigorous testing of the dynamical role of ECS in 3D turbulence. We found several ECS (both relative periodic orbits and relative equilibria) in a weakly turbulent regime of small-aspect-ratio Taylor-Couette flow with counter-rotating cylinders. This talk will discuss how the geometry of these solutions guides the evolution of turbulent flow in the simulations. This work is supported by the Army Research Office (Contract # W911NF-15-1-0471).

  19. Bifurcation and chaos analysis of a nonlinear electromechanical coupling relative rotation system

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Zhao, Shuang-Shuang; Sun, Bao-Ping; Zhang, Wen-Ming

    2014-09-01

    Hopf bifurcation and chaos of a nonlinear electromechanical coupling relative rotation system are studied in this paper. Considering the energy in air-gap field of AC motor, the dynamical equation of nonlinear electromechanical coupling relative rotation system is deduced by using the dissipation Lagrange equation. Choosing the electromagnetic stiffness as a bifurcation parameter, the necessary and sufficient conditions of Hopf bifurcation are given, and the bifurcation characteristics are studied. The mechanism and conditions of system parameters for chaotic motions are investigated rigorously based on the Silnikov method, and the homoclinic orbit is found by using the undetermined coefficient method. Therefore, Smale horseshoe chaos occurs when electromagnetic stiffness changes. Numerical simulations are also given, which confirm the analytical results.

  20. Identifying partial topology of complex dynamical networks via a pinning mechanism

    NASA Astrophysics Data System (ADS)

    Zhu, Shuaibing; Zhou, Jin; Lu, Jun-an

    2018-04-01

    In this paper, we study the problem of identifying the partial topology of complex dynamical networks via a pinning mechanism. By using the network synchronization theory and the adaptive feedback controlling method, we propose a method which can greatly reduce the number of nodes and observers in the response network. Particularly, this method can also identify the whole topology of complex networks. A theorem is established rigorously, from which some corollaries are also derived in order to make our method more cost-effective. Several numerical examples are provided to verify the effectiveness of the proposed method. In the simulation, an approach is also given to avoid possible identification failure caused by inner synchronization of the drive network.

  1. Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.

    PubMed

    Cowlagi, Raghvendra V; Tsiotras, Panagiotis

    2012-10-01

    We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.
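
    The flavour of a wavelet-based multiresolution cell decomposition can be conveyed with a Haar-style pyramid over an occupancy grid, keeping fine cells only near the vehicle. The sketch below is purely illustrative and is not the authors' decomposition or planner; the grid size, obstacle density, and window width are arbitrary assumptions.

```python
import numpy as np

# Haar-style multiresolution view of an occupancy grid: 2x2 averaging produces
# coarser approximation levels; a planner in the spirit of the paper keeps the
# finest cells only near the vehicle and uses coarser cells elsewhere.
rng = np.random.default_rng(3)
grid = (rng.random((16, 16)) < 0.2).astype(float)    # 1 = occupied

def coarsen(g):
    """One Haar approximation level: average non-overlapping 2x2 blocks."""
    return 0.25 * (g[0::2, 0::2] + g[1::2, 0::2] + g[0::2, 1::2] + g[1::2, 1::2])

levels = [grid]
while levels[-1].shape[0] > 2:
    levels.append(coarsen(levels[-1]))

for i, lvl in enumerate(levels):
    print(f"level {i}: {lvl.shape[0]:2d}x{lvl.shape[1]:2d} cells, "
          f"mean occupancy {lvl.mean():.2f}")

# high-resolution neighbourhood retained around the vehicle, coarse elsewhere
r, c, w = 3, 4, 3
local_patch = grid[max(r - w, 0):r + w + 1, max(c - w, 0):c + w + 1]
print("fine cells kept near the vehicle:", local_patch.size, "out of", grid.size)
```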

  2. Boundary control for a constrained two-link rigid-flexible manipulator with prescribed performance

    NASA Astrophysics Data System (ADS)

    Cao, Fangfei; Liu, Jinkun

    2018-05-01

    In this paper, we consider a boundary control problem for a constrained two-link rigid-flexible manipulator. The nonlinear system is described by hybrid ordinary differential equation-partial differential equation (ODE-PDE) dynamic model. Based on the coupled ODE-PDE model, boundary control is proposed to regulate the joint positions and eliminate the elastic vibration simultaneously. With the help of prescribed performance functions, the tracking error can converge to an arbitrarily small residual set and the convergence rate is no less than a certain pre-specified value. Asymptotic stability of the closed-loop system is rigorously proved by the LaSalle's Invariance Principle extended to infinite-dimensional system. Numerical simulations are provided to demonstrate the effectiveness of the proposed controller.

  3. Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas

    2017-04-01

    Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
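
    The basic exchange model is easy to simulate; the sketch below runs the mean-field version (all agents interact) and compares the empirical money distribution with the exponential Boltzmann-Gibbs form. The number of agents, average money, and number of steps are illustrative choices, and the graph-based setting of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

# Mean-field money-exchange model: at each step one randomly chosen agent gives
# one dollar to another randomly chosen agent, provided the giver is not broke.
n_agents, mean_money, n_steps = 2000, 10, 2_000_000
money = np.full(n_agents, mean_money)

givers = rng.integers(0, n_agents, n_steps)
takers = rng.integers(0, n_agents, n_steps)
for g, t in zip(givers, takers):
    if money[g] > 0 and g != t:
        money[g] -= 1
        money[t] += 1

# Compare the empirical distribution with the exponential (continuum) form
values, counts = np.unique(money, return_counts=True)
for m in range(0, 40, 5):
    empirical = counts[values == m][0] / n_agents if m in values else 0.0
    predicted = np.exp(-m / mean_money) / mean_money
    print(f"m = {m:3d}   empirical P = {empirical:.4f}   Boltzmann-Gibbs = {predicted:.4f}")
```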

  4. A simple model for indentation creep

    NASA Astrophysics Data System (ADS)

    Ginder, Ryan S.; Nix, William D.; Pharr, George M.

    2018-03-01

    A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.

  5. Thermostatted delta f

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krommes, J.A.

    2000-01-18

    The delta f simulation method is revisited. Statistical coarse-graining is used to rigorously derive the equation for the fluctuation delta f in the particle distribution. It is argued that completely collisionless simulation is incompatible with the achievement of true statistically steady states with nonzero turbulent fluxes because the variance of the particle weights w grows with time. To ensure such steady states, it is shown that for dynamically collisionless situations a generalized thermostat or W-stat may be used in lieu of a full collision operator to absorb the flow of entropy to unresolved fine scales in velocity space. The simplest W-stat can be implemented as a self-consistently determined, time-dependent damping applied to w. A precise kinematic analogy to thermostatted nonequilibrium molecular dynamics (NEMD) is pointed out, and the justification of W-stats for simulations of turbulence is discussed. An extrapolation procedure is proposed such that the long-time, steady-state, collisionless flux can be deduced from several short W-statted runs with large effective collisionality, and a numerical demonstration is given.

  6. Performance assessment of Large Eddy Simulation (LES) for modeling dispersion in an urban street canyon with tree planting

    NASA Astrophysics Data System (ADS)

    Moonen, P.; Gromke, C.; Dorer, V.

    2013-08-01

    The potential of a Large Eddy Simulation (LES) model to reliably predict near-field pollutant dispersion is assessed. To that end, detailed time-resolved numerical simulations of coupled flow and dispersion are conducted for a street canyon with tree planting. Different crown porosities are considered. The model performance is assessed in several steps, ranging from a qualitative comparison with measured concentrations, through statistical data analysis by means of scatter plots and box plots, to the calculation of objective validation metrics. The extensive validation effort highlights and quantifies notable features and shortcomings of the model, which would otherwise remain unnoticed. The model performance is found to be spatially non-uniform. Closer agreement with measurement data is achieved near the canyon ends than for the central part of the canyon, and typical model acceptance criteria are satisfied more easily for the leeward than for the windward canyon wall. This demonstrates the need for rigorous model evaluation. Only quality-assured models can be used with confidence to support assessment, planning and implementation of pollutant mitigation strategies.
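
    The abstract does not list the validation metrics and acceptance criteria used; the sketch below computes three metrics commonly applied in urban dispersion model evaluation (fractional bias, normalised mean square error, and the factor-of-two fraction) on illustrative concentration data.

```python
import numpy as np

def validation_metrics(c_obs, c_mod):
    """Common dispersion-model validation metrics: fractional bias (FB),
    normalised mean square error (NMSE) and factor-of-two fraction (FAC2).
    The specific metrics and thresholds used in the paper are not reproduced."""
    c_obs, c_mod = np.asarray(c_obs, float), np.asarray(c_mod, float)
    fb = (c_obs.mean() - c_mod.mean()) / (0.5 * (c_obs.mean() + c_mod.mean()))
    nmse = np.mean((c_obs - c_mod) ** 2) / (c_obs.mean() * c_mod.mean())
    ratio = c_mod / c_obs
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))
    return fb, nmse, fac2

# Illustrative wall-concentration data (arbitrary normalised units)
c_obs = np.array([12.0, 8.5, 5.1, 3.2, 2.4, 1.9, 15.0, 9.8])
c_mod = np.array([10.5, 9.2, 4.0, 3.9, 1.8, 2.2, 11.0, 12.1])

fb, nmse, fac2 = validation_metrics(c_obs, c_mod)
print(f"FB = {fb:+.2f}   NMSE = {nmse:.2f}   FAC2 = {fac2:.2f}")
```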

  7. Circuit-based versus full-wave modelling of active microwave circuits

    NASA Astrophysics Data System (ADS)

    Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.

    2018-03-01

    Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between the simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of measured parameters de-embedding, appropriate modelling of discrete components and giving specific recipes for good modelling practices.

  8. Current Challenges in the First Principle Quantitative Modelling of the Lower Hybrid Current Drive in Tokamaks

    NASA Astrophysics Data System (ADS)

    Peysson, Y.; Bonoli, P. T.; Chen, J.; Garofalo, A.; Hillairet, J.; Li, M.; Qian, J.; Shiraiwa, S.; Decker, J.; Ding, B. J.; Ekedahl, A.; Goniche, M.; Zhai, X.

    2017-10-01

    The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending the pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, and it is being considered for this purpose in the ITER tokamak. Nevertheless, although the basics of the LH wave in tokamak plasmas are well known, quantitative modeling of experimental observations based on first principles remains a highly challenging exercise, despite the considerable numerical effort devoted to it so far. In this context, a rigorous methodology must be applied in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce experimental shot-to-shot observations and also scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast-electron bremsstrahlung, internal inductance li, and LH-driven current at zero loop voltage to jointly constrain LH simulations is discussed, as well as the need for further improvements (diagnostics, codes, LH model) for robust interpretative and predictive simulations.

  9. Spacetime dynamics of a Higgs vacuum instability during inflation

    DOE PAGES

    East, William E.; Kearney, John; Shakya, Bibhushan; ...

    2017-01-31

    A remarkable prediction of the Standard Model is that, in the absence of corrections lifting the energy density, the Higgs potential becomes negative at large field values. If the Higgs field samples this part of the potential during inflation, the negative energy density may locally destabilize the spacetime. Here, we use numerical simulations of the Einstein equations to study the evolution of inflation-induced Higgs fluctuations as they grow towards the true (negative-energy) minimum. Our simulations show that forming a single patch of true vacuum in our past light cone during inflation is incompatible with the existence of our Universe; the boundary of the true vacuum region grows outward in a causally disconnected manner from the crunching interior, which forms a black hole. We also find that these black hole horizons may be arbitrarily elongated—even forming black strings—in violation of the hoop conjecture. Furthermore, by extending the numerical solution of the Fokker-Planck equation to the exponentially suppressed tails of the field distribution at large field values, we derive a rigorous correlation between a future measurement of the tensor-to-scalar ratio and the scale at which the Higgs potential must receive stabilizing corrections in order for the Universe to have survived inflation until today.

  10. Analysis of local warm forming of high strength steel using near infrared ray energy

    NASA Astrophysics Data System (ADS)

    Yang, W. H.; Lee, K.; Lee, E. H.; Yang, D. Y.

    2013-12-01

    The automotive industry has been pressed to satisfy more rigorous fuel efficiency requirements to promote energy conservation, safety features and cost containment. To satisfy this need, high strength steel has been developed and used for many different vehicle parts. The use of high strength steels, however, requires careful analysis and creativity in order to accommodate its relatively high springback behavior. An innovative method, called local warm forming with near infrared ray, has been developed to help promote the use of high strength steels in sheet metal forming. For this method, local regions of the work piece are heated using infrared ray energy, thereby promoting the reduction of springback behavior. In this research, a V-bend test is conducted with DP980. After springback, the bend angles for specimens without local heating are compared to those with local heating. Numerical analysis has been performed using the commercial program, DEFORM-2D. This analysis is carried out with the purpose of understanding how changes to the local stress distribution will affect the springback during the unloading process. The results between experimental and computational approaches are evaluated to assure the accuracy of the simulation. Subsequent numerical simulation studies are performed to explore best practices with respect to thermal boundary conditions, timing, and applicability to the production environment.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Tianfeng

    The goal of the proposed research is to create computational flame diagnostics (CFLD) that are rigorous numerical algorithms for systematic detection of critical flame features, such as ignition, extinction, and premixed and non-premixed flamelets, and to understand the underlying physicochemical processes controlling limit flame phenomena, flame stabilization, turbulence-chemistry interactions and pollutant emissions etc. The goal has been accomplished through an integrated effort on mechanism reduction, direct numerical simulations (DNS) of flames at engine conditions and a variety of turbulent flames with transport fuels, computational diagnostics, turbulence modeling, and DNS data mining and data reduction. The computational diagnostics are primarily based on the chemical explosive mode analysis (CEMA) and a recently developed bifurcation analysis using datasets from first-principle simulations of 0-D reactors, 1-D laminar flames, and 2-D and 3-D DNS (collaboration with J.H. Chen and S. Som at Argonne, and C.S. Yoo at UNIST). Non-stiff reduced mechanisms for transportation fuels amenable for 3-D DNS are developed through graph-based methods and timescale analysis. The flame structures, stabilization mechanisms, local ignition and extinction etc., and the rate controlling chemical processes are unambiguously identified through CFLD. CEMA is further employed to segment complex turbulent flames based on the critical flame features, such as premixed reaction fronts, and to enable zone-adaptive turbulent combustion modeling.

  12. Further Development of a New, Flux-Conserving Newton Scheme for the Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    1996-01-01

    This paper is one of a series of papers describing the development of a new numerical approach for solving the steady Navier-Stokes equations. The key features in the current development are (1) the discrete representation of the dependent variables by way of high order polynomial expansions, (2) the retention of all derivatives in the expansions as unknowns to be explicitly solved for, (3) the automatic balancing of fluxes at cell interfaces, and (4) the discrete simulation of both the integral and differential forms of the governing equations. The main purpose of this paper is, first, to provide a systematic and rigorous derivation of the conditions that are used to simulate the differential form of the Navier-Stokes equations, and second, to extend our previously-presented internal flow scheme to external flows and nonuniform grids. Numerical results are presented for high Reynolds number flow (Re = 100,000) around a finite flat plate, and detailed comparisons are made with the Blasius flat plate solution and Goldstein wake solution. It is shown that the error in the streamwise velocity decreases like r^α Δy², where α ≈ 0.25 and r = Δy/Δx is the grid aspect ratio.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, W. H., E-mail: whyang21@hyundai.com; Lee, K., E-mail: klee@deform.co.kr; Lee, E. H., E-mail: mtgs2@kaist.ac.kr, E-mail: dyyang@kaist.ac.kr

    The automotive industry has been pressed to satisfy more rigorous fuel efficiency requirements to promote energy conservation, safety features and cost containment. To satisfy this need, high strength steel has been developed and used for many different vehicle parts. The use of high strength steels, however, requires careful analysis and creativity in order to accommodate its relatively high springback behavior. An innovative method, called local warm forming with near infrared ray, has been developed to help promote the use of high strength steels in sheet metal forming. For this method, local regions of the work piece are heated using infrared ray energy, thereby promoting the reduction of springback behavior. In this research, a V-bend test is conducted with DP980. After springback, the bend angles for specimens without local heating are compared to those with local heating. Numerical analysis has been performed using the commercial program, DEFORM-2D. This analysis is carried out with the purpose of understanding how changes to the local stress distribution will affect the springback during the unloading process. The results between experimental and computational approaches are evaluated to assure the accuracy of the simulation. Subsequent numerical simulation studies are performed to explore best practices with respect to thermal boundary conditions, timing, and applicability to the production environment.

  14. An immersed boundary-lattice Boltzmann model for biofilm growth and its impact on the NAPL dissolution in porous media

    NASA Astrophysics Data System (ADS)

    Benioug, M.; Yang, X.

    2017-12-01

    The evolution of microbial phase within porous medium is a complex process that involves growth, mortality, and detachment of the biofilm or attachment of moving cells. A better understanding of the interactions among biofilm growth, flow and solute transport and a rigorous modeling of such processes are essential for a more accurate prediction of the fate of pollutants (e.g. NAPLs) in soils. However, very few works are focused on the study of such processes in multiphase conditions (oil/water/biofilm systems). Our proposed numerical model takes into account the mechanisms that control bacterial growth and its impact on the dissolution of NAPL. An Immersed Boundary - Lattice Boltzmann Model (IB-LBM) is developed for flow simulations along with non-boundary conforming finite volume methods (volume of fluid and reconstruction methods) used for reactive solute transport. A sophisticated cellular automaton model is also developed to describe the spatial distribution of bacteria. A series of numerical simulations have been performed on complex porous media. A quantitative diagram representing the transitions between the different biofilm growth patterns is proposed. The bioenhanced dissolution of NAPL in the presence of biofilms is simulated at the pore scale. A uniform dissolution approach has been adopted to describe the temporal evolution of trapped blobs. Our simulations focus on the dissolution of NAPL in abiotic and biotic conditions. In abiotic conditions, we analyze the effect of the spatial distribution of NAPL blobs on the dissolution rate under different assumptions (blobs size, Péclet number). In biotic conditions, different conditions are also considered (spatial distribution, reaction kinetics, toxicity) and analyzed. The simulated results are consistent with those obtained from the literature.

  15. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  16. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

    Transverse dispersion represents an important mixing process for transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory experimental set-up of quasi two-dimensional flow through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken at the tank experiment on the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d⁻¹ and 10.5 m d⁻¹. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10⁻⁴ m and 1.48×10⁻⁵ m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications. Copyright © 2012 Elsevier B.V. All rights reserved.
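
    The fitting step described above can be illustrated with a commonly used steady-state solution of the 2D advection-dispersion equation for a half-plane source, C(x, y) = C0/2 · erfc(y / (2·sqrt(alpha_T·x))); whether this is exactly the analytical solution used in the paper is an assumption. A sketch that fits the transverse dispersivity alpha_T to a noisy transverse concentration profile:

      # Fit alpha_T to a transverse tracer profile with an assumed half-plane-source
      # solution; the geometry, distance x_obs and noise level are illustrative only.
      import numpy as np
      from scipy.special import erfc
      from scipy.optimize import curve_fit

      x_obs = 0.5                                  # m, distance downstream of the source plane (assumed)
      y = np.linspace(-0.02, 0.02, 41)             # m, transverse sampling positions

      def transverse_profile(y, alpha_t, c0=1.0, x=x_obs):
          return 0.5 * c0 * erfc(y / (2.0 * np.sqrt(alpha_t * x)))

      rng = np.random.default_rng(3)
      c_meas = transverse_profile(y, 1.5e-5) + 0.01 * rng.standard_normal(y.size)

      (alpha_fit,), _ = curve_fit(lambda yy, a: transverse_profile(yy, a), y, c_meas,
                                  p0=[1e-5], bounds=(1e-7, 1e-3))
      print(f"fitted transverse dispersivity: {alpha_fit:.2e} m")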

  17. Comparison between PVI2D and Abreu–Johnson’s Model for Petroleum Vapor Intrusion Assessment

    PubMed Central

    Yao, Yijun; Wang, Yue; Verginelli, Iason; Suuberg, Eric M.; Ye, Jianfeng

    2018-01-01

    Recently, we have developed a two-dimensional analytical petroleum vapor intrusion model, PVI2D (petroleum vapor intrusion, two-dimensional), which can help users to easily visualize soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, reaction rate constant, soil characteristics, and building features. In this study, we made a full comparison of the results returned by PVI2D and those obtained using Abreu and Johnson’s three-dimensional numerical model (AJM). These comparisons, examined as a function of the source strength, source depth, and reaction rate constant, show that PVI2D can provide similar soil gas concentration profiles and source-to-indoor air attenuation factors (within one order of magnitude difference) as those by the AJM. The differences between the two models can be ascribed to some simplifying assumptions used in PVI2D and to some numerical limitations of the AJM in simulating strictly piecewise aerobic biodegradation and no-flux boundary conditions. Overall, the obtained results show that for cases involving homogenous source and soil, PVI2D can represent a valid alternative to more rigorous three-dimensional numerical models. PMID:29398981

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Qiang

    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.

  19. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

    A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited due to heavy computational loads. This study investigates a variety of means to reduce computational loads and increase the simulation areas. One is applying an LBM that treats the two phases as having the same density, while keeping numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the Capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0 × 10⁻³, whereas actual operation corresponds to Ca = 10⁻⁵-10⁻⁸. The limit is also investigated experimentally using an enlarged scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
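
    The capillary-number argument above is a simple formula check: Ca = mu·u/sigma, with the computational cost scaling like 1/Ca and a usable upper limit of about 3.0 × 10⁻³ reported in this record. A small sketch with assumed water properties (not taken from the paper):

      # Check where candidate characteristic velocities sit relative to the reported
      # Ca limit of ~3e-3; viscosity and surface tension are assumed typical values.
      mu = 1.0e-3        # Pa*s, liquid water viscosity (assumed)
      sigma = 0.072      # N/m, water-air surface tension (assumed)

      def capillary_number(u, mu=mu, sigma=sigma):
          return mu * u / sigma

      for u in (1e-4, 1e-2, 0.2):                  # m/s, candidate velocities (assumed)
          ca = capillary_number(u)
          status = "within" if ca <= 3.0e-3 else "beyond"
          print(f"u = {u:g} m/s -> Ca = {ca:.1e} ({status} the reported limit)")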

  20. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  1. Application of the finite-element method and the eigenmode expansion method to investigate the periodic and spectral characteristic of discrete phase-shift fiber Bragg grating

    NASA Astrophysics Data System (ADS)

    He, Yue-Jing; Hung, Wei-Chih; Syu, Cheng-Jyun

    2017-12-01

    The finite-element method (FEM) and eigenmode expansion method (EEM) were adopted to analyze the guided modes and spectrum of a phase-shift fiber Bragg grating for five phase-shift values (0, π/4, π/2, 3π/4, and π). In previous studies on optical fiber gratings, conventional coupled-mode theory was central. This theory involves abstruse physics and complex computational processes, and is thus challenging for users. Therefore, a numerical simulation method was coupled with a simple and rigorous design procedure to help beginners and users overcome the difficulty of entering the field; in addition, graphical simulation results were presented. To reduce the difference between the simulated context and the actual context, a perfectly matched layer and a perfectly reflecting boundary were added to the FEM and the EEM. When the FEM was used for grid cutting, the object meshing method and the boundary meshing method proposed in this study were used to effectively enhance computational accuracy and substantially reduce the time required for simulation. In summary, users can use the simulation results in this study to easily and rapidly design optical fiber communication systems and optical sensors with specific spectral characteristics.

  2. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Havu, V.; Fritz Haber Institute of the Max Planck Society, Berlin; Blum, V.

    2009-12-01

    We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
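
    The record compares "top-down" grid-partitioning schemes for building the localized grid batches used in O(N) integration. A conceptually simple top-down scheme is recursive bisection of the grid points along the longest bounding-box axis; the sketch below is a generic illustration of that idea, not the specific algorithm evaluated in the paper:

      # Generic top-down partitioning of integration grid points into localized
      # batches by recursive coordinate bisection (illustrative, assumed scheme).
      import numpy as np

      def partition(points, max_batch=64):
          """Split an (n, 3) array of points along its longest extent until every
          batch holds at most max_batch points."""
          if len(points) <= max_batch:
              return [points]
          spans = points.max(axis=0) - points.min(axis=0)
          axis = int(np.argmax(spans))
          order = np.argsort(points[:, axis])
          half = len(points) // 2
          return (partition(points[order[:half]], max_batch) +
                  partition(points[order[half:]], max_batch))

      grid = np.random.default_rng(0).uniform(-5.0, 5.0, size=(10000, 3))
      batches = partition(grid)
      print(len(batches), "batches, largest:", max(len(b) for b in batches))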

  3. Parameter-free driven Liouville-von Neumann approach for time-dependent electronic transport simulations in open quantum systems

    DOE PAGES

    Zelovich, Tamar; Hansen, Thorsten; Liu, Zhen-Fei; ...

    2017-03-02

    A parameter-free version of the recently developed driven Liouville-von Neumann equation [T. Zelovich et al., J. Chem. Theory Comput. 10(8), 2927-2941 (2014)] for electronic transport calculations in molecular junctions is presented. The single driving rate, appearing as a fitting parameter in the original methodology, is replaced by a set of state-dependent broadening factors applied to the different single-particle lead levels. These broadening factors are extracted explicitly from the self-energy of the corresponding electronic reservoir and are fully transferable to any junction incorporating the same lead model. Furthermore, the performance of the method is demonstrated via tight-binding and extended Hückel calculations of simple junction models. Our analytic considerations and numerical results indicate that the developed methodology constitutes a rigorous framework for the design of "black-box" algorithms to simulate electron dynamics in open quantum systems out of equilibrium.
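
    The state-dependent broadening factors described above are obtained from the imaginary part of the lead self-energy. For the textbook case of a semi-infinite one-dimensional tight-binding lead with hopping t, the surface self-energy is known in closed form, Sigma(E) = (E - i·sqrt(4t² - E²))/2 inside the band, so Gamma(E) = -2 Im Sigma(E) = sqrt(4t² - E²). The sketch below evaluates these broadenings for the levels of a finite lead segment; this toy lead model is an assumption, not necessarily the lead model used in the paper:

      # State-dependent broadenings Gamma_i = -2 Im Sigma(eps_i) for a semi-infinite
      # 1D tight-binding lead (assumed toy model): Gamma(E) = sqrt(4 t^2 - E^2) in band.
      import numpy as np

      t = 1.0                                      # lead hopping (assumed units)

      def broadening(eps):
          eps = np.asarray(eps, dtype=float)
          gamma = np.zeros_like(eps)
          inside = np.abs(eps) < 2.0 * t
          gamma[inside] = np.sqrt(4.0 * t**2 - eps[inside] ** 2)
          return gamma

      # Single-particle levels of an N-site finite lead segment (open-chain eigenvalues)
      N = 8
      lead_levels = 2.0 * t * np.cos(np.pi * np.arange(1, N + 1) / (N + 1))
      print(np.column_stack([lead_levels, broadening(lead_levels)]))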

  4. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsoulakis, Markos

    2014-08-09

    Our two key accomplishments in the first three years were towards the development of (1) a mathematically rigorous and at the same time computationally flexible framework for parallelization of Kinetic Monte Carlo methods, and its implementation on GPUs, and (2) spatial multilevel coarse-graining methods for Monte Carlo sampling and molecular simulation. A common underlying theme in both these lines of our work is the development of numerical methods which are at the same time both computationally efficient and reliable, the latter in the sense that they provide controlled-error approximations for coarse observables of the simulated molecular systems. Finally, our key accomplishment in the last year of the grant is that we started developing (3) pathwise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics and in particular of nonequilibrium extended (high-dimensional) systems. We discuss these three research directions in some detail below, along with the related publications.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhmedzhanov, I M; Kibalov, D S; Smirnov, V K

    We report a detailed numerical simulation of the reflection of visible light from a sub-wavelength grating with a rectangular profile on the silicon surface. Simulation is carried out by the effective refractive index method and rigorous coupled-wave analysis. The dependences of the reflectance on the grating depth, fill factor and angle of incidence for TE and TM polarisations are obtained and analysed. Good agreement between the results obtained by the two methods for grating periods of ∼100 nm is found. The possibility of reducing the polarised light reflectance to about 1% by adjusting the depth and the grating fill factor is demonstrated. The characteristics of the Brewster effect manifestation (pseudo-Brewster angle) in the system under study are considered. The possibility of the pseudo-Brewster angle existence and its absence for both polarisations of the incident light is shown as a function of the parameters of a rectangular nanostructure on the surface. (laser applications and other topics in quantum electronics)
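
    The effective-refractive-index picture used above replaces the sub-wavelength grating by a uniform layer whose index differs for TE and TM polarisation (zeroth-order effective-medium theory), after which the reflectance follows from ordinary thin-film formulas. A normal-incidence sketch with assumed material data and geometry (the paper also treats oblique incidence and compares against rigorous coupled-wave analysis):

      # Zeroth-order effective-medium indices of a rectangular sub-wavelength grating
      # on silicon, followed by single-layer thin-film reflectance at normal incidence.
      # Index, wavelength, fill factor and depth below are illustrative assumptions.
      import numpy as np

      n_air, n_si = 1.0, 3.88 + 0.02j              # approximate silicon index near 633 nm
      wavelength = 633e-9                           # m
      fill, depth = 0.5, 120e-9                     # grating fill factor and depth (assumed)

      n_te = np.sqrt(fill * n_si**2 + (1.0 - fill) * n_air**2)
      n_tm = 1.0 / np.sqrt(fill / n_si**2 + (1.0 - fill) / n_air**2)

      def reflectance(n_layer):
          """Airy formula for one homogeneous layer between air and a silicon substrate."""
          r01 = (n_air - n_layer) / (n_air + n_layer)
          r12 = (n_layer - n_si) / (n_layer + n_si)
          beta = 2.0 * np.pi * n_layer * depth / wavelength
          r = (r01 + r12 * np.exp(2j * beta)) / (1.0 + r01 * r12 * np.exp(2j * beta))
          return abs(r) ** 2

      print(f"R_TE = {reflectance(n_te):.3f}, R_TM = {reflectance(n_tm):.3f}")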

  6. Motion of Deformable Drops Through Porous Media

    NASA Astrophysics Data System (ADS)

    Zinchenko, Alexander Z.; Davis, Robert H.

    2017-01-01

    This review describes recent progress in the fundamental understanding of deformable drop motion through porous media with well-defined microstructures, through rigorous first-principles hydrodynamical simulations and experiments. Tight squeezing conditions, when the drops are much larger than the pore throats, are particularly challenging numerically, as the drops nearly coat the porous material skeleton with small surface clearance, requiring very high surface resolution in the algorithms. Small-scale prototype problems for flow-induced drop motion through round capillaries and three-dimensional (3D) constrictions between solid particles, and for gravity-induced squeezing through round orifices and 3D constrictions, show how forcing above critical conditions is needed to overcome trapping. Scaling laws for the squeezing time are suggested. Large-scale multidrop/multiparticle simulations for emulsion flow through a random granular material with multiple drop breakup show that the drop phase generally moves faster than the carrier fluid; both phase velocities equilibrate much faster to the statistical steady state than does the drop-size distribution.

  7. Dual RBFNNs-Based Model-Free Adaptive Control With Aspen HYSYS Simulation.

    PubMed

    Zhu, Yuanming; Hou, Zhongsheng; Qian, Feng; Du, Wenli

    2017-03-01

    In this brief, we propose a new data-driven model-free adaptive control (MFAC) method with dual radial basis function neural networks (RBFNNs) for a class of discrete-time nonlinear systems. The main novelty lies in that it provides a systematic design method for the controller structure by the direct usage of I/O data, rather than using a first-principle model or an offline identified plant model. The controller structure is determined by the equivalent-dynamic-linearization representation of the ideal nonlinear controller, and the controller parameters are tuned by the pseudogradient information extracted from the I/O data of the plant, which can deal with the unknown nonlinear system. The stability of the closed-loop control system and the stability of the training process for the RBFNNs are guaranteed by rigorous theoretical analysis. Meanwhile, the effectiveness and the applicability of the proposed method are further demonstrated by a numerical example and an Aspen HYSYS simulation of a distillation column in the crude styrene production process.
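
    The pseudogradient idea referred to above can be seen in the basic compact-form MFAC loop: a scalar pseudo-gradient estimate is updated from the I/O increments and then drives an incremental control law. The sketch below is that textbook baseline on a toy plant, without the dual-RBFNN controller structure proposed in the brief; plant, gains and setpoint are all assumptions:

      # Basic compact-form MFAC (pseudo-gradient estimate + incremental control law)
      # on an assumed toy nonlinear plant; not the dual-RBFNN scheme of the paper.
      import numpy as np

      def plant(y, u):
          return 0.6 * y + 0.3 * np.tanh(u) + 0.1 * u      # unknown to the controller

      eta, mu, rho, lam = 0.8, 1.0, 0.6, 0.5               # tuning constants (assumed)
      y_ref = 1.0                                           # constant setpoint
      y = u = y_prev = u_prev = 0.0
      phi = 1.0                                             # pseudo-gradient estimate

      for _ in range(200):
          dy, du = y - y_prev, u - u_prev
          if abs(du) > 1e-6:                                # projection-type update of phi
              phi += eta * du / (mu + du**2) * (dy - phi * du)
          if abs(phi) < 1e-3:                               # reset rule keeps phi usable
              phi = 1.0
          y_prev, u_prev = y, u
          u = u + rho * phi / (lam + phi**2) * (y_ref - y)  # MFAC control law
          y = plant(y, u)

      print("output after 200 steps:", round(y, 3), "setpoint:", y_ref)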

  8. Parameter-free driven Liouville-von Neumann approach for time-dependent electronic transport simulations in open quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelovich, Tamar; Hansen, Thorsten; Liu, Zhen-Fei

    A parameter-free version of the recently developed driven Liouville-von Neumann equation [T. Zelovich et al., J. Chem. Theory Comput. 10(8), 2927-2941 (2014)] for electronic transport calculations in molecular junctions is presented. The single driving rate, appearing as a fitting parameter in the original methodology, is replaced by a set of state-dependent broadening factors applied to the different single-particle lead levels. These broadening factors are extracted explicitly from the self-energy of the corresponding electronic reservoir and are fully transferable to any junction incorporating the same lead model. Furthermore, the performance of the method is demonstrated via tight-binding and extended Hückel calculations of simple junction models. Our analytic considerations and numerical results indicate that the developed methodology constitutes a rigorous framework for the design of "black-box" algorithms to simulate electron dynamics in open quantum systems out of equilibrium.

  9. Rigorous electromagnetic simulation applied to alignment systems

    NASA Astrophysics Data System (ADS)

    Deng, Yunfei; Pistor, Thomas V.; Neureuther, Andrew R.

    2001-09-01

    Rigorous electromagnetic simulation with TEMPEST is used to provide benchmark data and understanding of key parameters in the design of topographical features of alignment marks. Periodic large silicon trenches are analyzed as a function of wavelength (530-800 nm), duty cycle, depth, slope and angle of incidence. The signals are well behaved except when the trench width becomes about 1 micrometer or smaller. Segmentation of the trenches to form 3D marks shows that a segmentation period of 2-5 wavelengths makes the diffraction in the (1,1) direction about 1/3 to 1/2 of that in the main first order (1,0). Transmission alignment marks for nanoimprint lithography, using the difference between the +1 and -1 reflected orders, showed a sensitivity of the difference signal to misalignment of 0.7%/nm for rigorous simulation and 0.5%/nm for simple ray-tracing. The sensitivity to a slanted substrate indentation was a 10 nm offset per degree of tilt from horizontal.

  10. Solitary water wave interactions

    NASA Astrophysics Data System (ADS)

    Craig, W.; Guyenne, P.; Hammack, J.; Henderson, D.; Sulem, C.

    2006-05-01

    This article concerns the pairwise nonlinear interaction of solitary waves in the free surface of a body of water lying over a horizontal bottom. Unlike solitary waves in many completely integrable model systems, solitary waves for the full Euler equations do not collide elastically; after interactions, there is a nonzero residual wave that trails the post-collision solitary waves. In this report on new numerical and experimental studies of such solitary wave interactions, we verify that this is the case, both in head-on collisions (the counterpropagating case) and overtaking collisions (the copropagating case), quantifying the degree to which interactions are inelastic. In the situation in which two identical solitary waves undergo a head-on collision, we compare the asymptotic predictions of Su and Mirie [J. Fluid Mech. 98, 509 (1980)] and Byatt-Smith [J. Fluid Mech. 49, 625 (1971)], the wavetank experiments of Maxworthy [J. Fluid Mech. 76, 177 (1976)], and the numerical results of Cooker, Weidman, and Bale [J. Fluid Mech. 342, 141 (1997)] with independent numerical simulations, in which we quantify the phase change, the run-up, and the form of the residual wave and its Fourier signature in both small- and large-amplitude interactions. This updates the prior numerical observations of inelastic interactions in Fenton and Rienecker [J. Fluid Mech. 118, 411 (1982)]. In the case of two nonidentical solitary waves, our precision wavetank experiments are compared with numerical simulations, again observing the run-up, phase lag, and generation of a residual from the interaction. Considering overtaking solitary wave interactions, we compare our experimental observations, numerical simulations, and the asymptotic predictions of Zou and Su [Phys. Fluids 29, 2113 (1986)], and again we quantify the inelastic residual after collisions in the simulations. Geometrically, our numerical simulations of overtaking interactions fit into the three categories of Korteweg-deVries two-soliton solutions defined in Lax [Commun. Pure Appl. Math. 21, 467 (1968)], with, however, a modification in the parameter regime. In all cases we have considered, collisions are seen to be inelastic, although the degree to which interactions depart from elastic is very small. Finally, we give several theoretical results: (i) a relationship between the change in amplitude of solitary waves due to a pairwise collision and the energy carried away from the interaction by the residual component, and (ii) a rigorous estimate of the size of the residual component of pairwise solitary wave collisions. This estimate is consistent with the analytic results of Schneider and Wayne [Commun. Pure Appl. Math. 53, 1475 (2000)], Wright [SIAM J. Math. Anal. 37, 1161 (2005)], and Bona, Colin, and Lannes [Arch. Rat. Mech. Anal. 178, 373 (2005)]. However, in light of our numerical data, both (i) and (ii) indicate a need to reevaluate the asymptotic results in Su and Mirie [J. Fluid Mech. 98, 509 (1980)] and Zou and Su [Phys. Fluids 29, 2113 (1986)].
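
    The overtaking interactions above are categorized against Korteweg-de Vries two-soliton solutions. As a self-contained toy (KdV rather than the full Euler equations actually solved in the study), an overtaking collision can be integrated with a Fourier pseudospectral scheme:

      # Toy overtaking collision of two KdV solitons, u_t + 6 u u_x + u_xxx = 0,
      # with a Fourier pseudospectral RK4 integrator and 2/3-rule dealiasing.
      # Domain, soliton speeds and step sizes are illustrative assumptions.
      import numpy as np

      N, L = 256, 80.0
      dx = L / N
      x = dx * np.arange(N)
      k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
      dealias = np.abs(k) < (2.0 / 3.0) * np.abs(k).max()

      def soliton(c, x0):
          return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0)) ** 2

      def rhs(u_hat):
          u = np.real(np.fft.ifft(u_hat))
          nonlinear = -3j * k * np.fft.fft(u * u) * dealias   # -6 u u_x = -3 (u^2)_x
          return nonlinear + 1j * k**3 * u_hat                # -u_xxx in Fourier space

      u0 = soliton(1.2, 15.0) + soliton(0.3, 35.0)            # taller soliton starts behind
      u_hat, dt, n_steps = np.fft.fft(u0), 1.0e-3, 40000      # integrate to t = 40
      for _ in range(n_steps):
          k1 = rhs(u_hat)
          k2 = rhs(u_hat + 0.5 * dt * k1)
          k3 = rhs(u_hat + 0.5 * dt * k2)
          k4 = rhs(u_hat + dt * k3)
          u_hat = u_hat + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

      u_final = np.real(np.fft.ifft(u_hat))
      print("conserved mass before/after:", u0.sum() * dx, u_final.sum() * dx)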

  11. Design of transmission-type phase holograms for a compact radar-cross-section measurement range at 650 GHz.

    PubMed

    Noponen, Eero; Tamminen, Aleksi; Vaaja, Matti

    2007-07-10

    A design formalism is presented for transmission-type phase holograms for use in a submillimeter-wave compact radar-cross-section (RCS) measurement range. The design method is based on rigorous electromagnetic grating theory combined with conventional hologram synthesis. Hologram structures consisting of a curved groove pattern on a 320 mm × 280 mm Teflon plate are designed to transform an incoming spherical wave at 650 GHz into an output wave generating a 100 mm diameter planar field region (quiet zone) at a distance of 1 m. The reconstructed quiet-zone field is evaluated by a numerical simulation method. The uniformity of the quiet-zone field is further improved by reoptimizing the goal field. Measurement results are given for a test hologram fabricated on Teflon.

  12. Structure-induced asymmetry between counterpropagating modes and the reciprocity principle in whistle-geometry ring lasers

    NASA Astrophysics Data System (ADS)

    Osiński, Marek; Kalagara, Hemashilpa; Lee, Hosuk; Smolyakov, Gennady A.

    2017-08-01

    Greatly enhanced high-speed modulation performance has recently been predicted in numerical calculations for a novel injection-locking scheme involving a distributed Bragg reflector master laser monolithically integrated with a unidirectional whistle-geometry semiconductor microring laser. Operation of these devices relies on the assumption of a large difference between the modal losses experienced by counterpropagating modes. In this work, we confirm the unidirectionality of the whistle-geometry configuration through rigorous three-dimensional finite-difference time-domain (FDTD) simulation by showing a strong asymmetry in photon lifetimes between the two counterpropagating modes. We also show that a similar asymmetry occurs in three-port couplers, whose structure resembles the coupling section of whistle-geometry lasers. We explain why these results do not violate the Helmholtz reciprocity principle.

  13. On the characterization of the heterogeneous mechanical response of human brain tissue.

    PubMed

    Forte, Antonio E; Gentleman, Stephen M; Dini, Daniele

    2017-06-01

    The mechanical characterization of brain tissue is a complex task that scientists have tried to accomplish for over 50 years. The results in the literature often differ by orders of magnitude because of the lack of a standard testing protocol. Different testing conditions (including humidity, temperature, strain rate), the methodology adopted, and the variety of the species analysed are all potential sources of discrepancies in the measurements. In this work, we present a rigorous experimental investigation on the mechanical properties of human brain, covering both grey and white matter. The influence of testing conditions is also shown and thoroughly discussed. The material characterization performed is finally adopted to provide inputs to a mathematical formulation suitable for numerical simulations of brain deformation during surgical procedures.

  14. Tsallis thermostatistics for finite systems: a Hamiltonian approach

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.; Moreira, André A.; Andrade, José S., Jr.; Almeida, Murilo P.

    2003-05-01

    The derivation of the Tsallis generalized canonical distribution from the traditional approach of the Gibbs microcanonical ensemble is revisited (Phys. Lett. A 193 (1994) 140). We show that finite systems whose Hamiltonians obey a generalized homogeneity relation rigorously follow the nonextensive thermostatistics of Tsallis. In the thermodynamical limit, however, our results indicate that the Boltzmann-Gibbs statistics is always recovered, regardless of the type of potential among interacting particles. This approach provides, moreover, a one-to-one correspondence between the generalized entropy and the Hamiltonian structure of a wide class of systems, revealing a possible origin for the intrinsic nonlinear features present in the Tsallis formalism that lead naturally to power-law behavior. Finally, we confirm these exact results through extensive numerical simulations of the Fermi-Pasta-Ulam chain of anharmonic oscillators.

  15. Optical simulations of organic light-emitting diodes through a combination of rigorous electromagnetic solvers and Monte Carlo ray-tracing methods

    NASA Astrophysics Data System (ADS)

    Bahl, Mayank; Zhou, Gui-Rong; Heller, Evan; Cassarly, William; Jiang, Mingming; Scarmozzino, Rob; Gregory, G. Groot

    2014-09-01

    Over the last two decades there has been extensive research done to improve the design of Organic Light Emitting Diodes (OLEDs) so as to enhance light extraction efficiency, improve beam shaping, and allow color tuning through techniques such as the use of patterned substrates, photonic crystal (PCs) gratings, back reflectors, surface texture, and phosphor down-conversion. Computational simulation has been an important tool for examining these increasingly complex designs. It has provided insights for improving OLED performance as a result of its ability to explore limitations, predict solutions, and demonstrate theoretical results. Depending upon the focus of the design and scale of the problem, simulations are carried out using rigorous electromagnetic (EM) wave optics based techniques, such as finite-difference time-domain (FDTD) and rigorous coupled wave analysis (RCWA), or through ray optics based technique such as Monte Carlo ray-tracing. The former are typically used for modeling nanostructures on the OLED die, and the latter for modeling encapsulating structures, die placement, back-reflection, and phosphor down-conversion. This paper presents the use of a mixed-level simulation approach which unifies the use of EM wave-level and ray-level tools. This approach uses rigorous EM wave based tools to characterize the nanostructured die and generate both a Bidirectional Scattering Distribution function (BSDF) and a far-field angular intensity distribution. These characteristics are then incorporated into the ray-tracing simulator to obtain the overall performance. Such mixed-level approach allows for comprehensive modeling of the optical characteristic of OLEDs and can potentially lead to more accurate performance than that from individual modeling tools alone.

  16. Image synthesis for SAR system, calibration and processor design

    NASA Technical Reports Server (NTRS)

    Holtzman, J. C.; Abbott, J. L.; Kaupp, V. H.; Frost, V. S.

    1978-01-01

    The Point Scattering Method of simulating radar imagery rigorously models all aspects of the imaging radar phenomena. Its computational algorithms operate on a symbolic representation of the terrain test site to calculate such parameters as range, angle of incidence, resolution cell size, etc. Empirical backscatter data and elevation data are utilized to model the terrain. Additionally, the important geometrical/propagation effects such as shadow, foreshortening, layover, and local angle of incidence are rigorously treated. Applications of radar image simulation to a proposed calibrated SAR system are highlighted: soil moisture detection and vegetation discrimination.

  17. Embedded ensemble propagation for improving performance, portability, and scalability of uncertainty quantification on emerging computational architectures

    DOE PAGES

    Phipps, Eric T.; D'Elia, Marta; Edwards, Harold C.; ...

    2017-04-18

    In this study, quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
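
    The core idea described above, propagating a group of samples together so that solver data structures and arithmetic are shared across the ensemble, can be mimicked in a few lines of NumPy, although the actual implementation in the paper is built on C++ templates inside Trilinos. The sketch below advances a batch of uncertain diffusivities through a single explicit 1D heat-equation solve:

      # NumPy analogue of embedded ensemble propagation: one vectorized update loop
      # advances all ensemble members at once (illustrative toy problem; the paper's
      # implementation is template-based C++ within Trilinos).
      import numpy as np

      n_x, n_samples, n_steps = 200, 32, 4000
      dx, dt = 1.0 / (n_x - 1), 5.0e-6               # dt respects the explicit stability limit
      x = np.linspace(0.0, 1.0, n_x)

      rng = np.random.default_rng(7)
      kappa = rng.uniform(0.5, 1.5, size=n_samples)   # uncertain diffusivities (assumed range)

      u = np.tile(np.sin(np.pi * x), (n_samples, 1))  # same initial condition for every sample
      for _ in range(n_steps):
          lap = (u[:, :-2] - 2.0 * u[:, 1:-1] + u[:, 2:]) / dx**2
          u[:, 1:-1] += dt * kappa[:, None] * lap     # one update shared by all samples
          u[:, 0] = u[:, -1] = 0.0                    # Dirichlet boundaries

      qoi = u[:, n_x // 2]                            # midpoint temperature per sample
      print("ensemble mean and std of the QoI:", qoi.mean(), qoi.std())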

  18. Numerical Analysis of the Dynamics of Nonlinear Solids and Structures

    DTIC Science & Technology

    2008-08-01

    to arrive at a new numerical scheme that rigorously exhibits the dissipative character of the so-called canonical free energy characteristic of... UCLA), February 14, 2006. 5. "Numerical Integration of the Nonlinear Dynamics of Elastoplastic Solids," keynote lecture, 3rd European Conference on Computational Mechanics (ECCM 3), Lisbon, Portugal, June 5-9, 2006. 6. "Energy-Momentum Schemes for Finite Strain Plasticity," keynote lecture, 7th

  19. Bootstrapping the (A1, A2) Argyres-Douglas theory

    NASA Astrophysics Data System (ADS)

    Cornagliotto, Martina; Lemos, Madalena; Liendo, Pedro

    2018-03-01

    We apply bootstrap techniques in order to constrain the CFT data of the (A1, A2) Argyres-Douglas theory, which is arguably the simplest of the Argyres-Douglas models. We study the four-point function of its single Coulomb branch chiral ring generator and put numerical bounds on the low-lying spectrum of the theory. Of particular interest is an infinite family of semi-short multiplets labeled by the spin ℓ. Although the conformal dimensions of these multiplets are protected, their three-point functions are not. Using the numerical bootstrap we impose rigorous upper and lower bounds on their values for spins up to ℓ = 20. Through a recently obtained inversion formula, we also estimate them for sufficiently large ℓ, and the comparison of both approaches shows consistent results. We also give a rigorous numerical range for the OPE coefficient of the next operator in the chiral ring, and estimates for the dimension of the first R-symmetry neutral non-protected multiplet for small spin.

  20. The space-time solution element method: A new numerical approach for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Chang, Sin-Chung

    1995-01-01

    This paper is one of a series of papers describing the development of a new numerical method for the Navier-Stokes equations. Unlike conventional numerical methods, the current method concentrates on the discrete simulation of both the integral and differential forms of the Navier-Stokes equations. Conservation of mass, momentum, and energy in space-time is explicitly provided for through a rigorous enforcement of both the integral and differential forms of the governing conservation laws. Using local polynomial expansions to represent the discrete primitive variables on each cell, fluxes at cell interfaces are evaluated and balanced using exact functional expressions. No interpolation or flux limiters are required. Because of the generality of the current method, it applies equally to the steady and unsteady Navier-Stokes equations. In this paper, we generalize and extend the authors' 2-D, steady state implicit scheme. A general closure methodology is presented so that all terms up through a given order in the local expansions may be retained. The scheme is also extended to nonorthogonal Cartesian grids. Numerous flow fields are computed and results are compared with known solutions. The high accuracy of the scheme is demonstrated through its ability to accurately resolve developing boundary layers on coarse grids. Finally, we discuss applications of the current method to the unsteady Navier-Stokes equations.

  1. Parallel numerical modeling of hybrid-dimensional compositional non-isothermal Darcy flows in fractured porous media

    NASA Astrophysics Data System (ADS)

    Xing, F.; Masson, R.; Lopez, S.

    2017-09-01

    This paper introduces a new discrete fracture model accounting for non-isothermal compositional multiphase Darcy flows and complex networks of fractures with intersecting, immersed and non-immersed fractures. The so called hybrid-dimensional model using a 2D model in the fractures coupled with a 3D model in the matrix is first derived rigorously starting from the equi-dimensional matrix fracture model. Then, it is discretized using a fully implicit time integration combined with the Vertex Approximate Gradient (VAG) finite volume scheme which is adapted to polyhedral meshes and anisotropic heterogeneous media. The fully coupled systems are assembled and solved in parallel using the Single Program Multiple Data (SPMD) paradigm with one layer of ghost cells. This strategy allows for a local assembly of the discrete systems. An efficient preconditioner is implemented to solve the linear systems at each time step and each Newton type iteration of the simulation. The numerical efficiency of our approach is assessed on different meshes, fracture networks, and physical settings in terms of parallel scalability, nonlinear convergence and linear convergence.

  2. Probabilistic Space Weather Forecasting: a Bayesian Perspective

    NASA Astrophysics Data System (ADS)

    Camporeale, E.; Chandorkar, M.; Borovsky, J.; Care', A.

    2017-12-01

    Most of the Space Weather forecasts, both at the operational and research level, are not probabilistic in nature. Unfortunately, a prediction that does not provide a confidence level is not very useful in a decision-making scenario. Nowadays, forecast models range from purely data-driven, machine learning algorithms, to physics-based approximations of first-principle equations (and everything that sits in between). Uncertainties pervade all such models, at every level: from the raw data to the finite-precision implementation of numerical methods. The most rigorous way of quantifying the propagation of uncertainties is by embracing a Bayesian probabilistic approach. One of the simplest and most robust machine learning techniques in the Bayesian framework is Gaussian Process regression and classification. Here, we present the application of Gaussian Processes to the problems of the DST geomagnetic index forecast, the solar wind type classification, and the estimation of diffusion parameters in radiation belt modeling. In each of these very diverse problems, the GP approach rigorously provides forecasts in the form of predictive distributions. In turn, these distributions can be used as input for ensemble simulations in order to quantify the amplification of uncertainties. We show that we have achieved excellent results in all of the standard metrics used to evaluate our models, with very modest computational cost.
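
    Gaussian Process regression returns a predictive distribution (mean and variance) rather than a point forecast, which is what makes the probabilistic framing above possible. A minimal scikit-learn sketch on toy data (not the DST, solar-wind or radiation-belt datasets of the abstract):

      # Minimal GP regression with a predictive mean and standard deviation
      # (toy 1D data; kernel choice and noise level are assumptions).
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      t_train = np.sort(rng.uniform(0.0, 10.0, 40))[:, None]
      y_train = np.sin(t_train).ravel() + 0.1 * rng.standard_normal(40)

      kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t_train, y_train)

      t_test = np.linspace(0.0, 12.0, 200)[:, None]
      mean, std = gp.predict(t_test, return_std=True)       # predictive distribution
      print("predictive std grows outside the training range:", std[0], std[-1])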

  3. Rigorous Results for the Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas; Reed, Stephanie

    2018-05-01

    This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
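
    The first of the three models is easy to simulate in its spatially explicit form: place the agents on a ring, let a random agent trade with a nearest neighbor, and redistribute their combined money uniformly at random. Under the rigorous results discussed above, the long-run wealth distribution approaches an exponential law, for which the standard deviation equals the mean; the sketch below checks that numerically (agent count and event count are assumptions):

      # Uniform reshuffling model with local (nearest-neighbor) interactions on a ring.
      # For an exponential limiting distribution, std(money) should approach mean(money).
      import numpy as np

      rng = np.random.default_rng(11)
      n_agents, mean_money, n_events = 2000, 10.0, 500000
      money = np.full(n_agents, mean_money)

      for _ in range(n_events):
          i = int(rng.integers(n_agents))
          j = (i + int(rng.choice((-1, 1)))) % n_agents     # nearest neighbor on the ring
          total = money[i] + money[j]
          share = rng.random()
          money[i], money[j] = share * total, (1.0 - share) * total

      print("mean:", money.mean(), "std:", money.std())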

  4. Velocity field calculation for non-orthogonal numerical grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G. P.

    2015-03-01

    Computational grids containing cell faces that do not align with an orthogonal (e.g. Cartesian, cylindrical) coordinate system are routinely encountered in porous-medium numerical simulations. Such grids are referred to in this study as non-orthogonal grids because some cell faces are not orthogonal to a coordinate system plane (e.g. xy, yz or xz plane in Cartesian coordinates). Non-orthogonal grids are routinely encountered at the Savannah River Site in porous-medium flow simulations for Performance Assessments and groundwater flow modeling. Examples include grid lines that conform to the sloping roof of a waste tank or disposal unit in a 2D Performance Assessment simulation, and grid surfaces that conform to undulating stratigraphic surfaces in a 3D groundwater flow model. Particle tracking is routinely performed after a porous-medium numerical flow simulation to better understand the dynamics of the flow field and/or as an approximate indication of the trajectory and timing of advective solute transport. Particle tracks are computed by integrating the velocity field from cell to cell starting from designated seed (starting) positions. An accurate velocity field is required to attain accurate particle tracks. However, many numerical simulation codes report only the volumetric flowrate (e.g. PORFLOW) and/or flux (flowrate divided by area) crossing cell faces. For an orthogonal grid, the normal flux at a cell face is a component of the Darcy velocity vector in the coordinate system, and the pore velocity for particle tracking is attained by dividing by water content. For a non-orthogonal grid, the flux normal to a cell face that lies outside a coordinate plane is not a true component of velocity with respect to the coordinate system. Nonetheless, normal fluxes are often taken as Darcy velocity components, either naively or with accepted approximation. To enable accurate particle tracking or otherwise present an accurate depiction of the velocity field for a non-orthogonal grid, Darcy velocity components are rigorously derived in this study from normal fluxes to cell faces, which are assumed to be provided by or readily computed from porous-medium simulation code output. The normal fluxes are presumed to satisfy mass balances for every computational cell, and if so, the derived velocity fields are consistent with these mass balances. Derivations are provided for general two-dimensional quadrilateral and three-dimensional hexagonal systems, and for the commonly encountered special cases of perfectly vertical side faces in 2D and 3D and a rectangular footprint in 3D.
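
    For a non-orthogonal cell, the flux normal to a slanted face is not a coordinate-aligned velocity component, which is the issue the derivation above addresses. One simple way to illustrate the reconstruction is a least-squares fit of a cell-centered Darcy velocity v to the face data n_f · v = q_f; this is an assumed illustration, not necessarily the exact derivation in the report:

      # Least-squares recovery of a cell-centered Darcy velocity from normal fluxes
      # on the faces of a non-orthogonal 2D quadrilateral (illustrative sketch only).
      import numpy as np

      # Quadrilateral cell with a sloping top face (e.g. conforming to a tank roof),
      # vertices ordered counter-clockwise.
      verts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.4], [0.0, 1.0]])

      def outward_normals(verts):
          """Outward unit normals of the faces of a counter-clockwise polygon."""
          normals = []
          for a, b in zip(verts, np.roll(verts, -1, axis=0)):
              edge = b - a
              normals.append(np.array([edge[1], -edge[0]]) / np.linalg.norm(edge))
          return np.array(normals)

      normals = outward_normals(verts)

      # Normal fluxes as a simulator would report them, generated here from a known
      # velocity so the reconstruction can be verified.
      v_true = np.array([1.0, 0.2])
      q = normals @ v_true

      v_rec, *_ = np.linalg.lstsq(normals, q, rcond=None)
      print("reconstructed Darcy velocity:", v_rec, " target:", v_true)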

  5. Thermostatted δf

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krommes, J.A.

    1999-05-01

    The δf simulation method is revisited. Statistical coarse graining is used to rigorously derive the equation for the fluctuation δf in the particle distribution. It is argued that completely collisionless simulation is incompatible with the achievement of true statistically steady states with nonzero turbulent fluxes because the variance W of the particle weights w grows with time. To ensure such steady states, it is shown that for dynamically collisionless situations a generalized thermostat or "W stat" may be used in lieu of a full collision operator to absorb the flow of entropy to unresolved fine scales in velocity space. The simplest W stat can be implemented as a self-consistently determined, time-dependent damping applied to w. A precise kinematic analogy to thermostatted nonequilibrium molecular dynamics is pointed out, and the justification of W stats for simulations of turbulence is discussed. An extrapolation procedure is proposed such that the long-time, steady-state, collisionless flux can be deduced from several short W-statted runs with large effective collisionality, and a numerical demonstration is given. © 1999 American Institute of Physics.
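
    As a schematic of the thermostat idea only (not the delta-f implementation of the record), the toy sketch below holds the mean-square weight W near a target by applying a self-consistently chosen damping to every weight at each step; the drive term, target value and time step are invented for illustration.

        # Toy "W stat": a random drive grows the weight variance, and a damping
        # rate chosen from the current W relaxes it back toward the target.
        import numpy as np

        rng = np.random.default_rng(1)
        n_particles, dt, w_target = 10000, 0.01, 0.05
        w = np.zeros(n_particles)

        for step in range(5000):
            w += np.sqrt(dt) * rng.normal(0.0, 1.0, n_particles)   # weight "drive"
            W = np.mean(w**2)
            nu = max(0.0, (W - w_target) / (2.0 * dt * max(W, 1e-30)))  # damping rate
            w *= 1.0 - nu * dt                                      # thermostat step
        print(f"steady mean-square weight: {np.mean(w**2):.3f} (target {w_target})")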

  6. Cause and Cure - Deterioration in Accuracy of CFD Simulations with Use of High-Aspect-Ratio Triangular/Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where simplex elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect-ratio simplex elements is formulated using two different approaches and presented here. Drawing insights from the analysis, a potential solution to avoid that pitfall is also provided as part of this work. Furthermore, through numerical simulations of practical viscous problems involving high-Reynolds-number flows, it is shown how the gradient evaluation procedures of the CESE framework can be effectively used to produce accurate and stable results on such high-aspect-ratio simplex meshes.

  7. Advanced numerical technique for analysis of surface and bulk acoustic waves in resonators using periodic metal gratings

    NASA Astrophysics Data System (ADS)

    Naumenko, Natalya F.

    2014-09-01

    A numerical technique characterized by a unified approach for the analysis of different types of acoustic waves utilized in resonators in which a periodic metal grating is used for excitation and reflection of such waves is described. The combination of the Finite Element Method analysis of the electrode domain with the Spectral Domain Analysis (SDA) applied to the adjacent upper and lower semi-infinite regions, which may be multilayered and include air as a special case of a dielectric material, enables rigorous simulation of the admittance in resonators using surface acoustic waves, Love waves, plate modes including Lamb waves, Stoneley waves, and other waves propagating along the interface between two media, and waves with transient structure between the mentioned types. The matrix formalism with improved convergence incorporated into SDA provides fast and robust simulation for multilayered structures with arbitrary thickness of each layer. The described technique is illustrated by a few examples of its application to various combinations of LiNbO3, isotropic silicon dioxide and silicon with a periodic array of Cu electrodes. The wave characteristics extracted from the admittance functions change continuously with the variation of the film and plate thicknesses over wide ranges, even when the wave nature changes. The transformation of the wave nature with the variation of the layer thicknesses is illustrated by diagrams and contour plots of the displacements calculated at resonant frequencies.

  8. Reinventing the High School Government Course: Rigor, Simulations, and Learning from Text

    ERIC Educational Resources Information Center

    Parker, Walter C.; Lo, Jane C.

    2016-01-01

    The high school government course is arguably the main site of formal civic education in the country today. This article presents the curriculum that resulted from a multiyear study aimed at improving the course. The pedagogic model, called "Knowledge in Action," centers on a rigorous form of project-based learning where the projects are…

  9. Stochastic Ocean Predictions with Dynamically-Orthogonal Primitive Equations

    NASA Astrophysics Data System (ADS)

    Subramani, D. N.; Haley, P., Jr.; Lermusiaux, P. F. J.

    2017-12-01

    The coastal ocean is a prime example of multiscale nonlinear fluid dynamics. Ocean fields in such regions are complex and intermittent, with unstationary heterogeneous statistics. Due to the limited measurements, there are multiple sources of uncertainties, including the initial conditions, boundary conditions, forcing, parameters, and even the model parameterizations and equations themselves. For efficient and rigorous quantification and prediction of these uncertainties, the stochastic Dynamically Orthogonal (DO) PDEs for a primitive equation ocean modeling system with a nonlinear free surface are derived and numerical schemes for their space-time integration are obtained. Detailed numerical studies with idealized-to-realistic regional ocean dynamics are completed. These include consistency checks for the numerical schemes and comparisons with ensemble realizations. As an illustrative example, we simulate the 4-d multiscale uncertainty in the Middle Atlantic/New York Bight region during the months of Jan to Mar 2017. To provide initial conditions for the uncertainty subspace, uncertainties in the region were objectively analyzed using historical data. The DO primitive equations were subsequently integrated in space and time. The probability distribution function (pdf) of the ocean fields is compared to in-situ, remote sensing, and opportunity data collected during the coincident POSYDON experiment. Results show that our probabilistic predictions had skill and were 3 to 4 orders of magnitude faster than classic ensemble schemes.

  10. Forward modelling of global gravity fields with 3D density structures and an application to the high-resolution (~2 km) gravity fields of the Moon

    NASA Astrophysics Data System (ADS)

    Šprlák, M.; Han, S.-C.; Featherstone, W. E.

    2017-12-01

    Rigorous modelling of the spherical gravitational potential spectra from the volumetric density and geometry of an attracting body is discussed. Firstly, we derive mathematical formulas for the spatial analysis of spherical harmonic coefficients. Secondly, we present a numerically efficient algorithm for rigorous forward modelling. We consider the finite-amplitude topographic modelling methods as special cases, with additional postulates on the volumetric density and geometry. Thirdly, we implement our algorithm in the form of computer programs and test their correctness with respect to the finite-amplitude topography routines. For this purpose, synthetic and realistic numerical experiments, applied to the gravitational field and geometry of the Moon, are performed. We also investigate the optimal choice of input parameters for the finite-amplitude modelling methods. Fourthly, we exploit the rigorous forward modelling for the determination of the spherical gravitational potential spectra inferred by lunar crustal models with uniform, laterally variable, radially variable, and spatially (3D) variable bulk density. We also analyse these four different crustal models in terms of their spectral characteristics and band-limited radial gravitation. We demonstrate the applicability of the rigorous forward modelling using currently available computational resources up to degree and order 2519 of the spherical harmonic expansion, which corresponds to a resolution of 2.2 km on the surface of the Moon. Computer codes, a user manual and scripts developed for the purposes of this study are publicly available to potential users.

  11. The MINERVA Software Development Process

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.

    2017-01-01

    This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.

  12. Efficient Integration of Coupled Electrical-Chemical Systems in Multiscale Neuronal Simulations

    PubMed Central

    Brocke, Ekaterina; Bhalla, Upinder S.; Djurfeldt, Mikael; Hellgren Kotaleski, Jeanette; Hanke, Michael

    2016-01-01

    Multiscale modeling and simulation in neuroscience is gaining scientific attention due to its growing importance and unexplored capabilities. For instance, it can help to acquire a better understanding of biological phenomena that have important features at multiple scales of time and space; these include synaptic plasticity, memory formation and modulation, and homeostasis. There are several ways to organize multiscale simulations depending on the scientific problem and the system to be modeled. One of the possibilities is to simulate different components of a multiscale system simultaneously and exchange data when required. The latter may become a challenging task for several reasons. First, the components of a multiscale system usually span different spatial and temporal scales, such that rigorous analysis of possible coupling solutions is required. Then, the components can be defined by different mathematical formalisms. For certain classes of problems a number of coupling mechanisms have been proposed and successfully used. However, a strict mathematical theory is missing in many cases. Recent work in the field has not so far investigated artifacts that may arise during coupled integration of different approximation methods. Moreover, in neuroscience, the coupling of widely used numerical fixed step size solvers may lead to unexpected inefficiency. In this paper we address the question of possible numerical artifacts that can arise during the integration of a coupled system. We develop an efficient strategy to couple the components comprising a multiscale test problem in neuroscience. We introduce an efficient coupling method based on the second-order backward differentiation formula (BDF2) numerical approximation. The method uses adaptive step size integration with an error estimation proposed by Skelboe (2000). The method shows a significant advantage over conventional fixed step size solvers used in neuroscience for similar problems. We explore different coupling strategies that define the organization of computations between system components. We study the importance of an appropriate approximation of exchanged variables during the simulation. The analysis shows a substantial impact of these aspects on the solution accuracy in the application to our multiscale neuroscientific test problem. We believe that the ideas presented in the paper may contribute substantially to the development of a robust and efficient framework for multiscale brain modeling and simulations in neuroscience. PMID:27672364

  14. Analytical calculation on the determination of steep side wall angles from far field measurements

    NASA Astrophysics Data System (ADS)

    Cisotto, Luca; Pereira, Silvania F.; Urbach, H. Paul

    2018-06-01

    In the semiconductor industry, the performance and capabilities of the lithographic process are evaluated by measuring specific structures. These structures are often gratings of which the shape is described by a few parameters such as period, middle critical dimension, height, and side wall angle (SWA). Upon direct measurement or retrieval of these parameters, the determination of the SWA suffers from considerable inaccuracies. Although the scattering effects that steep SWAs have on the illumination can be obtained with rigorous numerical simulations, analytical models constitute a very useful tool to get insights into the problem we are treating. In this paper, we develop an approach based on analytical calculations to describe the scattering of a cliff and a ridge with steep SWAs. We also propose a detection system to determine the SWAs of the structures.

  15. Reliable spacecraft rendezvous without velocity measurement

    NASA Astrophysics Data System (ADS)

    He, Shaoming; Lin, Defu

    2018-03-01

    This paper investigates the problem of finite-time velocity-free autonomous rendezvous for spacecraft in the presence of external disturbances during the terminal phase. First of all, to address the lack of relative velocity measurements, a robust observer is proposed to estimate the unknown relative velocity information in a finite time. It is shown that the effect of external disturbances on the estimation precision can be suppressed to a relatively low level. With the reconstructed velocity information, a finite-time output feedback control law is then formulated to stabilize the rendezvous system. Theoretical analysis and rigorous proof show that the relative position and its rate can converge to a small compact region in finite time. Numerical simulations are performed to evaluate the performance of the proposed approach in the presence of external disturbances and actuator faults.

  16. Robust adaptive cruise control of high speed trains.

    PubMed

    Faieghi, Mohammadreza; Jalali, Aliakbar; Mashhadi, Seyed Kamal-e-ddin Mousavi

    2014-03-01

    The cruise control problem of high speed trains in the presence of unknown parameters and external disturbances is considered. In particular, a Lyapunov-based robust adaptive controller is presented to achieve asymptotic tracking and disturbance rejection. The system under consideration is nonlinear, MIMO and non-minimum phase. To deal with the limitations arising from the unstable zero-dynamics, we perform an output redefinition such that the zero-dynamics with respect to the new outputs become stable. Rigorous stability analyses are presented which establish the boundedness of all the internal states and, simultaneously, the asymptotic stability of the tracking error dynamics. The results are presented for two common configurations of high speed trains, i.e. the DD and PPD designs, based on the multi-body model and are verified by several numerical simulations. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Generation of parabolic similaritons in tapered silicon photonic wires: comparison of pulse dynamics at telecom and mid-infrared wavelengths.

    PubMed

    Lavdas, Spyros; Driscoll, Jeffrey B; Jiang, Hongyi; Grote, Richard R; Osgood, Richard M; Panoiu, Nicolae C

    2013-10-01

    We study the generation of parabolic self-similar optical pulses in tapered Si photonic nanowires (Si-PhNWs) at both telecom (λ=1.55 μm) and mid-infrared (λ=2.2 μm) wavelengths. Our computational study is based on a rigorous theoretical model, which fully describes the influence of linear and nonlinear optical effects on pulse propagation in Si-PhNWs with arbitrarily varying width. Numerical simulations demonstrate that, in the normal dispersion regime, optical pulses evolve naturally into parabolic pulses upon propagation in millimeter-long tapered Si-PhNWs, with the efficiency of this pulse-reshaping process being strongly dependent on the spectral and pulse parameter regime in which the device operates, as well as the particular shape of the Si-PhNWs.

  18. Generalized plasma dispersion function: One-solve-all treatment, visualizations, and application to Landau damping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Hua-Sheng

    2013-09-15

    A unified, fast, and effective approach is developed for numerical calculation of the well-known plasma dispersion function with extensions from the Maxwellian distribution to almost arbitrary distribution functions, such as the δ, flat top, triangular, κ or Lorentzian, slowing down, and incomplete Maxwellian distributions. The singularity and analytic continuation problems are also solved generally. Given that the usual conclusion γ ∝ ∂f₀/∂v is only a rough approximation when discussing the distribution function effects on Landau damping, this approach provides a useful tool for rigorous calculations of the linear wave and instability properties of plasma for general distribution functions. The results are also verified via a linear initial value simulation approach. Intuitive visualizations of the generalized plasma dispersion function are also provided.
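
    For the classical Maxwellian baseline that the record generalizes, the plasma dispersion function can be evaluated numerically from the Faddeeva function, Z(ζ) = i√π w(ζ); the sketch below shows only this standard case, not the generalized distributions treated in the paper.

        # Maxwellian plasma dispersion function via the Faddeeva function.
        import numpy as np
        from scipy.special import wofz

        def plasma_dispersion(zeta):
            # Z(zeta) = i*sqrt(pi)*w(zeta) for a Maxwellian background
            return 1j * np.sqrt(np.pi) * wofz(zeta)

        def plasma_dispersion_prime(zeta):
            # Z'(zeta) = -2*(1 + zeta*Z(zeta)), used in Landau-damping estimates
            return -2.0 * (1.0 + zeta * plasma_dispersion(zeta))

        z = 1.5 + 0.1j
        print(plasma_dispersion(z), plasma_dispersion_prime(z))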

  19. Nanospectroscopy of thiacyanine dye molecules adsorbed on silver nanoparticle clusters

    NASA Astrophysics Data System (ADS)

    Ralević, Uroš; Isić, Goran; Anicijević, Dragana Vasić; Laban, Bojana; Bogdanović, Una; Lazović, Vladimir M.; Vodnik, Vesna; Gajić, Radoš

    2018-03-01

    The adsorption of thiacyanine dye molecules on citrate-stabilized silver nanoparticle clusters drop-cast onto freshly cleaved mica or highly oriented pyrolytic graphite surfaces is examined using colocalized surface-enhanced Raman spectroscopy and atomic force microscopy. The incidence of dye Raman signatures in photoluminescence hotspots identified around nanoparticle clusters is considered for both citrate- and borate-capped silver nanoparticles and found to be substantially lower in the former case, suggesting that the citrate anions impede the efficient dye adsorption. Rigorous numerical simulations of light scattering on random nanoparticle clusters are used for estimating the electromagnetic enhancement and elucidating the hotspot formation mechanism. The majority of the enhanced Raman signal, estimated to be more than 90%, is found to originate from the nanogaps between adjacent nanoparticles in the cluster, regardless of the cluster size and geometry.

  20. A dynamic gain equalizer based on holographic polymer dispersed liquid crystal gratings

    NASA Astrophysics Data System (ADS)

    Xin, Zhaohui; Cai, Jiguang; Shen, Guotu; Yang, Baocheng; Zheng, Jihong; Gu, Lingjuan; Zhuang, Songlin

    2006-12-01

    A dynamic gain equalizer consisting of gratings made of holographic polymer dispersed liquid crystal is explored, and its structure and principle are presented. The properties of the holographic polymer dispersed liquid crystal grating are analyzed in light of the rigorous coupled-wave theory. An experimental study is also conducted in which a beam of infrared laser light was incident on a grating sample and an alternating current electric field was applied. The electro-optical properties of the grating and the influence of the applied field were observed. The results of the experiment agree well with those of the theory. A design method for the dynamic gain equalizer, aided by numerical simulation, is also presented. The study shows that holographic polymer dispersed liquid crystal gratings have great potential to play a role in fiber-optic communication.

  1. Analytical formulation of lunar cratering asymmetries

    NASA Astrophysics Data System (ADS)

    Wang, Nan; Zhou, Ji-Lin

    2016-10-01

    Context. The cratering asymmetry of a bombarded satellite is related to both its orbit and impactors. The inner solar system impactor populations, that is, the main-belt asteroids (MBAs) and the near-Earth objects (NEOs), have dominated during the late heavy bombardment (LHB) and ever since, respectively. Aims: We formulate the lunar cratering distribution and verify the cratering asymmetries generated by the MBAs as well as the NEOs. Methods: Based on a planar model that excludes the terrestrial and lunar gravitations on the impactors and assuming the impactor encounter speed with Earth v_enc is higher than the lunar orbital speed v_M, we rigorously integrated the lunar cratering distribution, and derived its approximation to the first order of v_M/v_enc. Numerical simulations of lunar bombardment by the MBAs during the LHB were performed with an Earth-Moon distance a_M = 20-60 Earth radii in five cases. Results: The analytical model directly proves the existence of a leading/trailing asymmetry and the absence of near/far asymmetry. The approximate form of the leading/trailing asymmetry is (1 + A_1 cos β), which decreases as the apex distance β increases. The numerical simulations show evidence of a pole/equator asymmetry as well as the leading/trailing asymmetry, and the former is empirically described as (1 + A_2 cos 2ϕ), which decreases as the latitude modulus |ϕ| increases. The amplitudes A_1 and A_2 are reliable measurements of asymmetries. Our analysis explicitly indicates the quantitative relations between cratering distribution and bombardment conditions (impactor properties and the lunar orbital status), like A_1 ∝ v_M/v_enc, resulting in a method for reproducing the bombardment conditions through measuring the asymmetry. Mutual confirmation between the analytical model and numerical simulations is found in terms of the cratering distribution and its variation with a_M. Estimates of A_1 for crater density distributions generated by the MBAs and the NEOs are 0.101-0.159 and 0.117, respectively.
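
    The asymmetry amplitude A_1 can, in principle, be estimated by a simple least-squares fit of the (1 + A_1 cos β) law to crater densities binned by apex distance; the sketch below does this on synthetic data with an assumed amplitude, purely to illustrate the fitting step, and is not the catalogue analysis of the paper.

        # Fit density = c0 + c1*cos(beta) to synthetic crater densities, then A1 = c1/c0.
        import numpy as np

        rng = np.random.default_rng(2)
        beta = np.linspace(0.0, np.pi, 37)               # apex distance [rad]
        true_A1 = 0.12                                   # assumed, for the synthetic data
        density = 1.0 + true_A1 * np.cos(beta) + rng.normal(0.0, 0.01, beta.size)

        A = np.column_stack([np.ones_like(beta), np.cos(beta)])
        c0, c1 = np.linalg.lstsq(A, density, rcond=None)[0]
        print(f"fitted A1 = {c1 / c0:.3f} (true {true_A1})")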

  2. Understanding the seismic wave propagation inside and around an underground cavity from a 3D numerical survey

    NASA Astrophysics Data System (ADS)

    Esterhazy, Sofi; Schneider, Felix; Perugia, Ilaria; Bokelmann, Götz

    2017-04-01

    Motivated by the need to detect an underground cavity within the procedure of an On-Site Inspection (OSI) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO), which might be caused by a nuclear explosion/weapon testing, we aim to provide a basic numerical study of the wave propagation around and inside such an underground cavity. One method to investigate the geophysical properties of an underground cavity allowed by the Comprehensive Nuclear-Test-Ban Treaty is referred to as "resonance seismometry" - a resonance method that uses passive or active seismic techniques, relying on seismic cavity vibrations. This method is in fact not yet entirely determined by the Treaty and, so far, there are only very few experimental examples that have been suitably documented to build a proper scientific groundwork. This motivates us to investigate the problem on a purely numerical level and to simulate these events based on recent advances in numerical modeling of wave propagation problems. Our numerical study includes the full elastic wave field in three dimensions. We consider the effects from an incoming plane wave as well as a point source located in the surroundings of the cavity at the surface. While the former can be considered a passive source like a tele-seismic earthquake, the latter represents a man-made explosion or a vibroseis source as used in active seismic techniques. Further, we want to demonstrate the specific characteristics of the scattered wave field from P-waves and S-waves separately. For our simulations in 3D we use the discontinuous Galerkin Spectral Element Code SPEED developed by MOX (The Laboratory for Modeling and Scientific Computing, Department of Mathematics) and DICA (Department of Civil and Environmental Engineering) at the Politecnico di Milano. The computations are carried out on the Vienna Scientific Cluster (VSC). The accurate numerical modeling can facilitate the development of proper analysis techniques to detect the remnants of an underground nuclear test, help to set a rigorous scientific base for OSI and contribute to bringing the Treaty into force.

  3. Near-field plasmonic beam engineering with complex amplitude modulation based on metasurface

    NASA Astrophysics Data System (ADS)

    Song, Xu; Huang, Lingling; Sun, Lin; Zhang, Xiaomeng; Zhao, Ruizhe; Li, Xiaowei; Wang, Jia; Bai, Benfeng; Wang, Yongtian

    2018-02-01

    Metasurfaces have recently attracted extensive interest due to their ability to locally manipulate electromagnetic waves, which provides great feasibility for tailoring both propagating waves and surface plasmon polaritons (SPPs). Manipulation of SPPs with arbitrary complex fields is an important issue in integrated nanophotonics due to their capability of guiding waves with subwavelength footprints. Here, an approach based on metasurfaces composed of nanoaperture arrays is proposed and experimentally demonstrated which can effectively manipulate the complex amplitude of SPPs in the near-field regime. By tailoring the azimuthal angles of the individual nanoapertures and simultaneously tuning their geometric parameters, the phase and amplitude are controlled based on the Pancharatnam-Berry phases and the individual transmission coefficients. For verification of the concept, Airy plasmons and axisymmetric Airy-SPPs are generated. The results of numerical simulations and near-field imaging are consistent with each other. Besides the rigorous simulations, we applied a 2D dipole analysis for additional insight. This strategy of complex amplitude manipulation with metasurfaces can be used for potential applications in plasmonic beam shaping, integrated optoelectronic systems, and surface wave holography.

  4. Modeling and Analysis of the Reverse Water Gas Shift Process for In-Situ Propellant Production

    NASA Technical Reports Server (NTRS)

    Whitlow, Jonathan E.

    2000-01-01

    This report focuses on the development of mathematical models and simulation tools for the Reverse Water Gas Shift (RWGS) process. This process is a candidate technology for oxygen production on Mars under the In-Situ Propellant Production (ISPP) project. An analysis of the RWGS process was performed using a material balance for the system. The material balance is very complex due to the downstream separations and subsequent recycle inherent in the process. A numerical simulation was developed for the RWGS process to provide a tool for analysis and optimization of experimental hardware, which will be constructed later this year at Kennedy Space Center (KSC). Attempts to solve the material balance for the system, which can be defined by 27 nonlinear equations, initially failed. A convergence scheme was developed that led to a successful solution of the material balance; however, the simplified equations used for the gas separation membrane were found to be insufficient. Additional, more rigorous models were successfully developed and solved for the membrane separation. Sample results from these models are included in this report, with recommendations for the experimental work needed for model validation.
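
    As a toy indication of the kind of nonlinear balance involved, the sketch below solves a single-reactor equilibrium for the RWGS reaction CO2 + H2 <-> CO + H2O with a root finder; the equilibrium constant and feed are placeholders, and the full 27-equation flowsheet with recycle and membrane separation described in the report is not represented.

        # Toy RWGS equilibrium: find the reaction extent that satisfies K_eq.
        from scipy.optimize import brentq

        K_eq = 0.1                      # placeholder equilibrium constant
        n_co2_in, n_h2_in = 1.0, 1.0    # inlet moles (placeholders)

        def equilibrium_residual(extent):
            n_co2 = n_co2_in - extent
            n_h2 = n_h2_in - extent
            n_co, n_h2o = extent, extent
            return K_eq - (n_co * n_h2o) / (n_co2 * n_h2)

        extent = brentq(equilibrium_residual, 1e-9, min(n_co2_in, n_h2_in) - 1e-9)
        print(f"CO2 conversion at equilibrium: {extent / n_co2_in:.2%}")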

  5. Steady-state distributions of probability fluxes on complex networks

    NASA Astrophysics Data System (ADS)

    Chełminiak, Przemysław; Kurzyński, Michał

    2017-02-01

    We consider a simple model of Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. An additional transition, hereafter called a gate, powered by an external constant force, breaks detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to and far from the equilibrium state. Other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate, and a change in the network size, are also studied by means of computer simulations and discussed in terms of the rigorous theoretical predictions.

  6. The Osher scheme for real gases

    NASA Technical Reports Server (NTRS)

    Suresh, Ambady; Liou, Meng-Sing

    1990-01-01

    An extension of Osher's approximate Riemann solver to include gases with an arbitrary equation of state is presented. By a judicious choice of thermodynamic variables, the Riemann invariants are reduced to quadratures which are then approximated numerically. The extension is rigorous and does not involve any further assumptions or approximations over the ideal gas case. Numerical results are presented to demonstrate the feasibility and accuracy of the proposed method.

  7. Numerical study of wave propagation around an underground cavity: acoustic case

    NASA Astrophysics Data System (ADS)

    Esterhazy, Sofi; Perugia, Ilaria; Schöberl, Joachim; Bokelmann, Götz

    2015-04-01

    Motivated by the need to detect an underground cavity within the procedure of an On-Site Inspection (OSI) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO), which might be caused by a nuclear explosion/weapon testing, we aim to provide a basic numerical study of the wave propagation around and inside such an underground cavity. The aim of the CTBTO is to ban all nuclear explosions of any size anywhere, by anyone. Therefore, it is essential to build a powerful strategy to efficiently investigate and detect critical signatures such as gas-filled cavities, rubble zones and fracture networks below the surface. One method to investigate the geophysical properties of an underground cavity allowed by the Comprehensive Nuclear-Test-Ban Treaty is referred to as 'resonance seismometry' - a resonance method that uses passive or active seismic techniques, relying on seismic cavity vibrations. This method is in fact not yet entirely determined by the Treaty and there are only a few experimental examples that have been suitably documented to build a proper scientific groundwork. This motivates us to investigate the problem on a purely numerical level and to simulate these events based on recent advances in the mathematical understanding of the underlying physical phenomena. Here, we focus our numerical study on the propagation of P-waves in two dimensions. An extension to three dimensions as well as the inclusion of the full elastic wave field is planned to follow. For the numerical simulations of wave propagation we use a high order finite element discretization which has the significant advantage that it can be extended easily from simple toy designs to complex and irregularly shaped geometries without excessive effort. Our computations are done with the parallel Finite Element Library NGSOLVE on top of the automatic 2D/3D tetrahedral mesh generator NETGEN (http://sourceforge.net/projects/ngsolve/). Using the basic mathematical understanding of the physical equations and the numerical algorithms, it is possible for us to investigate the wave field over a large bandwidth of wave numbers. This means we can apply our calculations over a wide range of parameters, while keeping the numerical error explicitly under control. The accurate numerical modeling can facilitate the development of proper analysis techniques to detect the remnants of an underground nuclear test, help to set a rigorous scientific base for OSI and contribute to bringing the Treaty into force.

  8. Perspective: Optical measurement of feature dimensions and shapes by scatterometry

    NASA Astrophysics Data System (ADS)

    Diebold, Alain C.; Antonelli, Andy; Keller, Nick

    2018-05-01

    The use of optical scattering to measure feature shape and dimensions, scatterometry, is now routine during semiconductor manufacturing. Scatterometry iteratively improves an optical model structure using simulations that are compared to experimental data from an ellipsometer. These simulations are done using the rigorous coupled wave analysis for solving Maxwell's equations. In this article, we describe the Mueller matrix spectroscopic ellipsometry based scatterometry. Next, the rigorous coupled wave analysis for Maxwell's equations is presented. Following this, several example measurements are described as they apply to specific process steps in the fabrication of gate-all-around (GAA) transistor structures. First, simulations of measurement sensitivity for the inner spacer etch back step of horizontal GAA transistor processing are described. Next, the simulated metrology sensitivity for sacrificial (dummy) amorphous silicon etch back step of vertical GAA transistor processing is discussed. Finally, we present the application of plasmonically active test structures for improving the sensitivity of the measurement of metal linewidths.

  9. Fire Suppression in Low Gravity Using a Cup Burner

    NASA Technical Reports Server (NTRS)

    Takahashi, Fumiaki; Linteris, Gregory T.; Katta, Viswanath R.

    2004-01-01

    Longer duration missions to the moon, to Mars, and on the International Space Station increase the likelihood of accidental fires. The goal of the present investigation is to: (1) understand the physical and chemical processes of fire suppression in various gravity and O2 levels simulating spacecraft, Mars, and moon missions; (2) provide rigorous testing of numerical models, which include detailed combustion suppression chemistry and radiation sub-models; and (3) provide basic research results useful for advances in space fire safety technology, including new fire-extinguishing agents and approaches. The structure and extinguishment of enclosed, laminar, methane-air co-flow diffusion flames formed on a cup burner have been studied experimentally and numerically using various fire-extinguishing agents (CO2, N2, He, Ar, CF3H, and Fe(CO)5). The experiments involve both 1g laboratory testing and low-g testing (in drop towers and the KC-135 aircraft). The computation uses a direct numerical simulation with detailed chemistry and radiative heat-loss models. An agent was introduced into a low-speed coflowing oxidizing stream until extinguishment occurred under a fixed minimal fuel velocity, and thus, the extinguishing agent concentrations were determined. The extinguishment of cup-burner flames, which resemble real fires, occurred via a blowoff process (in which the flame base drifted downstream) rather than the global extinction phenomenon typical of counterflow diffusion flames. The computation revealed that the peak reactivity spot (the reaction kernel) formed in the flame base was responsible for attachment and blowoff of the trailing diffusion flame. Furthermore, the buoyancy-induced flame flickering in 1g and thermal and transport properties of the agents affected the flame extinguishment limits.

  11. Maximization of permanent trapping of CO2 and co-contaminants in the highest-porosity formations of the Rock Springs Uplift (Southwest Wyoming): experimentation and multi-scale modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piri, Mohammad

    2014-03-31

    Under this project, a multidisciplinary team of researchers at the University of Wyoming combined state-of-the-art experimental studies, numerical pore- and reservoir-scale modeling, and high performance computing to investigate trapping mechanisms relevant to geologic storage of mixed scCO2 in deep saline aquifers. The research included investigations in three fundamental areas: (i) the experimental determination of two-phase flow relative permeability functions, relative permeability hysteresis, and residual trapping under reservoir conditions for mixed scCO2-brine systems; (ii) improved understanding of permanent trapping mechanisms; (iii) scientifically correct, fine grid numerical simulations of CO2 storage in deep saline aquifers taking into account the underlying rock heterogeneity. The specific activities included: (1) Measurement of reservoir-conditions drainage and imbibition relative permeabilities, irreducible brine and residual mixed scCO2 saturations, and relative permeability scanning curves (hysteresis) in rock samples from RSU; (2) Characterization of wettability through measurements of contact angles and interfacial tensions under reservoir conditions; (3) Development of physically-based dynamic core-scale pore network model; (4) Development of new, improved high-performance modules for the UW-team simulator to provide new capabilities to the existing model to include hysteresis in the relative permeability functions, geomechanical deformation and an equilibrium calculation (both pore- and core-scale models were rigorously validated against well-characterized core-flooding experiments); and (5) An analysis of long term permanent trapping of mixed scCO2 through high-resolution numerical experiments and analytical solutions. The analysis takes into account formation heterogeneity, capillary trapping, and relative permeability hysteresis.

  12. Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube

    NASA Astrophysics Data System (ADS)

    Zhou, Guangzhao; Xu, Kun; Liu, Feng

    2018-01-01

    The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.
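
    A generic grid-convergence check of the kind used to establish such benchmark solutions is sketched below: from solutions on three systematically refined grids, an observed order of accuracy and an extrapolated value are estimated. The input values are placeholders and the sketch is not tied to the gas-kinetic scheme of the record.

        # Observed order of accuracy p and an extrapolated value from three grids
        # with a constant refinement ratio r (all numbers below are placeholders).
        import math

        def observed_order(f_coarse, f_medium, f_fine, r):
            return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

        def extrapolate(f_medium, f_fine, r, p):
            return f_fine + (f_fine - f_medium) / (r**p - 1.0)

        f1, f2, f3 = 0.1342, 0.1291, 0.1278     # coarse, medium, fine (placeholders)
        p = observed_order(f1, f2, f3, r=2.0)
        print(f"observed order ~ {p:.2f}, extrapolated ~ {extrapolate(f2, f3, 2.0, p):.5f}")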

  13. Coarse-grained stochastic processes and kinetic Monte Carlo simulators for the diffusion of interacting particles

    NASA Astrophysics Data System (ADS)

    Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2003-11-01

    We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes as approximations in larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies a detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous time microscopic MC and CGMC simulations are compared under far from equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by q², where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous time MC simulations that vary from q³ for short potentials to q⁴ for long potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.
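
    The basic spatial coarse-graining step underlying CGMC, lumping microscopic 0/1 occupations into coarse cells of q x q sites, can be illustrated with a few lines of array manipulation; the sketch below shows only this bookkeeping step, not the coarse-grained dynamics or the error estimates of the record.

        # Sum 0/1 lattice occupations over non-overlapping q-by-q blocks.
        import numpy as np

        def coarse_grain(occupation, q):
            nx, ny = occupation.shape
            assert nx % q == 0 and ny % q == 0, "lattice must be divisible by q"
            return occupation.reshape(nx // q, q, ny // q, q).sum(axis=(1, 3))

        rng = np.random.default_rng(3)
        micro = (rng.random((64, 64)) < 0.3).astype(int)   # random ~30% coverage
        coarse = coarse_grain(micro, q=8)                  # particle counts per 8x8 block
        print(coarse.shape, coarse.mean() / 64)            # coverage is preserved, ~0.3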

  14. Conflict: Operational Realism versus Analytical Rigor in Defense Modeling and Simulation

    DTIC Science & Technology

    2012-06-14

    Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference, Boston: Houghton Mifflin Company, 2002. [7] R. T. Johnson, G...experimentation? In order for an experiment to be considered rigorous, and the results valid, the experiment should be designed using established...In addition to the interview, the pilots were administered a written survey, designed to capture their reactions regarding the level of realism present

  15. Revised Planning Methodology For Signalized Intersections And Operational Analysis Of Exclusive Left-Turn Lanes, Part-II: Models And Procedures (Final Report)

    DOT National Transportation Integrated Search

    1996-04-01

    This report also describes the procedures for direct estimation of intersection capacity with simulation, including a set of rigorous statistical tests for simulation parameter calibration from field data.

  16. Atomistic Free Energy Model for Nucleic Acids: Simulations of Single-Stranded DNA and the Entropy Landscape of RNA Stem-Loop Structures.

    PubMed

    Mak, Chi H

    2015-11-25

    While single-stranded (ss) segments of DNAs and RNAs are ubiquitous in biology, details about their structures have only recently begun to emerge. To study ssDNA and RNAs, we have developed a new Monte Carlo (MC) simulation using a free energy model for nucleic acids that has the atomistic accuracy to capture fine molecular details of the sugar-phosphate backbone. Formulated on the basis of a first-principles calculation of the conformational entropy of the nucleic acid chain, this free energy model correctly reproduced both the long and short length-scale structural properties of ssDNA and RNAs in a rigorous comparison against recent data from fluorescence resonance energy transfer, small-angle X-ray scattering, force spectroscopy and fluorescence correlation transport measurements on sequences up to ~100 nucleotides long. With this new MC algorithm, we conducted a comprehensive investigation of the entropy landscape of small RNA stem-loop structures. From a simulated ensemble of ~10^6 equilibrium conformations, the entropy for the initiation of different size RNA hairpin loops was computed and compared against thermodynamic measurements. Starting from seeded hairpin loops, constrained MC simulations were then used to estimate the entropic costs associated with propagation of the stem. The numerical results provide new direct molecular insights into thermodynamic measurements from macroscopic calorimetry and melting experiments.

  17. Rigorous Statistical Bounds in Uncertainty Quantification for One-Layer Turbulent Geophysical Flows

    NASA Astrophysics Data System (ADS)

    Qi, Di; Majda, Andrew J.

    2018-04-01

    Statistical bounds controlling the total fluctuations in mean and variance about a basic steady-state solution are developed for the truncated barotropic flow over topography. Statistical ensemble prediction is an important topic in weather and climate research. Here, the evolution of an ensemble of trajectories is considered using statistical instability analysis and is compared and contrasted with the classical deterministic instability for the growth of perturbations in one pointwise trajectory. The maximum growth of the total statistics in fluctuations is derived relying on the statistical conservation principle of the pseudo-energy. The saturation bound of the statistical mean fluctuation and variance in the unstable regimes with non-positive-definite pseudo-energy is achieved by linking with a class of stable reference states and minimizing the stable statistical energy. Two cases with dependence on initial statistical uncertainty and on external forcing and dissipation are compared and unified under a consistent statistical stability framework. The flow structures and statistical stability bounds are illustrated and verified by numerical simulations among a wide range of dynamical regimes, where subtle transient statistical instability exists in general with positive short-time exponential growth in the covariance even when the pseudo-energy is positive-definite. Among the various scenarios in this paper, there exist strong forward and backward energy exchanges between different scales which are estimated by the rigorous statistical bounds.

  18. Steady-state and dynamic models for particle engulfment during solidification

    NASA Astrophysics Data System (ADS)

    Tao, Yutao; Yeckel, Andrew; Derby, Jeffrey J.

    2016-06-01

    Steady-state and dynamic models are developed to study the physical mechanisms that determine the pushing or engulfment of a solid particle at a moving solid-liquid interface. The mathematical model formulation rigorously accounts for energy and momentum conservation, while faithfully representing the interfacial phenomena affecting solidification phase change and particle motion. A numerical solution approach is developed using the Galerkin finite element method and elliptic mesh generation in an arbitrary Lagrangian-Eulerian implementation, thus allowing for a rigorous representation of forces and dynamics previously inaccessible by approaches using analytical approximations. We demonstrate that this model accurately computes the solidification interface shape while simultaneously resolving thin fluid layers around the particle that arise from premelting during particle engulfment. We reinterpret the significance of premelting via the definition of an unambiguous critical velocity for engulfment from steady-state analysis and bifurcation theory. We also explore the complicated transient behaviors that underlie the steady states of this system and posit the significance of dynamical behavior on engulfment events for many systems. We critically examine the onset of engulfment by comparing our computational predictions to those obtained using the analytical model of Rempel and Worster [29]. We assert that, while the accurate calculation of van der Waals repulsive forces remains an open issue, the computational model developed here provides a clear benefit over prior models for computing particle drag forces and other phenomena needed for the faithful simulation of particle engulfment.

  19. Experimental and theoretical study of light scattering by individual mature red blood cells by use of scanning flow cytometry and a discrete dipole approximation.

    PubMed

    Yurkin, Maxim A; Semyanov, Konstantin A; Tarasov, Peter A; Chernyshev, Andrei V; Hoekstra, Alfons G; Maltsev, Valeri P

    2005-09-01

    Elastic light scattering by mature red blood cells (RBCs) was theoretically and experimentally analyzed by use of the discrete dipole approximation (DDA) and scanning flow cytometry (SFC), respectively. SFC permits measurement of the angular dependence of the light-scattering intensity (indicatrix) of single particles. A mature RBC is modeled as a biconcave disk in DDA simulations of light scattering. We have studied the effect of RBC orientation related to the direction of the light incident upon the indicatrix. Numerical calculations of indicatrices for several axis ratios and volumes of RBC have been carried out. Comparison of the simulated indicatrices and indicatrices measured by SFC showed good agreement, validating the biconcave disk model for a mature RBC. We simulated the light-scattering output signals from the SFC with the DDA for RBCs modeled as a disk-sphere and as an oblate spheroid. The biconcave disk, the disk-sphere, and the oblate spheroid models have been compared for two orientations, i.e., face-on and rim-on incidence, relative to the direction of the incident beam. Only the oblate spheroid model for rim-on incidence gives results similar to those of the rigorous biconcave disk model.

  20. Brunet-Derrida Behavior of Branching-Selection Particle Systems on the Line

    NASA Astrophysics Data System (ADS)

    Bérard, Jean; Gouéré, Jean-Baptiste

    2010-09-01

    We consider a class of branching-selection particle systems on ℝ similar to the one considered by E. Brunet and B. Derrida in their 1997 paper “Shift in the velocity of a front due to a cutoff”. Based on numerical simulations and heuristic arguments, Brunet and Derrida showed that, as the population size N of the particle system goes to infinity, the asymptotic velocity of the system converges to a limiting value at the unexpectedly slow rate (log N)⁻². In this paper, we give a rigorous mathematical proof of this fact, for the class of particle systems we consider. The proof makes use of ideas and results by R. Pemantle, and by N. Gantert, Y. Hu and Z. Shi, and relies on a comparison of the particle system with a family of N independent branching random walks killed below a linear space-time barrier.
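
    A minimal simulation of such a branching-selection system is sketched below: every particle branches into two offspring with random displacements and only the N right-most particles are kept, so that the measured front speed increases slowly with N. The displacement law and run lengths are arbitrary illustrative choices, not those of Brunet and Derrida.

        # Branching-selection on the line: branch into two offspring with random
        # shifts, keep the N right-most, and measure the mean front displacement
        # per step.
        import numpy as np

        def front_velocity(n_particles, n_steps=2000, seed=0):
            rng = np.random.default_rng(seed)
            x = np.zeros(n_particles)
            start_mean = x.mean()
            for _ in range(n_steps):
                offspring = np.concatenate([x + rng.random(n_particles),
                                            x + rng.random(n_particles)])
                x = np.sort(offspring)[-n_particles:]   # selection: keep the N best
            return (x.mean() - start_mean) / n_steps

        for n in (10, 100, 1000):
            print(n, front_velocity(n))                 # velocity slowly increases with N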

  1. Effect of impurities on optical properties of pentaerythritol tetranitrate

    NASA Astrophysics Data System (ADS)

    Tsyshevskiy, Roman; Sharia, Onise; Kuklja, Maija M.

    2012-03-01

    Despite numerous efforts, the electronic nature of initiation of high explosives to detonation in general and mechanisms of their sensitivity to laser initiation in particular are far from being completely understood. Recent experiments show that Nd:YAG laser irradiation (at 1064 nm) causes resonance explosive decomposition of PETN samples. In an attempt to shed some light on electronic excitations and to develop a rigorous interpretation of these experiments, the electronic structure and optical properties of PETN and a series of common impurities were studied. Band gaps (S₀→S₁) and optical singlet-triplet (S₀→T₁) transitions in both an ideal material and PETN containing various defects were simulated by means of state-of-the-art quantum-chemical computational techniques. It was shown that the presence of impurities in the PETN crystal causes significant narrowing of the band gap. The structure and role of molecular excitons in PETN are discussed.

  2. Fluctuation Theorem for Many-Body Pure Quantum States.

    PubMed

    Iyoda, Eiki; Kaneko, Kazuya; Sagawa, Takahiro

    2017-09-08

    We prove the second law of thermodynamics and the nonequilibrium fluctuation theorem for pure quantum states. The entire system obeys reversible unitary dynamics, where the initial state of the heat bath is not the canonical distribution but is a single energy eigenstate that satisfies the eigenstate-thermalization hypothesis. Our result is mathematically rigorous and based on the Lieb-Robinson bound, which gives the upper bound of the velocity of information propagation in many-body quantum systems. The entanglement entropy of a subsystem is shown to be connected to thermodynamic heat, highlighting the foundation of the information-thermodynamics link. We confirmed our theory by numerical simulation of hard-core bosons, and observed a dynamical crossover from thermal fluctuations to bare quantum fluctuations. Our result reveals a universal scenario in which the second law emerges from quantum mechanics, and it can be experimentally tested by artificial isolated quantum systems such as ultracold atoms.

  3. A Variational Reduction and the Existence of a Fully Localised Solitary Wave for the Three-Dimensional Water-Wave Problem with Weak Surface Tension

    NASA Astrophysics Data System (ADS)

    Buffoni, Boris; Groves, Mark D.; Wahlén, Erik

    2017-12-01

    Fully localised solitary waves are travelling-wave solutions of the three- dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number {β} greater than {1/3} ) has recently been given. In this article we present an existence theory for the physically more realistic case {0 < β < 1/3} . A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.

  4. A combinatorial framework to quantify peak/pit asymmetries in complex dynamics.

    PubMed

    Hasson, Uri; Iacovacci, Jacopo; Davis, Ben; Flanagan, Ryan; Tagliazucchi, Enzo; Laufs, Helmut; Lacasa, Lucas

    2018-02-23

    We explore a combinatorial framework which efficiently quantifies the asymmetries between minima and maxima in local fluctuations of time series. We first showcase its performance by applying it to a battery of synthetic cases. We find rigorous results on some canonical dynamical models (stochastic processes with and without correlations, chaotic processes), complemented by extensive numerical simulations for a range of processes, which indicate that the methodology correctly distinguishes different complex dynamics and outperforms state-of-the-art metrics in several cases. Subsequently, we apply this methodology to real-world problems emerging across several disciplines, including cases in neurobiology, finance and climate science. We conclude that differences between the statistics of local maxima and local minima in time series are highly informative of the complex underlying dynamics, and that a graph-theoretic extraction procedure allows these features to be used for statistical learning purposes.
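
    Not the authors' graph-theoretic method, but a minimal illustration of the quantity being probed: compare the excursions of strict local maxima and local minima of a series about its median. Both test series below are synthetic.

        import numpy as np

        def peak_pit_asymmetry(x):
            """Mean excursion of strict local maxima (peaks) and minima (pits) about the median."""
            x = np.asarray(x, dtype=float)
            interior = x[1:-1]
            peaks = interior[(interior > x[:-2]) & (interior > x[2:])]
            pits = interior[(interior < x[:-2]) & (interior < x[2:])]
            med = np.median(x)
            return float((peaks - med).mean()), float((med - pits).mean())

        rng = np.random.default_rng(1)
        g = rng.standard_normal(10_000)
        print("Gaussian noise:", peak_pit_asymmetry(g))          # roughly symmetric
        print("exp(Gaussian) :", peak_pit_asymmetry(np.exp(g)))  # peaks dominate pits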

  5. A Variational Reduction and the Existence of a Fully Localised Solitary Wave for the Three-Dimensional Water-Wave Problem with Weak Surface Tension

    NASA Astrophysics Data System (ADS)

    Buffoni, Boris; Groves, Mark D.; Wahlén, Erik

    2018-06-01

    Fully localised solitary waves are travelling-wave solutions of the three- dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number {β} greater than {1/3}) has recently been given. In this article we present an existence theory for the physically more realistic case {0 < β < 1/3}. A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.

  6. Pattern Formation in Keller-Segel Chemotaxis Models with Logistic Growth

    NASA Astrophysics Data System (ADS)

    Jin, Ling; Wang, Qi; Zhang, Zengyan

    In this paper, we investigate pattern formation in Keller-Segel chemotaxis models over a multidimensional bounded domain subject to homogeneous Neumann boundary conditions. It is shown that the positive homogeneous steady state loses its stability as the chemoattraction rate χ increases. Then, using Crandall-Rabinowitz local theory with χ as the bifurcation parameter, we obtain the existence of nonhomogeneous steady states of the system which bifurcate from this homogeneous steady state. Stability of the bifurcating solutions is also established through rigorous and detailed calculations. Our results provide a selection mechanism for the stable wavemode, which states that the only stable bifurcation branch must have a wavemode number that minimizes the bifurcation value. Finally, we perform extensive numerical simulations on the formation of stable steady states with striking structures such as boundary spikes, interior spikes, stripes, etc. These nontrivial patterns can model cellular aggregation that develops through chemotactic movements in biological systems.
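
    A minimal 1-D sketch of the kind of system involved, assuming the commonly used form u_t = u_xx - χ(u v_x)_x + u(1-u), v_t = v_xx + u - v with zero-flux boundaries and illustrative parameters; this is not the authors' multidimensional bifurcation analysis.

        import numpy as np

        # Minimal explicit finite-difference sketch of a Keller-Segel model with logistic growth.
        # All parameter values are illustrative assumptions.
        L, nx = 10.0, 200
        dx = L / nx
        dt, nt = 1e-4, 50_000
        chi = 6.0                                   # chemoattraction rate (assumed above onset)

        rng = np.random.default_rng(0)
        u = 1.0 + 0.01 * rng.standard_normal(nx)    # cells: perturbed homogeneous steady state
        v = np.ones(nx)                             # chemoattractant

        def ddx(f):                                 # centered first derivative, zero-flux ghosts
            g = np.pad(f, 1, mode="edge")
            return (g[2:] - g[:-2]) / (2 * dx)

        def lap(f):                                 # Laplacian, zero-flux ghosts
            g = np.pad(f, 1, mode="edge")
            return (g[2:] - 2 * f + g[:-2]) / dx**2

        for _ in range(nt):
            du = lap(u) - ddx(chi * u * ddx(v)) + u * (1.0 - u)   # diffusion - chemotaxis + growth
            dv = lap(v) + u - v                                   # chemoattractant production/decay
            u, v = u + dt * du, v + dt * dv

        print("u min/max after integration:", float(u.min()), float(u.max()))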

  7. Optical proximity correction for anamorphic extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas

    2017-10-01

    The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.

  8. Network-based stochastic semisupervised learning.

    PubMed

    Silva, Thiago Christiano; Zhao, Liang

    2012-03-01

    Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.

  9. DEEP-SaM - Energy-Efficient Provisioning Policies for Computing Environments

    NASA Astrophysics Data System (ADS)

    Bodenstein, Christian; Püschel, Tim; Hedwig, Markus; Neumann, Dirk

    The cost of electricity for datacenters is a substantial operational cost that can and should be managed, not only for saving energy, but also due to the ecologic commitment inherent to power consumption. Often, pursuing this goal results in chronic underutilization of resources, a luxury most resource providers do not have in light of their corporate commitments. This work proposes, formalizes and numerically evaluates DEEP-SaM, a policy for clearing provisioning markets based on the maximization of welfare, subject to utility-level-dependent energy costs and customer satisfaction levels. We focus specifically on linear power models, and the implications of the inherent fixed costs related to energy consumption of modern datacenters and cloud environments. We rigorously test the model by running multiple simulation scenarios and evaluate the results critically. We conclude with positive results and implications for long-term sustainable management of modern datacenters.

  10. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    One of the main modelling paradigms for complex physical systems is networks. When estimating the network structure from measured signals, typically several assumptions, such as stationarity, are made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments of transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, a key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.

  11. Fast and accurate Voronoi density gridding from Lagrangian hydrodynamics data

    NASA Astrophysics Data System (ADS)

    Petkova, Maya A.; Laibe, Guillaume; Bonnell, Ian A.

    2018-01-01

    Voronoi grids have been successfully used to represent density structures of gas in astronomical hydrodynamics simulations. While some codes are explicitly built around using a Voronoi grid, others, such as Smoothed Particle Hydrodynamics (SPH), use particle-based representations and can benefit from constructing a Voronoi grid for post-processing their output. So far, calculating the density of each Voronoi cell from SPH data has been done numerically, which is both slow and potentially inaccurate. This paper proposes an alternative analytic method, which is fast and accurate. We derive an expression for the integral of a cubic spline kernel over the volume of a Voronoi cell and link it to the density of the cell. Mass conservation is ensured rigorously by the procedure. The method can be applied more broadly to integrate a spherically symmetric polynomial function over the volume of a random polyhedron.
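
    The paper replaces a numerical cell integral with an analytic one. For orientation, here is the standard 3-D cubic-spline (M4) SPH kernel together with a brute-force Monte Carlo estimate of its integral over a box-shaped "cell" standing in for a Voronoi cell; the cell geometry and smoothing length are assumptions.

        import numpy as np

        def cubic_spline_W(r, h):
            """Standard 3-D cubic spline (M4) SPH kernel, normalized to unit integral."""
            q = np.asarray(r) / h
            sigma = 1.0 / (np.pi * h**3)
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return sigma * w

        # Monte Carlo estimate of the kernel integral over an axis-aligned box "cell"
        # near the particle; the analytic approach in the paper replaces this step.
        rng = np.random.default_rng(0)
        h = 1.0
        lo, hi = np.array([-0.5, -0.5, -0.5]), np.array([1.0, 1.0, 1.0])   # assumed cell
        pts = rng.uniform(lo, hi, size=(200_000, 3))
        vol = float(np.prod(hi - lo))
        integral = vol * cubic_spline_W(np.linalg.norm(pts, axis=1), h).mean()
        print("kernel mass captured by the cell:", integral)   # <= 1 by normalization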

  12. Surface-plasmon mediated total absorption of light into silicon.

    PubMed

    Yoon, Jae Woong; Park, Woo Jae; Lee, Kyu Jin; Song, Seok Ho; Magnusson, Robert

    2011-10-10

    We report surface-plasmon mediated total absorption of light into a silicon substrate. For an Au grating on Si, we experimentally show that a surface-plasmon polariton (SPP) excited on the air/Au interface leads to total absorption with a rate nearly 10 times larger than the ohmic damping rate of collectively oscillating free electrons in the Au film. Rigorous numerical simulations show that the SPP resonantly enhances forward diffraction of light to multiple orders of lossy waves in the Si substrate with reflection and ohmic absorption in the Au film being negligible. The measured reflection and phase spectra reveal a quantitative relation between the peak absorbance and the associated reflection phase change, implying a resonant interference contribution to this effect. An analytic model of a dissipative quasi-bound resonator provides a general formula for the resonant absorbance-phase relation in excellent agreement with the experimental results.

  13. Fluctuation Theorem for Many-Body Pure Quantum States

    NASA Astrophysics Data System (ADS)

    Iyoda, Eiki; Kaneko, Kazuya; Sagawa, Takahiro

    2017-09-01

    We prove the second law of thermodynamics and the nonequilibrium fluctuation theorem for pure quantum states. The entire system obeys reversible unitary dynamics, where the initial state of the heat bath is not the canonical distribution but is a single energy eigenstate that satisfies the eigenstate-thermalization hypothesis. Our result is mathematically rigorous and based on the Lieb-Robinson bound, which gives the upper bound of the velocity of information propagation in many-body quantum systems. The entanglement entropy of a subsystem is shown to be connected to thermodynamic heat, highlighting the foundation of the information-thermodynamics link. We confirmed our theory by numerical simulation of hard-core bosons, and observed a dynamical crossover from thermal fluctuations to bare quantum fluctuations. Our result reveals a universal scenario in which the second law emerges from quantum mechanics, and it can be experimentally tested by artificial isolated quantum systems such as ultracold atoms.

  14. Design of an optimal preview controller for linear discrete-time descriptor systems with state delay

    NASA Astrophysics Data System (ADS)

    Cao, Mengjuan; Liao, Fucheng

    2015-04-01

    In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
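
    A sketch of the discrete lifting idea alone, for an ordinary (non-descriptor) system with a single state delay: stacking delayed states removes the delay at the cost of a larger state dimension. The matrices below are illustrative, and the preview-control design itself is not reproduced.

        import numpy as np

        def lift_delay_system(A, Ad, B, d):
            """Lift x_{k+1} = A x_k + Ad x_{k-d} + B u_k into a delay-free system
            with augmented state X_k = [x_k; x_{k-1}; ...; x_{k-d}]."""
            n, m = A.shape[0], B.shape[1]
            A_big = np.zeros((n * (d + 1), n * (d + 1)))
            A_big[:n, :n] = A
            A_big[:n, n * d:] = Ad
            for i in range(d):                       # shift blocks: x_{k-i} -> x_{(k+1)-(i+1)}
                A_big[n * (i + 1):n * (i + 2), n * i:n * (i + 1)] = np.eye(n)
            B_big = np.zeros((n * (d + 1), m))
            B_big[:n, :] = B
            return A_big, B_big

        # Illustrative 2-state system with a delay of 3 samples (values assumed).
        A = np.array([[0.9, 0.1], [0.0, 0.8]])
        Ad = 0.05 * np.eye(2)
        B = np.array([[0.0], [1.0]])
        A_big, B_big = lift_delay_system(A, Ad, B, d=3)
        print(A_big.shape, B_big.shape)              # (8, 8) (8, 1)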

  15. Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il

    2014-08-14

    We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, and not finite-size corrections. We do this by introducing a new numerical algorithm, which requires negligible computer memory since, contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10^7 × 10^7, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.
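
    A schematic of the memory-saving bookkeeping described above: lattice sites are generated lazily, only when the growing unfrozen region first visits them, so nothing outside the visited region is ever stored. The frozen/unfrozen test is left as a placeholder; the toy criterion below is not the Kob-Andersen or Fredrickson-Andersen rule.

        import random

        def grow_unfrozen_cluster(density, is_frozen, max_sites=10_000, seed=0):
            """Grow the unfrozen region from a single seed site, generating lattice sites
            lazily so only visited sites are ever stored (cf. the algorithm described above).
            `is_frozen(site, occupied)` is a placeholder for the model-specific test."""
            rng = random.Random(seed)
            occupied = {}                 # site -> bool, generated on first visit only

            def occ(site):
                if site not in occupied:
                    occupied[site] = rng.random() < density
                return occupied[site]

            cluster, frontier = {(0, 0)}, [(0, 0)]
            while frontier and len(cluster) < max_sites:
                x, y = frontier.pop()
                for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if nb in cluster:
                        continue
                    occ(nb)                               # generate the site lazily
                    if not is_frozen(nb, occupied):       # placeholder criterion
                        cluster.add(nb)
                        frontier.append(nb)
            return cluster, occupied

        # Toy criterion (NOT the KA/FA rule): treat occupied sites as frozen.
        cluster, seen = grow_unfrozen_cluster(0.6, lambda s, occ: occ[s])
        print("unfrozen cluster size:", len(cluster), "sites generated:", len(seen))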

  16. A highly accurate analytical solution for the surface fields of a short vertical wire antenna lying on a multilayer ground

    NASA Astrophysics Data System (ADS)

    Parise, M.

    2018-01-01

    A highly accurate analytical solution is derived to the electromagnetic problem of a short vertical wire antenna located on a stratified ground. The derivation consists of three steps. First, the integration path of the integrals describing the fields of the dipole is deformed and wrapped around the pole singularities and the two vertical branch cuts of the integrands located in the upper half of the complex plane. This allows the radiated field to be decomposed into its three contributions, namely the above-surface ground wave, the lateral wave, and the trapped surface waves. Next, the square root terms responsible for the branch cuts are extracted from the integrands of the branch-cut integrals. Finally, the extracted square roots are replaced with their rational representations according to Newton's square root algorithm, and the residue theorem is applied to give explicit expressions, in series form, for the fields. The rigorous integration procedure and the convergence of the square root algorithm ensure that the obtained formulas converge to the exact solution. Numerical simulations are performed to show the validity and robustness of the developed formulation, as well as its advantages in terms of time cost over standard numerical integration procedures.
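
    A small illustration of the square-root ingredient mentioned above: Newton's iteration x ← (x + a/x)/2 generates rational approximants to √a, which is what allows the extracted square roots to be replaced by rational representations. The starting guess and iteration count below are arbitrary.

        from fractions import Fraction

        def newton_sqrt_rational(a, iterations=5, x0=1):
            """Rational approximants to sqrt(a) from Newton's iteration x <- (x + a/x)/2."""
            x, a = Fraction(x0), Fraction(a)
            approximants = []
            for _ in range(iterations):
                x = (x + a / x) / 2
                approximants.append(x)
            return approximants

        for x in newton_sqrt_rational(2):
            print(x, float(x))       # converges quadratically to sqrt(2) ~ 1.41421356...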

  17. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

    This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, involving time-dependent shocks and vortex shedding, design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.

  18. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10Hz or higher, are important for earthquake engineering; for example, seismic hazard analysis for critical facilities. An issue with such simulations is realistic generation of radiated wave-field in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to limited resolution of the inversions, these models are valid only for low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time and source time functions are needed for complete spatiotemporal characterization of the earthquake rupture. But these parameters are poorly resolved in the source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures capturing the rough-fault effects. Based on these analyses, we formulate a framework for pseudo-dynamic source model, physically consistent with the dynamic ruptures on rough faults.

  19. Observations and Numerical Modeling of the 2012 Haida Gwaii Tsunami off the Coast of British Columbia

    NASA Astrophysics Data System (ADS)

    Fine, Isaac V.; Cherniawsky, Josef Y.; Thomson, Richard E.; Rabinovich, Alexander B.; Krassovski, Maxim V.

    2015-03-01

    A major (Mw 7.7) earthquake occurred on October 28, 2012 along the Queen Charlotte Fault Zone off the west coast of Haida Gwaii (formerly the Queen Charlotte Islands). The earthquake was the second strongest instrumentally recorded earthquake in Canadian history and generated the largest local tsunami ever recorded on the coast of British Columbia. A field survey on the Pacific side of Haida Gwaii revealed maximum runup heights of up to 7.6 m at sites sheltered from storm waves and 13 m in a small inlet that is less sheltered from storms (Leonard and Bednarski 2014). The tsunami was recorded by tide gauges along the coast of British Columbia, by open-ocean bottom pressure sensors of the NEPTUNE facility at Ocean Networks Canada's cabled observatory located seaward of southwestern Vancouver Island, and by several DART stations located in the northeast Pacific. The tsunami observations, in combination with rigorous numerical modeling, enabled us to determine the physical properties of this event and to correct the location of the tsunami source with respect to the initial geophysical estimates. The initial model results were used to specify sites of particular interest for post-tsunami field surveys on the coast of Moresby Island (Haida Gwaii), while field survey observations (Leonard and Bednarski 2014) were used, in turn, to verify the numerical simulations based on the corrected source region.

  20. Using Computational and Mechanical Models to Study Animal Locomotion

    PubMed Central

    Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas

    2012-01-01

    Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026

  1. Rigorous Electromagnetic Analysis of the Focusing Action of Refractive Cylindrical Microlens

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Gu, Ben-Yuan; Dong, Bi-Zhen; Yang, Guo-Zhen

    The focusing action of a refractive cylindrical microlens is investigated based on rigorous electromagnetic theory with the use of the boundary element method. The focusing behaviors of refractive microlenses with continuous and multilevel surface envelopes are characterized in terms of the total electric-field patterns, the electric-field intensity distributions on the focal plane, and their diffractive efficiencies at the focal spots. The obtained results are also compared with the ones obtained by Kirchhoff's scalar diffraction theory. The present numerical and graphical results may provide useful information for the analysis and design of refractive elements in micro-optics.

  2. Imaging 2D optical diffuse reflectance in skeletal muscle

    NASA Astrophysics Data System (ADS)

    Ranasinghesagara, Janaka; Yao, Gang

    2007-04-01

    We discovered a unique pattern of optical reflectance from fresh prerigor skeletal muscles, which cannot be described using existing theories. A numerical fitting function was developed to quantify the equiintensity contours of acquired reflectance images. Using this model, we studied the changes of the reflectance profile during stretching and the rigor process. We found that the prominent anisotropic features diminished after rigor completion. These results suggested that muscle sarcomere structures played important roles in modulating light propagation in whole muscle. When incorporating the sarcomere diffraction in a Monte Carlo model, we showed that the resulting reflectance profiles quantitatively resembled the experimental observations.

  3. Large eddy simulation of forest canopy flow for wildland fire modeling

    Treesearch

    Eric Mueller; William Mell; Albert Simeoni

    2014-01-01

    Large eddy simulation (LES) based computational fluid dynamics (CFD) simulators have obtained increasing attention in the wildland fire research community, as these tools allow the inclusion of important driving physics. However, due to the complexity of the models, individual aspects must be isolated and tested rigorously to ensure meaningful results. As wind is a...

  4. Collisional damping rates for plasma waves

    NASA Astrophysics Data System (ADS)

    Tigik, S. F.; Ziebell, L. F.; Yoon, P. H.

    2016-06-01

    The distinction between plasma dynamics dominated by collisional transport versus collective processes has never been rigorously addressed until recently. A recent paper [P. H. Yoon et al., Phys. Rev. E 93, 033203 (2016)] formulates, for the first time, a unified kinetic theory in which collective processes and collisional dynamics are systematically incorporated from first principles. One of the outcomes of such a formalism is the rigorous derivation of collisional damping rates for Langmuir and ion-acoustic waves, which can be contrasted to the customary heuristic approach. However, the results are given only in formal mathematical expressions. The present brief communication numerically evaluates the rigorous collisional damping rates by considering the case of plasma particles with a Maxwellian velocity distribution function, so as to assess the consequence of the rigorous formalism in a quantitative manner. Comparison with the heuristic ("Spitzer") formula shows that the accurate damping rates are much lower in magnitude than the conventional expression, which implies that the traditional approach over-estimates the importance of attenuation of plasma waves by the collisional relaxation process. Such a finding may have wide applicability ranging from laboratory to space and astrophysical plasmas.

  5. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence

    PubMed Central

    Kelly, David; Majda, Andrew J.; Tong, Xin T.

    2015-01-01

    The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335
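
    Not the paper's construction, but for orientation, a generic perturbed-observation ensemble Kalman analysis step in the textbook form K = P Hᵀ(H P Hᵀ + R)⁻¹; all matrices and values below are illustrative.

        import numpy as np

        def enkf_analysis(ensemble, H, y, R, rng):
            """Perturbed-observation EnKF analysis step.
            ensemble: (n_state, n_members), H: (n_obs, n_state), y: (n_obs,), R: (n_obs, n_obs)."""
            n, m = ensemble.shape
            X = ensemble - ensemble.mean(axis=1, keepdims=True)
            P = X @ X.T / (m - 1)                                  # sample forecast covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
            y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
            return ensemble + K @ (y_pert - H @ ensemble)

        rng = np.random.default_rng(0)
        ens = rng.standard_normal((3, 20)) + np.array([[1.0], [0.0], [-1.0]])
        H = np.array([[1.0, 0.0, 0.0]])
        updated = enkf_analysis(ens, H, y=np.array([2.0]), R=np.array([[0.1]]), rng=rng)
        print("prior mean:", ens.mean(axis=1), "posterior mean:", updated.mean(axis=1))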

  6. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence.

    PubMed

    Kelly, David; Majda, Andrew J; Tong, Xin T

    2015-08-25

    The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature.

  7. Hybrid Numerical-Analytical Scheme for Calculating Elastic Wave Diffraction in Locally Inhomogeneous Waveguides

    NASA Astrophysics Data System (ADS)

    Glushkov, E. V.; Glushkova, N. V.; Evdokimov, A. A.

    2018-01-01

    Numerical simulation of traveling wave excitation, propagation, and diffraction in structures with local inhomogeneities (obstacles) is computationally expensive due to the need for mesh-based approximation of extended domains with the rigorous account for the radiation conditions at infinity. Therefore, hybrid numerical-analytic approaches are being developed based on the conjugation of a numerical solution in a local vicinity of the obstacle and/or source with an explicit analytic representation in the remaining semi-infinite external domain. However, in standard finite-element software, such a coupling with the external field, moreover, in the case of multimode expansion, is generally not provided. This work proposes a hybrid computational scheme that allows realization of such a conjugation using a standard software. The latter is used to construct a set of numerical solutions used as the basis for the sought solution in the local internal domain. The unknown expansion coefficients on this basis and on normal modes in the semi-infinite external domain are then determined from the conditions of displacement and stress continuity at the boundary between the two domains. We describe the implementation of this approach in the scalar and vector cases. To evaluate the reliability of the results and the efficiency of the algorithm, we compare it with a semianalytic solution to the problem of traveling wave diffraction by a horizontal obstacle, as well as with a finite-element solution obtained for a limited domain artificially restricted using absorbing boundaries. As an example, we consider the incidence of a fundamental antisymmetric Lamb wave onto surface and partially submerged elastic obstacles. It is noted that the proposed hybrid scheme can also be used to determine the eigenfrequencies and eigenforms of resonance scattering, as well as the characteristics of traveling waves in embedded waveguides.

  8. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  9. Coarse-Grained Lattice Model Simulations of Sequence-Structure Fitness of a Ribosome-Inactivating Protein

    DTIC Science & Technology

    2007-11-05

    limits of what is considered practical when applying all-atom molecular-dynamics simulation methods. Lattice models provide computationally robust... of expectation values from the density of states. All-atom molecular-dynamics simulations provide the most rigorous sampling method to generate con... molecular-dynamics simulations of protein folding, reported studies of computing a heat capacity or other calorimetric observables have been limited to

  10. Biochemical and hemodynamic changes in normal subjects during acute and rigorous bed rest and ambulation

    NASA Astrophysics Data System (ADS)

    Zorbas, Yan G.; Kakurin, Vassily J.; Afonin, Victor B.; Yarullin, Vladimir L.

    2002-06-01

    Rigorous bed rest (RBR) induces significant biochemical and circulatory changes. However, little is known about acute rigorous bed rest (ARBR). Measuring biochemical and circulatory variables during ARBR and RBR, the aim of this study was to establish the significance of the ARBR effect. Studies were done during 3 days of a pre-bed rest (BR) period and during 7 days of the ARBR and RBR period. Thirty normal male individuals aged 24.1±6.3 years were chosen as subjects. They were divided equally into three groups: 10 subjects placed under active control conditions served as unrestricted ambulatory control subjects (UACS), 10 subjects submitted to an acute rigorous bed rest served as acute rigorous bed rested subjects (ARBRS), and 10 subjects submitted to a rigorous bed rest served as rigorous bed rested subjects (RBRS). The UACS were maintained at an average running distance of 9.7 km per day. For the ARBR effect simulation, ARBRS were submitted abruptly to BR for 7 days; they did not have any prior knowledge of the exact date and time when they would be asked to confine to RBR. For the RBR effect simulation, RBRS were subjected to BR for 7 days on a predetermined date and time known to them from the start of the study. Plasma renin activity (PRA), plasma cortisol (PC), plasma aldosterone (PA), plasma and urinary sodium (Na) and potassium (K) levels, heart rate (HR), cardiac output (CO), and arterial blood pressure (ABP) increased significantly, and urinary aldosterone (UA), stroke volume (SV) and plasma volume (PV) decreased significantly (p<0.05) in ARBRS and RBRS as compared with their pre-BR values and the values in UACS. Electrolyte, hormonal and hemodynamic responses were significantly greater (p<0.05) and occurred significantly faster (p<0.05) during ARBR than RBR. Parameters changed insignificantly (p>0.05) in UACS compared with pre-BR control values. It was concluded that the more abruptly muscular activity is restricted in highly active subjects, the greater the hemodynamic and biochemical changes; similar changes are probable in individuals whose muscular activity is abruptly terminated after an accident or sudden illness.

  11. Toward a physics-based rate and state friction law for earthquake nucleation processes in fault zones with granular gouge

    NASA Astrophysics Data System (ADS)

    Ferdowsi, B.; Rubin, A. M.

    2017-12-01

    Numerical simulations of earthquake nucleation rely on constitutive rate and state evolution laws to model earthquake initiation and propagation processes. The response of different state evolution laws to large velocity increases is an important feature of these constitutive relations that can significantly change the style of earthquake nucleation in numerical models. However, currently there is not a rigorous understanding of the physical origins of the response of bare rock or gouge-filled fault zones to large velocity increases. This in turn hinders our ability to design physics-based friction laws that can appropriately describe those responses. We here argue that most fault zones form a granular gouge after an initial shearing phase and that it is the behavior of the gouge layer that controls the fault friction. We perform numerical experiments of a confined sheared granular gouge under a range of confining stresses and driving velocities relevant to fault zones and apply 1-3 order of magnitude velocity steps to explore dynamical behavior of the system from grain- to macro-scales. We compare our numerical observations with experimental data from biaxial double-direct-shear fault gouge experiments under equivalent loading and driving conditions. Our intention is to first investigate the degree to which these numerical experiments, with Hertzian normal and Coulomb friction laws at the grain-grain contact scale and without any time-dependent plasticity, can reproduce experimental fault gouge behavior. We next compare the behavior observed in numerical experiments with predictions of the Dieterich (Aging) and Ruina (Slip) friction laws. Finally, the numerical observations at the grain and meso-scales will be used for designing a rate and state evolution law that takes into account recent advances in rheology of granular systems, including local and non-local effects, for a wide range of shear rates and slow and fast deformation regimes of the fault gouge.
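
    A hedged sketch of the comparison mentioned above: the response of the Dieterich (aging) and Ruina (slip) state-evolution laws to an imposed velocity step, using the standard forms μ = μ0 + a ln(V/V0) + b ln(V0 θ/Dc), with dθ/dt = 1 − Vθ/Dc (aging) or dθ/dt = −(Vθ/Dc) ln(Vθ/Dc) (slip). Parameter values are illustrative, not from the study.

        import numpy as np

        # Rate-and-state friction parameters (illustrative values).
        mu0, a, b, Dc, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6      # Dc [m], velocities [m/s]

        def mu(V, theta):
            return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

        def theta_dot(V, theta, law):
            x = V * theta / Dc
            return 1.0 - x if law == "aging" else -x * np.log(x)   # Dieterich vs Ruina

        def velocity_step(law, V1=1e-6, V2=1e-4, t_step=50.0, t_end=100.0, dt=1e-3):
            theta = Dc / V1                       # steady state at the initial velocity
            out = []
            for t in np.arange(0.0, t_end, dt):
                V = V1 if t < t_step else V2      # imposed two-order-of-magnitude velocity step
                theta += dt * theta_dot(V, theta, law)
                out.append((t, mu(V, theta)))
            return np.array(out)

        for law in ("aging", "slip"):
            traj = velocity_step(law)
            print(law, "final friction coefficient:", traj[-1, 1])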

  12. Television camera as a scientific instrument

    NASA Technical Reports Server (NTRS)

    Smokler, M. I.

    1970-01-01

    A rigorous calibration program, coupled with a sophisticated data-processing program that introduced compensation for system response to correct photometry, geometric linearity, and resolution, converted a television camera into a quantitative measuring instrument. The output data are in the forms of both numeric printout records and photographs.

  13. Components of Students' Grade Expectations for Public Speaking Assignments

    ERIC Educational Resources Information Center

    Larseingue, Matt; Sawyer, Chris R.; Finn, Amber N.

    2012-01-01

    Although previous research has linked students' expected grades to numerous pedagogical variables, this factor has been all but ignored by instructional communication scholars. In the present study, 315 undergraduates were presented with grading scenarios representing differing combinations of course rigor, teacher immediacy, and student…

  14. Snoring and its management.

    PubMed

    Calhoun, Karen H; Templer, Jerry; Patenaude, Bart

    2006-01-01

    There are numerous strategies, devices and procedures available to treat snoring. The surgical procedures have an overall success rate of 60-70%, but this probably decreases over time, especially if there is weight gain. There are no long-term rigorously-designed studies comparing the various procedures for decreasing snoring.

  15. Numerical proof of stability of roll waves in the small-amplitude limit for inclined thin film flow

    NASA Astrophysics Data System (ADS)

    Barker, Blake

    2014-10-01

    We present a rigorous numerical proof based on interval arithmetic computations categorizing the linearized and nonlinear stability of periodic viscous roll waves of the KdV-KS equation modeling weakly unstable flow of a thin fluid film on an incline in the small-amplitude KdV limit. The argument proceeds by verification of a stability condition derived by Bar-Nepomnyashchy and Johnson-Noble-Rodrigues-Zumbrun involving inner products of various elliptic functions arising through the KdV equation. One key point in the analysis is a bootstrap argument balancing the extremely poor sup norm bounds for these functions against the extremely good convergence properties for analytic interpolation in order to obtain a feasible computation time. Another is the way of handling analytic interpolation in several variables by a two-step process carving up the parameter space into manageable pieces for rigorous evaluation. These and other general aspects of the analysis should serve as blueprints for more general analyses of spectral stability.
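
    A toy illustration of the interval-arithmetic ingredient (not the actual KdV-KS computation): each operation returns an interval guaranteed to contain the true value, so chained evaluations give rigorous enclosures. A genuinely rigorous implementation also needs outward (directed) rounding, which is omitted here.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Interval:
            lo: float
            hi: float
            def __add__(self, other):
                # Rigorous versions round lo down and hi up; plain float arithmetic used here.
                return Interval(self.lo + other.lo, self.hi + other.hi)
            def __mul__(self, other):
                p = (self.lo * other.lo, self.lo * other.hi,
                     self.hi * other.lo, self.hi * other.hi)
                return Interval(min(p), max(p))

        # Enclose the inner-product-like quantity x*y + z*z for x, y, z known only as intervals.
        x, y, z = Interval(0.9, 1.1), Interval(-0.2, 0.1), Interval(2.0, 2.05)
        result = x * y + z * z
        print(result)     # every admissible value of x*y + z^2 lies in this interval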

  16. Resonant tunneling assisted propagation and amplification of plasmons in high electron mobility transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhardwaj, Shubhendu; Sensale-Rodriguez, Berardi; Xing, Huili Grace

    A rigorous theoretical and computational model is developed for the plasma-wave propagation in high electron mobility transistor structures with electron injection from a resonant tunneling diode at the gate. We discuss the conditions in which low-loss and sustainable plasmon modes can be supported in such structures. The developed analytical model is used to derive the dispersion relation for these plasmon modes. A non-linear full-wave hydrodynamic numerical solver is also developed using a finite difference time domain algorithm. The developed analytical solutions are validated via the numerical solution. We also verify previous observations that were based on a simplified transmission line model. It is shown that at high levels of negative differential conductance, plasmon amplification is indeed possible. The proposed rigorous models can enable accurate design and optimization of practical resonant tunnel diode-based plasma-wave devices for terahertz sources, mixers, and detectors, by allowing a precise representation of their coupling when integrated with other electromagnetic structures.

  17. Thermo-electrochemical evaluation of lithium-ion batteries for space applications

    NASA Astrophysics Data System (ADS)

    Walker, W.; Yayathi, S.; Shaw, J.; Ardebili, H.

    2015-12-01

    Advanced energy storage and power management systems designed through rigorous materials selection, testing and analysis processes are essential to ensuring mission longevity and success for space exploration applications. Comprehensive testing of Boston Power Swing 5300 lithium-ion (Li-ion) cells utilized by the National Aeronautics and Space Administration (NASA) to power humanoid robot Robonaut 2 (R2) is conducted to support the development of a test-correlated Thermal Desktop (TD) Systems Improved Numerical Differencing Analyzer (SINDA) (TD-S) model for evaluation of power system thermal performance. Temperature, current, working voltage and open circuit voltage measurements are taken during nominal charge-discharge operations to provide necessary characterization of the Swing 5300 cells for TD-S model correlation. Building from test data, embedded FORTRAN statements directly simulate Ohmic heat generation of the cells during charge-discharge as a function of surrounding temperature, local cell temperature and state of charge. The unique capability gained by using TD-S is demonstrated by simulating R2 battery thermal performance in example orbital environments for hypothetical extra-vehicular activities (EVA) exterior to a small satellite. Results provide necessary demonstration of this TD-S technique for thermo-electrochemical analysis of Li-ion cells operating in space environments.

  18. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  19. Nonlinear asymmetric tearing mode evolution in cylindrical geometry

    DOE PAGES

    Teng, Qian; Ferraro, N.; Gates, David A.; ...

    2016-10-27

    The growth of a tearing mode is described by reduced MHD equations. For a cylindrical equilibrium, tearing mode growth is governed by the modified Rutherford equation, i.e., the nonlinear Δ'(w). For a low beta plasma without external heating, Δ'(w) can be approximately described by two terms, Δ'ql(w) and Δ'A(w). In this work, we present a simple method to calculate the quasilinear stability index Δ'ql rigorously, for poloidal mode number m ≥ 2. Δ'ql is derived by solving the outer equation through the Frobenius method. Δ'ql is composed of four terms proportional to a constant Δ'0, w, w ln w, and w². Δ'A is proportional to the asymmetry of the island, which is roughly proportional to w. The sum of Δ'ql and Δ'A is consistent with the more accurate expression calculated perturbatively. The reduced MHD equations are also solved numerically with the 3D MHD code M3D-C1. The analytical expression of the perturbed helical flux and the saturated island width agree with the simulation results. Lastly, it is also confirmed by the simulation that Δ'A has to be considered in calculating island saturation.
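
    A sketch of how island saturation follows from a Δ'(w) of the stated four-term form: integrate dw/dt ∝ Δ'(w) (the modified Rutherford form) until Δ'(w) changes sign. The coefficients and the rate constant below are illustrative assumptions, not values from the paper.

        import numpy as np

        # Illustrative coefficients for Delta'(w) = d0 + c1*w + c2*w*ln(w) + c3*w**2.
        d0, c1, c2, c3 = 2.0, -10.0, -5.0, -20.0

        def delta_prime(w):
            return d0 + c1 * w + c2 * w * np.log(w) + c3 * w**2

        def rutherford_saturation(w0=1e-3, k=1.0, dt=1e-4, t_end=20.0):
            """Integrate dw/dt = k * Delta'(w) (modified Rutherford form) to saturation."""
            w = w0
            for _ in range(int(t_end / dt)):
                w += dt * k * delta_prime(w)
                w = max(w, 1e-9)                  # keep the island width positive
            return w

        w_sat = rutherford_saturation()
        print("saturated width:", w_sat, "Delta'(w_sat):", delta_prime(w_sat))  # ~0 at saturation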

  20. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.; Anitescu, Mihai

    2009-03-01

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  1. Space radiator simulation manual for computer code

    NASA Technical Reports Server (NTRS)

    Black, W. Z.; Wulff, W.

    1972-01-01

    A computer program that simulates the performance of a space radiator is presented. The program basically consists of a rigorous analysis which analyzes a symmetrical fin panel and an approximate analysis that predicts system characteristics for cases of non-symmetrical operation. The rigorous analysis accounts for both transient and steady state performance including aerodynamic and radiant heating of the radiator system. The approximate analysis considers only steady state operation with no aerodynamic heating. A description of the radiator system and instructions to the user for program operation are included. The input required for the execution of all program options is described. Several examples of program output are contained in this section; the sample output includes the radiator performance during ascent, reentry, and orbit.

  2. Derivation of phase functions from multiply scattered sunlight transmitted through a hazy atmosphere

    NASA Technical Reports Server (NTRS)

    Weinman, J. A.; Twitty, J. T.; Browning, S. R.; Herman, B. M.

    1975-01-01

    The intensity of sunlight multiply scattered in model atmospheres is derived from the equation of radiative transfer by an analytical small-angle approximation. The approximate analytical solutions are compared to rigorous numerical solutions of the same problem. Results obtained from an aerosol-laden model atmosphere are presented. Agreement between the rigorous and the approximate solutions is found to be within a few per cent. The analytical solution to the problem which considers an aerosol-laden atmosphere is then inverted to yield a phase function which describes a single scattering event at small angles. The effect of noisy data on the derived phase function is discussed.

  3. A Computational Model of Coupled Multiphase Flow and Geomechanics to Study Fault Slip and Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Juanes, R.; Jha, B.

    2014-12-01

    The coupling between subsurface flow and geomechanical deformation is critical in the assessment of the environmental impacts of groundwater use, underground liquid waste disposal, geologic storage of carbon dioxide, and exploitation of shale gas reserves. In particular, seismicity induced by fluid injection and withdrawal has emerged as a central element of the scientific discussion around subsurface technologies that tap into water and energy resources. Here we present a new computational approach to model coupled multiphase flow and geomechanics of faulted reservoirs. We represent faults as surfaces embedded in a three-dimensional medium by using zero-thickness interface elements to accurately model fault slip under dynamically evolving fluid pressure and fault strength. We incorporate the effect of fluid pressures from multiphase flow in the mechanical stability of faults and employ a rigorous formulation of nonlinear multiphase geomechanics that is capable of handling strong capillary effects. We develop a numerical simulation tool by coupling a multiphase flow simulator with a mechanics simulator, using the unconditionally stable fixed-stress scheme for the sequential solution of two-way coupling between flow and geomechanics. We validate our modeling approach using several synthetic, but realistic, test cases that illustrate the onset and evolution of earthquakes from fluid injection and withdrawal. We also present the application of the coupled flow-geomechanics simulation technology to the post mortem analysis of the Mw=5.1, May 2011 Lorca earthquake in south-east Spain, and assess the potential that the earthquake was induced by groundwater extraction.

  4. GOCE gravity field simulation based on actual mission scenario

    NASA Astrophysics Data System (ADS)

    Pail, R.; Goiginger, H.; Mayrhofer, R.; Höck, E.; Schuh, W.-D.; Brockmann, J. M.; Krasbutter, I.; Fecher, T.; Gruber, T.

    2009-04-01

    In the framework of the ESA-funded project "GOCE High-level Processing Facility", an operational hardware and software system for the scientific processing (Level 1B to Level 2) of GOCE data has been set up by the European GOCE Gravity Consortium EGG-C. One key component of this software system is the processing of a spherical harmonic Earth's gravity field model and the corresponding full variance-covariance matrix from the precise GOCE orbit and calibrated and corrected satellite gravity gradiometry (SGG) data. In the framework of the time-wise approach, a combination of several processing strategies has been set up for the optimum exploitation of the information content of the GOCE data: the Quick-Look Gravity Field Analysis is applied to derive a fast diagnosis of the GOCE system performance and to monitor the quality of the input data, while in the Core Solver processing a rigorous high-precision solution of the very large normal equation systems is derived by applying parallel processing techniques on a PC cluster. Before the availability of real GOCE data, the expected GOCE gravity field performance is evaluated by means of a realistic numerical case study based on the actual GOCE orbit and mission scenario and on simulation data stemming from the most recent ESA end-to-end simulation. Results from this simulation, as well as recently developed features of the software system, are presented. Additionally, some aspects of data combination with complementary data sources are addressed.

  5. A 2-D numerical simulation study on longitudinal solute transport and longitudinal dispersion coefficient

    NASA Astrophysics Data System (ADS)

    Zhang, Wei

    2011-07-01

    The longitudinal dispersion coefficient, DL, is a fundamental parameter of longitudinal solute transport models: the advection-dispersion (AD) model and various deadzone models. Since DL cannot be measured directly, and since its calibration using tracer test data is quite expensive and not always available, researchers have developed various methods, theoretical or empirical, for estimating DL from more easily available cross-sectional hydraulic measurements (i.e., the transverse velocity profile, etc.). However, for known and unknown reasons, DL cannot be satisfactorily predicted using these theoretical/empirical formulae. Either there is very large prediction error for the theoretical methods, or there is a lack of generality for the empirical formulae. Here, numerical experiments using Mike21, a software package that implements one of the most rigorous two-dimensional hydrodynamic and solute transport equations, for longitudinal solute transport in hypothetical streams, are presented. An analysis of the evolution of simulated solute clouds indicates that the two fundamental assumptions in Fischer's longitudinal transport analysis may not be reasonable. The transverse solute concentration distribution, and hence the longitudinal transport, appears to be controlled by a dimensionless number ε, where Q is the average volumetric flowrate, Dt is a cross-sectional average transverse dispersion coefficient, and W is channel flow width. A simple empirical ε relationship may be established. Analysis and a revision of Fischer's theoretical formula suggest that ε influences the efficiency of transverse mixing and hence has a restraining effect on longitudinal spreading. The findings presented here would improve and expand our understanding of longitudinal solute transport in open channel flow.
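
    For context, one standard way to extract a longitudinal dispersion coefficient from a simulated solute cloud (rather than from the empirical formulae criticized above) is the method of moments, D_L ≈ 0.5·d(σ_x²)/dt. The sketch below applies it to synthetic particle positions; the flow velocity, dispersion coefficient, and sampling times are illustrative assumptions, not values from the study.

```python
import numpy as np

# Minimal sketch: estimate a longitudinal dispersion coefficient D_L from the
# growth rate of the streamwise variance of a solute cloud, D_L ~ 0.5*d(var)/dt.
# All flow parameters below are illustrative placeholders.
def longitudinal_dispersion(times, x_positions_per_time):
    variances = np.array([np.var(x) for x in x_positions_per_time])
    slope, _ = np.polyfit(times, variances, 1)   # linear fit of variance vs time
    return 0.5 * slope

# Synthetic example: particles advected at u = 1 m/s with true D_L = 5 m^2/s.
rng = np.random.default_rng(0)
times = np.linspace(10.0, 100.0, 10)             # sampling times (s)
clouds = [rng.normal(loc=1.0 * t, scale=np.sqrt(2 * 5.0 * t), size=20000)
          for t in times]
print(f"estimated D_L = {longitudinal_dispersion(times, clouds):.2f} m^2/s")
```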

  6. Finite element models of the human shoulder complex: a review of their clinical implications and modelling techniques.

    PubMed

    Zheng, Manxu; Zou, Zhenmin; Bartolo, Paulo Jorge Da Silva; Peach, Chris; Ren, Lei

    2017-02-01

    The human shoulder is a complicated musculoskeletal structure and is a perfect compromise between mobility and stability. The objective of this paper is to provide a thorough review of previous finite element (FE) studies in biomechanics of the human shoulder complex. Those FE studies to investigate shoulder biomechanics have been reviewed according to the physiological and clinical problems addressed: glenohumeral joint stability, rotator cuff tears, joint capsular and labral defects and shoulder arthroplasty. The major findings, limitations, potential clinical applications and modelling techniques of those FE studies are critically discussed. The main challenges faced in order to accurately represent the realistic physiological functions of the shoulder mechanism in FE simulations involve (1) subject-specific representation of the anisotropic nonhomogeneous material properties of the shoulder tissues in both healthy and pathological conditions; (2) definition of boundary and loading conditions based on individualised physiological data; (3) more comprehensive modelling describing the whole shoulder complex including appropriate three-dimensional (3D) representation of all major shoulder hard tissues and soft tissues and their delicate interactions; (4) rigorous in vivo experimental validation of FE simulation results. Fully validated shoulder FE models would greatly enhance our understanding of the aetiology of shoulder disorders, and hence facilitate the development of more efficient clinical diagnoses, non-surgical and surgical treatments, as well as shoulder orthotics and prosthetics. © 2016 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons Ltd.

  7. Horseshoes in a Chaotic System with Only One Stable Equilibrium

    NASA Astrophysics Data System (ADS)

    Huan, Songmei; Li, Qingdu; Yang, Xiao-Song

    To confirm the numerically demonstrated chaotic behavior in a chaotic system with only one stable equilibrium reported by Wang and Chen, we resort to the Poincaré map technique and present a rigorous computer-assisted verification of horseshoe chaos by virtue of topological horseshoe theory.

  8. A composite numerical model for assessing subsurface transport of oily wastes and chemical constituents

    NASA Astrophysics Data System (ADS)

    Panday, S.; Wu, Y. S.; Huyakorn, P. S.; Wade, S. C.; Saleem, Z. A.

    1997-02-01

    Subsurface fate and transport models are utilized to predict concentrations of chemicals leaching from wastes into downgradient receptor wells. The contaminant concentrations in groundwater provide a measure of the risk to human health and the environment. The level of potential risk is currently used by the U.S. Environmental Protection Agency to determine whether management of the wastes should conform to hazardous waste management standards. It is important that the transport and fate of contaminants is simulated realistically. Most models in common use are inappropriate for simulating the migration of wastes containing significant fractions of nonaqueous-phase liquids (NAPLs). The migration of NAPL and its dissolved constituents may not be reliably predicted using conventional aqueous-phase transport simulations. To overcome this deficiency, an efficient and robust regulatory assessment model incorporating multiphase flow and transport in the unsaturated and saturated zones of the subsurface environment has been developed. The proposed composite model takes into account all of the major transport processes including infiltration and ambient flow of NAPL, entrapment of residual NAPL, adsorption, volatilization, degradation, dissolution of chemical constituents, and transport by advection and hydrodynamic dispersion. Conceptually, the subsurface is treated as a composite unsaturated zone-saturated zone system. The composite simulator consists of three major interconnected computational modules representing the following components of the migration pathway: (1) vertical multiphase flow and transport in the unsaturated zone; (2) areal movement of the free-product lens in the saturated zone with vertical equilibrium; and (3) three-dimensional aqueous-phase transport of dissolved chemicals in ambient groundwater. Such a composite model configuration promotes computational efficiency and robustness (desirable for regulatory assessment applications). Two examples are presented to demonstrate the model verification and a site application. Simulation results obtained using the composite modeling approach are compared with a rigorous numerical solution and field observations of crude oil saturations and plume concentrations of total dissolved organic carbon at a spill site in Minnesota, U.S.A. These comparisons demonstrate the ability of the present model to provide realistic depiction of field-scale situations.

  9. Design, development, and application of LANDIS-II, a spatial landscape simulation model with flexible temporal and spatial resolution

    Treesearch

    Robert M. Scheller; James B. Domingo; Brian R. Sturtevant; Jeremy S. Williams; Arnold Rudy; Eric J. Gustafson; David J. Mladenoff

    2007-01-01

    We introduce LANDIS-II, a landscape model designed to simulate forest succession and disturbances. LANDIS-II builds upon and preserves the functionality of previous LANDIS forest landscape simulation models. LANDIS-II is distinguished by the inclusion of variable time steps for different ecological processes; our use of a rigorous development and testing process used...

  10. Mathematical and Numerical Analysis of Model Equations on Interactions of the HIV/AIDS Virus and the Immune System

    NASA Astrophysics Data System (ADS)

    Parumasur, N.; Willie, R.

    2008-09-01

    We consider a simple finite-dimensional HIV/AIDS mathematical model of the interactions of blood cells, the HIV/AIDS virus and the immune system, and examine the consistency of the equations with the real biomedical situation that they model. A better understanding of a cure solution to the illness modeled by the finite-dimensional equations is given. This is accomplished through rigorous mathematical analysis and is reinforced by numerical analysis of models developed for real-life cases.

  11. Invariant Tori in the Secular Motions of the Three-body Planetary Systems

    NASA Astrophysics Data System (ADS)

    Locatelli, Ugo; Giorgilli, Antonio

    We consider the problem of the applicability of the KAM theorem to a realistic problem of three bodies. In the framework of the averaged dynamics over the fast angles for the Sun-Jupiter-Saturn system we can prove the perpetual stability of the orbit. The proof is based on semi-numerical algorithms requiring both explicit algebraic manipulations of series and analytical estimates. The proof is made rigorous by using interval arithmetic in order to control the numerical errors.

  12. A novel coupling of noise reduction algorithms for particle flow simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.

    2016-09-15

    Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as phase separation phenomena. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
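
    A rough sketch of the hybrid idea described above (POD truncation followed by wavelet thresholding of the temporal coefficients) is given below using NumPy and PyWavelets; it is not the authors' WAVinPOD implementation, and the mode count, wavelet family, and threshold are arbitrary assumptions.

```python
import numpy as np
import pywt  # PyWavelets; assumed available

# Sketch of a POD + wavelet-thresholding hybrid: project noisy snapshots onto
# the leading POD modes, soft-threshold the temporal coefficients in a wavelet
# basis, then reconstruct. Mode count, wavelet, and threshold are illustrative.
def pod_wavelet_denoise(snapshots, n_modes=5, wavelet="db4", thresh=0.1):
    # snapshots: (n_space, n_time) array of noisy fields
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    U, s, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes, :]
    filtered_rows = []
    for row in Vt:                                # temporal coefficients per mode
        coeffs = pywt.wavedec(row, wavelet)
        coeffs = [pywt.threshold(c, thresh * np.max(np.abs(c)), mode="soft")
                  for c in coeffs]
        filtered_rows.append(pywt.waverec(coeffs, wavelet)[: row.size])
    return U @ np.diag(s) @ np.array(filtered_rows)

# Example: denoise a synthetic travelling-wave data set.
x = np.linspace(0, 2 * np.pi, 128)[:, None]
t = np.linspace(0, 10, 256)[None, :]
clean = np.sin(x - t)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(clean.shape)
denoised = pod_wavelet_denoise(noisy)
print("rms error noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("rms error denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```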

  13. PROPOSED SIAM PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAILEY, DAVID H.; BORWEIN, JONATHAN M.

    A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫_0^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.
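
    As a small illustration of the kind of computation involved (not one of the specific identities in the note), the snippet below evaluates a Bessel moment ∫_0^∞ t^m K_0(t)^n dt to 50 digits with mpmath; the choice of m, n and of the modified Bessel function K_0 is purely illustrative.

```python
from mpmath import mp, besselk, quad, inf

# Evaluate a Bessel moment integral int_0^inf t^m * K0(t)^n dt to high
# precision; m, n and the use of K0 are illustrative choices, not the
# particular cases discussed in the note.
mp.dps = 50                      # work with 50 significant decimal digits
m, n = 1, 3
moment = quad(lambda t: t**m * besselk(0, t)**n, [0, inf])
print(moment)
```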

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidler, Rolf, E-mail: rsidler@gmail.com; Carcione, José M.; Holliger, Klaus

    We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction and a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method has been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.

  15. A Computer Model for Teaching the Dynamic Behavior of AC Contactors

    ERIC Educational Resources Information Center

    Ruiz, J.-R. R.; Espinosa, A. G.; Romeral, L.

    2010-01-01

    Ac-powered contactors are extensively used in industry in applications such as automatic electrical devices, motor starters, and heaters. In this work, a practical session that allows students to model and simulate the dynamic behavior of ac-powered electromechanical contactors is presented. Simulation is carried out using a rigorous parametric…

  16. A Phenomenological Analysis of Division III Student-Athletes' Transition out of College

    ERIC Educational Resources Information Center

    Covington, Sim Jonathan, Jr.

    2017-01-01

    Intercollegiate athletics is a major segment of numerous college and university communities across America today. Student-athletes participate in strenuous training and competition throughout their college years while managing to balance the rigorous academic curriculum of the higher education environment. This research aims to explore the…

  17. Predicting Observer Training Satisfaction and Certification

    ERIC Educational Resources Information Center

    Bell, Courtney A.; Jones, Nathan D.; Lewis, Jennifer M.; Liu, Shuangshuang

    2013-01-01

    The last decade produced numerous studies that show that students learn more from high-quality teachers than they do from lower quality teachers. If instruction is to improve through the use of more rigorous teacher evaluation systems, the implementation of these systems must provide consistent and interpretable information about which aspects of…

  18. A Practical Guide to Regression Discontinuity

    ERIC Educational Resources Information Center

    Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard

    2012-01-01

    Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…

  19. Randomized Trial of Hyperbaric Oxygen Therapy for Children with Autism

    ERIC Educational Resources Information Center

    Granpeesheh, Doreen; Tarbox, Jonathan; Dixon, Dennis R.; Wilke, Arthur E.; Allen, Michael S.; Bradstreet, James Jeffrey

    2010-01-01

    Autism Spectrum Disorders (ASDs) are characterized by the presence of impaired development in social interaction and communication and the presence of a restricted repertoire of activity and interests. While numerous treatments for ASDs have been proposed, very few have been subjected to rigorous scientific investigation. Hyperbaric oxygen therapy…

  20. How to Teach Hicksian Compensation and Duality Using a Spreadsheet Optimizer

    ERIC Educational Resources Information Center

    Ghosh, Satyajit; Ghosh, Sarah

    2007-01-01

    Principle of duality and numerical calculation of income and substitution effects under Hicksian Compensation are often left out of intermediate microeconomics courses because they require a rigorous calculus based analysis. But these topics are critically important for understanding consumer behavior. In this paper we use excel solver--a…

  1. Testing Intelligently Includes Double-Checking Wechsler IQ Scores

    ERIC Educational Resources Information Center

    Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas

    2011-01-01

    The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…

  2. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
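
    The core of the interaction entropy estimate reduces to a single exponential average over the fluctuations of the sampled interaction energy, -TΔS = kT ln⟨exp(βΔE_int)⟩. The sketch below applies this formula to a synthetic energy trajectory; the energies, sample size, and temperature are placeholders rather than data from the paper.

```python
import numpy as np

# Minimal sketch of an interaction-entropy-style estimate:
# -T*dS = kT * ln< exp(beta * dE_int) >, where dE_int is the fluctuation of
# the protein-ligand interaction energy sampled along an MD trajectory.
# The trajectory below is synthetic and purely illustrative.
def interaction_entropy(e_int, temperature=300.0):
    kB = 0.0019872041                 # Boltzmann constant, kcal/(mol*K)
    kT = kB * temperature
    beta = 1.0 / kT
    dE = e_int - e_int.mean()         # fluctuation of the interaction energy
    return kT * np.log(np.mean(np.exp(beta * dE)))   # -T*dS in kcal/mol

# Synthetic "trajectory" of interaction energies (kcal/mol):
rng = np.random.default_rng(7)
e_traj = -45.0 + 2.0 * rng.standard_normal(20_000)
print(f"-T*dS ~ {interaction_entropy(e_traj):.2f} kcal/mol")
```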

  3. Determination of thermal wave reflection coefficient to better estimate defect depth using pulsed thermography

    NASA Astrophysics Data System (ADS)

    Sirikham, Adisorn; Zhao, Yifan; Mehnen, Jörn

    2017-11-01

    Thermography is a promising method for detecting subsurface defects, but accurate measurement of defect depth is still a big challenge because thermographic signals are typically corrupted by imaging noise and affected by 3D heat conduction. Existing methods based on numerical models are susceptible to signal noise and methods based on analytical models require rigorous assumptions that usually cannot be satisfied in practical applications. This paper presents a new method to improve the measurement accuracy of subsurface defect depth by determining the thermal wave reflection coefficient directly from the observed data, a quantity that is usually assumed to be known in advance. This target is achieved through introducing a new heat transfer model that includes multiple physical parameters to better describe the observed thermal behaviour in pulsed thermographic inspection. Numerical simulations are used to evaluate the performance of the proposed method against four selected state-of-the-art methods. Results show that the accuracy of depth measurement is improved by up to 10% when the noise level is high and the thermal wave reflection coefficient is low. The feasibility of the proposed method on real data is also validated through a case study on characterising flat-bottom holes in carbon fibre reinforced polymer (CFRP) laminates, which have wide application in various sectors of industry.
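
    For orientation, the classical one-dimensional flash-heating model already contains the thermal wave reflection coefficient R explicitly, T(t) = Q/(e√(πt))·[1 + 2·Σ_n R^n exp(-n²L²/(αt))]. The sketch below evaluates this series with purely illustrative material parameters; the paper instead determines R from the observed data within a richer heat transfer model.

```python
import numpy as np

# 1D pulsed-thermography surface-temperature model in which the thermal-wave
# reflection coefficient R at the defect interface is an explicit parameter.
# Parameter values are illustrative placeholders, not fitted values.
def surface_temperature(t, q=1.0, effusivity=1.0, alpha=1e-7, depth=1e-3,
                        R=0.5, n_terms=50):
    """Surface temperature rise after an instantaneous heat pulse.

    t          : time after the flash (s), array
    q          : absorbed energy per unit area (scaled units)
    effusivity : thermal effusivity of the material
    alpha      : thermal diffusivity (m^2/s)
    depth      : defect depth L (m)
    R          : thermal-wave reflection coefficient at the defect
    """
    base = q / (effusivity * np.sqrt(np.pi * t))
    n = np.arange(1, n_terms + 1)[:, None]
    reflections = 2.0 * np.sum(R**n * np.exp(-(n * depth) ** 2 / (alpha * t)),
                               axis=0)
    return base * (1.0 + reflections)

t = np.logspace(-2, 1, 5)            # times after the flash (s)
print(surface_temperature(t))
```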

  4. Subsurface defect detection in first layer of pavement structure and reinforced civil engineering structure by FRP bonding using active infrared thermography

    NASA Astrophysics Data System (ADS)

    Dumoulin, Jean; Ibos, Laurent

    2010-05-01

    In many countries the road network ages while road traffic and maintenance costs increase. Nowadays, thousands of kilometers of roads are surveyed for surface distress each year. These surveys generally rely on pavement surface imaging techniques, mainly in the visible spectrum, coupled with visual inspection or image-processing detection of emergent distresses. Nevertheless, optimisation of maintenance works and costs requires an early detection of defects within the pavement structure when they are still hidden from the surface. Accordingly, alternative measurement techniques for pavement monitoring are currently under investigation (seismic methods, step frequency radar). On the other hand, strengthening or retrofitting of reinforced concrete structures by externally bonded Fiber Reinforced Polymer (FRP) systems is now a commonly accepted and widespread technique. However, the use of bonding techniques always implies following rigorous installation procedures. To ensure the durability and long-term performance of the FRP reinforcements, conformance checking through an in situ auscultation of the bonded FRP systems is then highly suitable. The quality-control program should involve a set of adequate inspections and tests. Visual inspection and acoustic sounding (hammer tap) are commonly used to detect delaminations (disbonds) but are unable to provide sufficient information about the depth (in the case of multilayered composites) and width of debonded areas. Consequently, rapid and efficient inspection methods are also required. Among the non destructive methods under study, active infrared thermography was investigated for both pavement and civil engineering structures through laboratory experiments and numerical simulations, because of its ability to also be used in the field. Pulse Thermography (PT), Pulse Phase Thermography (PPT) and Principal Component Thermography (PCT) approaches have been tested on pavement samples and on CFRP bonded to concrete samples in the laboratory. In parallel, numerical simulations have been used to generate a set of time sequences of thermal maps for simulated samples with and without subsurface defects. Using this set of experimental and simulated data, different approaches (thermal contrast, FFT analysis, polynomial interpolation, singular value decomposition…) for defect location have been studied and compared. Defect depth retrieval was also studied on such data using different thermal models coupled to a direct or an inverse approach. Trials were conducted with both uncooled and cooled infrared cameras with different measurement performances. Results obtained will be discussed and analysed in the paper we plan to present. Finally, combining numerical simulations and experiments allows us to discuss the influence of the sensitivity of the infrared camera used to detect subsurface defects.

  5. Development and application of a standardized flow measurement uncertainty analysis framework to various low-head short-converging intake types across the United States federal hydropower fleet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Brennan T

    2015-01-01

    Turbine discharges at low-head short converging intakes are difficult to measure accurately. The proximity of the measurement section to the intake entrance admits large uncertainties related to asymmetry of the velocity profile, swirl, and turbulence. Existing turbine performance codes [10, 24] do not address this special case and published literature is largely silent on rigorous evaluation of uncertainties associated with this measurement context. The American Society of Mechanical Engineers (ASME) Committee investigated the use of Acoustic transit time (ATT), Acoustic scintillation (AS), and Current meter (CM) in a short converging intake at the Kootenay Canal Generating Station in 2009. Based on their findings, a standardized uncertainty analysis (UA) framework for the velocity-area method (specifically for CM measurements) is presented in this paper, given that CM is still the most fundamental and common type of measurement system. Typical sources of systematic and random errors associated with CM measurements are investigated, and the major sources of uncertainty associated with turbulence and velocity fluctuations, the numerical velocity integration technique (bi-cubic spline), and the number and placement of current meters are considered for evaluation. Since the velocity measurements in a short converging intake are associated with complex nonlinear and time varying uncertainties (e.g., Reynolds stress in fluid dynamics), simply applying the law of propagation of uncertainty is known to overestimate the measurement variance while the Monte Carlo method does not. Therefore, a pseudo-Monte Carlo simulation method (the random flow generation technique [8]), which was initially developed to establish upstream or initial conditions in Large-Eddy Simulation (LES) and Direct Numerical Simulation (DNS), is used to statistically determine uncertainties associated with turbulence and velocity fluctuations. This technique is then combined with a bi-cubic spline interpolation method, which converts point velocities into a continuous velocity distribution over the measurement domain. Subsequently, the number and placement of current meters are simulated to investigate the accuracy of the estimated flow rates using the numerical velocity-area integration method outlined in ISO 3354 [12]. The authors herein consider that statistics on generated flow rates processed with bi-cubic interpolation and sensor simulations are the combined uncertainties, which already account for the effects of all three uncertainty sources. A preliminary analysis based on the current meter data obtained through an upgrade acceptance test of a single unit located in a mainstem plant has been presented.

  6. A domain-specific design architecture for composite material design and aircraft part redesign

    NASA Technical Reports Server (NTRS)

    Punch, W. F., III; Keller, K. J.; Bond, W.; Sticklen, J.

    1992-01-01

    Advanced composites have been targeted as a 'leapfrog' technology that would provide a unique global competitive position for U.S. industry. Composites are unique in the requirements for an integrated approach to designing, manufacturing, and marketing of products developed utilizing the new materials of construction. Numerous studies extending across the entire economic spectrum of the United States from aerospace to military to durable goods have identified composites as a 'key' technology. In general there have been two approaches to composite construction: building models of a given composite material and then determining its characteristics via numerical simulation and empirical testing; and experience-directed construction of fabrication plans for building composites with given properties. The first route sets a goal to capture basic understanding of a device (the composite) by use of a rigorous mathematical model; the second attempts to capture the expertise about the process of fabricating a composite (to date) at a surface level typically expressed in a rule-based system. From an AI perspective, these two research lines are attacking distinctly different problems, and both tracks have current limitations. The mathematical modeling approach has yielded a wealth of data but a large number of simplifying assumptions are needed to make numerical simulation tractable. Likewise, although surface-level expertise about how to build a particular composite may yield important results, recent trends in the KBS area are towards augmenting surface-level problem solving with deeper-level knowledge. Many of the relative advantages of composites, e.g., the strength-to-weight ratio, are most prominent when the entire component is designed as a unitary piece. The bottleneck in undertaking such unitary design lies in the difficulty of the re-design task. Designing the fabrication protocols for a complex-shaped, thick-section composite is currently very difficult. It is in fact this difficulty that our research will address.

  7. Rigorous Numerics for ill-posed PDEs: Periodic Orbits in the Boussinesq Equation

    NASA Astrophysics Data System (ADS)

    Castelli, Roberto; Gameiro, Marcio; Lessard, Jean-Philippe

    2018-04-01

    In this paper, we develop computer-assisted techniques for the analysis of periodic orbits of ill-posed partial differential equations. As a case study, our proposed method is applied to the Boussinesq equation, which has been investigated extensively because of its role in the theory of shallow water waves. The idea is to use the symmetry of the solutions and a Newton-Kantorovich type argument (the radii polynomial approach) to obtain rigorous proofs of existence of the periodic orbits in a weighted ℓ1 Banach space of space-time Fourier coefficients with exponential decay. We present several computer-assisted proofs of the existence of periodic orbits at different parameter values.

  8. A Rigorous Framework for Optimization of Expensive Functions by Surrogates

    NASA Technical Reports Server (NTRS)

    Booker, Andrew J.; Dennis, J. E., Jr.; Frank, Paul D.; Serafini, David B.; Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which design application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is to obtain convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.
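
    A toy version of the surrogate-management idea (fit a cheap model to the expensive evaluations done so far, optimize the model, verify the candidate on the true function, and repeat) is sketched below in one dimension with a quadratic surrogate; the convergence safeguards of the actual framework are omitted and the test function is invented.

```python
import numpy as np

# One-dimensional sketch of surrogate-managed optimization: fit a cheap
# quadratic surrogate to the evaluated points, step to the surrogate's
# minimizer (clipped to bounds), evaluate the expensive function there, repeat.
def expensive(x):                     # stand-in for an expensive simulation
    return (x - 1.3) ** 2 + 0.1 * np.sin(8 * x)

lo, hi = -2.0, 3.0
xs = list(np.linspace(lo, hi, 3))     # initial design points
ys = [expensive(x) for x in xs]

for it in range(8):
    a, b, c = np.polyfit(xs, ys, 2)   # surrogate y = a*x^2 + b*x + c
    x_new = np.clip(-b / (2 * a) if a > 0 else lo, lo, hi)
    xs.append(float(x_new))           # verify the candidate on the true function
    ys.append(expensive(x_new))

best_y, best_x = min(zip(ys, xs))
print(f"best point found: x = {best_x:.3f}, f = {best_y:.4f}")
```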

  9. Optical properties of electrohydrodynamic convection patterns: rigorous and approximate methods.

    PubMed

    Bohley, Christian; Heuer, Jana; Stannarius, Ralf

    2005-12-01

    We analyze the optical behavior of two-dimensionally periodic structures that occur in electrohydrodynamic convection (EHC) patterns in nematic sandwich cells. These structures are anisotropic, locally uniaxial, and periodic on the scale of micrometers. For the first time, the optics of these structures is investigated with a rigorous method. The method used for the description of the electromagnetic waves interacting with EHC director patterns is a numerical approach that discretizes directly the Maxwell equations. It works as a space-grid-time-domain method and computes electric and magnetic fields in time steps. This so-called finite-difference-time-domain (FDTD) method is able to generate the fields with arbitrary accuracy. We compare this rigorous method with earlier attempts based on ray-tracing and analytical approximations. Results of optical studies of EHC structures made earlier based on ray-tracing methods are confirmed for thin cells, when the spatial periods of the pattern are sufficiently large. For the treatment of small-scale convection structures, the FDTD method is without alternatives.
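
    To make the space-grid-time-domain idea concrete, here is a minimal one-dimensional, free-space FDTD loop in normalized units; it is only a schematic of the leapfrog update of electric and magnetic fields, whereas the EHC problem treated in the paper is three-dimensional and anisotropic.

```python
import numpy as np

# Minimal 1D FDTD sketch (free space, normalized units): leapfrog update of
# the electric and magnetic fields on a staggered grid with a soft source.
nx, nt = 200, 400
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)
courant = 0.5                          # Courant number (<= 1 for stability)

for step in range(nt):
    Hy += courant * np.diff(Ez)        # update H from the curl of E
    Ez[1:-1] += courant * np.diff(Hy)  # update E from the curl of H
    Ez[nx // 4] += np.exp(-((step - 30) / 10.0) ** 2)   # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(Ez).max())
```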

  10. Investigation of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian A.

    2005-01-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical model. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. Excellent agreement is achieved between the predicted and measured results, thereby quantitatively validating the numerical tool.

  11. Coexistence and local μ-stability of multiple equilibrium points for memristive neural networks with nonmonotonic piecewise linear activation functions and unbounded time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2016-12-01

    In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℝ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. High-Efficiency, Near-Diffraction Limited, Dielectric Metasurface Lenses Based on Crystalline Titanium Dioxide at Visible Wavelengths.

    PubMed

    Liang, Yaoyao; Liu, Hongzhan; Wang, Faqiang; Meng, Hongyun; Guo, Jianping; Li, Jinfeng; Wei, Zhongchao

    2018-04-28

    Metasurfaces are planar optical elements that hold promise for overcoming the limitations of refractive and conventional diffractive optics. Previous metasurfaces have been limited to transparency windows at infrared wavelengths because of significant optical absorption and loss at visible wavelengths. Here we report a polarization-insensitive, high-contrast transmissive metasurface composed of crystalline titanium dioxide pillars in the form of a metalens at the wavelength of 633 nm. The focal spots are as small as 0.54λ_d, which is very close to the optical diffraction limit of 0.5λ_d. The simulation focusing efficiency is up to 88.5%. A rigorous method for metalens design, the phase realization mechanism and the trade-off between high efficiency and small spot size (or large numerical aperture) are discussed. Besides, the metalenses can work well with an imaging point source up to ±15° off axis. The proposed design is relatively systematic and can be applied to various applications such as visible imaging, ranging and sensing systems.
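
    For reference, the target phase profile that a metalens of this kind must approximate with its pillar layout is the standard hyperbolic profile φ(r) = (2π/λ)(f - √(r² + f²)). The sketch below evaluates it modulo 2π for illustrative wavelength and focal-length values, which are not the actual device parameters.

```python
import numpy as np

# Standard hyperbolic metalens phase profile that focuses a normally incident
# plane wave to a spot at focal length f; wavelength and geometry values are
# illustrative, not the parameters of the device in the paper.
def metalens_phase(r, wavelength=633e-9, focal_length=100e-6):
    phase = (2 * np.pi / wavelength) * (focal_length
                                        - np.sqrt(r**2 + focal_length**2))
    return np.mod(phase, 2 * np.pi)

r = np.linspace(0, 50e-6, 6)           # radial positions across the lens (m)
print(np.round(metalens_phase(r), 3))  # required phase (rad) at each radius
```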

  13. Bunch radiation from a semi-infinite waveguide with dielectric filling inside a waveguide with larger radius

    NASA Astrophysics Data System (ADS)

    Galyamin, S. N.; Tyukhtin, A. V.; Vorobev, V. V.; Aryshev, A.

    2018-02-01

    We consider a point charge and a Gaussian bunch of charged particles moving along the axis of a circular perfectly conducting pipe with uniform dielectric filling and an open end. It is supposed that this semi-infinite waveguide is located in a collinear infinite vacuum pipe with perfectly conducting walls and a larger diameter. We deal with two cases corresponding to the open end of the inner waveguide with and without a flange. Radiation produced by a charge or bunch flying from the dielectric part into the wide vacuum part is analyzed. We use a modified residue-calculus technique and construct a rigorous analytical theory describing the scattered field in each sub-area of the structure. Cherenkov radiation generated in the dielectric waveguide and penetrating into the vacuum regions of the structure is of main interest throughout the present paper. We show that this part of the radiation can be easily analyzed using the presented formalism. We also perform numerical simulations in the CST PS code and verify the analytical results.

  14. Theoretical study of surface plasmon resonance sensors based on 2D bimetallic alloy grating

    NASA Astrophysics Data System (ADS)

    Dhibi, Abdelhak; Khemiri, Mehdi; Oumezzine, Mohamed

    2016-11-01

    A surface plasmon resonance (SPR) sensor based on a 2D alloy grating with high performance is proposed. The grating consists of homogeneous alloys of formula MxAg1-x, where M is gold, copper, platinum or palladium. Compared to SPR sensors based on a pure metal, the sensor based on angular interrogation with silver exhibits a sharper (i.e. larger depth-to-width ratio) reflectivity dip, which provides high detection accuracy, whereas the sensor based on gold exhibits the broadest dips and the highest sensitivity. The detection accuracy of an SPR sensor based on a metal alloy is enhanced by increasing the silver composition. In addition, a silver composition of around 0.8 improves the sensitivity and the quality relative to the pure-metal SPR sensor. Numerical simulations based on rigorous coupled wave analysis (RCWA) show that the sensor based on a metal alloy not only has a high sensitivity and a high detection accuracy, but also exhibits good linearity and good quality.
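
    As a simplified stand-in for the rigorous coupled wave analysis used in the paper (a full RCWA code is considerably more involved), the sketch below computes the p-polarized reflectivity of a prism/metal/analyte Kretschmann stack from the three-layer Fresnel formula, which is enough to reproduce an SPR reflectivity dip whose position, depth, and width are the quantities of interest; the optical constants are rough illustrative values.

```python
import numpy as np

# Three-layer Fresnel (Kretschmann-style) SPR reflectivity, used here only to
# illustrate how an SPR dip is obtained numerically; this is not RCWA and the
# optical constants are rough illustrative values near 633 nm.
def spr_reflectivity(theta_deg, wavelength=633e-9, n_prism=1.515,
                     eps_metal=-18 + 0.5j, d_metal=50e-9, n_sample=1.33):
    theta = np.radians(theta_deg)
    k0 = 2 * np.pi / wavelength
    kx = k0 * n_prism * np.sin(theta)            # in-plane wavevector
    eps = [n_prism**2, eps_metal, n_sample**2]
    kz = [np.sqrt(e * k0**2 - kx**2 + 0j) for e in eps]
    # p-polarization Fresnel coefficients at the two interfaces
    r12 = (kz[0] * eps[1] - kz[1] * eps[0]) / (kz[0] * eps[1] + kz[1] * eps[0])
    r23 = (kz[1] * eps[2] - kz[2] * eps[1]) / (kz[1] * eps[2] + kz[2] * eps[1])
    phase = np.exp(2j * kz[1] * d_metal)
    r = (r12 + r23 * phase) / (1 + r12 * r23 * phase)
    return np.abs(r) ** 2

angles = np.linspace(60, 80, 201)
R = spr_reflectivity(angles)
print(f"resonance dip near {angles[np.argmin(R)]:.1f} deg, R_min = {R.min():.3f}")
```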

  15. Optical analysis of nanoparticles via enhanced backscattering facilitated by 3-D photonic nanojets

    NASA Astrophysics Data System (ADS)

    Li, Xu; Chen, Zhigang; Taflove, Allen; Backman, Vadim

    2005-01-01

    We report the phenomenon of ultra-enhanced backscattering of visible light by nanoparticles facilitated by the 3-D photonic nanojet, a sub-diffraction light beam appearing at the shadow side of a plane-wave-illuminated dielectric microsphere. Our rigorous numerical simulations show that the backscattering intensity of nanoparticles can be enhanced by up to eight orders of magnitude when they are located in the nanojet. As a result, the enhanced backscattering from a nanoparticle with diameter on the order of 10 nm is well above the background signal generated by the dielectric microsphere itself. We also report that nanojet-enhanced backscattering is extremely sensitive to the size of the nanoparticle, permitting in principle resolving sub-nanometer size differences using visible light. Finally, we show how the position of a nanoparticle could be determined with subdiffractional accuracy by recording the angular distribution of the backscattered light. These properties of photonic nanojets promise to make this phenomenon a useful tool for optically detecting, differentiating, and sorting nanoparticles.

  16. Periodic waves in fiber Bragg gratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, K. W.; Merhasin, Ilya M.; Malomed, Boris A.

    2008-02-15

    We construct two families of exact periodic solutions to the standard model of a fiber Bragg grating (FBG) with Kerr nonlinearity. The solutions are named "sn" and "cn" waves, according to the elliptic functions used in their analytical representation. The sn wave exists only inside the FBG's spectral bandgap, while waves of the cn type may only exist at negative frequencies (ω < 0), both inside and outside the bandgap. In the long-wave limit, the sn and cn families recover, respectively, the ordinary gap solitons, and (unstable) antidark and dark solitons. Stability of the periodic solutions is checked by direct numerical simulations and, in the case of the sn family, also through the calculation of instability growth rates for small perturbations. Although, rigorously speaking, all periodic solutions are unstable, a subfamily of practically stable sn waves, with a sufficiently large spatial period and ω > 0, is identified. However, the sn waves with ω < 0, as well as all cn solutions, are strongly unstable.

  17. Optical proximity correction for anamorphic extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas

    2017-10-01

    The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.

  18. Time reversibility from visibility graphs of nonstationary processes

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Flanagan, Ryan

    2015-08-01

    Visibility algorithms are a family of methods to map time series into networks, with the aim of describing the structure of time series and their underlying dynamical properties in graph-theoretical terms. Here we explore some properties of both natural and horizontal visibility graphs associated with several nonstationary processes, and we pay particular attention to their capacity to assess time irreversibility. Nonstationary signals are (infinitely) irreversible by definition (independently of whether the process is Markovian or producing entropy at a positive rate), and thus the link between entropy production and time series irreversibility has only been explored in nonequilibrium stationary states. Here we show that the visibility formalism naturally induces a new working definition of time irreversibility, which allows us to quantify several degrees of irreversibility for stationary and nonstationary series, yielding finite values that can be used to efficiently assess the presence of memory and off-equilibrium dynamics in nonstationary processes without the need to differentiate or detrend them. We provide rigorous results complemented by extensive numerical simulations on several classes of stochastic processes.
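
    For concreteness, the natural visibility mapping can be implemented in a few lines: sample i "sees" sample j if every intermediate sample lies strictly below the straight line joining them. The brute-force sketch below builds the edge set for a short random walk, a nonstationary process of the kind studied in the paper.

```python
import numpy as np

# Natural visibility graph: nodes are time samples, and (i, j) is an edge if
# every intermediate sample lies strictly below the line joining them.
# O(N^2) brute force, fine for short series.
def visibility_graph(series):
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

# Example on a short random walk (a nonstationary process).
rng = np.random.default_rng(3)
walk = np.cumsum(rng.standard_normal(50))
g = visibility_graph(walk)
print(f"{len(g)} edges; mean degree = {2 * len(g) / len(walk):.2f}")
```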

  19. Physical-geometric optics method for large size faceted particles.

    PubMed

    Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong

    2017-10-02

    A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and subsequently yields analytical formulas that are effective and computationally efficient for absorbing scattering particles. A bundle of rays incident on a certain facet can be traced as a single beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics is used to split the original beam into several sub-beams so that each sub-beam is incident only on an individual facet. The new beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method can be generalized to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared to their counterparts from two other methods, including a numerically rigorous method.

  20. Fermionic topological quantum states as tensor networks

    NASA Astrophysics Data System (ADS)

    Wille, C.; Buerschaper, O.; Eisert, J.

    2017-06-01

    Tensor network states, and in particular projected entangled pair states, play an important role in the description of strongly correlated quantum lattice systems. They do not only serve as variational states in numerical simulation methods, but also provide a framework for classifying phases of quantum matter and capture notions of topological order in a stringent and rigorous language. The rapid development in this field for spin models and bosonic systems has not yet been mirrored by an analogous development for fermionic models. In this work, we introduce a tensor network formalism capable of capturing notions of topological order for quantum systems with fermionic components. At the heart of the formalism are axioms of fermionic matrix-product operator injectivity, stable under concatenation. Building upon that, we formulate a Grassmann number tensor network ansatz for the ground state of fermionic twisted quantum double models. A specific focus is put on the paradigmatic example of the fermionic toric code. This work shows that the program of describing topologically ordered systems using tensor networks carries over to fermionic models.

  1. Free energy computations by minimization of Kullback-Leibler divergence: An efficient adaptive biasing potential method for sparse representations

    NASA Astrophysics Data System (ADS)

    Bilionis, I.; Koutsourelakis, P. S.

    2012-05-01

    The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.

  2. Random sequential adsorption of straight rigid rods on a simple cubic lattice

    NASA Astrophysics Data System (ADS)

    García, G. D.; Sanchez-Varretti, F. O.; Centres, P. M.; Ramirez-Pastor, A. J.

    2015-10-01

    Random sequential adsorption of straight rigid rods of length k (k-mers) on a simple cubic lattice has been studied by numerical simulations and finite-size scaling analysis. The k-mers were irreversibly and isotropically deposited into the lattice. The calculations were performed by using a new theoretical scheme, whose accuracy was verified by comparison with rigorous analytical data. The results, obtained for k ranging from 2 to 64, revealed that (i) the jamming coverage for dimers (k = 2) is θj = 0.918388(16). Our result corrects the previously reported value of θj = 0.799(2) (Tarasevich and Cherkasova, 2007); (ii) θj is a decreasing function of the k-mer size, with θj(∞) = 0.4045(19) being the limiting coverage for large k; and (iii) the ratio between percolation threshold and jamming coverage shows a non-universal behavior, monotonically decreasing to zero with increasing k.
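
    A bare-bones version of the deposition process described above (not the authors' theoretical scheme or their jamming criterion) is sketched below: k-mers are dropped at random positions and orientations on a periodic simple cubic lattice until placements keep failing, and the resulting coverage is reported; the lattice size and stopping rule are crude illustrative choices.

```python
import numpy as np

# Random sequential adsorption of straight k-mers on a periodic simple cubic
# lattice. Placements are attempted at random positions/orientations; the run
# stops after a long streak of failed attempts (a crude proxy for jamming).
def rsa_kmers(L=16, k=2, max_failures=20_000, seed=0):
    rng = np.random.default_rng(seed)
    lattice = np.zeros((L, L, L), dtype=bool)
    failures = 0
    while failures < max_failures:
        axis = rng.integers(3)                    # rod orientation (x, y or z)
        origin = rng.integers(0, L, size=3)
        idx = [np.full(k, origin[0]), np.full(k, origin[1]), np.full(k, origin[2])]
        idx[axis] = (origin[axis] + np.arange(k)) % L   # periodic boundaries
        sites = tuple(idx)
        if lattice[sites].any():                  # overlap: deposition rejected
            failures += 1
        else:
            lattice[sites] = True
            failures = 0
    return lattice.mean()                         # fraction of occupied sites

print(f"approximate jamming coverage for dimers: {rsa_kmers():.3f}")
```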

  3. Stability switches of arbitrary high-order consensus in multiagent networks with time delays.

    PubMed

    Yang, Bo

    2013-01-01

    High-order consensus seeking, in which individual high-order dynamic agents share a consistent view of the objectives and the world in a distributed manner, finds its potential broad applications in the field of cooperative control. This paper presents stability switches analysis of arbitrary high-order consensus in multiagent networks with time delays. By employing a frequency domain method, we explicitly derive analytical equations that clarify a rigorous connection between the stability of general high-order consensus and the system parameters such as the network topology, communication time-delays, and feedback gains. Particularly, our results provide a general and a fairly precise notion of how increasing communication time-delay causes the stability switches of consensus. Furthermore, under communication constraints, the stability and robustness problems of consensus algorithms up to third order are discussed in details to illustrate our central results. Numerical examples and simulation results for fourth-order consensus are provided to demonstrate the effectiveness of our theoretical results.

  4. Numerical reconstruction of unknown Robin inclusions inside a heat conductor by a non-iterative method

    NASA Astrophysics Data System (ADS)

    Nakamura, Gen; Wang, Haibing

    2017-05-01

    Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what is the input for the linear sampling method. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. By using a finite sequence of transient inputs over a time interval, we propose a new sampling method over the time interval using a single measurement, which is most likely to be practical.

  5. Validation of the ROMI-RIP rough mill simulator

    Treesearch

    Edward R. Thomas; Urs Buehlmann

    2002-01-01

    The USDA Forest Service's ROMI-RIP rough mill rip-first simulation program is a popular tool for analyzing rough mill conditions, determining more efficient rough mill practices, and finding optimal lumber board cut-up patterns. However, until now, the results generated by ROMI-RIP have not been rigorously compared to those of an actual rough mill. Validating the...

  6. Testing the skill of numerical hydraulic modeling to simulate spatiotemporal flooding patterns in the Logone floodplain, Cameroon

    NASA Astrophysics Data System (ADS)

    Fernández, Alfonso; Najafi, Mohammad Reza; Durand, Michael; Mark, Bryan G.; Moritz, Mark; Jung, Hahn Chul; Neal, Jeffrey; Shastry, Apoorva; Laborde, Sarah; Phang, Sui Chian; Hamilton, Ian M.; Xiao, Ningchuan

    2016-08-01

    Recent innovations in hydraulic modeling have enabled global simulation of rivers, including simulation of their coupled wetlands and floodplains. Accurate simulations of floodplains using these approaches may imply tremendous advances in global hydrologic studies and in biogeochemical cycling. One such innovation is to explicitly treat sub-grid channels within two-dimensional models, given only remotely sensed data in areas with limited data availability. However, predicting inundated area in floodplains using a sub-grid model has not been rigorously validated. In this study, we applied the LISFLOOD-FP hydraulic model using a sub-grid channel parameterization to simulate inundation dynamics on the Logone River floodplain, in northern Cameroon, from 2001 to 2007. Our goal was to determine whether floodplain dynamics could be simulated with sufficient accuracy to understand human and natural contributions to current and future inundation patterns. Model inputs in this data-sparse region include in situ river discharge, satellite-derived rainfall, and the shuttle radar topography mission (SRTM) floodplain elevation. We found that the model accurately simulated total floodplain inundation, with a Pearson correlation coefficient greater than 0.9, and RMSE less than 700 km2, compared to peak inundation greater than 6000 km2. Predicted discharge downstream of the floodplain matched measurements (Nash-Sutcliffe efficiency of 0.81), and indicated that net flow from the channel to the floodplain was modeled accurately. However, the spatial pattern of inundation was not well simulated, apparently due to uncertainties in SRTM elevations. We evaluated model results at 250, 500 and 1000-m spatial resolutions, and found that results are insensitive to spatial resolution. We also compared the model output against results from a run of LISFLOOD-FP in which the sub-grid channel parameterization was disabled, finding that the sub-grid parameterization simulated more realistic dynamics. These results suggest that analysis of global inundation is feasible using a sub-grid model, but that spatial patterns at sub-kilometer resolutions still need to be adequately predicted.

  7. A 3D Numerical Survey of Seismic Waves Inside and Around an Underground Cavity

    NASA Astrophysics Data System (ADS)

    Esterhazy, S.; Schneider, F. M.; Perugia, I.; Bokelmann, G.

    2016-12-01

    Motivated by the need to detect an underground cavity within the procedure of an On-Site-Inspection (OSI) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO), which might be caused by a nuclear explosion/weapon testing, we present our findings of a numerical study on the elastic wave propagation inside and around such an underground cavity. The aim of the CTBTO is to ban all nuclear explosions of any size anywhere, by anyone. Therefore, it is essential to build a powerful strategy to efficiently investigate and detect critical signatures such as gas filled cavities, rubble zones and fracture networks below the surface. One method to investigate the geophysical properties of an underground cavity allowed by the Comprehensive Nuclear-test Ban Treaty is referred to as "resonance seismometry" - a resonance method that uses passive or active seismic techniques, relying on seismic cavity vibrations. This method is in fact not yet entirely determined by the Treaty and there are also only a few experimental examples that have been suitably documented to build a proper scientific groundwork. This motivates investigating this problem on a purely numerical level and simulating these events based on recent advances in the mathematical understanding of the underlying physical phenomena. Our numerical study includes the full elastic wave field in three dimensions. We consider the effects from an incoming plane wave as well as a point source located in the surroundings of the cavity at the surface. While the former can be considered a passive source like a tele-seismic earthquake, the latter represents a man-made explosion or a vibroseis as used in active seismic techniques. For our simulations in 3D we use the discontinuous Galerkin Spectral Element Code SPEED developed by MOX (The Laboratory for Modeling and Scientific Computing, Department of Mathematics) and DICA (Department of Civil and Environmental Engineering) at the Politecnico di Milano. The computations are carried out on the Vienna Scientific Cluster (VSC). The accurate numerical modeling can facilitate the development of proper analysis techniques to detect the remnants of an underground nuclear test, help to set a rigorous scientific base of OSI and contribute to bringing the Treaty into force.

  8. Beyond the Quantitative and Qualitative Divide: Research in Art Education as Border Skirmish.

    ERIC Educational Resources Information Center

    Sullivan, Graeme

    1996-01-01

    Analyzes a research project that utilizes a coherent conceptual model of art education research incorporating the demand for empirical rigor and providing for diverse interpretive frameworks. Briefly profiles the NUD*IST (Non-numerical Unstructured Data Indexing Searching and Theorizing) software system that can organize and retrieve complex…

  9. Approximation Methods for Inverse Problems Governed by Nonlinear Parabolic Systems

    DTIC Science & Technology

    1999-12-17

    We present a rigorous theoretical framework for approximation of nonlinear parabolic systems with delays in the context of inverse least squares... Numerical results demonstrating the convergence are given for a model of dioxin uptake and elimination in a distributed liver model that is a special case of the general theoretical framework.

  10. Integrating Pharmacology Topics in High School Biology and Chemistry Classes Improves Performance

    ERIC Educational Resources Information Center

    Schwartz-Bloom, Rochelle D.; Halpin, Myra J.

    2003-01-01

    Although numerous programs have been developed for kindergarten through grade 12 (K-12) science education, evaluation has been difficult owing to the inherent problems of conducting controlled experiments in the typical classroom. Using a rigorous experimental design, we developed and tested a novel program containing a series of pharmacology modules (e.g.,…

  11. Exploring the Role of Executive Functioning Measures for Social Competence Research

    ERIC Educational Resources Information Center

    Stichter, Janine P.; Christ, Shawn E.; Herzog, Melissa J.; O'Donnell, Rose M.; O'Connor, Karen V.

    2016-01-01

    Numerous research groups have consistently called for increased rigor within the evaluation of social programming to better understand pivotal factors to treatment outcomes. The underwhelming data on the essential features of social competence programs for students with behavior challenges may, in part, be attributed to the manner by which…

  12. Numerical computation of orbits and rigorous verification of existence of snapback repellers.

    PubMed

    Peng, Chen-Chang

    2007-03-01

    In this paper we show how analysis from numerical computation of orbits can be applied to prove the existence of snapback repellers in discrete dynamical systems. That is, we present a computer-assisted method to prove the existence of a snapback repeller of a specific map. The existence of a snapback repeller of a dynamical system implies that it has chaotic behavior [F. R. Marotto, J. Math. Anal. Appl. 63, 199 (1978)]. The method is applied to the logistic map and the discrete predator-prey system.
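
    The paper's computer-assisted proof relies on rigorous (interval) arithmetic; the non-rigorous sketch below only illustrates the underlying construction on the logistic map f(x) = 4x(1-x), where the fixed point x* = 3/4 is repelling (|f'(x*)| = 2). Following preimages of x* back toward x* yields a candidate snapback point whose forward orbit returns to x* in finitely many steps. The map and numerical tolerances are illustrative choices, not the paper's verified computation.

        # Non-rigorous numerical illustration of a snapback-repeller candidate
        # for the logistic map f(x) = 4 x (1 - x).

        def f(x):
            return 4.0 * x * (1.0 - x)

        def df(x):
            return 4.0 - 8.0 * x

        x_star = 0.75                     # fixed point; |f'(x_star)| = 2 > 1, so repelling

        # Backward orbit ending at x_star: start from the other preimage of x_star (0.25),
        # then repeatedly take the preimage branch in [1/2, 1], which contracts toward x_star.
        backward = [0.25]
        for _ in range(25):
            y = backward[-1]
            backward.append(0.5 * (1.0 + (1.0 - y) ** 0.5))

        x0 = backward[-1]                 # candidate snapback point, close to x_star
        print("candidate x0 =", x0, " |x0 - x*| =", abs(x0 - x_star))

        # Its forward orbit should return (numerically) to x_star in finitely many steps,
        # with nonzero derivative along the orbit.
        x, deriv = x0, 1.0
        for _ in range(len(backward)):
            deriv *= df(x)
            x = f(x)
        print("after", len(backward), "steps: x =", x, " orbit derivative =", deriv)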

  13. Numerical parametric studies of spray combustion instability

    NASA Technical Reports Server (NTRS)

    Pindera, M. Z.

    1993-01-01

    A coupled numerical algorithm has been developed for studies of combustion instabilities in spray-driven liquid rocket engines. The model couples gas and liquid phase physics using the method of fractional steps. Also introduced is a novel, efficient methodology for accounting for spray formation through direct solution of liquid phase equations. Preliminary parametric studies show marked sensitivity of spray penetration and geometry to droplet diameter, considerations of liquid core, and acoustic interactions. Less sensitivity was shown to the combustion model type although more rigorous (multi-step) formulations may be needed for the differences to become apparent.
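
    The abstract refers to coupling the gas- and liquid-phase physics by the method of fractional steps. The toy example below shows that idea on a linear ODE system split into two sub-operators that are advanced sequentially within each time step (Lie splitting). The operators and step size are arbitrary illustrations, not the spray-combustion model.

        import numpy as np
        from scipy.linalg import expm

        # Split du/dt = (A + B) u into two fractional steps per time step:
        # advance with A, then with B (Lie splitting, first-order accurate in dt).
        A = np.array([[0.0, 1.0], [-4.0, 0.0]])   # stand-in for one sub-process (illustrative)
        B = np.array([[-0.3, 0.2], [0.0, -0.1]])  # stand-in for the other sub-process (illustrative)

        u = np.array([1.0, 0.0])
        dt, nsteps = 0.01, 500

        eA, eB = expm(A * dt), expm(B * dt)       # exact propagators for each sub-step
        for _ in range(nsteps):
            u = eB @ (eA @ u)                     # fractional steps: A first, then B

        u_exact = expm((A + B) * dt * nsteps) @ np.array([1.0, 0.0])
        print("split solution :", u)
        print("exact solution :", u_exact)
        print("splitting error:", np.linalg.norm(u - u_exact))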

  14. Modeling of profilometry with laser focus sensors

    NASA Astrophysics Data System (ADS)

    Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner

    2011-05-01

    Metrology is of paramount importance in submicron patterning. Particularly, line width and overlay have to be measured very accurately. Appropriate metrology techniques are scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate, and enables optical cross sections of layer stacks, but it requires periodic patterns. Scanning laser focus sensors are a viable alternative enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit, which determines the edge location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge location accuracy.

  15. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
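
    One standard way to quantify the numerical approximation error discussed here is a grid-convergence study: from solutions on three systematically refined grids one can estimate the observed order of accuracy and extrapolate an error band for the finest grid. The sketch below assumes a constant refinement ratio and smooth, monotone convergence; the sample values are invented.

        import math

        def observed_order(f_coarse, f_medium, f_fine, r):
            """Observed order of accuracy p from three grid solutions with constant refinement ratio r."""
            return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

        def fine_grid_correction(f_medium, f_fine, r, p):
            """Estimated discretization-error correction for the fine-grid solution."""
            return (f_fine - f_medium) / (r**p - 1.0)

        # Invented integral-quantity values on coarse/medium/fine grids, refinement ratio 2.
        f3, f2, f1, r = 0.5120, 0.5030, 0.5008, 2.0
        p = observed_order(f3, f2, f1, r)
        corr = fine_grid_correction(f2, f1, r, p)
        print(f"observed order p ~ {p:.2f}")
        print(f"fine-grid error estimate ~ {abs(corr):.4e}")
        print(f"extrapolated value ~ {f1 + corr:.4f}")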

  16. Modeling of chromosome intermingling by partially overlapping uniform random polygons.

    PubMed

    Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J

    2011-03-01

    During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that a uniform random polygon of length n forms an unsplittable link with a fixed polygon it partially overlaps is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε such that ε = 0 indicates no overlapping and ε = 1 indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.

  17. An Advanced Reservoir Simulator for Tracer Transport in Multicomponent Multiphase Compositional Flow and Applications to the Cranfield CO2 Sequestration Site

    NASA Astrophysics Data System (ADS)

    Moortgat, J.

    2015-12-01

    Reservoir simulators are widely used to constrain uncertainty in the petrophysical properties of subsurface formations by matching the history of injection and production data. However, such measurements may be insufficient to uniquely characterize a reservoir's properties. Monitoring of natural (isotopic) and introduced tracers is a developing technology to further interrogate the subsurface for applications such as enhanced oil recovery from conventional and unconventional resources, and CO2 sequestration. Oak Ridge National Laboratory has been piloting this tracer technology during and following CO2 injection at the Cranfield, Mississippi, CO2 sequestration test site. Two campaigns of multiple perfluorocarbon tracers were injected together with CO2 and monitored at two wells at 68 m and 112 m from the injection site. The tracer data suggest that multiple CO2 flow paths developed towards the monitoring wells, indicative of either channeling through high permeability pathways or of fingering. The results demonstrate that tracers provide an important complement to transient pressure data. Numerical modeling is essential to further explain and interpret the observations. To aid the development of tracer technology, we enhanced a compositional multiphase reservoir simulator to account for tracer transport. Our research simulator uses higher-order finite element (FE) methods that can capture the small-scale onset of fingering on the coarse grids required for field-scale modeling, and allows for unstructured grids and anisotropic heterogeneous permeability fields. Mass transfer between fluid phases and phase behavior are modeled with rigorous equation-of-state based phase-split calculations. We present our tracer simulator and preliminary results related to the Cranfield experiments. Applications to noble gas tracers in unconventional resources are presented by Darrah et al.

  18. Implementation of the full viscoresistive magnetohydrodynamic equations in a nonlinear finite element code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haverkort, J.W.; Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven; Blank, H.J. de

    Numerical simulations form an indispensable tool to understand the behavior of a hot plasma that is created inside a tokamak for providing nuclear fusion energy. Various aspects of tokamak plasmas have been successfully studied through the reduced magnetohydrodynamic (MHD) model. The need for more complete modeling through the full MHD equations is addressed here. Our computational method is presented along with measures against possible problems regarding pollution, stability, and regularity. The problem of ensuring continuity of solutions in the center of a polar grid is addressed in the context of a finite element discretization of the full MHD equations. A rigorous and generally applicable solution is proposed here. Useful analytical test cases are devised to verify the correct implementation of the momentum and induction equation, the hyperdiffusive terms, and the accuracy with which highly anisotropic diffusion can be simulated. A striking observation is that highly anisotropic diffusion can be treated with the same order of accuracy as isotropic diffusion, even on non-aligned grids, as long as these grids are generated with sufficient care. This property is shown to be associated with our use of a magnetic vector potential to describe the magnetic field. Several well-known instabilities are simulated to demonstrate the capabilities of the new method. The linear growth rate of an internal kink mode and a tearing mode are benchmarked against the results of a linear MHD code. The evolution of a tearing mode and the resulting magnetic islands are simulated well into the nonlinear regime. The results are compared with predictions from the reduced MHD model. Finally, a simulation of a ballooning mode illustrates the possibility to use our method as an ideal MHD method without the need to add any physical dissipation.

  19. Characterizing Wheel-Soil Interaction Loads Using Meshfree Finite Element Methods: A Sensitivity Analysis for Design Trade Studies

    NASA Technical Reports Server (NTRS)

    Contreras, Michael T.; Trease, Brian P.; Bojanowski, Cezary; Kulakx, Ronald F.

    2013-01-01

    A wheel experiencing sinkage and slippage events poses a high risk to planetary rover missions, as evidenced by the mobility challenges endured by the Mars Exploration Rover (MER) project. Current wheel design practice utilizes loads derived from a series of events in the life cycle of the rover which do not include (1) failure metrics related to wheel sinkage and slippage and (2) performance trade-offs based on grouser placement/orientation. Wheel designs are rigorously tested experimentally through a variety of drive scenarios and simulated soil environments; however, a robust simulation capability is still in development due to the myriad of complex interaction phenomena that contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, terrain irregularity, etc. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study implements the JPL wheel-soil benchmark problem in a commercial code environment utilizing the large-deformation modeling capability of Smooth Particle Hydrodynamics (SPH) meshfree methods. The nominal, benchmark wheel-soil interaction model that produces numerically stable and physically realistic results is presented, and simulations are shown for both wheel traverse and wheel sinkage cases. A sensitivity analysis developing the capability and framework for future flight applications is conducted to illustrate the importance of perturbations to critical material properties and parameters. Implementation of the proposed soil-wheel interaction simulation capability and associated sensitivity framework has the potential to reduce experimentation cost and improve the early-stage wheel design process.

  20. Cyclic density functional theory: A route to the first principles simulation of bending in nanostructures

    NASA Astrophysics Data System (ADS)

    Banerjee, Amartya S.; Suryanarayana, Phanish

    2016-11-01

    We formulate and implement Cyclic Density Functional Theory (Cyclic DFT) - a self-consistent first principles simulation method for nanostructures with cyclic symmetries. Using arguments based on Group Representation Theory, we rigorously demonstrate that the Kohn-Sham eigenvalue problem for such systems can be reduced to a fundamental domain (or cyclic unit cell) augmented with cyclic-Bloch boundary conditions. Analogously, the equations of electrostatics appearing in Kohn-Sham theory can be reduced to the fundamental domain augmented with cyclic boundary conditions. By making use of this symmetry cell reduction, we show that the electronic ground-state energy and the Hellmann-Feynman forces on the atoms can be calculated using quantities defined over the fundamental domain. We develop a symmetry-adapted finite-difference discretization scheme to obtain a fully functional numerical realization of the proposed approach. We verify that our formulation and implementation of Cyclic DFT is both accurate and efficient through selected examples. The connection of cyclic symmetries with uniform bending deformations provides an elegant route to the ab-initio study of bending in nanostructures using Cyclic DFT. As a demonstration of this capability, we simulate the uniform bending of a silicene nanoribbon and obtain its energy-curvature relationship from first principles. A self-consistent ab-initio simulation of this nature is unprecedented and well outside the scope of any other systematic first principles method in existence. Our simulations reveal that the bending stiffness of the silicene nanoribbon is intermediate between that of graphene and molybdenum disulphide - a trend which can be ascribed to the variation in effective thickness of these materials. We describe several future avenues and applications of Cyclic DFT, including its extension to the study of non-uniform bending deformations and its possible use in the study of the nanoscale flexoelectric effect.

  1. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
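
    As a toy version of the fitting idea described here (the paper's scheme is an IMEX method for reaction-diffusion problems), the sketch below builds a trigonometrically fitted second-difference formula whose coefficient depends on a frequency parameter omega, and estimates omega directly from solution samples; for an oscillatory solution the fitted formula is far more accurate than the classical one. The test function and the estimation trick are illustrative assumptions, not the paper's estimation strategy.

        import math

        def classical_second_diff(y_minus, y0, y_plus, h):
            return (y_plus - 2.0 * y0 + y_minus) / h**2

        def fitted_second_diff(y_minus, y0, y_plus, h, omega):
            # Exact for y in span{1, cos(omega x), sin(omega x)}.
            return (y_plus - 2.0 * y0 + y_minus) * omega**2 / (2.0 * (1.0 - math.cos(omega * h)))

        # Oscillatory test solution y(x) = cos(omega x) with "unknown" frequency.
        omega_true, h, x = 12.0, 0.05, 0.3
        y = lambda s: math.cos(omega_true * s)
        ym, y0, yp = y(x - h), y(x), y(x + h)

        # Estimate the fitting parameter from the samples themselves:
        # for y = A cos(omega x + phi), (y_plus + y_minus) / (2 y0) = cos(omega h).
        omega_est = math.acos((yp + ym) / (2.0 * y0)) / h

        exact = -omega_true**2 * math.cos(omega_true * x)
        print("estimated omega   :", omega_est)
        print("classical FD error:", abs(classical_second_diff(ym, y0, yp, h) - exact))
        print("fitted FD error   :", abs(fitted_second_diff(ym, y0, yp, h, omega_est) - exact))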

  2. Measurement and Prediction of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2008-01-01

    An experimental and numerical investigation into the static and dynamic responses of shape memory alloy hybrid composite (SMAHC) beams is performed to provide quantitative validation of a recently commercialized numerical analysis/design tool for SMAHC structures. The SMAHC beam specimens consist of a composite matrix with embedded pre-strained SMA actuators, which act against the mechanical boundaries of the structure when thermally activated to adaptively stiffen the structure. Numerical results are produced from the numerical model as implemented into the commercial finite element code ABAQUS. A rigorous experimental investigation is undertaken to acquire high fidelity measurements including infrared thermography and projection moire interferometry for full-field temperature and displacement measurements, respectively. High fidelity numerical results are also obtained from the numerical model and include measured parameters, such as geometric imperfection and thermal load. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  3. Fracture Propagation, Fluid Flow, and Geomechanics of Water-Based Hydraulic Fracturing in Shale Gas Systems and Electromagnetic Geophysical Monitoring of Fluid Migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jihoon; Um, Evan; Moridis, George

    2014-12-01

    We investigate fracture propagation induced by hydraulic fracturing with water injection, using numerical simulation. For rigorous, full 3D modeling, we employ a numerical method that can model failure resulting from tensile and shear stresses, dynamic nonlinear permeability, leak-off in all directions, and thermo-poro-mechanical effects with the double porosity approach. Our numerical results indicate that fracture propagation is not the same as propagation of the water front, because fracturing is governed by geomechanics, whereas water saturation is determined by fluid flow. At early times, the water saturation front is almost identical to the fracture tip, suggesting that the fracture is mostly filled with injected water. However, at late times, advance of the water front is retarded compared to fracture propagation, yielding a significant gap between the water front and the fracture top, which is filled with reservoir gas. We also find considerable leak-off of water to the reservoir. Because of this inconsistency between the fracture volume and the volume of injected water, the fracture length cannot be properly calculated when it is estimated under the simple assumption that the fracture is fully saturated with injected water. As an example of flow-geomechanical responses, we identify pressure fluctuation under constant water injection, because hydraulic fracturing is itself a set of many failure processes, in which pressure consistently drops when failure occurs, but fluctuation decreases as the fracture length grows. We also study application of electromagnetic (EM) geophysical methods, because these methods are highly sensitive to changes in porosity and pore-fluid properties due to water injection into gas reservoirs. Employing a 3D finite-element EM geophysical simulator, we evaluate the sensitivity of the crosswell EM method for monitoring fluid movements in shaly reservoirs. For this sensitivity evaluation, reservoir models are generated through the coupled flow-geomechanical simulator and are transformed via a rock-physics model into electrical conductivity models. It is shown that anomalous conductivity distribution in the resulting models is closely related to injected water saturation, but not closely related to newly created unsaturated fractures. Our numerical modeling experiments demonstrate that the crosswell EM method can be highly sensitive to conductivity changes that directly indicate the migration pathways of the injected fluid. Accordingly, the EM method can serve as an effective monitoring tool for the distribution of injected fluids (i.e., migration pathways) during hydraulic fracturing operations.

  4. Determining in-situ thermal conductivity of coarse textured materials through numerical analysis of thermal response tests

    NASA Astrophysics Data System (ADS)

    Saito, H.; Hamamoto, S.; Moldrup, P.; Komatsu, T.

    2013-12-01

    Ground source heat pump (GSHP) systems use the ground or groundwater as a heating/cooling source, typically by circulating an anti-freezing solution inside a vertically installed closed-loop tube, known as a U-tube, to transfer heat to/from the ground. Since GSHP systems are based on renewable energy and can achieve a much higher coefficient of performance (COP) than conventional air source heat pump systems, use of GSHP systems has been increasing rapidly worldwide. However, environmental impacts of GSHP systems, including thermal effects on subsurface physical-chemical and microbiological properties, have not been fully investigated. To rigorously assess GSHP impact on the subsurface environment, ground thermal properties including thermal conductivity and heat capacity need to be accurately characterized. Ground thermal properties were investigated at two experimental sites at Tokyo University of Agriculture and Technology (TAT) and Saitama University (SU), both located in the Kanto area of Japan. Thermal properties were evaluated both by thermal probe measurements on boring core samples and by performing in-situ thermal response tests (TRTs) in 50-80 m deep U-tubes. At both the TAT and SU sites, heat-pulse probe measurements gave unrealistically low thermal conductivities for coarse textured materials (dominated by particles > 75 micrometers). Such underestimation can be partly due to poor contact between the probe and the porous material and partly to markedly decreasing sample water content during drilling, carrying, and storing sandy/gravelly samples. A more reliable approach for estimating in-situ thermal conductivity of coarse textured materials is therefore needed, and may be based on the commonly used TRT. However, analysis of TRT data is typically based on Kelvin's line source model and provides an average (effective) thermal property for the whole soil profile around the U-tube, but not for each geological layer. The main objective of this study was therefore to develop a method for estimating thermal conductivity values of coarse textured layers by numerically analyzing TRT data. A numerical technique combining three-dimensional conductive heat transport and one-dimensional convective heat transport was used to simulate heat exchange processes between the U-tube and the ground. In the numerical simulations, the thermal conductivities for the fine textured layers were kept at the probe-measured values, while the thermal conductivity for the coarse textured layers (constituting around half of the profile depth at both sites) was calibrated. The numerically based method yielded more reasonable thermal conductivity values for the coarse-textured materials at both the TAT and SU sites as compared to the heat-pulse probe measurements, while the temperature changes of the heat carrier fluid inside the U-tubes were also well simulated.
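
    The classical analysis mentioned here fits Kelvin's infinite line-source model to the measured mean fluid temperature: at late times T(t) ≈ T0 + (q / 4πλ)[ln(t) + const], so an effective thermal conductivity follows from the slope of temperature versus ln(t). The sketch below fits that slope to synthetic data; the heat-injection rate and temperatures are invented, and the layer-by-layer numerical calibration described in the abstract is not reproduced.

        import numpy as np

        def line_source_conductivity(t_seconds, T_fluid, q_per_m):
            """Effective thermal conductivity [W/(m K)] from a thermal response test,
            using the late-time slope k of T versus ln(t): lambda = q / (4 * pi * k)."""
            k, _ = np.polyfit(np.log(t_seconds), T_fluid, 1)
            return q_per_m / (4.0 * np.pi * k)

        # Synthetic late-time TRT data (values invented): q = 50 W per metre of borehole.
        np.random.seed(1)
        q = 50.0
        lam_true = 2.0
        t = np.linspace(20.0, 72.0, 30) * 3600.0          # 20 h to 72 h, in seconds
        T = 16.0 + q / (4.0 * np.pi * lam_true) * np.log(t) + np.random.normal(0, 0.02, t.size)

        print("recovered thermal conductivity ~", line_source_conductivity(t, T, q), "W/(m K)")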

  5. Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix

    NASA Astrophysics Data System (ADS)

    Pastor, Franck; Pastor, Joseph; Kondo, Djimedo

    2012-03-01

    Recent theoretical studies in the literature have been concerned with the hollow sphere or spheroid (confocal) problems with an orthotropic Hill-type matrix. They have been developed in the framework of the limit analysis kinematical approach by using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been easily extended to the orthotropic case. Conversely, instead of the non-obvious extension of the existing kinematic codes, a new original mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code that is better conditioned and notably more rapid than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results to the analytical upper bounds of Benzerga and Besson (2001) or Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bound results for the hollow spheroid with the Hill matrix, which are compared to those of Monchiet et al. (2008).

  6. Immersed boundary lattice Boltzmann model based on multiple relaxation times

    NASA Astrophysics Data System (ADS)

    Lu, Jianhua; Han, Haifeng; Shi, Baochang; Guo, Zhaoli

    2012-01-01

    As an alternative version of the lattice Boltzmann models, the multiple relaxation time (MRT) lattice Boltzmann model introduces much less numerical boundary slip than the single relaxation time (SRT) lattice Boltzmann model if a special relationship between the relaxation time parameters is chosen. On the other hand, most current versions of the immersed boundary lattice Boltzmann method, which was first introduced by Feng and improved by many other authors, suffer from numerical boundary slip, as has been investigated by Le and Zhang. To reduce such numerical boundary slip, an immersed boundary lattice Boltzmann model based on multiple relaxation times is proposed in this paper. A special formula is given between two relaxation time parameters in the model. A rigorous analysis and the numerical experiments carried out show that the numerical boundary slip is reduced dramatically by using the present model compared to the single-relaxation-time-based model.

  7. Quantum theory of multiscale coarse-graining.

    PubMed

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A

    2018-03-14

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.

  8. Modeling and Numerical Challenges in Eulerian-Lagrangian Computations of Shock-driven Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Diggs, Angela; Balachandar, Sivaramakrishnan

    2015-06-01

    The present work addresses the numerical methods required for particle-gas and particle-particle interactions in Eulerian-Lagrangian simulations of multiphase flow. Local volume fraction as seen by each particle is the quantity of foremost importance in modeling and evaluating such interactions. We consider a general multiphase flow with a distribution of particles inside a fluid flow discretized on an Eulerian grid. Particle volume fraction is needed both as a Lagrangian quantity associated with each particle and also as an Eulerian quantity associated with the flow. In Eulerian Projection (EP) methods, the volume fraction is first obtained within each cell as an Eulerian quantity and then interpolated to each particle. In Lagrangian Projection (LP) methods, the particle volume fraction is obtained at each particle and then projected onto the Eulerian grid. Traditionally, EP methods are used in multiphase flow, but sub-grid resolution can be obtained through use of LP methods. By evaluating the total error and its components we compare the performance of EP and LP methods. The standard von Neumann error analysis technique has been adapted for rigorous evaluation of rate of convergence. The methods presented can be extended to obtain accurate field representations of other Lagrangian quantities. Most importantly, we will show that such careful attention to numerical methodologies is needed in order to capture complex shock interaction with a bed of particles. Supported by U.S. Department of Defense SMART Program and the U.S. Department of Energy PSAAP-II program under Contract No. DE-NA0002378.
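
    A minimal 1D illustration of the EP projection route discussed above: particle volumes are deposited onto an Eulerian grid with linear (cloud-in-cell) weights to obtain a cell volume-fraction field, which is then interpolated back to the particles. An LP route would instead evaluate the fraction at each particle (e.g. with a kernel centred on the particle) before projecting it to the grid. Kernel choice, grid size, and particle data are illustrative assumptions.

        import numpy as np

        np.random.seed(0)
        L, ncell = 1.0, 20
        dx = L / ncell
        centers = (np.arange(ncell) + 0.5) * dx

        npart = 200
        xp = np.random.rand(npart) * L              # particle positions (illustrative)
        vol_p = np.full(npart, 0.3 * L / npart)     # particle "volumes" (1D), total fraction 0.3

        # EP route, step 1: deposit particle volume onto the two nearest cell centers
        # with linear cloud-in-cell weights, giving a cell volume-fraction field.
        phi = np.zeros(ncell)
        j = np.clip(np.floor((xp - 0.5 * dx) / dx).astype(int), 0, ncell - 2)
        w = np.clip((xp - centers[j]) / dx, 0.0, 1.0)   # weight toward cell j+1
        np.add.at(phi, j, (1.0 - w) * vol_p)
        np.add.at(phi, j + 1, w * vol_p)
        phi /= dx                                        # deposited volume -> volume fraction

        # EP route, step 2: interpolate the grid field back to each particle.
        phi_at_particle = (1.0 - w) * phi[j] + w * phi[j + 1]

        print("mean cell volume fraction       :", phi.mean())
        print("mean fraction seen by particles :", phi_at_particle.mean())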

  9. Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobylev, A.V., E-mail: alexander.bobylev@kau.se; Potapenko, I.F., E-mail: firena@yandex.ru

    2013-08-01

    Highlights: •A general approach to Monte Carlo methods for multicomponent plasmas is proposed. •We show numerical tests for the two-component (electrons and ions) case. •An optimal choice of parameters for speeding up the computations is discussed. •A rigorous estimate of the error of approximation is proved. -- Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind. It means that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This allows one to make the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√ε), where ε is a parameter of approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and of Nanbu.

  10. Advantages and Disadvantages of Weighted Grading. Research Brief

    ERIC Educational Resources Information Center

    Walker, Karen

    2004-01-01

    What are the advantages and disadvantages of weighted grading? The primary purpose of weighted grading has been to encourage high school students to take more rigorous courses. This effort is then acknowledged by more weight being given to the grade for a specified class. There are numerous systems of weighted grading cited in the literature from…

  11. Hertzian Dipole Radiation over Isotropic Magnetodielectric Substrates

    DTIC Science & Technology

    2015-03-01

    This report investigates dipole antennas printed on grounded… engineering of thin planar antennas. Since these materials often require complicated constitutive equations to describe their properties rigorously, the…

  12. Can High School Assessments Predict Developmental Education Enrollment in New Mexico?

    ERIC Educational Resources Information Center

    Weldon, Tyler L.

    2013-01-01

    Thousands of Americans enter postsecondary institutions every year and many are underprepared for college-level work. Subsequently, students enroll in or are placed in remedial courses in preparation for the rigor of college-level classes. Numerous studies have looked at the impact of developmental course work on student outcomes, but few focus…

  13. Weathering the Storms: Acknowledging Challenges to Learning in Times of Stress

    ERIC Educational Resources Information Center

    Hubschman, Betty; Lutz, Marilyn; King, Christine; Wang, Jia; Kopp, David

    2006-01-01

    Students and faculty have had numerous disruptions this academic year with Hurricanes Katrina, Rita, and Wilma developing into major stressors. During this innovative session, we will examine some of the challenges and strategies used by faculty to work with students to maintain empathy and academic rigor in times of stress and disruption, and…

  14. Rigor "and" Relevance: Enhancing High School Students' Math Skills through Career and Technical Education

    ERIC Educational Resources Information Center

    Stone, James R., III; Alfeld, Corinne; Pearson, Donna

    2008-01-01

    Numerous high school students, including many who are enrolled in career and technical education (CTE) courses, do not have the math skills necessary for today's high-skill workplace or college entrance requirements. This study tests a model for enhancing mathematics instruction in five high school CTE programs (agriculture, auto technology,…

  15. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    PubMed Central

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-01-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626

  16. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-07

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation.

  17. Experimental and Numerical Analysis of the Effects of Curing Time on Tensile Mechanical Properties of Thin Spray-on Liners

    NASA Astrophysics Data System (ADS)

    Guner, D.; Ozturk, H.

    2016-08-01

    The effects of curing time on the tensile elastic material properties of thin spray-on liners (TSLs) were investigated in this study. Two different TSL products supplied by two manufacturers were tested comparatively. "Dogbone" tensile test samples prepared in laboratory conditions with different curing times (1, 7, 14, 21, and 28 days) were tested based on ASTM standards. It was concluded that longer curing times improve the tensile strength and the Young's modulus of the TSLs but decrease their elongation at break. Moreover, as an additional conclusion of the testing procedure, it was observed that during the tensile tests, the common malpractice of measuring sample displacement at the grips of the loading machine with a linear variable displacement transducer, rather than over the sample's gauge length, had a major impact on the modulus and deformation determination of TSLs. To our knowledge, true stress-strain curves were generated for the first time in the TSL literature within this study. Numerical analyses of the laboratory tests were also conducted using Particle Flow Code in 2 Dimensions (PFC2D) in an attempt to guide TSL researchers through the rigorous PFC simulation process to model the support behaviour of TSLs. A scaling coefficient between the macro- and micro-properties of PFC was calculated, which will help future TSL PFC modellers mimic their TSL behaviours for various tensile loading support scenarios.

  18. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of the low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument with 1 μm accuracy will be complemented by an inter-satellite laser ranging instrument with an accuracy of several nm. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor for taking full advantage of the new-generation sensors that future satellite missions will carry. Therefore we have created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors in the system. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We pay special attention to the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and to their consistent stochastic modeling in the frame of the adjustment process.

  19. Crystal Growth and Fluid Mechanics Problems in Directional Solidification

    NASA Technical Reports Server (NTRS)

    Tanveer, Saleh A.; Baker, Gregory R.; Foster, Michael R.

    2001-01-01

    Our work in directional solidification has been in the following areas: (1) Dynamics of dendrites, including rigorous mathematical analysis of the resulting equations; (2) Examination of the near-structurally-unstable features of the mathematically related Hele-Shaw dynamics; (3) Numerical studies of steady temperature distribution in a vertical Bridgman device; (4) Numerical study of transient effects in a vertical Bridgman device; (5) Asymptotic treatment of quasi-steady operation of a vertical Bridgman furnace for large Rayleigh numbers and small Biot number in 3D; and (6) Understanding of the Mullins-Sekerka transition in a Bridgman device when fluid dynamics is accounted for.

  20. Stable long-time semiclassical description of zero-point energy in high-dimensional molecular systems.

    PubMed

    Garashchuk, Sophya; Rassolov, Vitaly A

    2008-07-14

    Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.

  1. Numerical Inverse Scattering for the Toda Lattice

    NASA Astrophysics Data System (ADS)

    Bilman, Deniz; Trogdon, Thomas

    2017-06-01

    We present a method to compute the inverse scattering transform (IST) for the famed Toda lattice by solving the associated Riemann-Hilbert (RH) problem numerically. Deformations for the RH problem are incorporated so that the IST can be evaluated in O(1) operations for arbitrary points in the (n, t)-domain, including short- and long-time regimes. No time-stepping is required to compute the solution because (n, t) appear as parameters in the associated RH problem. The solution of the Toda lattice is computed in long-time asymptotic regions where the asymptotics are not known rigorously.

  2. An efficient numerical procedure for thermohydrodynamic analysis of cavitating bearings

    NASA Technical Reports Server (NTRS)

    Vijayaraghavan, D.

    1995-01-01

    An efficient and accurate numerical procedure to determine the thermo-hydrodynamic performance of cavitating bearings is described. This procedure is based on the earlier development of Elrod for lubricating films, in which the properties across the film thickness are determined at Lobatto points and their distributions are expressed by collocated polynomials. The cavitated regions and their boundaries are rigorously treated. Thermal boundary conditions at the surfaces, including heat dissipation through the metal to the ambient, are incorporated. Numerical examples are presented comparing the predictions using this procedure with earlier theoretical predictions and experimental data. With a few points across the film thickness and across the journal and the bearing in the radial direction, the temperature profile is very well predicted.

  3. Mountain bicycle frame testing as an example of practical implementation of hybrid simulation using RTFEM

    NASA Astrophysics Data System (ADS)

    Mucha, Waldemar; Kuś, Wacław

    2018-01-01

    The paper presents a practical implementation of hybrid simulation using the Real Time Finite Element Method (RTFEM). Hybrid simulation is a technique for investigating dynamic material and structural properties of mechanical systems by performing a numerical analysis and an experiment at the same time. It applies to mechanical systems with elements too difficult or impossible to model numerically. These elements are tested experimentally, while the rest of the system is simulated numerically. Data between the experiment and the numerical simulation are exchanged in real time. The authors use the Finite Element Method to perform the numerical simulation. The paper presents the general algorithm for hybrid simulation using RTFEM and possible improvements of the algorithm for computation time reduction developed by the authors. The paper focuses on the practical implementation of the presented methods, which involves testing of a mountain bicycle frame, where the shock absorber is tested experimentally while the rest of the frame is simulated numerically.
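
    A schematic of the hybrid-simulation loop described above: a numerical substructure is advanced by an explicit step, while the experimental substructure is replaced here by a stub that "measures" a restoring force (in a real test this call would exchange displacement and force data with the test rig in real time). The single-degree-of-freedom model and all parameters are illustrative assumptions, not the authors' RTFEM implementation.

        import numpy as np

        # Numerical substructure: a single-degree-of-freedom mass-spring-damper,
        # standing in for the FE model of the bicycle frame.
        m, c, k = 2.0, 4.0, 5.0e3

        def experimental_substructure(displacement, velocity):
            """Stub for the physically tested component (e.g. the shock absorber).
            In a real hybrid test this would command an actuator and return the
            measured force; here it just mimics a stiff nonlinear spring-damper."""
            return 2.0e4 * displacement + 150.0 * velocity * abs(velocity)

        dt, nsteps = 1.0e-3, 2000
        u, v = 0.0, 0.0
        history = []
        for i in range(nsteps):
            f_ext = 100.0 * np.sin(2.0 * np.pi * 5.0 * i * dt)   # external excitation
            f_exp = experimental_substructure(u, v)              # "measured" force fed back
            a = (f_ext - c * v - k * u - f_exp) / m              # numerical substructure update
            v += a * dt                                          # semi-implicit Euler step
            u += v * dt
            history.append(u)

        print("peak displacement over the run:", max(abs(x) for x in history))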

  4. Shall we upgrade one-dimensional secondary settler models used in WWTP simulators? - An assessment of model structure uncertainty and its propagation.

    PubMed

    Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A

    2011-01-01

    In WWTP models, the accurate assessment of solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács-model) and one based on parabolic (the more recently presented Plósz-model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion for both SST models are calculated. Second, simulation results of effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as sub-model in a plant-wide model on the general model performance is evaluated. A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer/winter sequence. The model prediction in terms of nitrogen removal, solids inventory in the bioreactors and solids retention time as a function of the solids settling behaviour is investigated. It is found that the settler behaviour, simulated by the hyperbolic model, can introduce significant errors into the approximation of the solids retention time and thus the solids inventory of the system. We demonstrate that these impacts can potentially cause deterioration of the predictive power of the biokinetic model, evidenced by an evaluation of the system's nitrogen removal efficiency. The convection-dispersion model exhibits superior behaviour, and the use of this type of model thus is highly recommended, especially bearing in mind future challenges, e.g., the explicit representation of uncertainty in WWTP models.
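
    Both settler model families referred to above need a constitutive hindered-settling velocity as a function of local solids concentration; a widely used choice is a Takács-type double-exponential function. The sketch below tabulates such a function and the resulting gravity settling flux; the parameter values are only illustrative, roughly in the range reported in the settler-modelling literature, and this is not the Plósz convection-dispersion model itself.

        import numpy as np

        def settling_velocity(X, v0=474.0, v0_max=250.0, r_h=5.76e-4, r_p=2.86e-3, X_min=0.0):
            """Double-exponential (Takacs-type) hindered settling velocity [m/d]
            as a function of solids concentration X [g/m3]; parameters illustrative."""
            vs = v0 * (np.exp(-r_h * (X - X_min)) - np.exp(-r_p * (X - X_min)))
            return np.clip(vs, 0.0, v0_max)

        X = np.linspace(0.0, 12000.0, 7)        # solids concentration, g/m3 (= mg/L)
        vs = settling_velocity(X)
        flux = vs * X / 1000.0                  # gravity settling flux, kg/(m2 d)
        for Xi, vi, Ji in zip(X, vs, flux):
            print(f"X = {Xi:7.0f} g/m3   v_s = {vi:6.1f} m/d   J_s = {Ji:7.1f} kg/(m2 d)")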

  5. High School Opportunities for STEM: Comparing Inclusive STEM-Focused and Comprehensive High Schools in Two US Cities

    ERIC Educational Resources Information Center

    Eisenhart, Margaret; Weis, Lois; Allen, Carrie D.; Cipollone, Kristin; Stich, Amy; Dominguez, Rachel

    2015-01-01

    In response to numerous calls for more rigorous STEM (science, technology, engineering, and mathematics) education to improve US competitiveness and the job prospects of next-generation workers, especially those from low-income and minority groups, a growing number of schools emphasizing STEM have been established in the US over the past decade.…

  6. A Rigorous Sharp Interface Limit of a Diffuse Interface Model Related to Tumor Growth

    NASA Astrophysics Data System (ADS)

    Rocca, Elisabetta; Scala, Riccardo

    2017-06-01

    In this paper, we study the rigorous sharp interface limit of a diffuse interface model related to the dynamics of tumor growth, when a parameter ɛ, representing the interface thickness between the tumorous and non-tumorous cells, tends to zero. In particular, we analyze here a gradient-flow-type model arising from a modification of the recently introduced model for tumor growth dynamics in Hawkins-Daruud et al. (Int J Numer Math Biomed Eng 28:3-24, 2011) (cf. also Hilhorst et al. Math Models Methods Appl Sci 25:1011-1043, 2015). Exploiting techniques related to both gradient flows and Gamma-convergence, we recover a condition on the interface Γ relating the chemical and double-well potentials, the mean curvature, and the normal velocity.

  7. Nuclear-coupled thermal-hydraulic stability analysis of boiling water reactors

    NASA Astrophysics Data System (ADS)

    Karve, Atul A.

    We have studied the nuclear-coupled thermal-hydraulic stability of boiling water reactors (BWRs) using a model we developed from: the space-time modal neutron kinetics equations based on spatial omega-modes, the equations for two-phase flow in parallel boiling channels, the fuel rod heat conduction equations, and a simple model for the recirculation loop. The model is represented as a dynamical system comprised of time-dependent nonlinear ordinary differential equations, and it is studied using stability analysis, modern bifurcation theory, and numerical simulations. We first determine the stability boundary (SB) in the most relevant parameter plane, the inlet-subcooling-number/external-pressure-drop plane, for a fixed control-rod-induced external reactivity equal to the 100% rod line value, and then transform the SB to the practical power-flow map. Using this SB, we show that the normal operating point at 100% power is very stable, that the stability of points on the 100% rod line decreases as the flow rate is reduced, and that points are least stable in the low-flow/high-power region. We also determine the SB when the modal kinetics is replaced by simple point reactor kinetics and show that the first harmonic mode has no significant effect on the SB. Later we carry out the relevant numerical simulations, where we first show that the Hopf bifurcation that occurs as a parameter is varied across the SB is subcritical, and that, in the important low-flow/high-power region, growing oscillations can result following small finite perturbations of stable steady-states on the 100% rod line. Hence, a point on the 100% rod line in the low-flow/high-power region, although stable, may nevertheless be a point at which a BWR should not be operated. Numerical simulations are then done to calculate the decay ratios (DRs) and frequencies of oscillations for various points on the 100% rod line. It is determined that the NRC requirement of DR < 0.75-0.8 is not rigorously satisfied in the low-flow/high-power region, and hence these points should be avoided during normal startup and shutdown operations. The frequency of oscillation is shown to decrease as the flow rate is reduced, and the frequency of 0.5 Hz observed in the low-flow/high-power region is consistent with those observed during actual instability incidents. Additional numerical simulations show that, in the low-flow/high-power region, for the same initial conditions, the use of point kinetics leads to damped oscillations, whereas the model that includes the modal kinetics equations results in growing nonlinear oscillations. Thus, we show that side-by-side out-of-phase growing power oscillations result due to the very important first harmonic mode effect and that the use of point kinetics, which fails to predict these growing oscillations, leads to dramatically nonconservative results. Finally, the effect of a simple recirculation loop model that we develop is studied by carrying out additional stability analyses and additional numerical simulations. It is shown that the loop has a stabilizing effect on certain points on the 100% rod line for time delays equal to integer multiples of the natural period of oscillation, whereas it has a destabilizing effect for half-integer multiples. However, for more practical time delays, it is determined that the overall effect generally is destabilizing.
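
    The decay ratio quoted above (the NRC's DR < 0.75-0.8 criterion) is commonly estimated as the ratio of consecutive peak amplitudes of the oscillating signal about its steady value. The sketch below applies that definition to a synthetic damped oscillation; the signal and its parameters are invented, not output of the BWR model.

        import numpy as np

        def decay_ratio(signal):
            """Average ratio of consecutive positive peak amplitudes of an oscillatory
            signal measured about its mean; also returns the peak indices."""
            x = signal - np.mean(signal)
            peaks = [i for i in range(1, len(x) - 1)
                     if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0.0]
            ratios = [x[peaks[k + 1]] / x[peaks[k]] for k in range(len(peaks) - 1)]
            return float(np.mean(ratios)), peaks

        # Synthetic power oscillation: 0.5 Hz, slowly decaying (all values invented).
        dt = 0.01
        t = np.arange(0.0, 40.0, dt)
        power = 1.0 + 0.1 * np.exp(-0.05 * t) * np.cos(2.0 * np.pi * 0.5 * t)

        dr, peaks = decay_ratio(power)
        freq = 1.0 / (dt * float(np.mean(np.diff(peaks))))
        print(f"decay ratio ~ {dr:.2f}   oscillation frequency ~ {freq:.2f} Hz")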

  8. The effect of compliant prisms on subduction zone earthquakes and tsunamis

    NASA Astrophysics Data System (ADS)

    Lotto, Gabriel C.; Dunham, Eric M.; Jeppson, Tamara N.; Tobin, Harold J.

    2017-01-01

    Earthquakes generate tsunamis by coseismically deforming the seafloor, and that deformation is largely controlled by the shallow rupture process. Therefore, in order to better understand how earthquakes generate tsunamis, one must consider the material structure and frictional properties of the shallowest part of the subduction zone, where ruptures often encounter compliant sedimentary prisms. Compliant prisms have been associated with enhanced shallow slip, seafloor deformation, and tsunami heights, particularly in the context of tsunami earthquakes. To rigorously quantify the role compliant prisms play in generating tsunamis, we perform a series of numerical simulations that directly couple dynamic rupture on a dipping thrust fault to the elastodynamic response of the Earth and the acoustic response of the ocean. Gravity is included in our simulations in the context of a linearized Eulerian description of the ocean, which allows us to model tsunami generation and propagation, including dispersion and related nonhydrostatic effects. Our simulations span a three-dimensional parameter space of prism size, prism compliance, and sub-prism friction - specifically, the rate-and-state parameter b - a that determines velocity-weakening or velocity-strengthening behavior. We find that compliant prisms generally slow rupture velocity and, for larger prisms, generate tsunamis more efficiently than subduction zones without prisms. In most but not all cases, larger, more compliant prisms cause greater amounts of shallow slip and larger tsunamis. Furthermore, shallow friction is also quite important in determining overall slip; increasing sub-prism b - a enhances slip everywhere along the fault. Counterintuitively, we find that in simulations with large prisms and velocity-strengthening friction at the base of the prism, increasing prism compliance reduces rather than enhances shallow slip and tsunami wave height.
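    The sign of the rate-and-state parameter b − a controls whether steady-state friction weakens or strengthens with slip velocity. A short illustration of the standard steady-state relation (parameter values are illustrative, not those used in the simulations):

```python
import numpy as np

def mu_steady_state(v, mu0=0.6, a=0.010, b=0.014, v0=1.0e-6):
    """Steady-state rate-and-state friction: mu0 + (a - b) * ln(v / v0)."""
    return mu0 + (a - b) * np.log(v / v0)

v = np.logspace(-9, 0, 4)   # slip velocities [m/s]
for b_minus_a in (0.004, 0.0, -0.004):
    a, b = 0.010, 0.010 + b_minus_a
    regime = ("velocity-weakening" if b > a else
              "velocity-neutral" if b == a else "velocity-strengthening")
    print(f"b - a = {b_minus_a:+.3f} ({regime}):",
          np.round(mu_steady_state(v, a=a, b=b), 3))
```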

  9. Three-phase compositional modeling of CO2 injection by higher-order finite element methods with CPA equation of state for aqueous phase

    NASA Astrophysics Data System (ADS)

    Moortgat, Joachim; Li, Zhidong; Firoozabadi, Abbas

    2012-12-01

    Most simulators for subsurface flow of water, gas, and oil phases use empirical correlations, such as Henry's law, for the CO2 composition in the aqueous phase, and equations of state (EOS) that do not represent the polar interactions between CO2 and water. Widely used simulators are also based on lowest-order finite difference methods and suffer from numerical dispersion and grid sensitivity. They may not capture the viscous and gravitational fingering that can negatively affect hydrocarbon (HC) recovery, or aid carbon sequestration in aquifers. We present a three-phase compositional model based on higher-order finite element methods and incorporate rigorous and efficient three-phase-split computations for either three HC phases or water-oil-gas systems. For HC phases, we use the Peng-Robinson EOS. We allow solubility of CO2 in water and adopt a new cubic-plus-association (CPA) EOS, which accounts for cross association between H2O and CO2 molecules, and association between H2O molecules. The CPA-EOS is highly accurate over a broad range of pressures and temperatures. The main novelty of this work is the formulation of a reservoir simulator with new EOS-based unique three-phase-split computations, which satisfy both the equalities of fugacities in all three phases and the global minimum of Gibbs free energy. We provide five examples that demonstrate twice the convergence rate of our method compared with a finite difference approach, and compare with experimental data and other simulators. The examples consider gravitational fingering during CO2 sequestration in aquifers, viscous fingering in water-alternating-gas injection, and full compositional modeling of three HC phases.
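    For the hydrocarbon phases the abstract cites the Peng-Robinson EOS; a compact sketch of its compressibility-factor root solve for a pure component is shown below (pure CO2 at illustrative conditions; the CPA association terms used for the aqueous phase are not included):

```python
import numpy as np

R = 8.314  # universal gas constant [J/(mol K)]

def peng_robinson_Z(T, p, Tc, pc, omega):
    """Return the real compressibility-factor roots of the Peng-Robinson EOS."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / pc * alpha
    b = 0.07780 * R * Tc / pc
    A = a * p / (R * T)**2
    B = b * p / (R * T)
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return np.sort(roots[np.isreal(roots)].real)

# Pure CO2 (Tc = 304.13 K, pc = 7.377 MPa, omega ~ 0.224) at 330 K and 10 MPa.
print(peng_robinson_Z(330.0, 10.0e6, 304.13, 7.377e6, 0.224))
```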

  10. Rigorous coupled wave analysis of acousto-optics with relativistic considerations.

    PubMed

    Xia, Guoqiang; Zheng, Weijian; Lei, Zhenggang; Zhang, Ruolan

    2015-09-01

    A relativistic analysis of acousto-optics is presented, and a rigorous coupled wave analysis is generalized for the diffraction of the acousto-optical effect. An acoustic wave generates a grating with temporally and spatially modulated permittivity, hindering direct application of the rigorous coupled wave analysis to the acousto-optical effect. In a reference frame which moves with the acoustic wave, the grating is static, the medium moves, and the coupled wave equations for the static grating may be derived. Floquet's theorem is then applied to cast these equations into an eigenproblem. Using a Lorentz transformation, the electromagnetic fields in the grating region are transformed to the lab frame, where the medium is at rest, and relativistic Doppler frequency shifts are introduced into the various diffraction orders. In the lab frame, the boundary conditions are considered and the diffraction efficiencies of the various orders are determined. This method is rigorous and general, and the plane waves in the resulting expansion satisfy the dispersion relation of the medium and are propagation modes. Properties of various Bragg diffractions are results, rather than preconditions, of this method. Simulations of an acousto-optical tunable filter made of paratellurite (TeO2) are given as examples.
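    To leading order, the relativistic treatment recovers the familiar result that the m-th diffraction order is frequency-shifted by m times the acoustic frequency. A minimal numerical illustration (all material values are placeholders, not the TeO2 data of the paper):

```python
import math

c = 2.998e8           # speed of light [m/s]
lam_opt = 632.8e-9    # optical vacuum wavelength [m] (illustrative)
n = 2.2               # refractive index (placeholder)
v_ac = 4.2e3          # acoustic velocity [m/s] (placeholder)
f_ac = 80.0e6         # acoustic drive frequency [Hz] (placeholder)

Lambda_ac = v_ac / f_ac                 # acoustic wavelength (the grating period)
f_opt = c / lam_opt
for m in (-1, 0, 1, 2):
    # Each order carries a Doppler shift of m * f_ac relative to the incident light.
    print(f"order {m:+d}: frequency = {f_opt + m * f_ac:.6e} Hz")

theta_B = math.asin(lam_opt / (2.0 * n * Lambda_ac))   # Bragg angle inside the medium
print("Bragg angle [mrad]:", 1e3 * theta_B)
```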

  11. A simulation-based approach for estimating premining water quality: Red Mountain Creek, Colorado

    USGS Publications Warehouse

    Runkel, Robert L.; Kimball, Briant A; Walton-Day, Katherine; Verplanck, Philip L.

    2007-01-01

    Regulatory agencies are often charged with the task of setting site-specific numeric water quality standards for impaired streams. This task is particularly difficult for streams draining highly mineralized watersheds with past mining activity. Baseline water quality data obtained prior to mining are often non-existent and application of generic water quality standards developed for unmineralized watersheds is suspect given the geology of most watersheds affected by mining. Various approaches have been used to estimate premining conditions, but none of the existing approaches rigorously consider the physical and geochemical processes that ultimately determine instream water quality. An approach based on simulation modeling is therefore proposed herein. The approach utilizes synoptic data that provide spatially-detailed profiles of concentration, streamflow, and constituent load along the study reach. This field data set is used to calibrate a reactive stream transport model that considers the suite of physical and geochemical processes that affect constituent concentrations during instream transport. A key input to the model is the quality and quantity of waters entering the study reach. This input is based on chemical analyses available from synoptic sampling and observed increases in streamflow along the study reach. Given the calibrated model, additional simulations are conducted to estimate premining conditions. In these simulations, the chemistry of mining-affected sources is replaced with the chemistry of waters that are thought to be unaffected by mining (proximal, premining analogues). The resultant simulations provide estimates of premining water quality that reflect both the reduced loads that were present prior to mining and the processes that affect these loads as they are transported downstream. This simulation-based approach is demonstrated using data from Red Mountain Creek, Colorado, a small stream draining a heavily-mined watershed. Model application to the premining problem for Red Mountain Creek is based on limited field reconnaissance and chemical analyses; additional field work and analyses may be needed to develop definitive, quantitative estimates of premining water quality.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    Colorado School of Mines conducted research and training in the development and validation of an advanced CO{sub 2} GS (Geological Sequestration) probabilistic simulation and risk assessment model. CO{sub 2} GS simulation and risk assessment is used to develop advanced numerical simulation models of the subsurface to forecast CO{sub 2} behavior and transport; optimize site operational practices; ensure site safety; and refine site monitoring, verification, and accounting efforts. As simulation models are refined with new data, the uncertainty surrounding the identified risks decreases, thereby providing more accurate risk assessment. The models considered the full coupling of multiple physical processes (geomechanical and fluid flow) and describe the effects of stochastic hydro-mechanical (H-M) parameters on the modeling of CO{sub 2} flow and transport in fractured porous rocks. Graduate students were involved in the development and validation of the model that can be used to predict the fate, movement, and storage of CO{sub 2} in subsurface formations, and to evaluate the risk of potential leakage to the atmosphere and underground aquifers. The major contributions from the project include the development of: 1) an improved procedure to rigorously couple the simulations of hydro-thermomechanical (H-M) processes involved in CO{sub 2} GS; 2) models for the hydro-mechanical behavior of fractured porous rocks with random fracture patterns; and 3) probabilistic methods to account for the effects of stochastic fluid flow and geomechanical properties on flow, transport, storage, and leakage associated with CO{sub 2} GS. The research project provided the means to educate and train graduate students in the science and technology of CO{sub 2} GS, with a focus on geologic storage. Specifically, the training included the investigation of an advanced CO{sub 2} GS simulation and risk assessment model that can be used to predict the fate, movement, and storage of CO{sub 2} in underground formations, and the evaluation of the risk of potential CO{sub 2} leakage to the atmosphere and underground aquifers.

  13. A Penalty Method for the Numerical Solution of Hamilton-Jacobi-Bellman (HJB) Equations in Finance

    NASA Astrophysics Data System (ADS)

    Witte, J. H.; Reisinger, C.

    2010-09-01

    We present a simple and easy to implement method for the numerical solution of a rather general class of Hamilton-Jacobi-Bellman (HJB) equations. In many cases, the considered problems have only a viscosity solution, to which, fortunately, many intuitive (e.g. finite difference based) discretisations can be shown to converge. However, especially when using fully implicit time stepping schemes with their desirable stability properties, one is still faced with the considerable task of solving the resulting nonlinear discrete system. In this paper, we introduce a penalty method which approximates the nonlinear discrete system to an order of O(1/ρ), where ρ>0 is the penalty parameter, and we show that an iterative scheme can be used to solve the penalised discrete problem in finitely many steps. We include a number of examples from mathematical finance for which the described approach yields a rigorous numerical scheme and present numerical results.
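    Not the authors' exact scheme, but the sketch below shows the basic penalty idea for a discrete obstacle-type problem min(Av − f, v − g) = 0: the constraint is enforced by a large penalty parameter ρ, the penalised system is solved by a simple iteration, and the residual constraint violation is O(1/ρ).

```python
import numpy as np

n, rho = 50, 1.0e6
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# A: 1D finite-difference operator (-u'' scaled), f: source, g: obstacle (all toy data).
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.zeros(n)
g = 0.3 - 4.0 * (x - 0.5)**2          # hypothetical obstacle

v = np.linalg.solve(A, f)             # unconstrained start
for _ in range(50):
    P = np.diag((v < g).astype(float))                    # current penalty (active) set
    v_new = np.linalg.solve(A + rho * P, f + rho * P @ g) # penalised linear system
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

print("constraint violation:", np.max(np.maximum(g - v, 0.0)))   # O(1/rho)
```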

  14. Design and analysis of a fast, two-mirror soft-x-ray microscope

    NASA Technical Reports Server (NTRS)

    Shealy, D. L.; Wang, C.; Jiang, W.; Jin, L.; Hoover, R. B.

    1992-01-01

    During the past several years, a number of investigators have addressed the design, analysis, fabrication, and testing of spherical Schwarzschild microscopes for soft-x-ray applications using multilayer coatings. Some of these systems have demonstrated diffraction limited resolution for small numerical apertures. Rigorously aplanatic, two-aspherical mirror Head microscopes can provide near diffraction limited resolution for very large numerical apertures. The relationships between the numerical aperture, mirror radii and diameters, magnifications, and total system length for Schwarzschild microscope configurations are summarized. Also, an analysis of the characteristics of the Head-Schwarzschild surfaces will be reported. The numerical surface data predicted by the Head equations were fit by a variety of functions and analyzed by conventional optical design codes. Efforts have been made to determine whether current optical substrate and multilayer coating technologies will permit construction of a very fast Head microscope which can provide resolution approaching that of the wavelength of the incident radiation.

  15. Numerical Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An overview of historical and current numerical aerodynamic simulation (NAS) is given. The capabilities and goals of the Numerical Aerodynamic Simulation Facility are outlined. Emphasis is given to numerical flow visualization and its applications to structural analysis of aircraft and spacecraft bodies. The uses of NAS in computational chemistry, engine design, and galactic evolution are mentioned.

  16. Modelling atmospheric flows with adaptive moving meshes

    NASA Astrophysics Data System (ADS)

    Kühnlein, Christian; Smolarkiewicz, Piotr K.; Dörnbrack, Andreas

    2012-04-01

    An anelastic atmospheric flow solver has been developed that combines semi-implicit non-oscillatory forward-in-time numerics with a solution-adaptive mesh capability. A key feature of the solver is the unification of a mesh adaptation apparatus, based on moving mesh partial differential equations (PDEs), with the rigorous formulation of the governing anelastic PDEs in generalised time-dependent curvilinear coordinates. The solver development includes an enhancement of the flux-form multidimensional positive definite advection transport algorithm (MPDATA) - employed in the integration of the underlying anelastic PDEs - that ensures full compatibility with mass continuity under moving meshes. In addition, to satisfy the geometric conservation law (GCL) tensor identity under general moving meshes, a diagnostic approach is proposed based on the treatment of the GCL as an elliptic problem. The benefits of the solution-adaptive moving mesh technique for the simulation of multiscale atmospheric flows are demonstrated. The developed solver is verified for two idealised flow problems with distinct levels of complexity: passive scalar advection in a prescribed deformational flow, and the life cycle of a large-scale atmospheric baroclinic wave instability showing fine-scale phenomena of fronts and internal gravity waves.

  17. Experiments and Modeling of G-Jitter Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Leslie, F. W.; Ramachandran, N.; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    While there is a general understanding of the acceleration environment onboard an orbiting spacecraft, past research efforts in the modeling and analysis area have still not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments, nor have they produced scaling laws that a prospective experimenter can use to assess how an experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling for better understanding the effects of residual gravity and g-jitter on experiments. The approach is to use magnetic fluids that respond to an imposed magnetic field gradient in much the same way as fluid density responds to a gravitational field. By utilizing a programmable power source in conjunction with an electromagnet, both static and dynamic body forces can be simulated in lab experiments. The paper provides an overview of the technique and includes recent results from the experiments.

  18. Attitude output feedback control for rigid spacecraft with finite-time convergence.

    PubMed

    Hu, Qinglei; Niu, Guanglin

    2017-09-01

    The main problem addressed is the quaternion-based attitude stabilization control of rigid spacecraft without angular velocity measurements in the presence of external disturbances and reaction wheel friction as well. As a stepping stone, an angular velocity observer is proposed for the attitude control of a rigid body in the absence of angular velocity measurements. The observer design ensures finite-time convergence of angular velocity state estimation errors irrespective of the control torque or the initial attitude state of the spacecraft. Then, a novel finite-time control law is employed as the controller in which the estimate of the angular velocity is used directly. It is then shown that the observer and the controlled system form a cascaded structure, which allows the application of the finite-time stability theory of cascaded systems to prove the finite-time stability of the closed-loop system. A rigorous analysis of the proposed formulation is provided and numerical simulation studies are presented to help illustrate the effectiveness of the angular-velocity observer for rigid spacecraft attitude control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Getting a grip on the transverse motion in a Zeeman decelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dulitz, Katrin; Softley, Timothy P., E-mail: tim.softley@chem.ox.ac.uk; Motsch, Michael

    2014-03-14

    Zeeman deceleration is an experimental technique in which inhomogeneous, time-dependent magnetic fields generated inside an array of solenoid coils are used to manipulate the velocity of a supersonic beam. A 12-stage Zeeman decelerator has been built and characterized using hydrogen atoms as a test system. The instrument has several original features including the possibility to replace each deceleration coil individually. In this article, we give a detailed description of the experimental setup, and illustrate its performance. We demonstrate that the overall acceptance in a Zeeman decelerator can be significantly increased with only minor changes to the setup itself. This is achieved by applying a rather low, anti-parallel magnetic field in one of the solenoid coils that forms a temporally varying quadrupole field, and improves particle confinement in the transverse direction. The results are reproduced by three-dimensional numerical particle trajectory simulations thus allowing for a rigorous analysis of the experimental data. The findings suggest the use of a modified coil configuration to improve transverse focusing during the deceleration process.

  20. Light Controlling at Subwavelength Scales in Nanophotonic Systems: Physics and Applications

    NASA Astrophysics Data System (ADS)

    Shen, Yuecheng

    The capability of controlling light at scales that are much smaller than the operating wavelength enables new optical functionalities and opens up a wide range of applications. Such a capability is out of the realm of conventional optical approaches. This dissertation aims to explore light-matter interactions at the nanometer scale, and to investigate novel scientific and industrial applications. In particular, we will explain how to detect nanoparticles using an ultra-sensitive nano-sensor; we will also describe a photonic diode which generates a unidirectional flow of single photons. Moreover, in a one-dimensional waveguide QED system where the fermionic degree of freedom is present, we will show that strong photon-photon interactions can be generated through scattering means, leading to photonic bunching and anti-bunching with various applications. Finally, we will introduce a mechanism to achieve super-resolution to discern fine features that are orders of magnitude smaller than the illuminating wavelength. These research projects incorporate recent advances in quantum nanophotonics, nanotechnologies, imaging reconstruction techniques, and rigorous numerical simulations.

  1. Transmission and reflection of terahertz plasmons in two-dimensional plasmonic devices

    DOE PAGES

    Sydoruk, Oleksiy; Choonee, Kaushal; Dyer, Gregory Conrad

    2015-03-10

    We found that plasmons in two-dimensional semiconductor devices are reflected by discontinuities, notably, junctions between gated and non-gated electron channels. The transmitted and reflected plasmons can form spatially- and frequency-varying signals, and their understanding is important for the design of terahertz detectors, oscillators, and plasmonic crystals. Using mode decomposition, we studied terahertz plasmons incident on a junction between a gated and a nongated channel. The plasmon reflection and transmission coefficients were found numerically and analytically and studied between 0.3 and 1 THz for a range of electron densities. At higher frequencies, we could describe the plasmons by a simplified model of channels in homogeneous dielectrics, for which the analytical approximations were accurate. At low frequencies, however, the full geometry and mode spectrum had to be taken into account. Moreover, the results agreed with simulations by the finite-element method. Mode decomposition thus proved to be a powerful method for plasmonic devices, combining the rigor of complete solutions of Maxwell's equations with the convenience of analytical expressions.
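    The qualitative difference between the gated and ungated channels comes from their different plasmon dispersions. A sketch of the textbook long-wavelength relations (parameter values are illustrative GaAs-like numbers, not those of the devices studied):

```python
import numpy as np

e, eps0, m_e = 1.602e-19, 8.854e-12, 9.109e-31

def plasmon_freq(q, n_s=3.0e15, m_eff=0.067 * m_e, eps_r=12.9, d=100e-9, gated=False):
    """Long-wavelength 2D plasmon dispersion (angular frequency)."""
    if gated:
        # Gated channel: acoustic-like linear dispersion, omega = s * q (valid for q*d << 1).
        return q * np.sqrt(n_s * e**2 * d / (m_eff * eps0 * eps_r))
    # Ungated channel: square-root dispersion.
    return np.sqrt(n_s * e**2 * q / (2.0 * m_eff * eps0 * eps_r))

q = 2.0 * np.pi / 2.0e-6                 # wavenumber set by a ~2 micron period
for gated in (False, True):
    f = plasmon_freq(q, gated=gated) / (2.0 * np.pi)
    print(("gated  " if gated else "ungated"), f"f = {f / 1e12:.2f} THz")
```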

  2. Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images

    DTIC Science & Technology

    2009-12-01

    Wong, S. (DRDC Ottawa). Validation of the electromagnetic code FACETS for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons, using a 3-dimensional computer-aided design (CAD) model of the target.

  3. Floquet–Magnus theory and generic transient dynamics in periodically driven many-body quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuwahara, Tomotaka, E-mail: tomotaka.phys@gmail.com; WPI, Advanced Institute for Materials Research, Tohoku University, Sendai 980-8577; Mori, Takashi

    2016-04-15

    This work explores a fundamental dynamical structure for a wide range of many-body quantum systems under periodic driving. Generically, in the thermodynamic limit, such systems are known to heat up to infinite temperature states in the long-time limit irrespective of dynamical details, which kills all the specific properties of the system. In the present study, instead of considering the infinitely long time scale, we aim to provide a general framework to understand the long but finite time behavior, namely the transient dynamics. In our analysis, we focus on the Floquet–Magnus (FM) expansion that gives a formal expression of the effective Hamiltonian on the system. Although in general the full series expansion is not convergent in the thermodynamic limit, we give a clear relationship between the FM expansion and the transient dynamics. More precisely, we rigorously show that a truncated version of the FM expansion accurately describes the exact dynamics for a certain time scale. Our theory reveals an experimental time scale for which non-trivial dynamical phenomena can be reliably observed. We discuss several dynamical phenomena, such as the effect of small integrability breaking, efficient numerical simulation of periodically driven systems, dynamical localization, and thermalization. Especially on thermalization, we discuss a generic scenario for the prethermalization phenomenon in periodically driven systems.
    Highlights:
    • A general framework to describe transient dynamics for periodically driven systems.
    • The theory is applicable to generic quantum many-body systems including long-range interacting systems.
    • The physical meaning of the truncation of the Floquet–Magnus expansion is rigorously established.
    • A new mechanism of prethermalization is proposed.
    • An experimental time scale is revealed for which non-trivial dynamical phenomena can be reliably observed.

  4. A Rigorous Solution for Finite-State Inflow throughout the Flowfield

    NASA Astrophysics Data System (ADS)

    Fei, Zhongyang

    In this research, the Hseih/Duffy model is extended to all three velocity components of inflow across the rotor disk in a mathematically rigorous way so that it can be used to calculate the inflow below the rotor disk plane. This establishes a complete dynamic inflow model for the entire flow field with the finite-state method. The derivation is for the case of a general skew angle. The cost of the new method is that one needs to compute the co-states of the inflow equations in the upper hemisphere along with the normal states. Numerical comparisons with exact solutions for the z-component of flow in axial and skewed flow demonstrate excellent correlation with closed-form solutions. The simulations also illustrate that the model is valid in both the frequency domain and the time domain. Meanwhile, in order to accelerate convergence, an optimization of even terms is used to minimize the error in the axial component of the induced velocity in the on-disk and off-disk regions. A novel method for calculating the associated Legendre function of the second kind is also developed to solve the problem of divergence of Q̄mn(iη) for large η with the iterative method. An application of the new model is also conducted to compute the inflow in the wake of a rotor with a finite number of blades. The velocities are plotted at different distances from the rotor disk and are compared with the Glauert prediction for axial flow and wake swirl. In the finite-state model, the angular momentum does not jump instantaneously across the disk, but it does transition rapidly across the disk to the correct Glauert value.

  5. RCWA and FDTD modeling of light emission from internally structured OLEDs.

    PubMed

    Callens, Michiel Koen; Marsman, Herman; Penninck, Lieven; Peeters, Patrick; de Groot, Harry; ter Meulen, Jan Matthijs; Neyts, Kristiaan

    2014-05-05

    We report on the fabrication and simulation of a green OLED with an Internal Light Extraction (ILE) layer. The optical behavior of these devices is simulated using both Rigorous Coupled Wave Analysis (RCWA) and Finite Difference Time-Domain (FDTD) methods. Results obtained using these two different techniques show excellent agreement and predict the experimental results with good precision. By verifying the validity of both simulation methods on the internal light extraction structure we pave the way to optimization of ILE layers using either of these methods.

  6. Periodic Time-Domain Nonlocal Nonreflecting Boundary Conditions for Duct Acoustics

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Zorumski, William E.

    1996-01-01

    Periodic time-domain boundary conditions are formulated for direct numerical simulation of acoustic waves in ducts without flow. Well-developed frequency-domain boundary conditions are transformed into the time domain. The formulation is presented here in one space dimension and time; however, this formulation has an advantage in that its extension to variable-area, higher dimensional, and acoustically treated ducts is rigorous and straightforward. The boundary condition simulates a nonreflecting wave field in an infinite uniform duct and is implemented by impulse-response operators that are applied at the boundary of the computational domain. These operators are generated by convolution integrals of the corresponding frequency-domain operators. The acoustic solution is obtained by advancing the Euler equations to a periodic state with the MacCormack scheme. The MacCormack scheme utilizes the boundary condition to limit the computational space and preserve the radiation boundary condition. The success of the boundary condition is attributed to the fact that it is nonreflecting to periodic acoustic waves. In addition, transient waves can pass rapidly out of the solution domain. The boundary condition is tested for a pure tone and a multitone source in a linear setting. The effects of various initial conditions are assessed. Computational solutions with the boundary condition are consistent with the known solutions for nonreflecting wave fields in an infinite uniform duct.

  7. A Fast Vector Radiative Transfer Model for Atmospheric and Oceanic Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ding, J.; Yang, P.; King, M. D.; Platnick, S. E.; Meyer, K.

    2017-12-01

    A fast vector radiative transfer model is developed in support of atmospheric and oceanic remote sensing. This model is capable of simulating the Stokes vector observed at the top of the atmosphere (TOA) and at the terrestrial surface by considering absorption, scattering, and emission. The gas absorption is parameterized in terms of atmospheric gas concentrations, temperature, and pressure. The parameterization scheme combines a regression method and the correlated-K distribution method, and can easily be integrated with multiple scattering computations. The approach is more than four orders of magnitude faster than a line-by-line radiative transfer model, with errors of less than 0.5% in terms of transmissivity. A two-component approach is utilized to solve the vector radiative transfer equation (VRTE). The VRTE solver separates the phase matrices of aerosol and cloud into forward and diffuse parts, and thus the solution is also separated. The forward solution can be expressed by a semi-analytical equation based on the small-angle approximation and serves as the source for the diffuse part. The diffuse part is solved by the adding-doubling method. The adding-doubling implementation is computationally efficient because the diffuse component needs many fewer spherical function expansion terms. The simulated Stokes vectors at both the TOA and the surface have accuracy comparable to counterparts based on numerically rigorous methods.

  8. Physics of the zero-n̄ photonic gap: fundamentals and latest developments

    NASA Astrophysics Data System (ADS)

    Zhou, Lei; Song, Zhengyong; Huang, Xueqin; Chan, C. T.

    2012-12-01

    A short overview is presented on the research works related to the zero-n̄ gap, which appears as the volume-averaged refractive index vanishes in photonic structures containing both positive- and negative-index materials. After introducing the basic concept of the zero-n̄ gap based on both rigorous mathematics and numerical simulations, the unique properties of such a band gap are discussed, including its robustness against weak disorder, wide-incidence-angle operation, and scaling invariance, which do not belong to a conventional Bragg gap. We then describe the simulation and experimental verifications of the zero-n̄ gap and its extraordinary properties in different frequency domains. After that, the unusual photonic and physical effects discovered based on the zero-n̄ gap and their potential applications are reviewed, including beam manipulations and nonlinear effects. Before concluding this review, several interesting ideas inspired by the zero-n̄ gap works will be introduced, including the zero-phase gaps, zero-permittivity and zero-permeability gaps, complete band gaps, and zero-refractive-index materials with Dirac-cone dispersion.
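    For a bilayer unit cell of alternating positive- and negative-index layers, the zero-n̄ condition reduces to the volume-averaged index vanishing, n1·d1 + n2·d2 = 0; a few lines make the condition and its scaling invariance explicit (layer values are illustrative only):

```python
# Zero-nbar condition for a bilayer unit cell: n1*d1 + n2*d2 = 0.
n1, d1 = 1.5, 10e-3        # positive-index layer (illustrative units)
n2 = -2.0                  # negative-index layer index at the gap-centre frequency
d2 = -n1 * d1 / n2         # thickness that makes the averaged index vanish

n_bar = (n1 * d1 + n2 * d2) / (d1 + d2)
print("volume-averaged index:", n_bar)                 # ~0 -> zero-nbar gap

# Scaling the whole cell by any factor leaves the condition unchanged,
# which reflects the scaling invariance noted in the review.
scale = 3.7
print((n1 * scale * d1 + n2 * scale * d2) / (scale * (d1 + d2)))
```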

  9. Coordination and Data Management of the International Arctic Buoy Programme (IABP)

    DTIC Science & Technology

    2002-09-30

    The International Arctic Buoy Programme (IABP) provides observations that are used for forcing, validation, and assimilation into numerical climate models, for forecasting weather and ice conditions, and for producing analyzed geophysical fields. The IABP is a collaboration between 25 different institutions from 8 different countries, which work together to coordinate and manage the programme (principal investigator: Ignatius G. Rigor, Polar Science Center).

  10. Impact of insects on multiple-use values of north-central forests: an experimental rating scheme.

    Treesearch

    Norton D. Addy; Harold O. Batzer; William J. Mattson; William E. Miller

    1971-01-01

    Ranking or assigning priorities to problems is an essential step in research problem selection. Up to now, no rigorous basis for ranking forest insects has been available. We evaluate and rank forest insects with a systematic numerical scheme that considers insect impact on the multiple-use values of timber, wildlife, recreation, and water. The result is a better...

  11. SMD-based numerical stochastic perturbation theory

    NASA Astrophysics Data System (ADS)

    Dalla Brida, Mattia; Lüscher, Martin

    2017-05-01

    The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schrödinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit.

  12. On the Buckling of Imperfect Anisotropic Shells with Elastic Edge Supports Under Combined Loading. Part 1: Theory and Numerical Analysis

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Hol, J. M. A. M.; deVries, J.

    1998-01-01

    A rigorous solution is presented for the case of stiffened anisotropic cylindrical shells with general imperfections under combined loading, where the edge supports are provided by symmetrical or unsymmetrical elastic rings. The circumferential dependence is eliminated by a truncated Fourier series. The resulting nonlinear 2-point boundary value problem is solved numerically via the "Parallel Shooting Method". The changing deformation patterns resulting from the different degrees of interaction between the given initial imperfections and the specified end rings are displayed. Recommendations are made as to the minimum ring stiffnesses required for optimal load carrying configurations.
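    The "Parallel Shooting Method" used in the paper is beyond the scope of this listing, but a single-shooting sketch for a generic nonlinear two-point boundary value problem conveys the idea (a classical toy problem, not the shell equations):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy BVP: y'' = 1.5 * y**2 on [0, 1], y(0) = 4, y(1) = 1 (a classical test case).
def rhs(x, y):
    return [y[1], 1.5 * y[0]**2]

def shoot(slope):
    """Integrate with guessed y'(0) = slope and return the boundary mismatch y(1) - 1."""
    sol = solve_ivp(rhs, (0.0, 1.0), [4.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# Bracket the correct initial slope and solve the resulting scalar root problem.
slope = brentq(shoot, -10.0, -5.0)
print("y'(0) =", slope)   # ~ -8, matching the exact solution y = 4/(1+x)^2
```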

  13. Charge carrier concentration dependence of encounter-limited bimolecular recombination in phase-separated organic semiconductor blends

    NASA Astrophysics Data System (ADS)

    Heiber, Michael C.; Nguyen, Thuc-Quyen; Deibel, Carsten

    2016-05-01

    Understanding how the complex intermolecular configurations and nanostructure present in organic semiconductor donor-acceptor blends impacts charge carrier motion, interactions, and recombination behavior is a critical fundamental issue with a particularly major impact on organic photovoltaic applications. In this study, kinetic Monte Carlo (KMC) simulations are used to numerically quantify the complex bimolecular charge carrier recombination behavior in idealized phase-separated blends. Recent KMC simulations have identified how the encounter-limited bimolecular recombination rate in these blends deviates from the often used Langevin model and have been used to construct the new power mean mobility model. Here, we make a challenging but crucial expansion to this work by determining the charge carrier concentration dependence of the encounter-limited bimolecular recombination coefficient. In doing so, we find that an accurate treatment of the long-range electrostatic interactions between charge carriers is critical, and we further argue that many previous KMC simulation studies have used a Coulomb cutoff radius that is too small, which causes a significant overestimation of the recombination rate. To shed more light on this issue, we determine the minimum cutoff radius required to reach an accuracy of less than ±10 % as a function of the domain size and the charge carrier concentration and then use this knowledge to accurately quantify the charge carrier concentration dependence of the recombination rate. Using these rigorous methods, we finally show that the parameters of the power mean mobility model are determined by a newly identified dimensionless ratio of the domain size to the average charge carrier separation distance.
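    For orientation, the Langevin reference rate against which the encounter-limited KMC results are compared is set by the sum of the carrier mobilities, while the power-mean model replaces the sum with a tunable mean. A heavily hedged sketch (the prefactor, the exponent g, and its dependence on domain size are placeholders, not the fitted values of the study):

```python
q, eps0 = 1.602e-19, 8.854e-12

def k_langevin(mu_e, mu_h, eps_r=3.5):
    """Classical Langevin bimolecular recombination coefficient [m^3/s]."""
    return q * (mu_e + mu_h) / (eps0 * eps_r)

def k_power_mean(mu_e, mu_h, g, eps_r=3.5, prefactor=2.0):
    """Power-mean variant: the arithmetic sum is replaced by a power mean of order g.
    With g = 1 and prefactor = 2 this reduces to the Langevin form; g < 1 suppresses
    the rate when the mobilities are mismatched. Values here are placeholders."""
    mean = ((mu_e**g + mu_h**g) / 2.0) ** (1.0 / g)
    return prefactor * q * mean / (eps0 * eps_r)

mu_e, mu_h = 1e-7, 1e-9   # m^2/(V s), strongly mismatched mobilities (illustrative)
print("Langevin   :", k_langevin(mu_e, mu_h))
print("power mean :", k_power_mean(mu_e, mu_h, g=0.3))
```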

  14. A large column analog experiment of stable isotope variations during reactive transport: I. A comprehensive model of sulfur cycling and δ34S fractionation

    NASA Astrophysics Data System (ADS)

    Druhan, Jennifer L.; Steefel, Carl I.; Conrad, Mark E.; DePaolo, Donald J.

    2014-01-01

    This study demonstrates a mechanistic incorporation of the stable isotopes of sulfur within the CrunchFlow reactive transport code to model the range of microbially-mediated redox processes affecting kinetic isotope fractionation. Previous numerical models of microbially mediated sulfate reduction using Monod-type rate expressions have lacked rigorous coupling of individual sulfur isotopologue rates, with the result that they cannot accurately simulate sulfur isotope fractionation over a wide range of substrate concentrations using a constant fractionation factor. Here, we derive a modified version of the dual-Monod or Michaelis-Menten formulation (Maggi and Riley, 2009, 2010) that successfully captures the behavior of the 32S and 34S isotopes over a broad range from high sulfate and organic carbon availability to substrate limitation using a constant fractionation factor. The new model developments are used to simulate a large-scale column study designed to replicate field scale conditions of an organic carbon (acetate) amended biostimulation experiment at the Old Rifle site in western Colorado. Results demonstrate an initial period of iron reduction that transitions to sulfate reduction, in agreement with field-scale behavior observed at the Old Rifle site. At the height of sulfate reduction, effluent sulfate concentrations decreased to 0.5 mM from an influent value of 8.8 mM over the 100 cm flow path, and thus were enriched in sulfate δ34S from 6.3‰ to 39.5‰. The reactive transport model accurately reproduced the measured enrichment in δ34S of both the reactant (sulfate) and product (sulfide) species of the reduction reaction using a single fractionation factor of 0.987 obtained independently from field-scale measurements. The model also accurately simulated the accumulation and δ34S signature of solid phase elemental sulfur over the duration of the experiment, providing a new tool to predict the isotopic signatures associated with reduced mineral pools. To our knowledge, this is the first rigorous treatment of sulfur isotope fractionation subject to Monod kinetics in a mechanistic reactive transport model that considers the isotopic spatial distribution of both dissolved and solid phase sulfur species during microbially-mediated sulfate reduction. The specific aims are to (1) describe the design and results of the large-scale column experiment; (2) demonstrate incorporation of the stable isotopes of sulfur in a dual-Monod kinetic expression such that fractionation is accurately modeled at both high and low substrate availability; (3) verify accurate simulation of the chemical and isotopic gradients in reactant and product sulfur species using a kinetic fractionation factor obtained from field-scale analysis (Druhan et al., 2012); and (4) utilize the model to predict the final δ34S values of secondary sulfur minerals accumulated in the sediment over the course of the experiment. The development of rigorous isotope-specific Monod-type rate expressions is presented here in application to sulfur cycling during amended biostimulation, but is readily applicable to a variety of stable isotope systems associated with both steady state and transient biogenic redox environments. In other words, the association of this model with a uranium remediation experiment does not limit its applicability to more general redox systems. Furthermore, the ability of this model treatment to predict the isotopic composition of secondary minerals accumulated as a result of fractionating processes (item 4) offers an important means of interpreting solid phase isotopic compositions and tracking long-term stability of precipitates.
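    One plausible reading of the isotope-specific dual-Monod rates described above, written as a sketch (parameter values are illustrative and the actual CrunchFlow formulation may differ in detail): both sulfate isotopologues share the same half-saturation term, and the heavy isotopologue reacts more slowly by the constant fractionation factor α.

```python
def dual_monod_rates(S32, S34, OC, mu_max=1e-5, K_OC=1e-4, K_S=1e-4, alpha=0.987):
    """Isotopologue-specific sulfate-reduction rates (illustrative units).
    Both isotopologues compete for the same pathway, so the Monod term uses the
    total sulfate pool; alpha < 1 slows the 34S channel, producing fractionation."""
    monod_oc = OC / (K_OC + OC)
    S_tot = S32 + S34
    r32 = mu_max * monod_oc * S32 / (K_S + S_tot)
    r34 = alpha * mu_max * monod_oc * S34 / (K_S + S_tot)
    return r32, r34

# High-sulfate vs. substrate-limited conditions give the same instantaneous
# fractionation, (r34/r32)/(S34/S32) = alpha, with this formulation.
for S_tot in (8.8e-3, 0.5e-3):
    S34 = 0.042 * S_tot              # rough natural 34S abundance (placeholder)
    S32 = S_tot - S34
    r32, r34 = dual_monod_rates(S32, S34, OC=3e-3)
    print(S_tot, (r34 / r32) / (S34 / S32))   # -> 0.987 in both regimes
```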

  15. Numerical and Experimental Approaches Toward Understanding Lava Flow Heat Transfer

    NASA Astrophysics Data System (ADS)

    Rumpf, M.; Fagents, S. A.; Hamilton, C.; Crawford, I. A.

    2013-12-01

    We have performed numerical modeling and experimental studies to quantify the heat transfer from a lava flow into an underlying particulate substrate. This project was initially motivated by a desire to understand the transfer of heat from a lava flow into the lunar regolith. Ancient regolith deposits that have been protected by a lava flow may contain ancient solar wind, solar flare, and galactic cosmic ray products that can give insight into the history of our solar system, provided the records were not heated and destroyed by the overlying lava flow. In addition, lava-substrate interaction is an important aspect of lava fluid dynamics that requires consideration in lava emplacement models. Our numerical model determines the depth to which the heat pulse will penetrate beneath a lava flow into the underlying substrate. Rigorous treatment of the temperature dependence of lava and substrate thermal conductivity and specific heat capacity, density, and latent heat release is imperative to an accurate model. Experiments were conducted to verify the numerical model. Experimental containers with interior dimensions of 20 x 20 x 25 cm were constructed from 1 inch thick calcium silicate sheeting. For initial experiments, boxes were packed with lunar regolith simulant (GSC-1) to a depth of 15 cm, with thermocouples embedded at regular intervals. Basalt collected at Kilauea Volcano, HI, was melted in a gas forge and poured directly onto the simulant. Initial lava temperatures ranged from ~1200 to 1300 °C. The system was allowed to cool while internal temperatures were monitored by a thermocouple array and external temperatures were monitored by a Forward Looking Infrared (FLIR) video camera. Numerical simulations of the experiments elucidate the details of lava latent heat release and constrain the temperature dependence of the thermal conductivity of the particulate substrate. The temperature dependence of the thermal conductivity of particulate material is not well known, especially at high temperatures. It is important to have this property well constrained, as substrate thermal conductivity is the greatest influence on the rate of lava-substrate heat transfer. At Kilauea and Mauna Loa Volcanoes, Hawaii, and other volcanoes that threaten communities, lava may erupt over a variety of substrate materials including cool lava flows, volcanic tephra, soils, sand, and concrete. The composition, moisture, organic content, porosity, and grain size of the substrate dictate its thermophysical properties, thus affecting the transfer of heat from the lava flow into the substrate and flow mobility. Particulate substrate materials act as insulators, subduing the rate of heat transfer from the flow core. Therefore, lava that flows over a particulate substrate will maintain higher core temperatures over a longer period, enhancing flow mobility and increasing the duration and areal coverage of the resulting flow. Lava flow prediction models should include substrate specification with temperature-dependent material property definitions for an accurate understanding of flow hazards.
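    The substrate side of such a model reduces, in its simplest form, to one-dimensional conduction with temperature-dependent properties. A bare-bones explicit sketch (all property laws and values are placeholders, not the calibrated regolith-simulant data):

```python
import numpy as np

# 1D substrate column heated from above by lava held at a fixed contact temperature.
nx, dx = 150, 1e-3                  # 15 cm of substrate in 1 mm cells
rho, cp = 1600.0, 800.0             # bulk density [kg/m^3], heat capacity [J/(kg K)] (placeholders)
T_lava, T0 = 1400.0, 300.0          # contact and initial temperatures [K] (placeholders)

def k_of_T(T):
    """Placeholder temperature-dependent conductivity of the particulate substrate [W/(m K)]."""
    return 0.15 + 2.0e-4 * (T - 300.0)

T = np.full(nx, T0)
T[0] = T_lava
dt = 0.1 * rho * cp * dx**2 / k_of_T(T_lava)     # well inside the explicit stability limit

for _ in range(50000):                            # roughly five hours of simulated cooling
    k_face = 0.5 * (k_of_T(T[1:]) + k_of_T(T[:-1]))   # conductivity at cell faces
    flux = -k_face * np.diff(T) / dx                   # heat flux between cells [W/m^2]
    T[1:-1] += dt / (rho * cp * dx) * (flux[:-1] - flux[1:])
    T[0] = T_lava                                      # lava contact (Dirichlet)
    T[-1] = T[-2]                                      # insulated bottom

print("depth heated above 400 K [cm]:", 100 * dx * np.count_nonzero(T > 400.0))
```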

  16. Machine learning in the string landscape

    NASA Astrophysics Data System (ADS)

    Carifio, Jonathan; Halverson, James; Krioukov, Dmitri; Nelson, Brent D.

    2017-09-01

    We utilize machine learning to study the string landscape. Deep data dives and conjecture generation are proposed as useful frameworks for utilizing machine learning in the landscape, and examples of each are presented. A decision tree accurately predicts the number of weak Fano toric threefolds arising from reflexive polytopes, each of which determines a smooth F-theory compactification, and linear regression generates a previously proven conjecture for the gauge group rank in an ensemble of 4/3 × 2.96 × 10^755 F-theory compactifications. Logistic regression generates a new conjecture for when E6 arises in the large ensemble of F-theory compactifications, which is then rigorously proven. This result may be relevant for the appearance of visible sectors in the ensemble. Through conjecture generation, machine learning is useful not only for numerics, but also for rigorous results.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; McPherson, Brian J.; Grigg, Reid B.

    Numerical simulation is an invaluable analytical tool for scientists and engineers in making predictions about the fate of carbon dioxide injected into deep geologic formations for long-term storage. Current numerical simulators for assessing storage in deep saline formations have capabilities for modeling strongly coupled processes involving multifluid flow, heat transfer, chemistry, and rock mechanics in geologic media. Except for moderate pressure conditions, numerical simulators for deep saline formations only require the tracking of two immiscible phases and a limited number of phase components, beyond those comprising the geochemical reactive system. The requirements for numerically simulating the utilization and storage of carbon dioxide in partially depleted petroleum reservoirs are more numerous than those for deep saline formations. The minimum number of immiscible phases increases to three, the number of phase components may easily increase fourfold, and the coupled processes of heat transfer, geochemistry, and geomechanics remain. Public and scientific confidence in the numerical simulators used for carbon dioxide sequestration in deep saline formations has advanced via a natural progression of the simulators being proven against benchmark problems, code comparisons, laboratory-scale experiments, pilot-scale injections, and commercial-scale injections. This paper describes a new numerical simulator for the scientific investigation of carbon dioxide utilization and storage in partially depleted petroleum reservoirs, with an emphasis on its unique features for scientific investigations. It also documents the numerical simulation of carbon dioxide utilization for enhanced oil recovery in the western section of the Farnsworth Unit, which represents an early stage in the progression of numerical simulators for carbon utilization and storage in depleted oil reservoirs.

  18. A New Numerical Simulation technology of Multistage Fracturing in Horizontal Well

    NASA Astrophysics Data System (ADS)

    Cheng, Ning; Kang, Kaifeng; Li, Jianming; Liu, Tao; Ding, Kun

    2017-11-01

    Horizontal multi-stage fracturing is recognized as an effective development technology for unconventional oil resources. Geomechanics occupies a very important position in the numerical simulation of hydraulic fracturing: compared with conventional numerical simulation technology, the new approach accounts for the influence of geological mechanics, and can therefore more effectively optimize the fracturing design and evaluate post-fracturing production. The study presented in this paper is based on a three-dimensional stress and rock-physics parameter model and uses the latest fluid-solid coupling numerical simulation technology to capture the fracture propagation process, describe the change of the stress field during fracturing, and finally predict production.

  19. Improved key-rate bounds for practical decoy-state quantum-key-distribution systems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng

    2017-01-01

    The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
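    The gap between the heuristic Gaussian treatment and a Chernoff-type bound can already be seen for a single observed count. A rough sketch using generic concentration bounds (not the specific estimators or parameters of the paper):

```python
import math
from scipy.stats import norm

N, p_obs, eps_fail = 10**6, 0.01, 1e-10     # trials, observed fraction, failure probability
mu = N * p_obs                               # observed count

# Heuristic Gaussian deviation at the same failure probability.
z = norm.isf(eps_fail)                       # one-sided quantile, ~6.4 for 1e-10
gauss_dev = z * math.sqrt(mu * (1.0 - p_obs))

# Chernoff-type multiplicative bound: P[X >= (1+delta)*mu] <= exp(-delta^2*mu/(2+delta)).
target = math.log(1.0 / eps_fail)
delta = 0.0
while delta**2 * mu / (2.0 + delta) < target:
    delta += 1e-5
chernoff_dev = delta * mu

print("Gaussian deviation :", round(gauss_dev, 1))
print("Chernoff deviation :", round(chernoff_dev, 1))   # more conservative in this example
```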

  20. Extreme Response Style: Which Model Is Best?

    ERIC Educational Resources Information Center

    Leventhal, Brian

    2017-01-01

    More robust and rigorous psychometric models, such as multidimensional Item Response Theory models, have been advocated for survey applications. However, item responses may be influenced by construct-irrelevant variance factors such as preferences for extreme response options. Through empirical and simulation methods, this study evaluates the use…

  1. Characterization of the mechanical properties of a new grade of ultra high molecular weight polyethylene and modeling with the viscoplasticity based on overstress.

    PubMed

    Khan, Fazeel; Yeakle, Colin; Gomaa, Said

    2012-02-01

    Enhancements to the service life and performance of orthopedic implants used in total knee and hip replacement procedures can be achieved through optimization of design and the development of superior biocompatible polymeric materials. The introduction of a new or modified polymer must, naturally, be preceded by a rigorous testing program. This paper presents the assessment of the mechanical properties of a new filled grade of ultra high molecular weight polyethylene (UHMWPE) designated AOX(TM) and developed by DePuy Orthopaedics Inc. The deformation behavior was investigated through a series of tensile and compressive tests including strain rate sensitivity, creep, relaxation, and recovery. The polymer was found to exhibit rate-reversal behavior for certain loading histories: strain rate during creep with a compressive stress can be negative, positive, or change between the two during a test. Analogous behavior occurs during relaxation as well. This behavior lies beyond the realm of most numerical models used to computationally investigate and improve part geometry through finite element analysis of components. To address this shortcoming, the viscoplasticity theory based on overstress (VBO) has been suitably modified to capture these trends. VBO is a state variable based model in a differential formulation. Numerical simulation and prediction of all of the aforementioned tests, including good reproduction of the rate reversal behavior, is presented in this study. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Semi-physical Simulation of the Airborne InSAR based on Rigorous Geometric Model and Real Navigation Data

    NASA Astrophysics Data System (ADS)

    Changyong, Dou; Huadong, Guo; Chunming, Han; yuquan, Liu; Xijuan, Yue; Yinghui, Zhao

    2014-03-01

    Raw signal simulation is a useful tool for the system design, mission planning, processing algorithm testing, and inversion algorithm design of Synthetic Aperture Radar (SAR). Due to the wide and highly frequent variation of the aircraft's trajectory and attitude, and the low accuracy of the data recorded by the Position and Orientation System (POS), it is difficult to quantitatively study the sensitivity of the key parameters of the airborne Interferometric SAR (InSAR) system, i.e., the baseline length and inclination, the absolute phase, and the orientation of the antennas, resulting in challenges for its applications. Furthermore, the imprecise estimation of the installation offset between the Global Positioning System (GPS), the Inertial Measurement Unit (IMU), and the InSAR antennas compounds the issue. An airborne InSAR simulation based on a rigorous geometric model and real navigation data is proposed in this paper, providing a way to quantitatively study the key parameters and to evaluate their effect on the applications of airborne InSAR, such as photogrammetric mapping, high-resolution Digital Elevation Model (DEM) generation, and surface deformation measurement by differential InSAR. The simulation can also provide a reference for the optimal design of the InSAR system and for the improvement of InSAR data processing technologies such as motion compensation, imaging, image co-registration, and application parameter retrieval.

  3. Two Novel Methods and Multi-Mode Periodic Solutions for the Fermi-Pasta-Ulam Model

    NASA Astrophysics Data System (ADS)

    Arioli, Gianni; Koch, Hans; Terracini, Susanna

    2005-04-01

    We introduce two novel methods for studying periodic solutions of the FPU β-model, both numerically and rigorously. One is a variational approach, based on the dual formulation of the problem, and the other involves computer-assisted proofs. These methods are used e.g. to construct a new type of solutions, whose energy is spread among several modes, associated with closely spaced resonances.
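    For reference, the FPU β-chain with fixed ends obeys ẍ_j = (x_{j+1} − 2x_j + x_{j−1}) + β[(x_{j+1} − x_j)³ − (x_j − x_{j−1})³]; a minimal velocity-Verlet integration sketch (initial data and parameters are arbitrary, and this is plain time stepping, not the variational or computer-assisted methods of the paper):

```python
import numpy as np

N, beta, dt = 32, 0.1, 0.01

def forces(x):
    """FPU-beta chain with fixed ends: linear coupling plus cubic bond nonlinearity."""
    xp = np.concatenate(([0.0], x, [0.0]))   # fixed boundary particles
    d = np.diff(xp)                           # bond stretches
    return (d[1:] - d[:-1]) + beta * (d[1:]**3 - d[:-1]**3)

# Start in the lowest linear mode, then integrate with velocity Verlet.
j = np.arange(1, N + 1)
x = np.sin(np.pi * j / (N + 1))
v = np.zeros(N)
f = forces(x)
for _ in range(20000):
    v += 0.5 * dt * f
    x += dt * v
    f = forces(x)
    v += 0.5 * dt * f

d_final = np.diff(np.concatenate(([0.0], x, [0.0])))
energy = 0.5 * v @ v + np.sum(0.5 * d_final**2 + 0.25 * beta * d_final**4)
print("total energy (should be nearly conserved):", energy)
```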

  4. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    PubMed

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power-a directly measured quantity in contrast to the spectral radiance in the Beer-Lambert law-are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.

  5. Augmented assessment as a means to augmented reality.

    PubMed

    Bergeron, Bryan

    2006-01-01

    Rigorous scientific assessment of educational technologies typically lags behind the availability of the technologies by years because of the lack of validated instruments and benchmarks. Even when the appropriate assessment instruments are available, they may not be applied because of time and monetary constraints. Work in augmented reality, instrumented mannequins, serious gaming, and similar promising educational technologies that haven't undergone timely, rigorous evaluation, highlights the need for assessment methodologies that address the limitations of traditional approaches. The most promising augmented assessment solutions incorporate elements of rapid prototyping used in the software industry, simulation-based assessment techniques modeled after methods used in bioinformatics, and object-oriented analysis methods borrowed from object oriented programming.

  6. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
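    The parameterized model referred to above is commonly written in the Stockdon et al. (2006) form; a compact sketch of that widely cited version follows (the coefficients should be checked against the original paper before use, and the dissipative-beach branch is omitted):

```python
import math

def runup_2pct(H0, T, beta_f, g=9.81):
    """2% exceedance runup from the commonly cited Stockdon et al. (2006)
    parameterization, returning (R2, setup, incident swash, infragravity swash)."""
    L0 = g * T**2 / (2.0 * math.pi)          # deep-water wavelength
    root = math.sqrt(H0 * L0)
    setup = 0.35 * beta_f * root
    s_inc = 0.75 * beta_f * root              # incident-band swash
    s_ig = 0.06 * root                        # infragravity swash
    swash = math.sqrt(s_inc**2 + s_ig**2)
    return 1.1 * (setup + swash / 2.0), setup, s_inc, s_ig

# Illustrative storm conditions: H0 = 4 m, T = 12 s, foreshore slope 0.08.
print(runup_2pct(4.0, 12.0, 0.08))
```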

  7. High-speed extended-term time-domain simulation for online cascading analysis of power system

    NASA Astrophysics Data System (ADS)

    Fu, Chuan

    A high-speed extended-term (HSET) time domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action ID, storage, and fast retrieval, and (v) high-speed execution. Very fast on-line computational capability is the most desired attribute of this simulator. Based on the process of solving the differential algebraic equations describing the dynamics of a power system, HSET-TDS seeks to develop computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation (HSET-TDS) for on-line purposes, this thesis presents principles for designing numerical solvers of differential algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the PJM 13029-bus system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time domain simulation software for supercomputers. The stiffness-decoupling method is able to combine the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task via scale with the stiffness decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task via the time axis using a highly precise integration method, the Kuntzmann-Butcher Method - order 8 (KB8). The strategy of partitioning events is designed to partition the whole simulation via the time axis through a simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent, and therefore minimum communication time is needed.
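
    To illustrate the structure that an implicit integration step shares with HH4, the sketch below advances a stiff test system with the A-stable trapezoidal rule and a Newton solve; the test system, step size, and tolerances are assumptions, and HH4 itself solves a larger implicit stage system to reach fourth-order accuracy.

        import numpy as np

        def trapezoidal_step(f, jac, y, t, h, tol=1e-10, max_iter=20):
            """One A-stable trapezoidal-rule step for y' = f(t, y).

            Solves y_new - y - (h/2) * (f(t, y) + f(t + h, y_new)) = 0 with
            Newton's method; HH4, as described above, has the same implicit
            structure but higher (h^4) accuracy.
            """
            y_new = y + h * f(t, y)                      # explicit Euler predictor
            for _ in range(max_iter):
                g = y_new - y - 0.5 * h * (f(t, y) + f(t + h, y_new))
                J = np.eye(len(y)) - 0.5 * h * jac(t + h, y_new)
                dy = np.linalg.solve(J, -g)
                y_new = y_new + dy
                if np.linalg.norm(dy) < tol:
                    break
            return y_new

        # Stiff linear test system (assumed for illustration): y' = A y.
        A = np.array([[-1000.0, 1.0], [0.0, -1.0]])
        f = lambda t, y: A @ y
        jac = lambda t, y: A
        y = np.array([1.0, 1.0])
        for step in range(10):
            y = trapezoidal_step(f, jac, y, 0.0, 0.01)
        print(y)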

  8. Rigorous diffraction analysis using geometrical theory of diffraction for future mask technology

    NASA Astrophysics Data System (ADS)

    Chua, Gek S.; Tay, Cho J.; Quan, Chenggen; Lin, Qunying

    2004-05-01

    Advanced lithographic techniques such as phase shift masks (PSM) and optical proximity correction (OPC) result in a more complex mask design and technology. In contrast to binary masks, which have only transparent and nontransparent regions, phase shift masks also take into consideration transparent features with a different optical thickness and a modified phase of the transmitted light. PSM are well known to show prominent diffraction effects, which cannot be described by the assumption of an infinitely thin mask (Kirchhoff approach) that is used in many commercial photolithography simulators. A correct prediction of sidelobe printability, process windows and linearity of OPC masks requires the application of rigorous diffraction theory. The problem of aerial image intensity imbalance through focus with alternating phase shift masks (altPSMs) is analyzed and compared between a time-domain finite-difference (TDFD) algorithm (TEMPEST) and the geometrical theory of diffraction (GTD). Using GTD, with the solution to the canonical problems, we obtained a relationship between an edge on the mask and the disturbance in image space. The main interest is to develop useful formulations that can be readily applied to solve rigorous diffraction for future mask technology. Analysis of rigorous diffraction effects for altPSMs using the GTD approach will be discussed.

  9. Numerical simulation of the processes in the normal incidence tube for high acoustic pressure levels

    NASA Astrophysics Data System (ADS)

    Fedotov, E. S.; Khramtsov, I. V.; Kustov, O. Yu.

    2016-10-01

    Numerical simulation of the acoustic processes in an impedance tube at high sound pressure levels is one way to address the problem of noise suppression by liners. The liner specimen studied here is a single cylindrical Helmholtz resonator. The real and imaginary parts of the liner acoustic impedance and the sound absorption coefficient were evaluated for sound pressure levels of 130, 140 and 150 dB. The numerical simulation used experimental data obtained in an impedance tube with normal-incidence waves. In the first stage of the numerical simulation, the linearized Navier-Stokes equations, which describe the imaginary part of the liner impedance well regardless of the sound pressure level, were solved by the finite element method in the COMSOL Multiphysics program in an axisymmetric formulation. In the second stage, the complete Navier-Stokes equations were solved by direct numerical simulation in ANSYS CFX in an axisymmetric formulation. As a result, acceptable agreement between numerical simulation and experiment was obtained.

  10. Black Holes, Gravitational Waves, and LISA

    NASA Technical Reports Server (NTRS)

    Baker, John

    2009-01-01

    Binary black hole mergers are central to many key science objectives of the Laser Interferometer Space Antenna (LISA). For many systems the strongest part of the signal is only understood through numerical simulations. Gravitational wave emissions are understood by simulations of vacuum General Relativity (GR). I discuss numerical simulation results from the perspective of LISA's needs, with indications of work that remains to be done. Some exciting scientific opportunities associated with LISA observations would be greatly enhanced if a prompt electromagnetic signature could be associated with the merger. I discuss simulations to explore this possibility. Numerical simulations are important now for clarifying LISA's science potential and planning the mission. We also consider how numerical simulations might be applied at the time of LISA's operation.

  11. Turing patterns in parabolic systems of conservation laws and numerically observed stability of periodic waves

    NASA Astrophysics Data System (ADS)

    Barker, Blake; Jung, Soyeun; Zumbrun, Kevin

    2018-03-01

    Turing patterns on unbounded domains have been widely studied in systems of reaction-diffusion equations. However, up to now, they have not been studied for systems of conservation laws. Here, we (i) derive conditions for Turing instability in conservation laws and (ii) use these conditions to find families of periodic solutions bifurcating from uniform states, numerically continuing these families into the large-amplitude regime. For the examples studied, numerical stability analysis suggests that stable periodic waves can emerge either from supercritical Turing bifurcations or, via secondary bifurcation as amplitude is increased, from subcritical Turing bifurcations. This answers in the affirmative a question of Oh and Zumbrun as to whether stable periodic solutions of conservation laws can occur. Determination of a full small-amplitude stability diagram - specifically, determination of rigorous Eckhaus-type stability conditions - remains an interesting open problem.
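
    For orientation, the sketch below checks the classic Turing-instability criterion for a two-component reaction-diffusion system from its linear dispersion relation (stable at wavenumber zero, growing at some nonzero wavenumber); the Jacobian and diffusion matrix are assumed illustrative values, and the conservation-law setting analyzed in the paper requires a different treatment.

        import numpy as np

        def turing_unstable(J, D, k_values):
            """True if the homogeneous state is stable at k = 0 but some
            nonzero wavenumber grows, i.e. max Re(eig(J - k^2 D)) > 0."""
            stable_at_zero = np.max(np.real(np.linalg.eigvals(J))) < 0
            growth = [np.max(np.real(np.linalg.eigvals(J - (k**2) * D)))
                      for k in k_values if k > 0]
            return stable_at_zero and max(growth) > 0

        # Assumed illustrative Jacobian of the reaction kinetics at a uniform
        # state, and a diffusion matrix with a large diffusivity ratio.
        J = np.array([[0.5, 1.0], [-1.5, -1.0]])
        D = np.diag([1.0, 40.0])
        print(turing_unstable(J, D, np.linspace(0.01, 5.0, 500)))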

  12. DG-IMEX Stochastic Galerkin Schemes for Linear Transport Equation with Random Inputs and Diffusive Scalings

    DOE PAGES

    Chen, Zheng; Liu, Liu; Mu, Lin

    2017-05-03

    In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects are addressed: a uniform numerical stability with respect to the Knudsen number ϵ and a uniform-in-ϵ error estimate are given. For temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for the deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and sAP property of the method are conducted.

  13. Numerical Analysis of Constrained Dynamical Systems, with Applications to Dynamic Contact of Solids, Nonlinear Elastodynamics and Fluid-Structure Interactions

    DTIC Science & Technology

    2000-12-01

    [Extraction residue from the report's table of contents; the recoverable entries list representative numerical simulations, including impact of a rod on a rigid wall, forging, and a simplified model of thin beams.]

  14. Automating Nuclear-Safety-Related SQA Procedures with Custom Applications

    DOE PAGES

    Freels, James D.

    2016-01-01

    Nuclear safety-related procedures are rigorous for good reason. Small design mistakes can quickly turn into unwanted failures. Researchers at Oak Ridge National Laboratory worked with COMSOL to define a simulation app that automates the software quality assurance (SQA) verification process and provides results in less than 24 hours.

  15. Agricultural model intercomparison and improvement project: Overview of model intercomparisons

    USDA-ARS?s Scientific Manuscript database

    Improvement of crop simulation models to better estimate growth and yield is one of the objectives of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The overall goal of AgMIP is to provide an assessment of crop models through rigorous intercomparisons and evaluate future clim...

  16. Applying the Bootstrap to Taxometric Analysis: Generating Empirical Sampling Distributions to Help Interpret Results

    ERIC Educational Resources Information Center

    Ruscio, John; Ruscio, Ayelet Meron; Meron, Mati

    2007-01-01

    Meehl's taxometric method was developed to distinguish categorical and continuous constructs. However, taxometric output can be difficult to interpret because expected results for realistic data conditions and differing procedural implementations have not been derived analytically or studied through rigorous simulations. By applying bootstrap…
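
    The general bootstrap idea referred to above can be sketched as follows; the data and the statistic (a simple mean) are placeholders rather than taxometric curves or indices.

        import numpy as np

        def bootstrap_distribution(data, statistic, n_boot=2000, seed=0):
            """Empirical sampling distribution of `statistic` obtained by
            resampling the data with replacement."""
            rng = np.random.default_rng(seed)
            n = len(data)
            return np.array([statistic(data[rng.integers(0, n, size=n)])
                             for _ in range(n_boot)])

        # Placeholder data: the same machinery would be applied to taxometric
        # output rather than a simple mean.
        data = np.random.default_rng(1).normal(size=200)
        dist = bootstrap_distribution(data, np.mean)
        print(np.percentile(dist, [2.5, 97.5]))   # empirical 95% interval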

  17. Atomistic simulations of ultra-short pulse laser ablation of aluminum: validity of the Lambert-Beer law

    NASA Astrophysics Data System (ADS)

    Eisfeld, Eugen; Roth, Johannes

    2018-05-01

    Based on hybrid molecular dynamics/two-temperature simulations, we study the validity of the application of Lambert-Beer's law, which is conveniently used in various modeling approaches of ultra-short pulse laser ablation of metals. The method is compared to a more rigorous treatment, which involves solving the Helmholtz wave equation for different pulse durations ranging from 100 fs to 5 ps and a wavelength of 800 nm. Our simulations show a growing agreement with increasing pulse durations, and we provide appropriate optical parameters for all investigated pulse durations.
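
    A minimal sketch of the Lambert-Beer energy deposition profile that such hybrid models conveniently assume is given below; the absorbed fluence and optical penetration depth are round illustrative numbers, not the calibrated parameters reported in the paper.

        import numpy as np

        def lambert_beer_source(z, absorbed_fluence, penetration_depth):
            """Absorbed energy per unit depth, S(z) = F_abs / delta * exp(-z / delta)."""
            return absorbed_fluence / penetration_depth * np.exp(-z / penetration_depth)

        # Assumed illustrative numbers (not from the paper): 0.1 J/cm^2 absorbed
        # fluence (1.0e3 J/m^2) and an optical penetration depth of ~8 nm.
        z = np.linspace(0.0, 50e-9, 200)            # depth in metres
        S = lambert_beer_source(z, 1.0e3, 8e-9)     # J/m^3
        print(S[0], S[-1])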

  18. Contributions to the Characterization and Mitigation of Rotorcraft Brownout

    NASA Astrophysics Data System (ADS)

    Tritschler, John Kirwin

    Rotorcraft brownout, the condition in which the flow field of a rotorcraft mobilizes sediment from the ground to generate a cloud that obscures the pilot's field of view, continues to be a significant hazard to civil and military rotorcraft operations. This dissertation presents methodologies for: (i) the systematic mitigation of rotorcraft brownout through operational and design strategies and (ii) the quantitative characterization of the visual degradation caused by a brownout cloud. In Part I of the dissertation, brownout mitigation strategies are developed through simulation-based brownout studies that are mathematically formulated within a numerical optimization framework. Two optimization studies are presented. The first study involves the determination of approach-to-landing maneuvers that result in reduced brownout severity. The second study presents a potential methodology for the design of helicopter rotors with improved brownout characteristics. The results of both studies indicate that the fundamental mechanisms underlying brownout mitigation are aerodynamic in nature, and the evolution of a ground vortex ahead of the rotor disk is seen to be a key element in the development of a brownout cloud. In Part II of the dissertation, brownout cloud characterizations are based upon the Modulation Transfer Function (MTF), a metric commonly used in the optics community for the characterization of imaging systems. The use of the MTF in experimentation is examined first, and the application of MTF calculation and interpretation methods to actual flight test data is described. The potential for predicting the MTF from numerical simulations is examined second, and an initial methodology is presented for the prediction of the MTF of a brownout cloud. Results from the experimental and analytical studies rigorously quantify the intuitively-known facts that the visual degradation caused by brownout is a space and time-dependent phenomenon, and that high spatial frequency features, i.e., fine-grained detail, are obscured before low spatial frequency features, i.e., large objects. As such, the MTF is a metric that is amenable to Handling Qualities (HQ) analyses.
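
    As a sketch of the MTF concept used here, the snippet below computes an MTF as the normalized magnitude of the Fourier transform of a line spread function; the Gaussian LSF is a stand-in for real brownout imagery, not flight-test data.

        import numpy as np

        def mtf_from_lsf(lsf, dx):
            """Modulation transfer function as |FFT(LSF)| normalised to unity
            at zero spatial frequency."""
            otf = np.fft.rfft(lsf)
            freqs = np.fft.rfftfreq(len(lsf), d=dx)
            return freqs, np.abs(otf) / np.abs(otf[0])

        # Placeholder line spread function: a Gaussian blur standing in for the
        # degradation produced by a brownout cloud.
        x = np.linspace(-5.0, 5.0, 1024)
        lsf = np.exp(-x**2 / (2 * 0.5**2))
        freqs, mtf = mtf_from_lsf(lsf, x[1] - x[0])
        print(mtf[:5])   # MTF falls off faster as the blur (cloud density) grows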

  19. Crack propagation and arrest in CFRP materials with strain softening regions

    NASA Astrophysics Data System (ADS)

    Dilligan, Matthew Anthony

    Understanding the growth and arrest of cracks in composite materials is critical for their effective utilization in fatigue-sensitive and damage susceptible applications such as primary aircraft structures. Local tailoring of the laminate stack to provide crack arrest capacity intermediate to major structural components has been investigated and demonstrated since some of the earliest efforts in composite aerostructural design, but to date no rigorous model of the crack arrest mechanism has been developed to allow effective sizing of these features. To address this shortcoming, the previous work in the field is reviewed, with particular attention to the analysis methodologies proposed for similar arrest features. The damage and arrest processes active in such features are investigated, and various models of these processes are discussed and evaluated. Governing equations are derived based on a proposed mechanistic model of the crack arrest process. The derived governing equations are implemented in a numerical model, and a series of simulations are performed to ascertain the general characteristics of the proposed model and allow qualitative comparison to existing experimental results. The sensitivity of the model and the arrest process to various parameters is investigated, and preliminary conclusions regarding the optimal feature configuration are developed. To address deficiencies in the available material and experimental data, a series of coupon tests are developed and conducted covering a range of arrest zone configurations. Test results are discussed and analyzed, with a particular focus on identification of the proposed failure and arrest mechanisms. Utilizing the experimentally derived material properties, the tests are reproduced with both the developed numerical tool as well as a FEA-based implementation of the arrest model. Correlation between the simulated and experimental results is analyzed, and future avenues of investigation are identified. Utilizing the developed model, a sensitivity study is conducted to assess the current proposed arrest configuration. Optimum distribution and sizing of the arrest zones is investigated, and general design guidelines are developed.

  20. Quantitative Characterization of the Microstructure and Transport Properties of Biopolymer Networks

    PubMed Central

    Jiao, Yang; Torquato, Salvatore

    2012-01-01

    Biopolymer networks are of fundamental importance to many biological processes in normal and tumorous tissues. In this paper, we employ the panoply of theoretical and simulation techniques developed for characterizing heterogeneous materials to quantify the microstructure and effective diffusive transport properties (diffusion coefficient De and mean survival time τ) of collagen type I networks at various collagen concentrations. In particular, we compute the pore-size probability density function P(δ) for the networks and present a variety of analytical estimates of the effective diffusion coefficient De for finite-sized diffusing particles, including the low-density approximation, the Ogston approximation, and the Torquato approximation. The Hashin-Shtrikman upper bound on the effective diffusion coefficient De and the pore-size lower bound on the mean survival time τ are used as benchmarks to test our analytical approximations and numerical results. Moreover, we generalize the efficient first-passage-time techniques for Brownian-motion simulations in suspensions of spheres to the case of fiber networks and compute the associated effective diffusion coefficient De as well as the mean survival time τ, which is related to nuclear magnetic resonance (NMR) relaxation times. Our numerical results for De are in excellent agreement with analytical results for simple network microstructures, such as periodic arrays of parallel cylinders. Specifically, the Torquato approximation provides the most accurate estimates of De for all collagen concentrations among all of the analytical approximations we consider. We formulate a universal curve for τ for the networks at different collagen concentrations, extending the work of Yeong and Torquato [J. Chem. Phys. 106, 8814 (1997)]. We apply rigorous cross-property relations to estimate the effective bulk modulus of collagen networks from a knowledge of the effective diffusion coefficient computed here. The use of cross-property relations to link other physical properties to the transport properties of collagen networks is also discussed. PMID:22683739
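
    A toy two-dimensional analogue of the pore-size sampling described above is sketched below: random points in the void phase are drawn and their distances to the nearest obstacle surface are histogrammed to estimate P(δ); disks stand in for fiber cross-sections, and all parameters are illustrative.

        import numpy as np

        def pore_size_samples(centers, radius, n_samples=20000, box=1.0, seed=0):
            """Distances from random void-phase points to the nearest obstacle
            surface; their histogram estimates the pore-size density P(delta)."""
            rng = np.random.default_rng(seed)
            samples = []
            while len(samples) < n_samples:
                p = rng.random(2) * box
                d = np.sqrt(((centers - p) ** 2).sum(axis=1)) - radius
                if d.min() > 0:                 # point lies in the void phase
                    samples.append(d.min())     # distance to nearest obstacle
            return np.array(samples)

        # Toy 2D analogue: random disks stand in for fiber cross-sections.
        rng = np.random.default_rng(1)
        centers = rng.random((200, 2))
        delta = pore_size_samples(centers, radius=0.01)
        hist, edges = np.histogram(delta, bins=50, density=True)   # estimate of P(delta)
        print(delta.mean())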

  1. Stem cell stratagems in alternative medicine.

    PubMed

    Sipp, Douglas

    2011-05-01

    Stem cell research has attracted an extraordinary amount of attention and expectation due to its potential for applications in the treatment of numerous medical conditions. These exciting clinical prospects have generated widespread support from both the public and private sectors, and numerous preclinical studies and rigorous clinical trials have already been initiated. Recent years, however, have also seen alarming growth in the number and variety of claims of clinical uses of notional 'stem cells' that have not been adequately tested for safety and/or efficacy. In this article, I will survey the contours of the stem cell industry as practiced by alternative medicine providers, and highlight points of commonality in their strategies for marketing.

  2. Interpretation of high-dimensional numerical results for the Anderson transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suslov, I. M., E-mail: suslov@kapitza.ras.ru

    The existence of the upper critical dimension d_c2 = 4 for the Anderson transition is a rigorous consequence of the Bogoliubov theorem on renormalizability of φ^4 theory. For d ≥ 4 dimensions, one-parameter scaling does not hold and all existing numerical data should be reinterpreted. These data are exhausted by the results for d = 4, 5 from scaling in quasi-one-dimensional systems and the results for d = 4, 5, 6 from level statistics. All these data are compatible with the theoretical scaling dependences obtained from Vollhardt and Wolfle's self-consistent theory of localization. The widespread viewpoint that d_c2 = ∞ is critically discussed.

  3. Linearly first- and second-order, unconditionally energy stable schemes for the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofeng, E-mail: xfyang@math.sc.edu; Han, Daozhi, E-mail: djhan@iu.edu

    2017-02-01

    In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formulas (BDF2) and the second order Crank–Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We prove that all three schemes are unconditionally energy stable rigorously. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.
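
    The sketch below is not the authors' unconditionally energy stable linearized schemes, but a generic first-order semi-implicit Fourier-spectral step for the phase field crystal equation φ_t = ∇²[(1+∇²)²φ + φ³ - εφ]; it shows the kind of linear, symmetric solve such schemes reduce to, with illustrative parameters.

        import numpy as np

        def pfc_semi_implicit_step(phi, dt, eps, dx):
            """One semi-implicit step for phi_t = lap[(1 + lap)^2 phi + phi^3 - eps*phi]:
            the stiff linear term is treated implicitly in Fourier space, the
            nonlinear term explicitly.  (A simple sketch; the paper's schemes add
            stabilisation to obtain unconditional energy stability.)"""
            n = phi.shape[0]
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            k2 = kx**2 + ky**2
            lin = k2 * (1.0 - k2) ** 2                      # from lap (1 + lap)^2
            nonlinear_hat = np.fft.fft2(phi**3 - eps * phi)
            phi_hat = (np.fft.fft2(phi) - dt * k2 * nonlinear_hat) / (1.0 + dt * lin)
            return np.real(np.fft.ifft2(phi_hat))

        # Small random initial condition on a periodic grid (illustrative values).
        rng = np.random.default_rng(0)
        phi = 0.05 * rng.standard_normal((64, 64))
        for _ in range(100):
            phi = pfc_semi_implicit_step(phi, dt=0.1, eps=0.25, dx=np.pi / 4)
        print(phi.min(), phi.max())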

  4. An explicit canopy BRDF model and inversion. [Bidirectional Reflectance Distribution Function

    NASA Technical Reports Server (NTRS)

    Liang, Shunlin; Strahler, Alan H.

    1992-01-01

    Based on a rigorous canopy radiative transfer equation, the multiple scattering radiance is approximated by asymptotic theory, and the single scattering radiance calculation, which requires a numerical integration because the hotspot effect is considered, is simplified. A new formulation is presented to obtain a more exact angular dependence of the sky radiance distribution. The unscattered solar radiance and single scattering radiance are calculated exactly, and the multiple scattering is approximated by the delta two-stream atmospheric radiative transfer model. Numerical experiments show that the parametric canopy model is very accurate, especially when the viewing angles are smaller than 55 deg. The Powell algorithm is used to retrieve biospheric parameters from ground-measured multiangle observations.

  5. Skill Assessment for Coupled Biological/Physical Models of Marine Systems.

    PubMed

    Stow, Craig A; Jolliff, Jason; McGillicuddy, Dennis J; Doney, Scott C; Allen, J Icarus; Friedrichs, Marjorie A M; Rose, Kenneth A; Wallhead, Philip

    2009-02-20

    Coupled biological/physical models of marine systems serve many purposes including the synthesis of information, hypothesis generation, and as a tool for numerical experimentation. However, marine system models are increasingly used for prediction to support high-stakes decision-making. In such applications it is imperative that a rigorous model skill assessment is conducted so that the model's capabilities are tested and understood. Herein, we review several metrics and approaches useful to evaluate model skill. The definition of skill and the determination of the skill level necessary for a given application is context specific and no single metric is likely to reveal all aspects of model skill. Thus, we recommend the use of several metrics, in concert, to provide a more thorough appraisal. The routine application and presentation of rigorous skill assessment metrics will also serve the broader interests of the modeling community, ultimately resulting in improved forecasting abilities as well as helping us recognize our limitations.
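
    A few commonly used skill metrics (not necessarily the exact set reviewed in the paper) can be computed as in the sketch below; the observed and modeled values are placeholders.

        import numpy as np

        def skill_metrics(obs, model):
            """A small set of common skill metrics; the paper recommends using
            several such metrics in concert."""
            obs, model = np.asarray(obs, float), np.asarray(model, float)
            bias = np.mean(model - obs)
            rmse = np.sqrt(np.mean((model - obs) ** 2))
            corr = np.corrcoef(obs, model)[0, 1]
            # Modeling efficiency (Nash-Sutcliffe): 1 = perfect, 0 = no better
            # than predicting the observed mean.
            mef = 1.0 - np.sum((model - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
            return {"bias": bias, "rmse": rmse, "r": corr, "mef": mef}

        # Hypothetical observed vs. modeled values.
        print(skill_metrics([1.0, 2.0, 3.5, 2.8], [1.2, 1.7, 3.9, 2.5]))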

  6. A rigorous approach to the formulation of extended Born-Oppenheimer equation for a three-state system

    NASA Astrophysics Data System (ADS)

    Sarkar, Biplab; Adhikari, Satrajit

    If a coupled three-state electronic manifold forms a sub-Hilbert space, it is possible to express the non-adiabatic coupling (NAC) elements in terms of adiabatic-diabatic transformation (ADT) angles. Consequently, we demonstrate: (a) Those explicit forms of the NAC terms satisfy the Curl conditions with non-zero Divergences; (b) The formulation of extended Born-Oppenheimer (EBO) equation for any three-state BO system is possible only when there exists coordinate independent ratio of the gradients for each pair of ADT angles leading to zero Curls at and around the conical intersection(s). With these analytic advancements, we formulate a rigorous EBO equation and explore its validity as well as necessity with respect to the approximate one (Sarkar and Adhikari, J Chem Phys 2006, 124, 074101) by performing numerical calculations on two different models constructed with different chosen forms of the NAC elements.

  7. Comparison of theory and direct numerical simulations of drag reduction by rodlike polymers in turbulent channel flows.

    PubMed

    Benzi, Roberto; Ching, Emily S C; De Angelis, Elisabetta; Procaccia, Itamar

    2008-04-01

    Numerical simulations of turbulent channel flows, with or without additives, are limited in the extent of the Reynolds number (Re) and Deborah number (De). The comparison of such simulations to theories of drag reduction, which are usually derived for asymptotically high Re and De, calls for some care. In this paper we present a study of drag reduction by rodlike polymers in a turbulent channel flow using direct numerical simulation and illustrate how these numerical results should be related to the recently developed theory.

  8. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity I: Method

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei

    2016-03-01

    In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.

  9. A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.

    PubMed

    Ling, Hong; Luo, Ercang; Dai, Wei

    2006-12-22

    Thermoacoustic prime movers can generate pressure oscillations without any moving parts, based on the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in this paper. First, a four-port network method is used to build the transcendental equation of complex frequency as a criterion to judge whether the temperature distribution of the whole thermoacoustic system is correct for a given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. The numerical simulation code is shown to run robustly and to output the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE), showing that the numerical simulation agrees with the experimental results with acceptable accuracy.

  10. Spacecraft VHF Radio Propagation Analysis in Ocean Environments Including Atmospheric Effects

    NASA Technical Reports Server (NTRS)

    Hwu, Shian; Moreno, Gerardo; Desilva, Kanishka; Jih, Cindy

    2010-01-01

    The Communication Systems Simulation Laboratory (CSSL) at the National Aeronautics and Space Administration (NASA)/Johnson Space Center (JSC) is tasked to perform spacecraft and ground network communication system simulations. The CSSL has developed simulation tools that model spacecraft communication systems and the space/ground environment in which they operate. This paper analyzes a spacecraft's very high frequency (VHF) radio signal propagation and the impact on performance when landing in an ocean. Very little research work has been done on VHF radio systems in a maritime environment. Rigorous Radio Frequency (RF) modeling/simulation techniques were employed for various environmental effects. The simulation results illustrate the significance of the environmental effects on the VHF radio system performance.

  11. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    PubMed

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
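
    A minimal sketch of one rejection-free (BKL-type) KMC step follows, in which the physical time advances by an exponentially distributed increment Δt = -ln(u)/R_total; this illustrates the time scale the paper places on rigorous footing, not its derivation, and the rates are hypothetical.

        import numpy as np

        def kmc_step(rates, rng):
            """One rejection-free kinetic Monte Carlo step.

            `rates` are the transition rates of all currently possible events.
            Returns the index of the chosen event and the physical time increment.
            """
            total = rates.sum()
            event = np.searchsorted(np.cumsum(rates), rng.random() * total)
            dt = -np.log(rng.random()) / total        # exponentially distributed
            return event, dt

        rng = np.random.default_rng(0)
        rates = np.array([0.5, 2.0, 0.1])             # hypothetical event rates (1/s)
        t = 0.0
        for _ in range(1000):
            event, dt = kmc_step(rates, rng)
            t += dt                                    # accumulated physical time
        print(t)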

  12. Characterization of the spectral phase of an intense laser at focus via ionization blueshift

    DOE PAGES

    Mittelberger, D. E.; Nakamura, K.; Lehe, R.; ...

    2016-01-01

    An in situ diagnostic for verifying the spectral phase of an intense laser pulse at focus is shown. This diagnostic relies on measuring the effect of optical compression on ionization-induced blueshifting of the laser spectrum. Experimental results from the Berkeley Lab Laser Accelerator, a laser source rigorously characterized by conventional techniques, are presented and compared with simulations to illustrate the utility of this technique. These simulations show distinguishable effects from second-, third-, and fourth-order spectral phase.

  13. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  14. Numerical Modeling of HgCdTe Solidification: Effects of Phase Diagram, Double-Diffusion Convection and Microgravity Level

    NASA Technical Reports Server (NTRS)

    Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.

    1997-01-01

    Melt convection, along with species diffusion and segregation at the solidification interface, is among the primary factors responsible for species redistribution during HgCdTe crystal growth from the melt. As no direct information about convection velocity is available, numerical modeling is a logical approach to estimate convection. Furthermore, the influence of microgravity level, double diffusion and material properties should be taken into account. In the present study, HgCdTe is considered as a binary alloy with melting temperature available from a phase diagram. The numerical model of convection and solidification of a binary alloy is based on the general equations of heat and mass transfer in a two-dimensional region. Mathematical modeling of binary alloy solidification is still a challenging numerical problem. A rigorous mathematical approach to this problem is available only when convection is not considered at all. The proposed numerical model was developed using the finite element code FIDAP. In the present study, the numerical model is used to consider thermal and solutal convection and a double-diffusion source of mass transport.

  15. A rigorous multiple independent binding site model for determining cell-based equilibrium dissociation constants.

    PubMed

    Drake, Andrew W; Klakamp, Scott L

    2007-01-10

    A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. The most commonly used linear (Scatchard plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism®) used for analysis of ligand/receptor binding data assumes only the K_D influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K_D being measured, this assumption of always being under K_D-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model to fit cell-based experimental nonlinear titration data.
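
    The sketch below is not the authors' 4-parameter MIBS equation but the closely related ligand-depletion (quadratic) binding model, which likewise lets the receptor amount, not only K_D, shape the titration curve; the parameter names and titration data are placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def depletion_binding(L_total, K_D, R_total, signal_max, background):
            """Fraction of receptors bound when ligand depletion matters: the
            familiar quadratic solution of the R + L <-> RL mass balance."""
            b = R_total + L_total + K_D
            RL = (b - np.sqrt(b**2 - 4.0 * R_total * L_total)) / 2.0
            return background + signal_max * RL / R_total

        # Hypothetical titration data (ligand concentrations in nM, arbitrary signal).
        L = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300], dtype=float)
        y = np.array([0.02, 0.06, 0.18, 0.42, 0.75, 0.92, 0.98, 1.00])
        popt, _ = curve_fit(depletion_binding, L, y,
                            p0=[5.0, 2.0, 1.0, 0.0], bounds=(0.0, np.inf))
        print(dict(zip(["K_D", "R_total", "signal_max", "background"], popt)))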

  16. Rigorous evaluation of chemical measurement uncertainty: liquid chromatographic analysis methods using detector response factor calibration

    NASA Astrophysics Data System (ADS)

    Toman, Blaza; Nelson, Michael A.; Bedner, Mary

    2017-06-01

    Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random effects meta analysis yields similar results to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
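
    A minimal GUM Supplement 1 style Monte Carlo propagation through a simple response-factor measurement equation is sketched below; the measurement equation, input values, and uncertainties are illustrative placeholders, not the LC-UV or LC-IDMS models of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000

        # Illustrative measurement equation:
        #   c_sample = c_cal * (A_sample / A_cal) * f_dilution
        # Each input is sampled from a distribution expressing its standard
        # uncertainty (placeholder values, not those of the vitamin D study).
        c_cal = rng.normal(25.0, 0.20, N)         # calibrant concentration, ug/mL
        A_sample = rng.normal(1.050, 0.008, N)    # sample peak area (arbitrary units)
        A_cal = rng.normal(1.000, 0.008, N)       # calibrant peak area
        f_dil = rng.normal(2.000, 0.004, N)       # gravimetric dilution factor

        c_sample = c_cal * (A_sample / A_cal) * f_dil
        print(c_sample.mean(), c_sample.std(ddof=1))       # estimate and u(c)
        print(np.percentile(c_sample, [2.5, 97.5]))        # 95% coverage interval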

  17. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering, we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Henon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
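
    A bare-bones interval-arithmetic sketch, a much simpler relative of Taylor Models, is given below; it encloses the image of a box under the Henon map but ignores floating-point rounding control, which a genuinely verified computation must include.

        # Minimal interval arithmetic on pairs (lo, hi); rounding control is
        # omitted, so this only illustrates the idea behind verified enclosures,
        # it is not itself a verified computation.
        def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
        def imul(a, b):
            p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
            return (min(p), max(p))
        def iscale(c, a): return (min(c*a[0], c*a[1]), max(c*a[0], c*a[1]))
        def ishift(c, a): return (a[0] + c, a[1] + c)

        def henon_enclosure(x, y, a=1.4, b=0.3):
            """Interval image of the box x * y under the Henon map
            (x, y) -> (1 - a*x^2 + y, b*x)."""
            x_new = ishift(1.0, iadd(iscale(-a, imul(x, x)), y))
            y_new = iscale(b, x)
            return x_new, y_new

        print(henon_enclosure((0.1, 0.2), (-0.05, 0.05)))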

  18. Numerical heating in Particle-In-Cell simulations with Monte Carlo binary collisions

    NASA Astrophysics Data System (ADS)

    Alves, E. Paulo; Mori, Warren; Fiuza, Frederico

    2017-10-01

    The binary Monte Carlo collision (BMCC) algorithm is a robust and popular method to include Coulomb collision effects in Particle-in-Cell (PIC) simulations of plasmas. While a number of works have focused on extending the validity of the model to different physical regimes of temperature and density, little attention has been given to the fundamental coupling between PIC and BMCC algorithms. Here, we show that the coupling between PIC and BMCC algorithms can give rise to (nonphysical) numerical heating of the system, that can be far greater than that observed when these algorithms operate independently. This deleterious numerical heating effect can significantly impact the evolution of the simulated system particularly for long simulation times. In this work, we describe the source of this numerical heating, and derive scaling laws for the numerical heating rates based on the numerical parameters of PIC-BMCC simulations. We compare our theoretical scalings with PIC-BMCC numerical experiments, and discuss strategies to minimize this parasitic effect. This work is supported by DOE FES under FWP 100237 and 100182.

  19. Programmable lithography engine (ProLE) grid-type supercomputer and its applications

    NASA Astrophysics Data System (ADS)

    Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.

    2003-06-01

    There are many variables that can affect lithographic dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination type and coherence. Resist chemistry and physics along with substrate, exposure, and all post-exposure processing must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench that has a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributive computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations in a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices. In this work, ProLE is introduced through model comparison to show why vector imaging and rigorous resist models work better than other, less rigorous models; then some applications that use our distributive computing solution are shown. Topics covered describe why ProLE solutions are needed from an economic and technical aspect, a high-level discussion of how the distributive system works, speed benchmarking, and finally a brief survey of applications including advanced aberrations for lens sensitivity and flare studies, optical proximity correction for a bitcell, and an application that will allow evaluation of the potential of a design to have systematic failures during fabrication.

  20. Space radiator simulation system analysis

    NASA Technical Reports Server (NTRS)

    Black, W. Z.; Wulff, W.

    1972-01-01

    A transient heat transfer analysis was carried out on a space radiator heat rejection system exposed to an arbitrarily prescribed combination of aerodynamic heating, solar, albedo, and planetary radiation. A rigorous analysis was carried out for the radiation panel and tubes lying in one plane and an approximate analysis was used to extend the rigorous analysis to the case of a curved panel. The analysis permits the consideration of both gaseous and liquid coolant fluids, including liquid metals, under prescribed, time dependent inlet conditions. The analysis provided a method for predicting: (1) transient and steady-state, two dimensional temperature profiles, (2) local and total heat rejection rates, (3) coolant flow pressure in the flow channel, and (4) total system weight and protection layer thickness.

  1. Arnold diffusion in the planar elliptic restricted three-body problem: mechanism and numerical verification

    NASA Astrophysics Data System (ADS)

    Capiński, Maciej J.; Gidea, Marian; de la Llave, Rafael

    2017-01-01

    We present a diffusion mechanism for time-dependent perturbations of autonomous Hamiltonian systems introduced in Gidea (2014 arXiv:1405.0866). This mechanism is based on shadowing of pseudo-orbits generated by two dynamics: an ‘outer dynamics’, given by homoclinic trajectories to a normally hyperbolic invariant manifold, and an ‘inner dynamics’, given by the restriction to that manifold. On the inner dynamics the only assumption is that it preserves area. Unlike other approaches, Gidea (2014 arXiv:1405.0866) does not rely on the KAM theory and/or Aubry-Mather theory to establish the existence of diffusion. Moreover, it does not require to check twist conditions or non-degeneracy conditions near resonances. The conditions are explicit and can be checked by finite precision calculations in concrete systems (roughly, they amount to checking that Melnikov-type integrals do not vanish and that some manifolds are transversal). As an application, we study the planar elliptic restricted three-body problem. We present a rigorous theorem that shows that if some concrete calculations yield a non zero value, then for any sufficiently small, positive value of the eccentricity of the orbits of the main bodies, there are orbits of the infinitesimal body that exhibit a change of energy that is bigger than some fixed number, which is independent of the eccentricity. We verify numerically these calculations for values of the masses close to that of the Jupiter/Sun system. The numerical calculations are not completely rigorous, because we ignore issues of round-off error and do not estimate the truncations, but they are not delicate at all by the standard of numerical analysis. (Standard tests indicate that we get 7 or 8 figures of accuracy where 1 would be enough.) The code of these verifications is available. We hope that some full computer assisted proofs will be obtained in the near future since there are packages (CAPD) designed for problems of this type.

  3. Assessment of uncertainty in the numerical simulation of solar irradiance over inclined PV panels: New algorithms using measurements and modeling tools

    DOE PAGES

    Xie, Yu; Sengupta, Manajit; Dooraghi, Mike

    2018-03-20

    Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process, mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We propose a new algorithm to evaluate transposition models using clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances over various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model has slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaic (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study have opened the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.
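
    For reference, the isotropic-sky transposition model mentioned above can be written as in the sketch below (beam, isotropic sky diffuse, and ground-reflected components); the irradiance inputs and panel geometry are illustrative values.

        import numpy as np

        def poa_isotropic(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
            """Plane-of-array irradiance with the isotropic-sky diffuse assumption:
            beam + isotropic sky diffuse + ground-reflected components (W/m^2)."""
            tilt = np.radians(tilt_deg)
            beam = dni * max(np.cos(np.radians(aoi_deg)), 0.0)
            sky_diffuse = dhi * (1.0 + np.cos(tilt)) / 2.0
            ground = ghi * albedo * (1.0 - np.cos(tilt)) / 2.0
            return beam + sky_diffuse + ground

        # Illustrative clear-sky inputs (W/m^2), a 30-degree tilt, and a
        # 25-degree angle of incidence.
        print(poa_isotropic(dni=850.0, dhi=100.0, ghi=700.0, tilt_deg=30.0, aoi_deg=25.0))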

  4. Simulating the universe(s): from cosmic bubble collisions to cosmological observables with numerical relativity

    NASA Astrophysics Data System (ADS)

    Wainwright, Carroll L.; Johnson, Matthew C.; Peiris, Hiranya V.; Aguirre, Anthony; Lehner, Luis; Liebling, Steven L.

    2014-03-01

    The theory of eternal inflation in an inflaton potential with multiple vacua predicts that our universe is one of many bubble universes nucleating and growing inside an ever-expanding false vacuum. The collision of our bubble with another could provide an important observational signature to test this scenario. We develop and implement an algorithm for accurately computing the cosmological observables arising from bubble collisions directly from the Lagrangian of a single scalar field. We first simulate the collision spacetime by solving Einstein's equations, starting from nucleation and ending at reheating. Taking advantage of the collision's hyperbolic symmetry, the simulations are performed with a 1+1-dimensional fully relativistic code that uses adaptive mesh refinement. We then calculate the comoving curvature perturbation in an open Friedmann-Robertson-Walker universe, which is used to determine the temperature anisotropies of the cosmic microwave background radiation. For a fiducial Lagrangian, the anisotropies are well described by a power law in the cosine of the angular distance from the center of the collision signature. For a given form of the Lagrangian, the resulting observational predictions are inherently statistical due to stochastic elements of the bubble nucleation process. Further uncertainties arise due to our imperfect knowledge about inflationary and pre-recombination physics. We characterize observational predictions by computing the probability distributions over four phenomenological parameters which capture these intrinsic and model uncertainties. This represents the first fully-relativistic set of predictions from an ensemble of scalar field models giving rise to eternal inflation, yielding significant differences from previous non-relativistic approximations. Thus, our results provide a basis for a rigorous confrontation of these theories with cosmological data.

  5. Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu

    2012-10-01

    We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel fractional step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communicating schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.

  6. Using Microanalytical Simulation Methods in Educational Evaluation: An Exploratory Study

    ERIC Educational Resources Information Center

    Sondergeld, Toni A.; Beltyukova, Svetlana A.; Fox, Christine M.; Stone, Gregory E.

    2012-01-01

    Scientifically based research used to inform evidence based school reform efforts has been required by the federal government in order to receive grant funding since the reenactment of No Child Left Behind (2002). Educational evaluators are thus faced with the challenge to use rigorous research designs to establish causal relationships. However,…

  7. Robust simulation of buckled structures using reduced order modeling

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Perez, R. A.; Spottswood, S. M.

    2016-09-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever-present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use but require no time stepping of a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.

  8. The effects of the photomask on multiphase shift test monitors

    NASA Astrophysics Data System (ADS)

    McIntyre, Gregory; Neureuther, Andrew

    2006-10-01

    A series of chromeless multiple-phase shift lithographic test monitors have been previously introduced. This paper investigates various effects that impact the performance of these monitors, focusing primarily on PSM Polarimetry, a technique to monitor illumination polarization. The measurement sensitivities from a variety of scalar and rigorous electromagnetic simulations are compared to experimental results from three industrial quality multi-phase test reticles. This analysis enables the relative importance of the various effects to be identified and offers the industry unique insight into various issues associated with the photomask. First, the unavoidable electromagnetic interaction as light propagates through the multiple phase steps of the mask topography appears to account for about 10 to 20% of the lost sensitivity, when experimental results are compared to an ideal simulated case. The polarization dependence of this effect is analyzed, concluding that the 4-phase topography is more effective at manipulating TM polarization. Second, various difficulties in the fabrication of these complicated mask patterns are described and likely account for an additional 60-80% loss in sensitivity. Smaller effects are also described, associated with the photoresist, mask design and subtle differences in the proximity effect of TE and TM polarization of off-axis light at high numerical aperture. Finally, the question: "How practical is PSM polarimetry?" is considered. It is concluded that, despite many severe limiting factors, an accurately calibrated test reticle promises to monitor polarization in state-of-the-art lithography scanners to within about 2%.

  9. Numerical Simulation of Nanostructure Growth

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.

    2004-01-01

    Nanoscale structures, such as nanowires and carbon nanotubes (CNTs), are often grown in gaseous or plasma environments. Successful growth of these structures is defined by achieving a specified crystallinity or chirality, size or diameter, alignment, etc., which in turn depend on gas mixture ratios, pressure, flow rate, substrate temperature, and other operating conditions. To date, there has not been a rigorous growth model that addresses the specific concerns of crystalline nanowire growth, while demonstrating the correct trends of the processing conditions on growth rates. Most crystal growth models are based on the Burton, Cabrera, and Frank (BCF) method, where adatoms are incorporated into a growing crystal at surface steps or spirals. When the supersaturation of the vapor is high, islands nucleate to form steps, and these steps subsequently spread (grow). The overall bulk growth rate is determined by solving for the evolving motion of the steps. Our approach is to use a phase field model to simulate the growth of finite sized nanowire crystals, linking the free energy equation with the diffusion equation of the adatoms. The phase field method solves for an order parameter that defines the evolving steps in a concentration field. This eliminates the need for explicit front tracking/location, or complicated shadowing routines, both of which can be computationally expensive, particularly in higher dimensions. We will present results demonstrating the effect of process conditions, such as substrate temperature, vapor supersaturation, etc. on the evolving morphologies and overall growth rates of the nanostructures.
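
    The coupling of an order-parameter equation to adatom diffusion can be sketched in a few lines. The fragment below evolves a 1-D double-well order parameter with Allen-Cahn relaxation, fed by a supersaturated adatom field that is consumed as the solid phase grows. The free-energy form and all coefficients are illustrative assumptions, not the model used in the work above.

```python
import numpy as np

# Minimal 1-D sketch: order parameter phi (solid vs. vapour) relaxed toward a
# double-well free energy, coupled to a diffusing adatom concentration c.
N, dx, dt = 200, 0.5, 0.01
eps2, W, lam, D = 1.0, 1.0, 0.5, 2.0           # illustrative coefficients

phi = np.where(np.arange(N) < N // 4, 1.0, 0.0)   # solid seed on the left
c = np.full(N, 0.8)                               # supersaturated adatom field

def lap(f):
    """Second derivative with periodic boundaries."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

for step in range(5000):
    dF = W * phi * (1 - phi) * (1 - 2 * phi) - lam * c * phi * (1 - phi)
    dphi = eps2 * lap(phi) - dF                    # Allen-Cahn relaxation
    dc = D * lap(c) - dphi                         # adatoms consumed as phi grows
    phi += dt * dphi
    c += dt * dc

print("solid fraction after growth:", round(phi.mean(), 3))
```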

  10. PediaFlow™ Maglev Ventricular Assist Device: A Prescriptive Design Approach.

    PubMed

    Antaki, James F; Ricci, Michael R; Verkaik, Josiah E; Snyder, Shaun T; Maul, Timothy M; Kim, Jeongho; Paden, Dave B; Kameneva, Marina V; Paden, Bradley E; Wearden, Peter D; Borovetz, Harvey S

    2010-03-01

    This report describes a multi-disciplinary program to develop a pediatric blood pump, motivated by the critical need to treat infants and young children with congenital and acquired heart diseases. The unique challenges of this patient population require a device with exceptional biocompatibility, miniaturized for implantation up to 6 months. This program implemented a collaborative, prescriptive design process, whereby mathematical models of the governing physics were coupled with numerical optimization to achieve a favorable compromise among several competing design objectives. Computational simulations of fluid dynamics, electromagnetics, and rotordynamics were performed in two stages: first using reduced-order formulations to permit rapid optimization of the key design parameters; followed by rigorous CFD and FEA simulations for calibration, validation, and detailed optimization. Over 20 design configurations were initially considered, leading to three pump topologies, judged on the basis of a multi-component analysis including criteria for anatomic fit, performance, biocompatibility, reliability, and manufacturability. This led to fabrication of a mixed-flow magnetically levitated pump, the PF3, having a displaced volume of 16.6 cc, approximating the size of an AA battery and producing a flow capacity of 0.3-1.5 L/min. Initial in vivo evaluation demonstrated excellent hemocompatibility after 72 days of implantation in an ovine model. In summary, the combination of prescriptive and heuristic design principles has proven effective in developing a miniature magnetically levitated blood pump with excellent performance and biocompatibility, suitable for integration into a chronic circulatory support system for infants and young children, aiming for a clinical trial within 3 years.

  11. ELECTRON ACCELERATION IN CONTRACTING MAGNETIC ISLANDS DURING SOLAR FLARES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borovikov, D.; Tenishev, V.; Gombosi, T. I.

    Electron acceleration in solar flares is well known to be efficient at generating energetic particles that produce the observed bremsstrahlung X-ray spectra. One mechanism proposed to explain the observations is electron acceleration within contracting magnetic islands formed by magnetic reconnection in the flare current sheet. In a previous study, a numerical magnetohydrodynamic simulation of an eruptive solar flare was analyzed to estimate the associated electron acceleration due to island contraction. That analysis used a simple analytical model for the island structure and assumed conservation of the adiabatic invariants of particle motion. In this paper, we perform the first-ever rigorous integration of the guiding-center orbits of electrons in a modeled flare. An initially isotropic distribution of particles is seeded in a contracting island from the simulated eruption, and the subsequent evolution of these particles is followed using guiding-center theory. We find that the distribution function becomes increasingly anisotropic over time as the electrons’ energy increases by up to a factor of five, in general agreement with the previous study. In addition, we show that the energized particles are concentrated on the Sunward side of the island, adjacent to the reconnection X-point in the flare current sheet. Furthermore, our analysis demonstrates that the electron energy gain is dominated by betatron acceleration in the compressed, strengthened magnetic field of the contracting island. Fermi acceleration by the shortened field lines of the island also contributes to the energy gain, but it is less effective than the betatron process.

  12. Analysis of temporal gene expression profiles: clustering by simulated annealing and determining the optimal number of clusters.

    PubMed

    Lukashin, A V; Fuchs, R

    2001-05-01

    Cluster analysis of genome-wide expression data from DNA microarray hybridization studies has proved to be a useful tool for identifying biologically relevant groupings of genes and samples. In the present paper, we focus on several important issues related to clustering algorithms that have not yet been fully studied. We describe a simple and robust algorithm for the clustering of temporal gene expression profiles that is based on the simulated annealing procedure. In general, this algorithm is guaranteed to eventually find the globally optimal distribution of genes over clusters. We introduce an iterative scheme that serves to evaluate quantitatively the optimal number of clusters for each specific data set. The scheme is based on standard approaches used in regular statistical tests. The basic idea is to organize the search for the optimal number of clusters simultaneously with the optimization of the distribution of genes over clusters. The efficiency of the proposed algorithm has been evaluated by means of a reverse engineering experiment, that is, a situation in which the correct distribution of genes over clusters is known a priori. The employment of this statistically rigorous test has shown that our algorithm places greater than 90% of genes into the correct clusters. Finally, the algorithm has been tested on real gene expression data (expression changes during yeast cell cycle) for which the fundamental patterns of gene expression and the assignment of genes to clusters are well understood from numerous previous studies.
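
    A minimal version of simulated-annealing clustering can be written directly from the description above: propose a single-gene reassignment, accept it with the Metropolis rule, and cool the temperature geometrically. The toy profiles, cost function, and annealing schedule below are illustrative assumptions, not the settings used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 60 temporal profiles drawn from 3 underlying patterns (hypothetical).
patterns = np.array([np.sin(np.linspace(0, 2 * np.pi, 10)),
                     np.cos(np.linspace(0, 2 * np.pi, 10)),
                     np.linspace(-1, 1, 10)])
genes = np.repeat(patterns, 20, axis=0) + 0.3 * rng.normal(size=(60, 10))

def cost(assign, k):
    """Within-cluster sum of squared distances to cluster centroids."""
    total = 0.0
    for j in range(k):
        members = genes[assign == j]
        if len(members):
            total += ((members - members.mean(axis=0)) ** 2).sum()
    return total

def anneal(k=3, t0=5.0, cooling=0.999, n_steps=20000):
    assign = rng.integers(0, k, size=len(genes))
    e, t = cost(assign, k), t0
    for _ in range(n_steps):
        i, new = rng.integers(len(genes)), rng.integers(k)
        trial = assign.copy()
        trial[i] = new
        e_trial = cost(trial, k)
        # Metropolis rule: always accept improvements, sometimes accept worse moves
        if e_trial < e or rng.random() < np.exp((e - e_trial) / t):
            assign, e = trial, e_trial
        t *= cooling
    return assign, e

labels, final_cost = anneal()
print("final within-cluster cost:", round(final_cost, 2))
```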

  13. Differences between wafer and bake plate temperature uniformity in proximity bake: a theoretical and experimental study

    NASA Astrophysics Data System (ADS)

    Ramanan, Natarajan; Kozman, Austin; Sims, James B.

    2000-06-01

    As the lithography industry moves toward finer features, specifications on temperature uniformity of the bake plates are expected to become more stringent. Consequently, aggressive improvements are needed to conventional bake station designs to make them perform significantly better than current market requirements. To this end, we have conducted a rigorous study that combines state-of-the-art simulation tools and experimental methods to predict the impact of the parameters that influence the uniformity of the wafer in proximity bake. The key observation from this detailed study is that the temperature uniformity of the wafer in proximity mode depends on a number of parameters in addition to the uniformity of the bake plate itself. These parameters include the lid design, the air flow distribution around the bake chamber, bake plate design and flatness of the bake plate and wafer. By performing careful experimental studies that were guided by extensive numerical simulations, we were able to understand the relative importance of each of these parameters. In an orderly fashion, we made appropriate design changes to curtail or eliminate the nonuniformity caused by each of these parameters. After implementing all these changes, we have now been able to match or improve the temperature uniformity of the wafer in proximity with that of a contact measurement on the bake plate. The wafer temperature uniformity is also very close to the theoretically predicted uniformity of the wafer.

  14. Review of FD-TD numerical modeling of electromagnetic wave scattering and radar cross section

    NASA Technical Reports Server (NTRS)

    Taflove, Allen; Umashankar, Korada R.

    1989-01-01

    Applications of the finite-difference time-domain (FD-TD) method for numerical modeling of electromagnetic wave interactions with structures are reviewed, concentrating on scattering and radar cross section (RCS). A number of two- and three-dimensional examples of FD-TD modeling of scattering and penetration are provided. The objects modeled range in nature from simple geometric shapes to extremely complex aerospace and biological systems. Rigorous analytical or experimental validations are provided for the canonical shapes, and it is shown that FD-TD predictive data for near fields and RCS are in excellent agreement with the benchmark data. It is concluded that with continuing advances in FD-TD modeling theory for target features relevant to the RCS problems and in vector and concurrent supercomputer technology, it is likely that FD-TD numerical modeling will occupy an important place in RCS technology in the 1990s and beyond.
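
    The core of FD-TD is a leapfrog update of staggered electric and magnetic field components. A minimal 1-D sketch in normalized units, with simple reflecting grid ends, an illustrative grid size, and a Gaussian soft source, is shown below; none of these choices are taken from the review itself.

```python
import numpy as np

# Minimal 1-D FD-TD (Yee) update for a pulse propagating in free space.
nz, nt = 400, 800
ez = np.zeros(nz)
hy = np.zeros(nz)
c = 0.5                                # Courant number S = c*dt/dz (normalized)

for n in range(nt):
    hy[:-1] += c * (ez[1:] - ez[:-1])  # H update (staggered half step)
    ez[1:]  += c * (hy[1:] - hy[:-1])  # E update
    ez[50]  += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source

print("peak |Ez| after propagation:", round(np.abs(ez).max(), 3))
```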

  15. Well-balanced high-order solver for blood flow in networks of vessels with variable properties.

    PubMed

    Müller, Lucas O; Toro, Eleuterio F

    2013-12-01

    We present a well-balanced, high-order non-linear numerical scheme for solving a hyperbolic system that models one-dimensional flow in blood vessels with variable mechanical and geometrical properties along their length. Using a suitable set of test problems with exact solution, we rigorously assess the performance of the scheme. In particular, we assess the well-balanced property and the effective order of accuracy through an empirical convergence rate study. Schemes of up to fifth order of accuracy in both space and time are implemented and assessed. The numerical methodology is then extended to realistic networks of elastic vessels and is validated against published state-of-the-art numerical solutions and experimental measurements. It is envisaged that the present scheme will constitute the building block for a closed, global model for the human circulation system involving arteries, veins, capillaries and cerebrospinal fluid. Copyright © 2013 John Wiley & Sons, Ltd.
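
    The empirical convergence-rate study mentioned above amounts to solving the same problem on successively refined grids and reading the observed order from successive error ratios. The sketch below applies this recipe to a trivial model ODE with forward Euler; the test problem and step counts are placeholders, not the blood-flow system or the high-order scheme of the paper.

```python
import numpy as np

def solve(n_steps, t_end=1.0):
    """Forward Euler for dy/dt = -2y with y(0) = 1; exact solution exp(-2t)."""
    dt, y = t_end / n_steps, 1.0
    for _ in range(n_steps):
        y += dt * (-2.0 * y)
    return y

exact = np.exp(-2.0)
errors = [abs(solve(n) - exact) for n in (100, 200, 400, 800)]
# Observed order of accuracy from the ratio of errors on successive refinements
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print("observed orders of accuracy:", [round(p, 3) for p in orders])
```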

  16. Long-range temporal correlations in the Kardar-Parisi-Zhang growth: numerical simulations

    NASA Astrophysics Data System (ADS)

    Song, Tianshu; Xia, Hui

    2016-11-01

    To analyze long-range temporal correlations in surface growth, we study numerically the (1  +  1)-dimensional Kardar-Parisi-Zhang (KPZ) equation driven by temporally correlated noise, and obtain the scaling exponents based on two different numerical methods. Our simulations show that the numerical results are in good agreement with the dynamic renormalization group (DRG) predictions, and are also consistent with the simulation results of the ballistic deposition (BD) model.
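
    One simple way to drive a discretized KPZ equation with temporally correlated noise is to generate the noise as an Ornstein-Uhlenbeck process in time at every lattice site. The Euler sketch below does exactly that; the coefficients, correlation time, and lattice size are illustrative assumptions rather than the parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler discretisation of the (1+1)-dimensional KPZ equation
#   dh/dt = nu * d2h/dx2 + (lam/2) * (dh/dx)**2 + eta(x, t)
# with eta an Ornstein-Uhlenbeck process of correlation time tau at each site.
L, dt, nu, lam, tau, amp = 256, 0.01, 1.0, 1.0, 0.5, 1.0
h = np.zeros(L)
eta = np.zeros(L)

def dx(f):   # centred first derivative, periodic boundaries, unit spacing
    return (np.roll(f, -1) - np.roll(f, 1)) / 2.0

def lap(f):  # second derivative, periodic boundaries, unit spacing
    return np.roll(f, -1) - 2 * f + np.roll(f, 1)

widths = []
for step in range(20000):
    # OU update gives temporally correlated noise with stationary variance ~ amp**2
    eta += dt * (-eta / tau) + amp * np.sqrt(2 * dt / tau) * rng.normal(size=L)
    h += dt * (nu * lap(h) + 0.5 * lam * dx(h) ** 2 + eta)
    if step % 1000 == 0:
        widths.append(h.std())

print("interface width at early times:", [round(w, 3) for w in widths[:5]])
```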

  17. Numerical simulation of the effect of regular and sub-caliber projectiles on military bunkers

    NASA Astrophysics Data System (ADS)

    Jiricek, Pavel; Foglar, Marek

    2015-09-01

    One of the most demanding topics in blast and impact engineering is the modelling of projectile impact. To introduce this topic, a set of numerical simulations was undertaken. The simulations study the impact of regular and sub-calibre projectiles on Czech pre-WW2 military bunkers. The penetrations of the military objects are well documented and can be used for comparison. The numerical model consists of a part of a wall of a military object. The concrete block is subjected to the impact of a regular and a sub-calibre projectile. The model is divided into layers to simplify the evaluation of the results. The simulations are processed within the ANSYS AUTODYN software. A nonlinear material model with damage and an incorporated strain-rate effect was used. The results of the numerical simulations are evaluated in terms of the damage of the concrete block. The progress of the damage is described versus time. The numerical simulations provide good agreement with the documented penetrations.

  18. Numerical simulations in the development of propellant management devices

    NASA Astrophysics Data System (ADS)

    Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael

    Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles. Gas bubbles can induce cavitation and may lead to system failures in the worst case. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process. Furthermore, in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In turn, the simulations are validated against the test results. Furthermore, if the accuracy of the numerical prediction is verified, then numerical simulations can be used for validating the scaling of the experiments. This presentation demonstrates some selected numerical simulations for the development of PMDs at ZARM.

  19. System Simulation by Recursive Feedback: Coupling a Set of Stand-Alone Subsystem Simulations

    NASA Technical Reports Server (NTRS)

    Nixon, D. D.

    2001-01-01

    Conventional construction of digital dynamic system simulations often involves collecting differential equations that model each subsystem, arranging them into a standard form, and obtaining their numerical solution as a single coupled, total-system simultaneous set. Simulation by numerical coupling of independent stand-alone subsimulations is a fundamentally different approach that is attractive because, among other things, the architecture naturally facilitates high fidelity, broad scope, and discipline independence. Recursive feedback is defined and discussed as a candidate approach to multidiscipline dynamic system simulation by numerical coupling of self-contained, single-discipline subsystem simulations. A satellite motion example containing three subsystems (orbit dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Distributed and centralized implementations of coupling have been considered. Numerical results are evaluated by direct comparison with a standard total-system, simultaneous-solution approach.
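
    The recursive-feedback idea can be illustrated by iterating two stand-alone solvers over the full time interval, each consuming the other's output trajectory from the previous pass, until the coupled trajectories stop changing. The toy split of a damped oscillator below, into a "position" subsystem and a "damping force" subsystem, is a hypothetical example of this coupling pattern, not the satellite problem from the report.

```python
import numpy as np

k, c, t_end, n = 4.0, 0.4, 10.0, 2000     # illustrative stiffness, damping, horizon
t = np.linspace(0.0, t_end, n)
dt = t[1] - t[0]

def position_subsystem(damping_force):
    """Integrate x'' = -k*x + f_d(t) with f_d supplied as a fixed trajectory."""
    x, v = np.empty(n), np.empty(n)
    x[0], v[0] = 1.0, 0.0
    for i in range(n - 1):
        v[i + 1] = v[i] + dt * (-k * x[i] + damping_force[i])
        x[i + 1] = x[i] + dt * v[i + 1]
    return x, v

def damping_subsystem(velocity):
    """Compute the damping force trajectory from a fixed velocity history."""
    return -c * velocity

force = np.zeros(n)                        # initial guess: no damping
for iteration in range(40):
    x, v = position_subsystem(force)       # pass 1: position solver
    new_force = damping_subsystem(v)       # pass 2: force solver
    if np.max(np.abs(new_force - force)) < 1e-6:
        break                              # coupled trajectories have converged
    force = new_force

print(f"stopped after {iteration + 1} passes, final amplitude {abs(x[-1]):.4f}")
```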

  20. Experimental And Numerical Evaluation Of Gaseous Agents For Suppressing Cup-Burner Flames In Low Gravity

    NASA Technical Reports Server (NTRS)

    Takahashi, Fumiaki; Linteris, Gregory T.; Katta, Viswanath R.

    2003-01-01

    Longer duration missions to the moon, to Mars, and on the International Space Station (ISS) increase the likelihood of accidental fires. NASA's fire safety program for human-crewed space flight is based largely on removing ignition sources and controlling the flammability of the material on-board. There is ongoing research to improve the flammability characterization of materials in low gravity; however, very little research has been conducted on fire suppression in the low-gravity environment. Although the existing suppression systems aboard the Space Shuttle (halon 1301, CF3Br) and the ISS (CO2 or water-based foam) may continue to be used, alternative effective agents or techniques are desirable for long-duration missions. The goal of the present investigation is to: (1) understand the physical and chemical processes of fire suppression in various gravity and O2 levels simulating spacecraft, Mars, and moon missions; (2) provide rigorous testing of analytical models, which include detailed combustion-suppression chemistry and radiation sub-models, so that the model can be used to interpret (and predict) the suppression behavior in low gravity; and (3) provide basic research results useful for advances in space fire safety technology, including new fire-extinguishing agents and approaches.

  1. Cooperative peer-to-peer multiagent-based systems

    NASA Astrophysics Data System (ADS)

    Caram, L. F.; Caiafa, C. F.; Ausloos, M.; Proto, A. N.

    2015-08-01

    A multiagent based model for a system of cooperative agents aiming at growth is proposed. This is based on a set of generalized Verhulst-Lotka-Volterra differential equations. In this study, strong cooperation is allowed among agents having similar sizes, and weak cooperation if agents have markedly different "sizes", thus establishing a peer-to-peer modulated interaction scheme. A rigorous analysis of the stable configurations is presented first examining the fixed points of the system, next determining their stability as a function of the model parameters. It is found that the agents are self-organizing into clusters. Furthermore, it is demonstrated that, depending on parameter values, multiple stable configurations can coexist. It occurs that only one of them always emerges with probability close to one, because its associated attractor dominates over the rest. This is shown through numerical integrations and simulations, after analytic developments. In contrast to the competitive case, agents are able to increase their capacity beyond the no-interaction case limit. In other words, when some collaborative partnership among a relatively small number of partners takes place, all agents act in good faith prioritizing the common good, when receiving a mutual benefit allowing them to surpass their capacity.

  2. Cooperative peer-to-peer multiagent-based systems.

    PubMed

    Caram, L F; Caiafa, C F; Ausloos, M; Proto, A N

    2015-08-01

    A multiagent based model for a system of cooperative agents aiming at growth is proposed. This is based on a set of generalized Verhulst-Lotka-Volterra differential equations. In this study, strong cooperation is allowed among agents having similar sizes, and weak cooperation if agents have markedly different "sizes", thus establishing a peer-to-peer modulated interaction scheme. A rigorous analysis of the stable configurations is presented first examining the fixed points of the system, next determining their stability as a function of the model parameters. It is found that the agents are self-organizing into clusters. Furthermore, it is demonstrated that, depending on parameter values, multiple stable configurations can coexist. It occurs that only one of them always emerges with probability close to one, because its associated attractor dominates over the rest. This is shown through numerical integrations and simulations, after analytic developments. In contrast to the competitive case, agents are able to increase their capacity beyond the no-interaction case limit. In other words, when some collaborative partnership among a relatively small number of partners takes place, all agents act in good faith prioritizing the common good, when receiving a mutual benefit allowing them to surpass their capacity.
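
    A compact numerical experiment with this class of models integrates the generalized Verhulst-Lotka-Volterra equations with an interaction matrix that decays with the size difference between agents, so similar-sized agents cooperate strongly. In the sketch below, the Gaussian modulation, coupling strength, and population size are illustrative assumptions; the run simply checks that cooperation lets agents exceed the no-interaction capacity, as described above.

```python
import numpy as np

rng = np.random.default_rng(3)

N, r, K, gamma, dt = 8, 1.0, 1.0, 0.05, 0.01   # illustrative parameters
x = rng.uniform(0.05, 0.3, size=N)             # initial agent sizes

def cooperation_matrix(x, width=0.2):
    """Peer-to-peer modulated coupling: stronger for similar-sized agents."""
    diff = np.abs(x[:, None] - x[None, :])
    a = gamma * np.exp(-(diff / width) ** 2)
    np.fill_diagonal(a, 0.0)
    return a

for _ in range(20000):                          # explicit Euler integration
    a = cooperation_matrix(x)
    x += dt * r * x * (1.0 - x / K + (a @ x) / K)

print("stationary sizes:", np.round(x, 3))
print("capacity exceeded (x > K)?", bool(np.any(x > K)))
```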

  3. Moran-evolution of cooperation: From well-mixed to heterogeneous complex networks

    NASA Astrophysics Data System (ADS)

    Sarkar, Bijan

    2018-05-01

    Configurational arrangement of network architecture and interaction character of individuals are the two most influential factors on the mechanisms underlying the evolutionary outcome of cooperation, which is explained by the well-established framework of evolutionary game theory. In the current study, not only qualitatively but also quantitatively, we measure Moran-evolution of cooperation to support an analytical agreement based on the consequences of the replicator equation in a finite population. The validity of the measurement has been double-checked in the well-mixed network by the Langevin stochastic differential equation and the Gillespie-algorithmic version of Moran-evolution, while in a structured network, the accuracy of the measurement is verified by standard numerical simulation. Considering the Birth-Death and Death-Birth updating rules through diffusion of individuals, the investigation is carried out in a wide range of game environments that relate to various social dilemmas, where we are able to draw a new rigorous mathematical track to tackle the heterogeneity of complex networks. The set of modified criteria reveals the exact fact about the emergence and maintenance of cooperation in the structured population. We find that, in general, nature promotes the environment of coexistent traits.
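
    For reference, a bare-bones Moran Birth-Death simulation in a well-mixed population looks like the sketch below: reproduction is chosen proportional to fitness, death is uniform, and each run ends at fixation or extinction of the mutant strategy. The prisoner's-dilemma payoffs and selection intensity are illustrative, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

N, w = 50, 0.01                      # population size, weak selection intensity
R, S, T, P = 3.0, 0.0, 5.0, 1.0      # illustrative prisoner's dilemma payoffs

def fitness(n_c):
    """Fitness of cooperators/defectors when n_c cooperators are present."""
    f_c = (R * (n_c - 1) + S * (N - n_c)) / (N - 1)
    f_d = (T * n_c + P * (N - n_c - 1)) / (N - 1)
    return 1 - w + w * f_c, 1 - w + w * f_d

def moran_run():
    n_c = 1                          # single mutant cooperator
    while 0 < n_c < N:
        fc, fd = fitness(n_c)
        # Birth: reproducer chosen proportional to fitness; Death: uniform
        birth_c = rng.random() < n_c * fc / (n_c * fc + (N - n_c) * fd)
        death_c = rng.random() < n_c / N
        n_c += int(birth_c) - int(death_c)
    return n_c == N

runs = 2000
fix = sum(moran_run() for _ in range(runs)) / runs
print("estimated fixation probability:", fix, "(neutral baseline 1/N =", 1 / N, ")")
```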

  4. Phylogenetic Quantification of Intra-tumour Heterogeneity

    PubMed Central

    Schwarz, Roland F.; Trinh, Anne; Sipos, Botond; Brenton, James D.; Goldman, Nick; Markowetz, Florian

    2014-01-01

    Intra-tumour genetic heterogeneity is the result of ongoing evolutionary change within each cancer. The expansion of genetically distinct sub-clonal populations may explain the emergence of drug resistance, and if so, would have prognostic and predictive utility. However, methods for objectively quantifying tumour heterogeneity have been missing and are particularly difficult to establish in cancers where predominant copy number variation prevents accurate phylogenetic reconstruction owing to horizontal dependencies caused by long and cascading genomic rearrangements. To address these challenges, we present MEDICC, a method for phylogenetic reconstruction and heterogeneity quantification based on a Minimum Event Distance for Intra-tumour Copy-number Comparisons. Using a transducer-based pairwise comparison function, we determine optimal phasing of major and minor alleles, as well as evolutionary distances between samples, and are able to reconstruct ancestral genomes. Rigorous simulations and an extensive clinical study show the power of our method, which outperforms state-of-the-art competitors in reconstruction accuracy, and additionally allows unbiased numerical quantification of tumour heterogeneity. Accurate quantification and evolutionary inference are essential to understand the functional consequences of tumour heterogeneity. The MEDICC algorithms are independent of the experimental techniques used and are applicable to both next-generation sequencing and array CGH data. PMID:24743184

  5. Crustal fingering: solidification on a viscously unstable interface

    NASA Astrophysics Data System (ADS)

    Fu, Xiaojing; Jimenez-Martinez, Joaquin; Cueto-Felgueroso, Luis; Porter, Mark; Juanes, Ruben

    2017-11-01

    Motivated by the formation of gas hydrates in seafloor sediments, here we study the volumetric expansion of a less viscous gas pocket into a more viscous liquid when the gas-liquid interfaces readily solidify due to hydrate formation. We first present a high-pressure microfluidic experiment to study the depressurization-controlled expansion of a Xenon gas pocket in a water-filled Hele-Shaw cell. The evolution of the pocket is controlled by three processes: (1) volumetric expansion of the gas; (2) rupturing of existing hydrate films on the gas-liquid interface; and (3) formation of new hydrate films. These result in gas fingering leading to a complex labyrinth pattern. To reproduce these observations, we propose a phase-field model that describes the formation of hydrate shell on viscously unstable interfaces. We design the free energy of the three-phase system to rigorously account for interfacial effects, gas compressibility and phase transitions. We model the hydrate shell as a highly viscous fluid with shear-thinning rheology to reproduce shell-rupturing behavior. We present high-resolution numerical simulations of the model, which illustrate the emergence of complex crustal fingering patterns as a result of gas expansion dynamics modulated by hydrate growth at the interface.

  6. Finite-Time and Fixed-Time Cluster Synchronization With or Without Pinning Control.

    PubMed

    Liu, Xiwei; Chen, Tianping

    2018-01-01

    In this paper, the finite-time and fixed-time cluster synchronization problems for complex networks with or without pinning control are discussed. Finite-time (or fixed-time) synchronization has been a hot topic in recent years, which means that the network can achieve synchronization in finite time, and the settling time depends on the initial values for finite-time synchronization (or the settling time is bounded by a constant for any initial values for fixed-time synchronization). To realize finite-time and fixed-time cluster synchronization, some simple distributed protocols with or without pinning control are designed and their effectiveness is rigorously proved. Several sufficient criteria are also obtained to clarify the effects of coupling terms on finite-time and fixed-time cluster synchronization. Especially, when the cluster number is one, the cluster synchronization becomes the complete synchronization problem; when the network has only one node, the coupling term between nodes disappears, and the synchronization problem becomes the simplest master-slave case, which also includes the stability problem for nonlinear systems like neural networks. All these cases are also discussed. Finally, numerical simulations are presented to demonstrate the correctness of the obtained theoretical results.

  7. Convective heat transfer in a measurement cell for scanning electrochemical microscopy.

    PubMed

    Novev, Javor K; Compton, Richard G

    2016-11-21

    Electrochemical experiments, especially those performed with scanning electrochemical microscopy (SECM), are often carried out without taking special care to thermostat the solution; it is usually assumed that its temperature is homogeneous and equal to the ambient. The present study aims to test this assumption via numerical simulations of the heat transfer in a particular system - the typical measurement cell for SECM. It is assumed that the temperature of the solution is initially homogeneous but different from that of its surroundings; convective heat transfer in the solution and the surrounding air is taken into account within the framework of the Boussinesq approximation. The hereby presented theoretical treatment indicates that an initial temperature difference of the order of 1 K dissipates with a characteristic time scale of ∼1000 s; the thermal equilibration is accompanied by convective flows with a maximum velocity of ∼10⁻⁴ m s⁻¹; furthermore, the temporal evolution of the temperature profile is influenced by the sign of the initial difference. These results suggest that, unless the temperature of the solution is rigorously controlled, convection may significantly compromise the interpretation of data from SECM and other electrochemical techniques, which is usually done on the basis of diffusion-only models.

  8. Rate decline curves analysis of multiple-fractured horizontal wells in heterogeneous reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Jiahang; Wang, Xiaodong; Dong, Wenxiu

    2017-10-01

    In heterogeneous reservoirs with multiple-fractured horizontal wells (MFHWs), due to the high-density network of artificial hydraulic fractures, the fluid flow around fracture tips behaves like non-linear flow. Moreover, the production behaviors of different artificial hydraulic fractures are also different. A rigorous semi-analytical model for MFHWs in heterogeneous reservoirs is presented by combining the source function with the boundary element method. The model is first validated against both an analytical model and a simulation model. Then new Blasingame type curves are established. Finally, the effects of critical parameters on the rate decline characteristics of MFHWs are discussed. The results show that heterogeneity has a significant influence on the rate decline characteristics of MFHWs; the parameters related to the MFHWs, such as fracture conductivity and length, also affect the rate characteristics of MFHWs. One novelty of this model is that it considers the elliptical flow around artificial hydraulic fracture tips. Therefore, our model can be used to predict rate performance more accurately for MFHWs in heterogeneous reservoirs. The other novelty is the ability to model the different production behavior at different fracture stages. Compared to numerical and analytic methods, this model not only reduces extensive computational processing but also shows high accuracy.

  9. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review

    DOE PAGES

    Zuo, Chao; Huang, Lei; Zhang, Minliang; ...

    2016-05-06

    In fringe projection profilometry (FPP), temporal phase unwrapping is an essential procedure to recover an unambiguous absolute phase even in the presence of large discontinuities or spatially isolated surfaces. So far, there are typically three groups of temporal phase unwrapping algorithms proposed in the literature: the multi-frequency (hierarchical) approach, the multi-wavelength (heterodyne) approach, and the number-theoretical approach. In this paper, the three methods are investigated and compared in detail by analytical, numerical, and experimental means. The basic principles and recent developments of the three kinds of algorithms are first reviewed. Then, the reliability of different phase unwrapping algorithms is compared based on a rigorous stochastic noise model. Moreover, this noise model is used to predict the optimum fringe period for each unwrapping approach, which is a key factor governing the phase measurement accuracy in FPP. Simulations and experimental results verified the correctness and validity of the proposed noise model as well as the prediction scheme. The results show that the multi-frequency temporal phase unwrapping provides the best unwrapping reliability, while the multi-wavelength approach is the most susceptible to noise-induced unwrapping errors.
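
    The multi-frequency (hierarchical) approach can be summarized in two lines of arithmetic: estimate the fringe order of the wrapped high-frequency phase from an unambiguous low-frequency phase map, then add back the corresponding multiples of 2π. The synthetic example below assumes a unit-frequency reference, a fringe-frequency ratio of 16, and a small Gaussian phase noise; all of these are illustrative choices, not the configurations tested in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

n_pix, f_high, sigma = 1000, 16.0, 0.02
true_phase_unit = np.linspace(0.0, 2 * np.pi, n_pix)           # unit-frequency phase
phi_unit = true_phase_unit + sigma * rng.normal(size=n_pix)     # measured, unambiguous
phi_high_wrapped = np.angle(np.exp(1j * (f_high * true_phase_unit
                                         + sigma * rng.normal(size=n_pix))))

# Fringe order from the low-frequency estimate, then the unwrapped fine phase
k = np.round((f_high * phi_unit - phi_high_wrapped) / (2 * np.pi))
phi_high_unwrapped = phi_high_wrapped + 2 * np.pi * k

err = phi_high_unwrapped - f_high * true_phase_unit
print("max unwrapping error (rad):", round(np.abs(err).max(), 3))
```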

  10. Modeling direct band-to-band tunneling: From bulk to quantum-confined semiconductor devices

    NASA Astrophysics Data System (ADS)

    Carrillo-Nuñez, H.; Ziegler, A.; Luisier, M.; Schenk, A.

    2015-06-01

    A rigorous framework to study direct band-to-band tunneling (BTBT) in homo- and hetero-junction semiconductor nanodevices is introduced. An interaction Hamiltonian coupling conduction and valence bands (CVBs) is derived using a multiband envelope method. A general form of the BTBT probability is then obtained from the linear response to the "CVBs interaction" that drives the system out of equilibrium. Simple expressions in terms of the one-electron spectral function are developed to compute the BTBT current in two- and three-dimensional semiconductor structures. Additionally, a two-band envelope equation based on the Flietner model of imaginary dispersion is proposed for the same purpose. In order to characterize their accuracy and differences, both approaches are compared with full-band, atomistic quantum transport simulations of Ge, InAs, and InAs-Si Esaki diodes. As another numerical application, the BTBT current in InAs-Si nanowire tunnel field-effect transistors is computed. It is found that both approaches agree with high accuracy. The first one is considerably easier to conceive and could be implemented straightforwardly in existing quantum transport tools based on the effective mass approximation to account for BTBT in nanodevices.

  11. Adaptive tracking control for active suspension systems with non-ideal actuators

    NASA Astrophysics Data System (ADS)

    Pan, Huihui; Sun, Weichao; Jing, Xingjian; Gao, Huijun; Yao, Jianyong

    2017-07-01

    As a critical component of transportation vehicles, active suspension systems are instrumental in the improvement of ride comfort and maneuverability. However, practical active suspensions commonly suffer from parameter uncertainties (e.g., the variations of payload mass and suspension component parameters), external disturbances and especially the unknown non-ideal actuators (i.e., dead-zone and hysteresis nonlinearities), which always significantly deteriorate the control performance in practice. To overcome these issues, this paper synthesizes an adaptive tracking control strategy for vehicle suspension systems to achieve suspension performance improvements. The proposed control algorithm is formulated by developing a unified framework of non-ideal actuators rather than a separate way, which is a simple yet effective approach to remove the unexpected nonlinear effects. From the perspective of practical implementation, the advantages of the presented controller for active suspensions include that the assumptions on the measurable actuator outputs, the prior knowledge of nonlinear actuator parameters and the uncertain parameters within a known compact set are not required. Furthermore, the stability of the closed-loop suspension system is theoretically guaranteed by rigorous mathematical analysis. Finally, the effectiveness of the presented adaptive control scheme is confirmed using comparative numerical simulation validations.

  12. Narrow groove plasmonic nano-gratings for surface plasmon resonance sensing

    PubMed Central

    Dhawan, Anuj; Canva, Michael; Vo-Dinh, Tuan

    2011-01-01

    We present a novel surface plasmon resonance (SPR) configuration based on narrow groove (sub-15 nm) plasmonic nano-gratings such that normally incident radiation can be coupled into surface plasmons without the use of prism-coupling based total internal reflection, as in the classical Kretschmann configuration. This eliminates the angular dependence requirements of SPR-based sensing and allows development of robust miniaturized SPR sensors. Simulations based on Rigorous Coupled Wave Analysis (RCWA) were carried out to numerically calculate the reflectance - from different gold and silver nano-grating structures - as a function of the localized refractive index of the media around the SPR nano-gratings as well as the incident radiation wavelength and angle of incidence. Our calculations indicate substantially higher differential reflectance signals, on localized change of refractive index in the narrow groove plasmonic gratings, as compared to those obtained from conventional SPR-based sensing systems. Furthermore, these calculations allow determination of the optimal nano-grating geometric parameters - i. e. nanoline periodicity, spacing between the nanolines, as well as the height of the nanolines in the nano-grating - for highest sensitivity to localized change of refractive index, as would occur due to binding of a biomolecule target to a functionalized nano-grating surface. PMID:21263620

  13. Material property analytical relations for the case of an AFM probe tapping a viscoelastic surface containing multiple characteristic times

    PubMed Central

    López-Guerra, Enrique A

    2017-01-01

    We explore the contact problem of a flat-end indenter penetrating intermittently a generalized viscoelastic surface, containing multiple characteristic times. This problem is especially relevant for nanoprobing of viscoelastic surfaces with the highly popular tapping-mode AFM imaging technique. By focusing on the material perspective and employing a rigorous rheological approach, we deliver analytical closed-form solutions that provide physical insight into the viscoelastic sources of repulsive forces, tip–sample dissipation and virial of the interaction. We also offer a systematic comparison to the well-established standard harmonic excitation, which is the case relevant for dynamic mechanical analysis (DMA) and for AFM techniques where tip–sample sinusoidal interaction is permanent. This comparison highlights the substantial complexity added by the intermittent-contact nature of the interaction, which precludes the derivation of straightforward equations as is the case for the well-known harmonic excitations. The derivations offered have been thoroughly validated through numerical simulations. Despite the complexities inherent to the intermittent-contact nature of the technique, the analytical findings highlight the potential feasibility of extracting meaningful viscoelastic properties with this imaging method. PMID:29114450

  14. Line-source excitation of realistic conformal metasurface cloaks

    NASA Astrophysics Data System (ADS)

    Padooru, Yashwanth R.; Yakovlev, Alexander B.; Chen, Pai-Yen; Alù, Andrea

    2012-11-01

    Following our recently introduced analytical tools to model and design conformal mantle cloaks based on metasurfaces [Padooru et al., J. Appl. Phys. 112, 034907 (2012)], we investigate their performance and physical properties when excited by an electric line source placed in their close proximity. We consider metasurfaces formed by 2-D arrays of slotted (meshes and Jerusalem cross slots) and printed (patches and Jerusalem crosses) sub-wavelength elements. The electromagnetic scattering analysis is carried out using a rigorous analytical model, which utilizes the two-sided impedance boundary conditions at the interface of the sub-wavelength elements. It is shown that the homogenized grid-impedance expressions, originally derived for planar arrays of sub-wavelength elements and plane-wave excitation, may be successfully used to model and tailor the surface reactance of cylindrical conformal mantle cloaks illuminated by near-field sources. Our closed-form analytical results are in good agreement with full-wave numerical simulations, up to sub-wavelength distances from the metasurface, confirming that mantle cloaks may be very effective to suppress the scattering of moderately sized objects, independent of the type of excitation and point of observation. We also discuss the dual functionality of these metasurfaces to boost radiation efficiency and directivity from confined near-field sources.

  15. Improvement in electron-beam lithography throughput by exploiting relaxed patterning fidelity requirements with directed self-assembly

    NASA Astrophysics Data System (ADS)

    Yu, Hao Yun; Liu, Chun-Hung; Shen, Yu Tian; Lee, Hsuan-Ping; Tsai, Kuen Yu

    2014-03-01

    Line edge roughness (LER) influencing the electrical performance of circuit components is a key challenge for electron-beam lithography (EBL) due to the continuous scaling of technology feature sizes. Controlling LER within an acceptable tolerance that satisfies International Technology Roadmap for Semiconductors requirements while achieving high throughput has become a challenging issue. Although lower dosage and more-sensitive resist can be used to improve throughput, they would result in serious LER-related problems because of increasing relative fluctuation in the incident positions of electrons. Directed self-assembly (DSA) is a promising technique to relax LER-related pattern fidelity (PF) requirements because of its self-healing ability, which may benefit throughput. To quantify the potential of throughput improvement in EBL by introducing DSA for post healing, rigorous numerical methods are proposed to simultaneously maximize throughput by adjusting writing parameters of EBL systems subject to relaxed LER-related PF requirements. A fast, continuous model for parameter sweeping and a hybrid model for more accurate patterning prediction are employed for the patterning simulation. The tradeoff between throughput and DSA self-healing ability is investigated. Preliminary results indicate that significant throughput improvements are achievable at certain process conditions.

  16. Analysis of MUSIC-type imaging functional for single, thin electromagnetic inhomogeneity in limited-view inverse scattering problem

    NASA Astrophysics Data System (ADS)

    Ahn, Chi Young; Jeon, Kiwan; Park, Won-Kwang

    2015-06-01

    This study analyzes the well-known MUltiple SIgnal Classification (MUSIC) algorithm to identify unknown support of thin penetrable electromagnetic inhomogeneity from scattered field data collected within the so-called multi-static response matrix in limited-view inverse scattering problems. The mathematical theories of MUSIC are partially discovered, e.g., in the full-view problem, for an unknown target of dielectric contrast or a perfectly conducting crack with the Dirichlet boundary condition (Transverse Magnetic-TM polarization) and so on. Hence, we perform further research to analyze the MUSIC-type imaging functional and to certify some well-known but theoretically unexplained phenomena. For this purpose, we establish a relationship between the MUSIC imaging functional and an infinite series of Bessel functions of integer order of the first kind. This relationship is based on the rigorous asymptotic expansion formula in the existence of a thin inhomogeneity with a smooth supporting curve. Various results of numerical simulation are presented in order to support the identified structure of MUSIC. Although a priori information of the target is needed, we suggest a least condition of range of incident and observation directions to apply MUSIC in the limited-view problem.

  17. Enhancing the brightness of electrically driven single-photon sources using color centers in silicon carbide

    NASA Astrophysics Data System (ADS)

    Khramtsov, Igor A.; Vyshnevyy, Andrey A.; Fedyanin, Dmitry Yu.

    2018-03-01

    Practical applications of quantum information technologies exploiting the quantum nature of light require efficient and bright true single-photon sources which operate under ambient conditions. Currently, point defects in the crystal lattice of diamond known as color centers have taken the lead in the race for the most promising quantum system for practical non-classical light sources. This work is focused on a different quantum optoelectronic material, namely a color center in silicon carbide, and reveals the physics behind the process of single-photon emission from color centers in SiC under electrical pumping. We show that color centers in silicon carbide can be far superior to any other quantum light emitter under electrical control at room temperature. Using a comprehensive theoretical approach and rigorous numerical simulations, we demonstrate that at room temperature, the photon emission rate from a p-i-n silicon carbide single-photon emitting diode can exceed 5 Gcounts/s, which is higher than what can be achieved with electrically driven color centers in diamond or epitaxial quantum dots. These findings lay the foundation for the development of practical photonic quantum devices which can be produced in a well-developed CMOS compatible process flow.

  18. Redefinition of the self-bias voltage in a dielectrically shielded thin sheath RF discharge

    NASA Astrophysics Data System (ADS)

    Ho, Teck Seng; Charles, Christine; Boswell, Rod

    2018-05-01

    In a geometrically asymmetric capacitively coupled discharge where the powered electrode is shielded from the plasma by a layer of dielectric material, the self-bias manifests as a nonuniform negative charging in the dielectric rather than on the blocking capacitor. In the thin sheath regime where the ion transit time across the powered sheath is on the order of or less than the Radiofrequency (RF) period, the plasma potential is observed to respond asymmetrically to extraneous impedances in the RF circuit. Consequently, the RF waveform on the plasma-facing surface of the dielectric is unknown, and the behaviour of the powered sheath is not easily predictable. Sheath circuit models become inadequate for describing this class of discharges, and a comprehensive fluid, electrical, and plasma numerical model is employed to accurately quantify this behaviour. The traditional definition of the self-bias voltage as the mean of the RF waveform is shown to be erroneous in this regime. Instead, using the maxima of the RF waveform provides a more rigorous definition given its correlation with the ion dynamics in the powered sheath. This is supported by a RF circuit model derived from the computational fluid dynamics and plasma simulations.

  19. LAVA web-based remote simulation: enhancements for education and technology innovation

    NASA Astrophysics Data System (ADS)

    Lee, Sang Il; Ng, Ka Chun; Orimoto, Takashi; Pittenger, Jason; Horie, Toshi; Adam, Konstantinos; Cheng, Mosong; Croffie, Ebo H.; Deng, Yunfei; Gennari, Frank E.; Pistor, Thomas V.; Robins, Garth; Williamson, Mike V.; Wu, Bo; Yuan, Lei; Neureuther, Andrew R.

    2001-09-01

    The Lithography Analysis using Virtual Access (LAVA) web site at http://cuervo.eecs.berkeley.edu/Volcano/ has been enhanced with new optical and deposition applets, graphical infrastructure and linkage to parallel execution on networks of workstations. More than ten new graphical user interface applets have been designed to support education, illustrate novel concepts from research, and explore usage of parallel machines. These applets have been improved through feedback and classroom use. Over the last year LAVA provided industry and academic communities with 1,300 sessions and 700 rigorous simulations per month among the SPLAT, SAMPLE2D, SAMPLE3D, TEMPEST, STORM, and BEBS simulators.

  20. Resolution requirements for numerical simulations of transition

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Krist, Steven E.; Hussaini, M. Yousuff

    1989-01-01

    The resolution requirements for direct numerical simulations of transition to turbulence are investigated. A reliable resolution criterion is determined from the results of several detailed simulations of channel and boundary-layer transition.

  1. Numerical Simulation of Selecting Model Scale of Cable in Wind Tunnel Test

    NASA Astrophysics Data System (ADS)

    Huang, Yifeng; Yang, Jixin

    The numerical simulation method based on computational fluid dynamics (CFD) provides a possible alternative to physical wind tunnel tests. First, the correctness of the numerical simulation method is validated on a specific example. In order to select the minimum length of the cable for a given diameter in the numerical wind tunnel tests, numerical wind tunnel tests based on CFD are carried out on cables with several different length-diameter ratios (L/D). The results show that, when L/D reaches 18, the drag coefficient is essentially stable.

  2. A comparison of numerical methods for the prediction of two-dimensional heat transfer in an electrothermal deicer pad. M.S. Thesis. Final Contractor Report

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    1988-01-01

    Transient, numerical simulations of the deicing of composite aircraft components by electrothermal heating have been performed in a 2-D rectangular geometry. Seven numerical schemes and four solution methods were used to find the most efficient numerical procedure for this problem. The phase change in the ice was simulated using the Enthalpy method along with the Method for Assumed States. Numerical solutions illustrating deicer performance for various conditions are presented. Comparisons are made with previous numerical models and with experimental data. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.

  3. Tensile Properties of Dyneema SK76 Single Fibers at Multiple Loading Rates Using a Direct Gripping Method

    DTIC Science & Technology

    2014-06-01

    lower density compared with aramid fibers such as Kevlar and Twaron. Numerical modeling is used to design more effective fiber-based composite armor...in measuring fibers and doing experiments. Aramid fibers such as Kevlar (DuPont) and Twaron...methyl methacrylate blocks. The efficacy of this method to grip Kevlar fibers has been rigorously studied using a variety of statistical methods at

  4. On the Far-Zone Electromagnetic Field of a Horizontal Electric Dipole Over an Imperfectly Conducting Half-Space With Extensions to Plasmonics

    NASA Astrophysics Data System (ADS)

    Michalski, Krzysztof A.; Lin, Hung-I.

    2018-01-01

    Second-order asymptotic formulas for the electromagnetic fields of a horizontal electric dipole over an imperfectly conducting half-space are derived using the modified saddle-point method. Application examples are presented for ordinary and plasmonic media, and the accuracy of the new formulation is assessed by comparisons with two alternative state-of-the-art theories and with the rigorous results of numerical integration.

  5. Investigating outliers to improve conceptual models of bedrock aquifers

    NASA Astrophysics Data System (ADS)

    Worthington, Stephen R. H.

    2018-06-01

    Numerical models play a prominent role in hydrogeology, with simplifying assumptions being inevitable when implementing these models. However, there is a risk of oversimplification, where important processes become neglected. Such processes may be associated with outliers, and consideration of outliers can lead to an improved scientific understanding of bedrock aquifers. Using rigorous logic to investigate outliers can help to explain fundamental scientific questions such as why there are large variations in permeability between different bedrock lithologies.

  6. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  7. Bifurcation Analysis Using Rigorous Branch and Bound Methods

    NASA Technical Reports Server (NTRS)

    Smith, Andrew P.; Crespo, Luis G.; Munoz, Cesar A.; Lowenberg, Mark H.

    2014-01-01

    For the study of nonlinear dynamic systems, it is important to locate the equilibria and bifurcations occurring within a specified computational domain. This paper proposes a new approach for solving these problems and compares it to the numerical continuation method. The new approach is based upon branch and bound and utilizes rigorous enclosure techniques to yield outer bounding sets of both the equilibrium and local bifurcation manifolds. These sets, which comprise the union of hyper-rectangles, can be made to be as tight as desired. Sufficient conditions for the existence of equilibrium and bifurcation points taking the form of algebraic inequality constraints in the state-parameter space are used to calculate their enclosures directly. The enclosures for the bifurcation sets can be computed independently of the equilibrium manifold, and are guaranteed to contain all solutions within the computational domain. A further advantage of this method is the ability to compute a near-maximally sized hyper-rectangle of high dimension centered at a fixed parameter-state point whose elements are guaranteed to exclude all bifurcation points. This hyper-rectangle, which requires a global description of the bifurcation manifold within the computational domain, cannot be obtained otherwise. A test case, based on the dynamics of a UAV subject to uncertain center of gravity location, is used to illustrate the efficacy of the method by comparing it with numerical continuation and to evaluate its computational complexity.
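
    The equilibrium-enclosure half of such a branch-and-bound scheme can be sketched with naive interval arithmetic: discard any box on which an interval evaluation of f excludes zero, and bisect the rest until they are small. The fragment below does this for the cusp-type test function f(x, p) = p + x - x^3 on a small state-parameter box; the function, box, and tolerance are illustrative, and the bifurcation-enclosure part of the method is omitted.

```python
import numpy as np

def f_interval(x_lo, x_hi, p_lo, p_hi):
    """Interval enclosure of f(x, p) = p + x - x**3 over the box
    (x**3 is monotone, so its range over [x_lo, x_hi] is exact)."""
    cube = np.array([x_lo**3, x_hi**3])
    lo = p_lo + x_lo - cube.max()
    hi = p_hi + x_hi - cube.min()
    return lo, hi

def branch_and_bound(box, tol=1e-2):
    """Return sub-boxes that may contain equilibria (an outer enclosure)."""
    x_lo, x_hi, p_lo, p_hi = box
    lo, hi = f_interval(*box)
    if lo > 0.0 or hi < 0.0:
        return []                              # box rigorously excluded
    if max(x_hi - x_lo, p_hi - p_lo) < tol:
        return [box]                           # small enough: keep as enclosure
    xm, pm = 0.5 * (x_lo + x_hi), 0.5 * (p_lo + p_hi)
    children = [(x_lo, xm, p_lo, pm), (xm, x_hi, p_lo, pm),
                (x_lo, xm, pm, p_hi), (xm, x_hi, pm, p_hi)]
    return [b for child in children for b in branch_and_bound(child, tol)]

enclosure = branch_and_bound((-2.0, 2.0, -1.0, 1.0))
print("number of boxes in the equilibrium enclosure:", len(enclosure))
```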

  8. Coincidental match of numerical simulation and physics

    NASA Astrophysics Data System (ADS)

    Pierre, B.; Gudmundsson, J. S.

    2010-08-01

    Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete rupture of the pipeline. Therefore, accurate predictions of rapid pressure transients in pipelines using numerical simulations are critical. State-of-the-art modelling of pressure transients in general, and of water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
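
    For a frictionless pipe, the method of characteristics reduces to combining the C+ and C- compatibility relations at each node, which makes the sudden-valve-closure benchmark easy to reproduce. The sketch below uses illustrative round numbers for the pipe and fluid and compares the computed head rise at the valve with the Joukowsky estimate a*dV/g; it is a minimal demonstration, not the unsteady-friction model discussed above.

```python
import numpy as np

a, g, L, n = 1000.0, 9.81, 500.0, 51       # wave speed (m/s), gravity, pipe length (m), nodes
dx = L / (n - 1)
dt = dx / a                                # Courant condition for MOC
H0, V0 = 50.0, 2.0                         # initial head (m) and velocity (m/s)
B = a / g                                  # characteristic impedance in head units

H = np.full(n, H0)
V = np.full(n, V0)

history = []
for step in range(400):
    Hn, Vn = H.copy(), V.copy()
    # interior nodes: intersection of the C+ (from i-1) and C- (from i+1) characteristics
    Hn[1:-1] = 0.5 * ((H[:-2] + H[2:]) + B * (V[:-2] - V[2:]))
    Vn[1:-1] = 0.5 * ((V[:-2] + V[2:]) + (H[:-2] - H[2:]) / B)
    # upstream reservoir: constant head, combined with the C- characteristic
    Hn[0] = H0
    Vn[0] = V[1] + (Hn[0] - H[1]) / B
    # downstream valve closed instantly: zero flow, combined with the C+ characteristic
    Vn[-1] = 0.0
    Hn[-1] = H[-2] + B * (V[-2] - Vn[-1])
    H, V = Hn, Vn
    history.append(H[-1])

print("max head at the valve (m):", round(max(history), 1))
print("Joukowsky estimate H0 + a*V0/g (m):", round(H0 + a * V0 / g, 1))
```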

  9. Rigorous ILT optimization for advanced patterning and design-process co-optimization

    NASA Astrophysics Data System (ADS)

    Selinidis, Kosta; Kuechler, Bernd; Cai, Howard; Braam, Kyle; Hoppe, Wolfgang; Domnenko, Vitaly; Poonawala, Amyn; Xiao, Guangming

    2018-03-01

    Despite the large difficulties involved in extending 193i multiple patterning and the slow ramp of EUV lithography to full manufacturing readiness, the pace of development for new technology node variations has been accelerating. Multiple new variations of new and existing technology nodes have been introduced for a range of device applications; each variation with at least a few new process integration methods, layout constructs and/or design rules. This has led to a strong increase in the demand for predictive technology tools which can be used to quickly guide important patterning and design co-optimization decisions. In this paper, we introduce a novel hybrid predictive patterning method combining two patterning technologies which have each individually been widely used for process tuning, mask correction and process-design cooptimization. These technologies are rigorous lithography simulation and inverse lithography technology (ILT). Rigorous lithography simulation has been extensively used for process development/tuning, lithography tool user setup, photoresist hot-spot detection, photoresist-etch interaction analysis, lithography-TCAD interactions/sensitivities, source optimization and basic lithography design rule exploration. ILT has been extensively used in a range of lithographic areas including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies will therefore have a wide range of useful applications. We investigate the benefits of the new functionality for a few of these advanced applications including correction for photoresist top loss and resist scumming hotspots.

  10. Consistent Chemical Mechanism from Collaborative Data Processing

    DOE PAGES

    Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...

    2016-04-01

    The numerical tool of the Process Informatics Model (PrIMe) is a mathematically rigorous and numerically efficient approach for the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for evaluation of the shock-tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.
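
    The consistency test at the heart of Bound-to-Bound Data Collaboration can be illustrated with a toy feasibility search: a dataset is declared consistent if some parameter vector inside its prior bounds reproduces every experimental target within its uncertainty interval. The quadratic surrogates, bounds, and brute-force search below are invented for illustration and stand in for the polynomial surrogates of a detailed kinetic model used in PrIMe.

      import numpy as np

      rng = np.random.default_rng(0)

      # Invented quadratic surrogates y_e(theta) standing in for three experimental targets.
      coeffs = np.array([[1.0, 0.5, 0.2],
                         [0.8, -0.3, 0.4],
                         [1.2, 0.1, -0.2]])                  # rows: a + b*theta1 + c*theta2**2
      theta_bounds = np.array([[-1.0, 1.0], [-1.0, 1.0]])    # prior parameter bounds
      target_bounds = np.array([[0.7, 1.4], [0.3, 1.1], [0.9, 1.5]])  # measurement +/- uncertainty

      def predictions(thetas):
          # Evaluate all surrogates for an (n, 2) array of parameter vectors.
          return (coeffs[:, 0][None, :]
                  + coeffs[:, 1][None, :] * thetas[:, [0]]
                  + coeffs[:, 2][None, :] * thetas[:, [1]] ** 2)

      def dataset_consistent(n_samples=50000):
          # Crude feasibility search over the prior box; a real analysis solves this by optimization.
          lo, hi = theta_bounds[:, 0], theta_bounds[:, 1]
          thetas = rng.uniform(lo, hi, size=(n_samples, 2))
          preds = predictions(thetas)
          feasible = np.all((preds >= target_bounds[:, 0]) & (preds <= target_bounds[:, 1]), axis=1)
          return (True, thetas[np.argmax(feasible)]) if feasible.any() else (False, None)

      ok, theta = dataset_consistent()
      print("dataset consistent:", ok, "| example feasible parameter vector:", theta)

    An inconsistent dataset, in this picture, is one for which no sampled (or optimized) parameter vector satisfies all target intervals at once, which points either to flawed data or to a deficient model.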

  11. Split Orthogonal Group: A Guiding Principle for Sign-Problem-Free Fermionic Simulations

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Iazzi, Mauro; Troyer, Matthias; Harcos, Gergely

    2015-12-01

    We present a guiding principle for designing fermionic Hamiltonians and quantum Monte Carlo (QMC) methods that are free from the infamous sign problem by exploiting the Lie groups and Lie algebras that appear naturally in the Monte Carlo weight of fermionic QMC simulations. Specifically, rigorous mathematical constraints on the determinants involving matrices that lie in the split orthogonal group provide a guideline for sign-free simulations of fermionic models on bipartite lattices. This guiding principle not only unifies the recent solutions of the sign problem based on the continuous-time quantum Monte Carlo methods and the Majorana representation, but also suggests new efficient algorithms to simulate physical systems that were previously prohibitive because of the sign problem.

  12. Fast simulation of the NICER instrument

    NASA Astrophysics Data System (ADS)

    Doty, John P.; Wampler-Doty, Matthew P.; Prigozhin, Gregory Y.; Okajima, Takashi; Arzoumanian, Zaven; Gendreau, Keith

    2016-07-01

    The NICER mission uses a complicated physical system to collect information from objects that are, by x-ray timing science standards, rather faint. To get the most out of the data we will need a rigorous understanding of all instrumental effects. We are in the process of constructing a very fast, high fidelity simulator that will help us to assess instrument performance, support simulation-based data reduction, and improve our estimates of measurement error. We will combine and extend existing optics, detector, and electronics simulations. We will employ the Compute Unified Device Architecture (CUDA) to parallelize these calculations. The price of suitable CUDA-compatible multi-giga-op cores is about $0.20/core, so this approach will be very cost-effective.

  13. Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.; Bergman, R. O.; Bonstrom, D. B.; Brinkman, T. W.; Chiu, S. H. J.; Green, S. S.; Hansen, S. D.; Klein, D. L.; Krohn, H. E.; Prow, R. P.

    1979-01-01

    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body designs. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are presented, along with the entire design and performance analysis and evaluation.

  14. Generalized Cahn-Hilliard equation for solutions with drastically different diffusion coefficients. Application to exsolution in ternary feldspar

    NASA Astrophysics Data System (ADS)

    Petrishcheva, E.; Abart, R.

    2012-04-01

    We address mathematical modeling and computer simulations of phase decomposition in a multicomponent system. As opposed to binary alloys with one common diffusion parameter, our main concern is phase decomposition in real geological systems under the influence of strongly different interdiffusion coefficients, as is frequently encountered in mineral solid solutions with coupled diffusion on different sub-lattices. Our goal is to explain deviations from equilibrium element partitioning which are often observed in nature, e.g., in a cooled ternary feldspar. To this end we first adapt the standard Cahn-Hilliard model to the multicomponent diffusion problem and account for arbitrary diffusion coefficients. This is done by using Onsager's approach such that the flux of each component results from the combined action of the chemical potentials of all components. In a second step the generalized Cahn-Hilliard equation is solved numerically using a finite-element approach. We introduce and investigate several decomposition scenarios that may produce systematic deviations from the equilibrium element partitioning. Both ideal solutions and ternary feldspar are considered. Typically, the slowest component is initially "frozen" and the decomposition effectively takes place only for the two "fast" components. At this stage the deviations from the equilibrium element partitioning are indeed observed. These deviations may become "frozen" under conditions of cooling. The final equilibration of the system occurs on a considerably slower time scale. Therefore the system may indeed remain incompletely equilibrated at the point of observation. Our approach reveals the intrinsic reasons for the specific phase separation path and rigorously describes it by direct numerical solution of the generalized Cahn-Hilliard equation.
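
    For orientation, a minimal single-component, one-dimensional Cahn-Hilliard solver with an explicit finite-difference update is sketched below; it only illustrates the structure c_t = M * laplacian(f'(c) - kappa * laplacian(c)) and is far simpler than the multicomponent, Onsager-coupled finite-element formulation solved by the authors. The mobility, gradient-energy coefficient, and time step are illustrative values.

      import numpy as np

      # Explicit finite-difference Cahn-Hilliard solver on a periodic 1D domain.
      N, L = 256, 2.0 * np.pi
      dx = L / N
      M, kappa = 1.0, 1e-3           # mobility and gradient-energy coefficient
      dt = 1e-6                      # explicit update: dt must scale like dx**4 / (M * kappa)

      rng = np.random.default_rng(1)
      c = 0.05 * rng.standard_normal(N)          # near-critical composition plus small noise

      def lap(u):
          return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2

      for step in range(20000):
          mu = c ** 3 - c - kappa * lap(c)       # chemical potential, free energy f(c) = (c**2 - 1)**2 / 4
          c = c + dt * M * lap(mu)               # c_t = M * laplacian(mu)

      print("composition range after the onset of decomposition:", c.min(), c.max())

    In the multicomponent case described above, the scalar mobility M would be replaced by an Onsager matrix coupling the fluxes of all components to the gradients of all chemical potentials, which is what allows strongly different interdiffusion coefficients to be represented.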

  15. Nonconstant Positive Steady States and Pattern Formation of 1D Prey-Taxis Systems

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Song, Yang; Shao, Lingjie

    2017-02-01

    Prey-taxis is the process by which predators move preferentially toward patches with the highest density of prey. It is well known to play an important role in biological control and the maintenance of biodiversity. To model the coexistence and spatial distributions of predator and prey species, this paper concerns nonconstant positive steady states of a wide class of prey-taxis systems with general functional responses over a 1D domain. Linearized stability of the positive equilibrium is analyzed to show that prey-taxis destabilizes prey-predator homogeneity when prey repulsion (e.g., due to the volume-filling effect in the predator species or group defense in the prey species) is present, and prey-taxis stabilizes the homogeneity otherwise. Then, we investigate the existence and stability of nonconstant positive steady states of the system through rigorous bifurcation analysis. Moreover, we provide detailed and thorough calculations to determine properties such as the pitchfork structure and turning direction of the local branches. Our stability results also provide a stable wave mode selection mechanism for these reaction-advection-diffusion systems, including the prey-taxis models considered in this paper. Finally, we provide numerical studies of prey-taxis systems with Holling-Tanner kinetics to illustrate and support our theoretical findings. Our numerical simulations demonstrate that the 2 × 2 prey-taxis system is able to model the formation and evolution of various striking patterns, such as spikes, periodic oscillations, and coarsening, even when the domain is one-dimensional. These dynamics can model the coexistence and spatial distributions of interacting prey and predator species. We also give some insights on how system parameters influence pattern formation in these models.

  16. Giant Linear Nonreciprocity, Zero Reflection, and Zero Band Gap in Equilibrated Space-Time-Varying Media

    NASA Astrophysics Data System (ADS)

    Taravati, Sajjad

    2018-06-01

    This article presents a class of space-time-varying media with giant linear nonreciprocity, zero space-time local reflections, and zero photonic band gap. This is achieved via equilibrium in the electric and magnetic properties of unidirectionally space-time-modulated media. The enhanced nonreciprocity is accompanied by a larger sonic regime interval which provides extra design freedom for achieving strong nonreciprocity by a weak pumping strength. We show that the width of photonic band gaps in general periodic space-time permittivity- and permeability-modulated media is proportional to the absolute difference between the electric and magnetic pumping strengths. We derive a rigorous analytical solution for investigation of wave propagation and scattering from general periodic space-time permittivity- and permeability-modulated media. In contrast with weak photonic transitions, from the excited mode to its two adjacent modes, in conventional space-time permittivity-modulated media, in an equilibrated space-time-varying medium, strong photonic transitions occur from the excited mode to its four adjacent modes. We study the enhanced nonreciprocity and zero band gap in equilibrated space-time-modulated media by analysis of their dispersion diagrams. In contrast to conventional space-time permittivity-modulated media, equilibrated space-time media exhibit different phase and group velocities for forward and backward harmonics. Furthermore, the numerical simulation scheme of general space-time permittivity- and permeability-modulated media is presented, which is based on the finite-difference time-domain technique. Our analytical and numerical results provide insights into general space-time refractive-index-modulated media, paving the way toward optimal isolators, nonreciprocal integrated systems, and subharmonic frequency generators.

  17. Modified Mixed Lagrangian-Eulerian Method Based on Numerical Framework of MT3DMS on Cauchy Boundary.

    PubMed

    Suk, Heejun

    2016-07-01

    MT3DMS, a modular three-dimensional multispecies transport model, has long been a popular model in the groundwater field for simulating solute transport in the saturated zone. However, the method of characteristics (MOC), modified MOC (MMOC), and hybrid MOC (HMOC) included in MT3DMS did not treat Cauchy boundary conditions in a straightforward or rigorous manner, from a mathematical point of view. The MOC, MMOC, and HMOC regard the Cauchy boundary as a source condition. For the source, MOC, MMOC, and HMOC calculate the Lagrangian concentration by setting it equal to the cell concentration at an old time level. However, the above calculation is an approximate method because it does not involve backward tracking in MMOC and HMOC or allow performing forward tracking at the source cell in MOC. To circumvent this problem, a new scheme is proposed that avoids direct calculation of the Lagrangian concentration on the Cauchy boundary. The proposed method combines the numerical formulations of two different schemes, the finite element method (FEM) and the Eulerian-Lagrangian method (ELM), into one global matrix equation. This study demonstrates the limitation of all MT3DMS schemes, including MOC, MMOC, HMOC, and a third-order total-variation-diminishing (TVD) scheme under Cauchy boundary conditions. By contrast, the proposed method always shows good agreement with the exact solution, regardless of the flow conditions. Finally, the successful application of the proposed method sheds light on the possible flexibility and capability of the MT3DMS to deal with the mass transport problems of all flow regimes. © 2016, National Ground Water Association.

  18. Assessment of SFR Wire Wrap Simulation Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delchini, Marc-Olivier G.; Popov, Emilian L.; Pointer, William David

    Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts to safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model's input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility. Initial efforts have focused on gaining an understanding of Nek5000 modeling options and integrating Nek5000 with Dakota. These tasks are being accomplished by demonstrating the use of Dakota to assess parametric uncertainties in a simple pipe flow problem. This problem is used to optimize performance of the uncertainty quantification strategy and to estimate computational requirements for assessments of complex geometries. A sensitivity analysis with respect to three turbulence models was conducted for a turbulent flow in a single wire-wrapped pin (THORS) geometry. Section 2 briefly describes the software tools used in this study and provides appropriate references. Section 3 presents the coupling interface between Dakota and a computational fluid dynamics (CFD) code (Nek5000 or STARCCM+), with details on the workflow, the scripts used for setting up the run, and the scripts used for post-processing the output files. In Section 4, the meshing methods used to generate the THORS and 7-pin bundle meshes are explained. Sections 5, 6 and 7 present numerical results for the 3-D pipe, the single pin THORS mesh, and the 7-pin bundle mesh, respectively.
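
    The sampling-based workflow described above can be pictured with a cheap analytic stand-in for the CFD run: characterize the input uncertainties, propagate samples through the model, and summarize the output distribution. The friction-factor "model" and the input distributions below are placeholders, not quantities from the Nek5000/Dakota study.

      import numpy as np

      rng = np.random.default_rng(42)

      def pressure_drop(Re, roughness):
          # Placeholder "CFD" model: Blasius-like friction factor -> pressure drop [arbitrary units].
          f = 0.316 * Re ** -0.25 * (1.0 + 10.0 * roughness)
          return f * 100.0

      # Characterize input uncertainties (illustrative distributions).
      n = 5000
      Re = rng.normal(1.0e5, 5.0e3, n)             # Reynolds number with a modest spread
      rough = rng.uniform(0.0, 2.0e-3, n)          # relative roughness

      # Propagate through the model; in the real workflow each sample corresponds to a
      # Nek5000 run launched and post-processed by Dakota scripts.
      dp = pressure_drop(Re, rough)

      print("mean dp = %.3f, std = %.3f, 95%% interval = [%.3f, %.3f]"
            % (dp.mean(), dp.std(), np.percentile(dp, 2.5), np.percentile(dp, 97.5)))

    The sensitivity study mentioned above fits the same pattern, with the turbulence model choice treated as a discrete input and the spread of the outputs compared across models.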

  19. The Analysis, Numerical Simulation, and Diagnosis of Extratropical Weather Systems

    DTIC Science & Technology

    2003-09-30

    Dr. Melvyn A. Shapiro, NOAA/Office of Weather and Air Quality ... predictability of extratropical cyclones. My approach toward achieving the above objectives has been to foster national and ...

  20. Design and Analysis of an Axisymmetric Phased Array Fed Gregorian Reflector System for Limited Scanning

    DTIC Science & Technology

    2016-01-22

    Numerical electromagnetic simulations based on the multilevel fast multipole method (MLFMM) were used to analyze and optimize the antenna ... analyzed and optimized using numerical simulations conducted with the MLFMM using FEKO software (www.feko.info). The ...

  1. Numerical simulations of quasi-perpendicular collisionless shocks

    NASA Technical Reports Server (NTRS)

    Goodrich, C. C.

    1985-01-01

    Numerical simulations of collisionless quasi-perpendicular shock waves are reviewed. The strengths and limitations of these simulations are discussed and their experimental (laboratory and spacecraft) context is given. Recent simulation results are emphasized that, with ISEE bow shock observations, are responsible for recent progress in understanding quasi-steady shock structure.

  2. Numerical human models for accident research and safety - potentials and limitations.

    PubMed

    Praxl, Norbert; Adamec, Jiri; Muggenthaler, Holger; von Merten, Katja

    2008-01-01

    The method of numerical simulation is frequently used in the area of automotive safety. Recently, numerical models of the human body have been developed for the numerical simulation of occupants. Different approaches to modelling the human body have been used: the finite-element and the multibody techniques. Numerical human models representing the two modelling approaches are introduced, and the potentials and limitations of these models are discussed.

  3. Remote Numerical Simulations of the Interaction of High Velocity Clouds with Random Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Santillan, Alfredo; Hernandez--Cervantes, Liliana; Gonzalez--Ponce, Alejandro; Kim, Jongsoo

    The numerical simulations associated with the interaction of High Velocity Clouds (HVC) with the Magnetized Galactic Interstellar Medium (ISM) are a powerful tool to describe the evolution of the interaction of these objects in our Galaxy. In this work we present a new project referred to as Theoretical Virtual Observatories. It is oriented toward performing numerical simulations in real time through a Web page. This is a powerful astrophysical computational tool that consists of an intuitive graphical user interface (GUI) and a database produced by numerical calculations. On this Website the user can make use of the existing numerical simulations from the database or run a new simulation by introducing initial conditions such as temperatures, densities, velocities, and magnetic field intensities for both the ISM and the HVC. The prototype is programmed using Linux, Apache, MySQL, and PHP (LAMP), based on the open source philosophy. All simulations were performed with the MHD code ZEUS-3D, which solves the ideal MHD equations by finite differences on a fixed Eulerian mesh. Finally, we present typical results that can be obtained with this tool.

  4. A Review of Numerical Simulation and Analytical Modeling for Medical Devices Safety in MRI

    PubMed Central

    Kabil, J.; Belguerras, L.; Trattnig, S.; Pasquier, C.; Missoffe, A.

    2016-01-01

    Objectives: To review past and present challenges and ongoing trends in numerical simulation for MRI (Magnetic Resonance Imaging) safety evaluation of medical devices. Methods: A wide literature review on numerical and analytical simulation of simple or complex medical devices in MRI electromagnetic fields shows the evolution through time and a growing concern for MRI safety over the years. Major issues and achievements are described, as well as current trends and perspectives in this research field. Results: Numerical simulation of medical devices is constantly evolving, supported by now well-established calculation methods. Implants with simple geometry can often be simulated in a computational human model, but one issue remaining today is the experimental validation of these human models. A great concern is to assess RF heating on implants too complex to be simulated traditionally, such as pacemaker leads. Thus, ongoing research focuses on alternative hybrid methods, both numerical and experimental, with, for example, a transfer-function method. For the static field and gradient fields, analytical models can be used for dimensioning simple implant shapes, but are limited for complex geometries that cannot be studied with simplifying assumptions. Conclusions: Numerical simulation is an essential tool for MRI safety testing of medical devices. The main issues remain the accuracy of simulations compared to real life and the study of complex devices; but as the research field is constantly evolving, some promising ideas are now under investigation to take up the challenges. PMID:27830244

  5. A Numerical Simulation of a Normal Sonic Jet into a Hypersonic Cross-Flow

    NASA Technical Reports Server (NTRS)

    Jeffries, Damon K.; Krishnamurthy, Ramesh; Chandra, Suresh

    1997-01-01

    This study involves numerical modeling of a normal sonic jet injection into a hypersonic cross-flow. The numerical code used for simulation is GASP (General Aerodynamic Simulation Program.) First the numerical predictions are compared with well established solutions for compressible laminar flow. Then comparisons are made with non-injection test case measurements of surface pressure distributions. Good agreement with the measurements is observed. Currently comparisons are underway with the injection case. All the experimental data were generated at the Southampton University Light Piston Isentropic Compression Tube.

  6. A Level-set based framework for viscous simulation of particle-laden supersonic flows

    NASA Astrophysics Data System (ADS)

    Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.

    2017-06-01

    Particle-laden supersonic flows are important in natural and industrial processes, such as volcanic eruptions, explosions, and the pneumatic conveyance of particles in material processing. Numerical study of such high-speed particle-laden flows at the mesoscale calls for a numerical framework which allows simulation of supersonic flow around multiple moving solid objects. Only a few efforts have been made toward the development of numerical frameworks for viscous simulation of particle-fluid interaction in the supersonic flow regime. The current work presents a Cartesian grid based sharp-interface method for viscous simulations of the interaction between supersonic flows and moving rigid particles. The no-slip boundary condition is imposed at the solid-fluid interfaces using a modified ghost fluid method (GFM). The current method is validated against the similarity solution of the compressible boundary layer over a flat plate and a benchmark numerical solution for steady supersonic flow over a cylinder. Further validation is carried out against benchmark numerical results for shock-induced lift-off of a cylinder in a shock tube. A 3D simulation of steady supersonic flow over a sphere is performed to compare the numerically obtained drag coefficient with experimental results. A particle-resolved viscous simulation of shock interaction with a cloud of particles is performed to demonstrate that the current method is suitable for large-scale particle-resolved simulations of particle-laden supersonic flows.

  7. A novel methodology for litho-to-etch pattern fidelity correction for SADP process

    NASA Astrophysics Data System (ADS)

    Chen, Shr-Jia; Chang, Yu-Cheng; Lin, Arthur; Chang, Yi-Shiang; Lin, Chia-Chi; Lai, Jun-Cheng

    2017-03-01

    For 2x nm node semiconductor devices and beyond, more aggressive resolution enhancement techniques (RETs) such as source-mask co-optimization (SMO), litho-etch-litho-etch (LELE) and self-aligned double patterning (SADP) are utilized for low-k1 lithography processes. In the SADP process, pattern fidelity is extremely critical since a slight photoresist (PR) top loss or profile roughness may impact the later core trim process, due to its sensitivity to the environment. During the subsequent sidewall formation and core removal processes, the core trim profile weakness may worsen and induce serious defects that affect the final electrical performance. To predict PR top loss, a rigorous lithography simulation can provide a reference to modify mask layouts, but it takes a much longer run time and is not capable of full-field mask data preparation. In this paper, we first introduce an algorithm which utilizes multiple intensity levels from conventional aerial image simulation to assess the physical profile through the lithography and core trim etching steps. Subsequently, a novel correction method is utilized to improve the post-etch pattern fidelity without sacrificing the lithography process window. The results not only matched the PR top loss in rigorous lithography simulation, but also agreed with post-etch wafer data. Furthermore, this methodology can also be incorporated with OPC and post-OPC verification to improve the core trim profile and final pattern fidelity at an early stage.

  8. Hybrid Particle-Element Simulation of Impact on Composite Orbital Debris Shields

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    2004-01-01

    This report describes the development of new numerical methods and new constitutive models for the simulation of hypervelocity impact effects on spacecraft. The research has included parallel implementation of the numerical methods and material models developed under the project. Validation work has included both one dimensional simulations, for comparison with exact solutions, and three dimensional simulations of published hypervelocity impact experiments. The validated formulations have been applied to simulate impact effects in a velocity and kinetic energy regime outside the capabilities of current experimental methods. The research results presented here allow for the expanded use of numerical simulation, as a complement to experimental work, in future design of spacecraft for hypervelocity impact effects.

  9. Difficulties in applying numerical simulations to an evaluation of occupational hazards caused by electromagnetic fields

    PubMed Central

    Zradziński, Patryk

    2015-01-01

    Due to the various physical mechanisms of interaction between a worker's body and the electromagnetic field at various frequencies, the principles of numerical simulations have been discussed for three areas of worker exposure: to low frequency magnetic fields, to low and intermediate frequency electric fields, and to radiofrequency electromagnetic fields. This paper presents the identified difficulties in applying numerical simulations to evaluate physical estimators of direct and indirect effects of exposure to electromagnetic fields at various frequencies. Exposure of workers operating a plastic sealer has been taken as an example scenario of electromagnetic field exposure at the workplace for discussion of those difficulties in applying numerical simulations. The following difficulties in reliable numerical simulations of workers' exposure to the electromagnetic field have been considered: workers' body models (posture, dimensions, shape and grounding conditions), working environment models (objects most influencing electromagnetic field distribution) and an analysis of parameters for which exposure limitations are specified in international guidelines and standards. PMID:26323781

  10. Numerical modeling of separated flows at moderate Reynolds numbers appropriate for turbine blades and unmanned aero vehicles

    NASA Astrophysics Data System (ADS)

    Castiglioni, Giacomo

    Flows over airfoils and blades in rotating machinery, for unmanned and micro-aerial vehicles, wind turbines, and propellers consist of a laminar boundary layer near the leading edge that is often followed by a laminar separation bubble and transition to turbulence further downstream. Typical Reynolds-averaged Navier-Stokes turbulence models are inadequate for such flows. Direct numerical simulation is the most reliable, but is also the most computationally expensive alternative. This work assesses the capability of immersed boundary methods and large eddy simulations to reduce the computational requirements for such flows and still provide high quality results. Two-dimensional and three-dimensional simulations of a laminar separation bubble on a NACA-0012 airfoil at Re_c = 5 × 10^4 and at 5° of incidence have been performed with an immersed boundary code and a commercial code using body-fitted grids. Several sub-grid scale models have been implemented in both codes and their performance evaluated. For the two-dimensional simulations with the immersed boundary method the results show good agreement with the direct numerical simulation benchmark data for the pressure coefficient Cp and the friction coefficient Cf, but only when using dissipative numerical schemes. There is evidence that this behavior can be attributed to the ability of dissipative schemes to damp numerical noise coming from the immersed boundary. For the three-dimensional simulations the results show a good prediction of the separation point, but an inaccurate prediction of the reattachment point unless full direct numerical simulation resolution is used. The commercial code shows good agreement with the direct numerical simulation benchmark data in both two- and three-dimensional simulations, but the presence of significant, unquantified numerical dissipation prevents a conclusive assessment of the actual prediction capabilities of very coarse large eddy simulations with low order schemes in general cases. Additionally, a two-dimensional sweep of angles of attack from 0° to 5° is performed showing a qualitative prediction of the jump in lift and drag coefficients due to the appearance of the laminar separation bubble. The numerical dissipation inhibits the predictive capabilities of large eddy simulations whenever it is of the same order of magnitude or larger than the sub-grid scale dissipation. The need to estimate the numerical dissipation is most pressing for low-order methods employed by commercial computational fluid dynamics codes. Following the recent work of Schranner et al., the equations and procedure for estimating the numerical dissipation rate and the numerical viscosity in a commercial code are presented. The method allows for the computation of the numerical dissipation rate and numerical viscosity in the physical space for arbitrary sub-domains in a self-consistent way, using only information provided by the code in question. The method is first tested for a three-dimensional Taylor-Green vortex flow in a simple cubic domain and compared with benchmark results obtained using an accurate, incompressible spectral solver. Afterwards the same procedure is applied for the first time to a realistic flow configuration, specifically to the laminar separation bubble flow over a NACA 0012 airfoil discussed above.
The method appears to be quite robust and its application reveals that for the code and the flow in question the numerical dissipation can be significantly larger than the viscous dissipation or the dissipation of the classical Smagorinsky sub-grid scale model, confirming the previously qualitative finding.
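
    The budget-residual idea behind this kind of estimate can be reduced to a periodic one-dimensional viscous Burgers problem solved with a deliberately dissipative first-order upwind scheme: whatever part of the kinetic-energy decay is not explained by the resolved viscous dissipation is attributed to the numerics. The grid, viscosity, and scheme below are illustrative and unrelated to the airfoil computations.

      import numpy as np

      # 1D periodic viscous Burgers equation, first-order upwind convection + central diffusion.
      N, L, nu = 200, 2.0 * np.pi, 0.02
      dx = L / N
      x = np.arange(N) * dx
      dt = 2e-4
      u = 1.0 + 0.5 * np.sin(x)          # positive everywhere, so simple upwinding suffices

      def rhs(u):
          dudx_up = (u - np.roll(u, 1)) / dx                       # upwind derivative (u > 0)
          d2udx2 = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
          return -u * dudx_up + nu * d2udx2

      E_old = 0.5 * np.sum(u ** 2) * dx
      num_diss, visc_diss = 0.0, 0.0
      for step in range(5000):
          u = u + dt * rhs(u)                                      # forward Euler step
          E_new = 0.5 * np.sum(u ** 2) * dx
          dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
          eps_visc = nu * np.sum(dudx ** 2) * dx                   # resolved viscous dissipation rate
          eps_num = -(E_new - E_old) / dt - eps_visc               # unexplained decay = numerical dissipation
          visc_diss += eps_visc * dt
          num_diss += eps_num * dt
          E_old = E_new

      print("viscous dissipation: %.4f   numerical dissipation: %.4f" % (visc_diss, num_diss))

    On a coarse grid with an upwind scheme the numerical contribution is comparable to, or exceeds, the physical one, which is the situation the abstract warns about for coarse large eddy simulations with low-order codes.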

  11. Mspire-Simulator: LC-MS shotgun proteomic simulator for creating realistic gold standard data.

    PubMed

    Noyce, Andrew B; Smith, Rob; Dalgleish, James; Taylor, Ryan M; Erb, K C; Okuda, Nozomu; Prince, John T

    2013-12-06

    The most important step in any quantitative proteomic pipeline is feature detection (aka peak picking). However, generating quality hand-annotated data sets to validate the algorithms, especially for lower abundance peaks, is nearly impossible. An alternative for creating gold standard data is to simulate it with features closely mimicking real data. We present Mspire-Simulator, a free, open-source shotgun proteomic simulator that goes beyond previous simulation attempts by generating LC-MS features with realistic m/z and intensity variance along with other noise components. It also includes machine-learned models for retention time and peak intensity prediction and a genetic algorithm to custom fit model parameters for experimental data sets. We show that these methods are applicable to data from three different mass spectrometers, including two fundamentally different types, and show visually and analytically that simulated peaks are nearly indistinguishable from actual data. Researchers can use simulated data to rigorously test quantitation software, and proteomic researchers may benefit from overlaying simulated data on actual data sets.

  12. Numerical simulation of a flow past a triangular sail-type blade of a wind generator using the ANSYS FLUENT software package

    NASA Astrophysics Data System (ADS)

    Kusaiynov, K.; Tanasheva, N. K.; Min'kov, L. L.; Nusupbekov, B. R.; Stepanova, Yu. O.; Rozhkova, A. V.

    2016-02-01

    An air flow past a single triangular sail-type blade of a wind turbine is analyzed by numerical simulation for low velocities of the incoming flow. The results of the numerical simulation indicate a monotonic increase in the drag force and the lift force as functions of the incoming flow velocity; empirical dependences for these quantities are obtained.

  13. Coordination, Data Management and Enhancement of the International Arctic Buoy Programme (IABP) a US Interagency Arctic Buoy Programme \\201USIABP\\202 contribution to the IABP

    DTIC Science & Technology

    2013-09-30

    ... data from the IABP); 2) forecasting weather and sea ice conditions; 3) forcing, assimilation and validation of global weather and climate models ... Dr. Ignatius G. Rigor, Polar ... ice motion. These observations are assimilated into Numerical Weather Prediction (NWP) models that are used to forecast weather on synoptic time ...

  14. Coordination, Data Management and Enhancement of the International Arctic Buoy Programme (IABP), A US Interagency Arctic Buoy Programme (USIABP) Contribution to the IABP

    DTIC Science & Technology

    2012-09-30

    Dr. Ignatius G. Rigor, Polar ... observations of surface meteorology and ice motion. These observations are assimilated into Numerical Weather Prediction (NWP) models that are used to ... distribution of sea ice. Over the Arctic Ocean, this fundamental observing network is maintained by the IABP, and is a critical component of the ...

  15. Performance evaluation of a bigrating as a beam splitter.

    PubMed

    Hwang, R B; Peng, S T

    1997-04-01

    The design of a bigrating for use as a beam splitter is presented. It is based on a rigorous formulation of plane-wave scattering by a bigrating that is composed of two individual gratings oriented in different directions. Numerical calculations are carried out to optimize the design of a bigrating to perform 1 × 4 beam splitting in two dimensions and to examine its fabrication and operation tolerances. It is found that a bigrating can be designed to perform two functions: beam splitting and polarization purification.

  16. Solving the multi-frequency electromagnetic inverse source problem by the Fourier method

    NASA Astrophysics Data System (ADS)

    Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi

    2018-07-01

    This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.
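
    In the simplest scalar setting the idea reduces to the observation that multi-frequency data supply the Fourier coefficients of the source, so a truncated Fourier series reconstructs it; the one-dimensional toy below is only meant to illustrate that structure, not the full vector Maxwell analysis of the paper.

      import numpy as np

      # Toy 1D illustration: multi-frequency data give the Fourier coefficients of the source,
      # so a truncated Fourier series reconstructs it.
      N, K = 512, 12
      x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
      f_true = np.exp(-8.0 * (x - np.pi) ** 2)          # unknown, effectively compactly supported source

      # "Measurements": d_k = (1/2pi) * integral of f(x) * exp(-i k x) dx for |k| <= K
      ks = np.arange(-K, K + 1)
      data = np.array([np.sum(f_true * np.exp(-1j * k * x)) * (x[1] - x[0]) for k in ks]) / (2.0 * np.pi)

      # Fourier-method reconstruction: sum the truncated series
      f_rec = np.real(sum(d * np.exp(1j * k * x) for k, d in zip(ks, data)))
      print("max reconstruction error with %d frequencies: %.3e" % (2 * K + 1, np.abs(f_rec - f_true).max()))

    The accuracy is limited only by the truncation of the frequency band, which mirrors the role of the multi-frequency measurements in the Maxwell setting.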

  17. Boundary acquisition for setup of numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diegert, C.

    1997-12-31

    The author presents a work flow diagram that includes a path that begins with taking experimental measurements and ends with obtaining insight from results produced by numerical simulation. Two examples illustrate this path: (1) Three-dimensional imaging measurement at micron scale, using X-ray tomography, provides information on the boundaries of irregularly-shaped alumina oxide particles held in an epoxy matrix. A subsequent numerical simulation predicts the electrical field concentrations that would occur in the observed particle configurations. (2) Three-dimensional imaging measurement at meter scale, again using X-ray tomography, provides information on the boundaries of fossilized bone fragments in a Parasaurolophus crest recently discovered in New Mexico. A subsequent numerical simulation predicts the acoustic response of the elaborate internal structure of nasal passageways defined by the fossil record. The author must both add value and change the format of the three-dimensional imaging measurements before they define the geometric boundary initial conditions for the automatic mesh generation and subsequent numerical simulation. The author applies a variety of filters and statistical classification algorithms to estimate the extents of the structures relevant to the subsequent numerical simulation, and captures these extents as faceted geometries. The author will describe the particular combination of manual and automatic methods used in the above two examples.

  18. Spike-Nosed Bodies and Forward Injected Jets in Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Gilinsky, M.; Washington, C.; Blankson, I. M.; Shvets, A. I.

    2002-01-01

    The paper contains new numerical simulation and experimental test results of blunt body drag reduction using thin spikes mounted in front of a body and one- or two-phase jets injected against a supersonic flow. Numerical simulations utilizing the NASA CFL3D code were conducted at the Hampton University Fluid Mechanics and Acoustics Laboratory (FM&AL) and experimental tests were conducted using the facilities of the IM/MSU Aeromechanics and Gas Dynamics Laboratory. Previous results were presented at the 37th AIAA/ASME/SAE/ASEE Joint Propulsion Conference. Those results were based on some experimental and numerical simulation tests for supersonic flow around spike-nosed or shell-nosed bodies, and numerical simulations were conducted only for a single spike-nosed or shell-nosed body at zero attack angle, alpha=0. In this paper, experimental test results of gas, liquid and solid particle jet injection against a supersonic flow are presented. In addition, numerical simulation results for supersonic flow around a multiple spike-nosed body with non-zero attack angles and with a gas and solid particle forward jet injection are included. Aerodynamic coefficients: drag, C(sub D), lift, C(sub L), and longitudinal momentum, M(sub z), obtained by numerical simulation and experimental tests are compared and show good agreement.

  20. ULF Waves in the Ionospheric Alfven Resonator: Modeling of MICA Observations

    NASA Astrophysics Data System (ADS)

    Streltsov, A. V.; Tulegenov, B.

    2017-12-01

    We present results from a numerical study of the physical processes responsible for the generation of small-scale, intense electromagnetic structures in the ultra-low-frequency range frequently observed in the close vicinity of bright discrete auroral arcs. In particular, our research is focused on the role of the ionosphere in generating these structures. A significant body of observations demonstrates that small-scale electromagnetic waves with frequencies below 1 Hz are detected at high latitudes where the large-scale, downward magnetic field-aligned current (FAC) interacts with the ionosphere. Some theoretical studies suggest that these waves can be generated by the ionospheric feedback instability (IFI) inside the ionospheric Alfven resonator (IAR). The IAR is the region in the low-altitude magnetosphere bounded by the strong gradient in the Alfven speed at high altitude and the conducting bottom of the ionosphere (ionospheric E-region) at low altitude. To study ULF waves in this region we use a numerical model developed from reduced two-fluid MHD equations describing shear Alfven waves in the ionosphere and magnetosphere of the Earth. The active ionospheric feedback on the structure and amplitude of the magnetic FACs that interact with the ionosphere is implemented through the ionospheric boundary conditions that link the parallel current density with the plasma density and the perpendicular electric field in the ionosphere. Our numerical results are compared with the in situ measurements performed by the Magnetosphere-Ionosphere Coupling in the Alfven Resonator (MICA) sounding rocket, launched on February 19, 2012 from Poker Flat Research Range in Alaska to measure fields and particles during a passage through a discrete auroral arc. Parameters of the simulations are chosen to match actual MICA parameters, allowing the comparison to be made in the most precise and rigorous way. Waves generated in the numerical model have frequencies between 0.30 and 0.45 Hz, while MICA measured similar waves in the range from 0.18 to 0.50 Hz. These results prove that the IFI, driven inside the IAR by a system of large-scale upward-downward currents, is the main mechanism responsible for the generation of small-scale intense ULF waves in the vicinity of discrete auroral arcs.

  1. Numerical Uncertainties in the Simulation of Reversible Isentropic Processes and Entropy Conservation.

    NASA Astrophysics Data System (ADS)

    Johnson, Donald R.; Lenzen, Allen J.; Zapotocny, Tom H.; Schaack, Todd K.

    2000-11-01

    A challenge common to weather, climate, and seasonal numerical prediction is the need to simulate accurately reversible isentropic processes in combination with appropriate determination of sources/sinks of energy and entropy. Ultimately, this task includes the distribution and transport of internal, gravitational, and kinetic energies, the energies of water substances in all forms, and the related thermodynamic processes of phase changes involved with clouds, including condensation, evaporation, and precipitation processes. All of the processes noted above involve the entropies of matter, radiation, and chemical substances, conservation during transport, and/or changes in entropies by physical processes internal to the atmosphere. With respect to the entropy of matter, a means to study a model's accuracy in simulating internal hydrologic processes is to determine its capability to simulate the appropriate conservation of potential and equivalent potential temperature as surrogates of dry and moist entropy under reversible adiabatic processes in which clouds form, evaporate, and precipitate. In this study, a statistical strategy utilizing the concept of `pure error' is set forth to assess the numerical accuracies of models in simulating reversible processes during 10-day integrations of the global circulation corresponding to the global residence time of water vapor. During the integrations, the sums of squared differences between the equivalent potential temperature numerically simulated by the governing equations of mass, energy, water vapor, and cloud water and a proxy equivalent potential temperature numerically simulated as a conservative property are monitored. Inspection of the differences between the simulated and proxy equivalent potential temperatures in time and space, and of the relative frequency distribution of the differences, details bias and random errors that develop from nonlinear numerical inaccuracies in the advection and transport of potential temperature and water substances within the global atmosphere. A series of nine global simulations employing various versions of the Community Climate Models CCM2 and CCM3 (all-Eulerian spectral numerics, all semi-Lagrangian numerics, and mixed Eulerian spectral and semi-Lagrangian numerics) and the University of Wisconsin-Madison (UW) isentropic-sigma gridpoint model provides an interesting comparison of numerical accuracies in the simulation of reversibility. By day 10, large bias and random differences were identified in the simulation of reversible processes in all of the models except for the UW isentropic-sigma model. The CCM2 and CCM3 simulations yielded systematic differences that varied zonally, vertically, and temporally. Within the comparison, the UW isentropic-sigma model was superior in transporting water vapor and cloud water/ice and in simulating reversibility involving the conservation of dry and moist entropy. The only relative frequency distribution of differences that appeared optimal, in that the distribution remained unbiased and equilibrated with minimal variance as it remained statistically stationary, was the distribution from the UW isentropic-sigma model. All other distributions revealed nonstationary characteristics with spreading and/or shifting of the maxima as the biases and variances of the numerical differences between the simulated and proxy equivalent potential temperatures amplified.

  2. Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The principal objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. A summary of work accomplished during the last six months is presented.

  3. Understanding Islamist political violence through computational social simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G

    Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data are not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.

  4. Forecasting production in Liquid Rich Shale plays

    NASA Astrophysics Data System (ADS)

    Nikfarman, Hanieh

    Production from Liquid Rich Shale (LRS) reservoirs is taking center stage in the exploration and production of unconventional reservoirs. Production from the low and ultra-low permeability LRS plays is possible only through multi-fractured horizontal wells (MFHWs). There is no existing workflow that is applicable to forecasting multi-phase production from MFHWs in LRS plays. This project presents a practical and rigorous workflow for forecasting multiphase production from MFHWs in LRS reservoirs. There has been much effort in developing workflows and methodology for forecasting in tight/shale plays in recent years. The existing workflows, however, are applicable only to single-phase flow, and are primarily used in shale gas plays. These methodologies do not apply to the multi-phase flow that is inevitable in LRS plays. To account for the complexities of multiphase flow in MFHWs, the only available technique is dynamic modeling in compositional numerical simulators. These are time consuming and not practical when it comes to forecasting production and estimating reserves for a large number of producers. A workflow was developed and validated by compositional numerical simulation. The workflow honors the physics of flow, and is sufficiently accurate yet practical so that an analyst can readily apply it to forecast production and estimate reserves for a large number of producers in a short period of time. To simplify the complex multiphase flow in MFHWs, the workflow divides production into an initial period where large production and pressure declines are expected, and a subsequent period where production decline may converge into a common trend for a number of producers across an area of interest in the field. The initial period assumes that production is dominated by single-phase flow of oil and uses the tri-linear flow model of Erdal Ozkan to estimate the production history. Readily available commercial software can simulate flow and forecast production in this period. In the subsequent period, dimensionless rate and dimensionless time functions are introduced that help identify the transition from the initial period into the subsequent period. The production trends in terms of the dimensionless parameters converge for a range of rock permeability and stimulation intensity. This helps forecast production beyond the transition to the end of the life of the well. This workflow is applicable to a single fluid system.

  5. Code Validation Studies of High-Enthalpy Flows

    DTIC Science & Technology

    2006-12-01

    ... stage of future hypersonic vehicles. The development and design of such vehicles is aided by the use of experimentation and numerical simulation ... numerical predictions and experimental measurements. We have studied extensively hypersonic double-cone flows with and in ... the experimental measurements and the numerical predictions. When we accounted for that effect in numerical simulations, and also augmented the ...

  6. Towards a supported common NEAMS software stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormac Garvey

    2012-04-01

    The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. These new breeds of simulation codes will include rigorous verification, validation and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier process in the acquisition of licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics and the added computational resources to quantify the accuracy/quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out the production simulation runs.

  7. Growth and wall-transpiration control of nonlinear unsteady Görtler vortices forced by free-stream vortical disturbances

    NASA Astrophysics Data System (ADS)

    Marensi, Elena; Ricco, Pierre

    2017-11-01

    The generation, nonlinear evolution, and wall-transpiration control of unsteady Görtler vortices in an incompressible boundary layer over a concave plate are studied theoretically and numerically. Görtler rolls are initiated and driven by free-stream vortical perturbations of which only the low-frequency components are considered because they penetrate the most into the boundary layer. The formation and development of the disturbances are governed by the nonlinear unsteady boundary-region equations with the centrifugal force included. These equations are subject to appropriate initial and outer boundary conditions, which account for the influence of the upstream and free-stream forcing in a rigorous and mutually consistent manner. Numerical solutions show that the stabilizing effect of nonlinearity, which also occurs in flat-plate boundary layers, is significantly enhanced in the presence of centrifugal forces. Sufficiently downstream, the nonlinear vortices excited at different free-stream turbulence intensities Tu saturate at the same level, proving that the initial amplitude of the forcing becomes unimportant. At low Tu, the disturbance exhibits a quasi-exponential growth with the growth rate being intensified for more curved plates and for lower frequencies. At higher Tu, in the typical range of turbomachinery applications, the Görtler vortices do not undergo a modal stage as nonlinearity saturates rapidly, and the wall curvature does not affect the boundary-layer response. Good quantitative agreement with data from direct numerical simulations and experiments is obtained. Steady spanwise-uniform and spanwise-modulated zero-mass-flow-rate wall transpiration is shown to attenuate the growth of the Görtler vortices significantly. A novel modified version of the Fukagata-Iwamoto-Kasagi identity, used for the first time to study a transitional flow, reveals which terms in the streamwise momentum balance are most affected by the wall transpiration, thus offering insight into the increased nonlinear growth of the wall-shear stress.

  8. A new framework for climate sensitivity and prediction: a modelling perspective

    NASA Astrophysics Data System (ADS)

    Ragone, Francesco; Lucarini, Valerio; Lunkeit, Frank

    2016-03-01

    The sensitivity of climate models to increasing CO2 concentration and the climate response at decadal time-scales are still major factors of uncertainty for the assessment of the long and short term effects of anthropogenic climate change. While the relative slow progress on these issues is partly due to the inherent inaccuracies of numerical climate models, this also hints at the need for stronger theoretical foundations to the problem of studying climate sensitivity and performing climate change predictions with numerical models. Here we demonstrate that it is possible to use Ruelle's response theory to predict the impact of an arbitrary CO2 forcing scenario on the global surface temperature of a general circulation model. Response theory puts the concept of climate sensitivity on firm theoretical grounds, and addresses rigorously the problem of predictability at different time-scales. Conceptually, these results show that performing climate change experiments with general circulation models is a well defined problem from a physical and mathematical point of view. Practically, these results show that considering one single CO2 forcing scenario is enough to construct operators able to predict the response of climatic observables to any other CO2 forcing scenario, without the need to perform additional numerical simulations. We also introduce a general relationship between climate sensitivity and climate response at different time scales, thus providing an explicit definition of the inertia of the system at different time scales. This technique allows also for studying systematically, for a large variety of forcing scenarios, the time horizon at which the climate change signal (in an ensemble sense) becomes statistically significant. While what we report here refers to the linear response, the general theory allows for treating nonlinear effects as well. These results pave the way for redesigning and interpreting climate change experiments from a radically new perspective.
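
    A toy version of the response-operator argument looks as follows: take a (here prescribed) linear response function, standing in for one diagnosed from a single forcing experiment with a GCM ensemble, and predict the response to a different forcing scenario by convolution, without any new simulation. The two-time-scale Green's function and the scenarios below are invented for illustration.

      import numpy as np

      dt, T = 0.1, 200.0
      t = np.arange(0.0, T, dt)

      # Stand-in Green's function of the global-mean temperature response (two time scales).
      G = 0.4 * np.exp(-t / 4.0) + 0.1 * np.exp(-t / 80.0)

      def respond(forcing):
          # Linear response: R(t) = integral from 0 to t of G(t - s) * f(s) ds
          return np.convolve(forcing, G)[: len(t)] * dt

      # "Training" scenario: abrupt (step) CO2 forcing; "new" scenario: a slow ramp.
      f_step = np.ones_like(t)
      f_ramp = 0.01 * t

      R_step = respond(f_step)
      R_ramp = respond(f_ramp)          # predicted without performing any additional simulation
      print("step response at t=100: %.2f, ramp response at t=100: %.2f"
            % (R_step[int(100 / dt)], R_ramp[int(100 / dt)]))

    In practice the response function itself would be estimated from the ensemble-mean output of the single forcing experiment, and the same convolution then covers any other forcing scenario within the linear regime.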

  9. Multi-dimensional high order essentially non-oscillatory finite difference methods in generalized coordinates

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1992-01-01

    The nonlinear stability of compact schemes for shock calculations is investigated. In recent years compact schemes were used in various numerical simulations including direct numerical simulation of turbulence. However to apply them to problems containing shocks, one has to resolve the problem of spurious numerical oscillation and nonlinear instability. A framework to apply nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total variation (1D) or maximum norm (multi D) stable and produces nice numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to SIAM Journal on Numerical Analysis. Research was continued on issues related to two and three dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, a preparation for the full combustion simulation; and direct numerical simulation of compressible sheared turbulence.
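
    As an illustration of limiting applied to local cell means, the sketch below uses a generic MUSCL reconstruction with a minmod limiter for one-dimensional linear advection; it conveys the flavor of nonlinear limiting but is not the compact-scheme formulation developed in the report.

      import numpy as np

      def minmod(a, b):
          return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      # Linear advection u_t + u_x = 0 on a periodic grid, MUSCL with minmod-limited slopes.
      N, cfl = 200, 0.5
      dx = 1.0 / N
      dt = cfl * dx
      x = (np.arange(N) + 0.5) * dx
      u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)    # square wave: the limiter suppresses oscillations

      for step in range(int(1.0 / dt)):                # advect once around the periodic domain
          s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u) / dx    # limited slope in each cell
          u_face = u + 0.5 * dx * s                    # reconstructed value at the right face (upwind side)
          flux = u_face                                # advection speed = 1
          u = u - dt / dx * (flux - np.roll(flux, 1))

      print("min/max after one period (no new extrema expected):", u.min(), u.max())

    The solution is smeared but stays within the initial bounds, which is the behavior nonlinear limiting is meant to guarantee near discontinuities.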

  10. The International Arctic Buoy Programme (IABP)

    NASA Astrophysics Data System (ADS)

    Rigor, I. G.; Ortmeyer, M.

    2003-12-01

    The Arctic has undergone dramatic changes in weather, climate and environment. Many of these changes were first observed and studied using data from the International Arctic Buoy Programme (IABP). For example, IABP data were fundamental to Walsh et al. (1996) showing that atmospheric pressure has decreased, to Rigor et al. (2000) showing that air temperatures have increased, and to Proshutinsky and Johnson (1997), Steele and Boyd (1998), Kwok (2000), and Rigor et al. (2002) showing that the clockwise circulation of sea ice and the ocean has weakened. All these results relied heavily on data from the IABP. In addition to supporting these studies of climate change, the IABP observations are used to forecast weather and ice conditions, to validate satellite retrievals of environmental variables, and to force, validate, and initialize numerical models. Over 350 papers have been written using data from the IABP. The observations and datasets of the IABP are among the cornerstones of environmental forecasting and research in the Arctic.

  11. A Rigorous Investigation on the Ground State of the Penson-Kolb Model

    NASA Astrophysics Data System (ADS)

    Yang, Kai-Hua; Tian, Guang-Shan; Han, Ru-Qi

    2003-05-01

    By using either numerical calculations or analytical methods, such as the bosonization technique, the ground state of the Penson-Kolb model has previously been studied by several groups. Some physicists have argued that, as far as the existence of superconductivity in this model is concerned, it is canonically equivalent to the negative-U Hubbard model; others have disagreed. In the present paper, we investigate this model by an independent and rigorous approach. We show that the ground state of the Penson-Kolb model is nondegenerate and has a nonvanishing overlap with the ground state of the negative-U Hubbard model. Furthermore, we also show that the ground states of both models have the same good quantum numbers and may have superconducting long-range order at the same momentum q = 0. Our results support the equivalence between these models. The project was partially supported by the Special Funds for Major State Basic Research Projects (G20000365) and by the National Natural Science Foundation of China under Grant No. 10174002.
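
    For orientation, a commonly quoted form of the Penson-Kolb Hamiltonian combines single-electron hopping with a pair-hopping interaction; sign and normalization conventions differ between papers, so the expression below should be read as indicative rather than as the exact convention used by the authors.

```latex
H_{\mathrm{PK}} = -t \sum_{i,\sigma} \left( c^{\dagger}_{i\sigma} c_{i+1,\sigma} + \mathrm{h.c.} \right)
  - J \sum_{i} \left( c^{\dagger}_{i\uparrow} c^{\dagger}_{i\downarrow} c_{i+1,\downarrow} c_{i+1,\uparrow} + \mathrm{h.c.} \right)
```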

  12. A proposed study of multiple scattering through clouds up to 1 THz

    NASA Technical Reports Server (NTRS)

    Gerace, G. C.; Smith, E. K.

    1992-01-01

    A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.

  13. Diffraction-based overlay measurement on dedicated mark using rigorous modeling method

    NASA Astrophysics Data System (ADS)

    Lu, Hailiang; Wang, Fan; Zhang, Qingyun; Chen, Yonghui; Zhou, Chang

    2012-03-01

    Diffraction-based overlay (DBO) has been widely evaluated by numerous authors, and the results show that DBO can provide better performance than imaging-based overlay (IBO). However, DBO has its own problems. As is well known, modeling-based DBO (mDBO) faces the challenges of low measurement sensitivity and crosstalk between various structure parameters, which may result in poor accuracy and precision. Meanwhile, the main obstacle encountered by empirical DBO (eDBO) is that several pads must be employed to gain sufficient information on overlay-induced variations of the diffraction signature, which consumes more wafer space and more measuring time. eDBO may also suffer from mark profile asymmetry caused by processing. In this paper, we propose an alternative DBO technology that employs a dedicated overlay mark and takes a rigorous modeling approach. This technology needs only two or three pads for each direction, which saves both space and time. While reducing the overlay measurement error induced by mark profile asymmetry, this technology is expected to be as accurate and precise as scatterometry technologies.
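
    The reason eDBO needs several pads can be made concrete with the standard linear-response argument: for small overlay OV, the asymmetry A between the +1 and -1 diffraction orders of a biased grating-over-grating pad is approximately proportional to the total shift, and with two pads carrying deliberate biases +d and -d the unknown proportionality constant cancels. The relation below is the generic textbook one, not necessarily the convention of this paper.

```latex
A_{+} \approx K\,(\mathrm{OV} + d), \qquad A_{-} \approx K\,(\mathrm{OV} - d)
\quad\Longrightarrow\quad
\mathrm{OV} \approx d\,\frac{A_{+} + A_{-}}{A_{+} - A_{-}}
```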

  14. A comparative study between shielded and open coplanar waveguide discontinuities

    NASA Technical Reports Server (NTRS)

    Dib, Nihad I.; Harokopus, W. P., Jr.; Ponchak, G. E.; Katehi, L. P. B.

    1993-01-01

    A comparative study between open and shielded coplanar waveguide (CPW) discontinuities is presented. The space-domain integral equation method is used to characterize several discontinuities, such as the open-end CPW and CPW series stubs. Two different geometries of CPW series stubs (straight and bent stubs) are compared with respect to resonant frequency and radiation loss. In addition, the radiation loss incurred by different CPW shunt stubs is evaluated experimentally. The notion of forced radiation simulation is presented, and the results of such a simulation are compared to the actual radiation loss obtained rigorously. It is shown that such a simulation cannot give reliable results concerning radiation loss from printed circuits.

  15. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  16. Studying Turbulence Using Numerical Simulation Databases - X Proceedings of the 2004 Summer Program

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Mansour, Nagi N.

    2004-01-01

    This Proceedings volume contains 32 papers that span a wide range of topics that reflect the ubiquity of turbulence. The papers have been divided into six groups: 1) Solar Simulations; 2) Magnetohydrodynamics (MHD); 3) Large Eddy Simulation (LES) and Numerical Simulations; 4) Reynolds Averaged Navier Stokes (RANS) Modeling and Simulations; 5) Stability and Acoustics; 6) Combustion and Multi-Phase Flow.

  17. Numerical Simulation of Transit-Time Ultrasonic Flowmeters by a Direct Approach.

    PubMed

    Luca, Adrian; Marchiano, Regis; Chassaing, Jean-Camille

    2016-06-01

    This paper deals with the development of a computational code for the numerical simulation of wave propagation through domains with complex geometry consisting of both solids and moving fluids. The emphasis is on the numerical simulation of ultrasonic flowmeters (UFMs) by modeling the wave propagation in solids with the equations of linear elasticity (ELE) and in fluids with the linearized Euler equations (LEEs). This approach requires high-performance computing because of the large number of degrees of freedom and the long propagation distances. Therefore, the numerical method should be chosen with care. In order to minimize the numerical dissipation which may occur in this kind of configuration, the numerical method employed here is the nodal discontinuous Galerkin (DG) method, which is also well suited for parallel computing. To speed up the code, almost all the computational stages have been implemented to run on graphics processing units (GPUs) by using the compute unified device architecture (CUDA) programming model from NVIDIA. This approach has been validated and then used for the two-dimensional simulation of gas UFMs. The large contrast of acoustic impedance characteristic of gas UFMs makes their simulation a real challenge.
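
    The discontinuous Galerkin discretization mentioned here can be illustrated with a deliberately small sketch: a piecewise-linear (P1) modal DG scheme for 1D linear advection with an upwind numerical flux and periodic boundaries. This is a toy stand-in for the paper's nodal ELE/LEE solver; all names and parameters below are illustrative assumptions.

```python
import numpy as np

def dg_p1_rhs(u0, u1, a, dx):
    """Semi-discrete RHS of a P1 modal DG scheme for u_t + a u_x = 0 (a > 0).
    In each cell u(xi) = u0 + u1 * xi with xi in [-1, 1]; the upwind flux
    takes the trace from the left cell, u* = u0 + u1 at the right face."""
    u_star = u0 + u1                       # flux state at each cell's right face
    flux_r = a * u_star                    # F_{j+1/2}
    flux_l = np.roll(flux_r, 1)            # F_{j-1/2} (periodic)
    du0 = -(flux_r - flux_l) / dx
    du1 = 3.0 * (2.0 * a * u0 - (flux_r + flux_l)) / dx
    return du0, du1

def rk2_step(u0, u1, a, dx, dt):
    """Two-stage Runge-Kutta (Heun) step for the DG semi-discretization."""
    k0, k1 = dg_p1_rhs(u0, u1, a, dx)
    v0, v1 = u0 + dt * k0, u1 + dt * k1
    l0, l1 = dg_p1_rhs(v0, v1, a, dx)
    return u0 + 0.5 * dt * (k0 + l0), u1 + 0.5 * dt * (k1 + l1)

# illustrative use: advect a smooth pulse once around a periodic domain
nx, a = 100, 1.0
xc = (np.arange(nx) + 0.5) / nx            # cell centres on [0, 1]
dx = 1.0 / nx
exact = np.exp(-200.0 * (xc - 0.5) ** 2)
u0 = exact.copy()                          # cell means
u1 = -200.0 * (xc - 0.5) * exact * dx      # modal slope coefficients
dt = 0.1 * dx / a                          # small CFL, as DG stability requires
for _ in range(int(round(1.0 / (a * dt)))):
    u0, u1 = rk2_step(u0, u1, a, dx, dt)
print(np.max(np.abs(u0 - exact)))          # small error after one full period
```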

  18. Numerical Modeling of Active Flow Control in a Boundary Layer Ingesting Offset Inlet

    NASA Technical Reports Server (NTRS)

    Allan, Brian G.; Owens, Lewis R.; Berrier, Bobby L.

    2004-01-01

    This investigation evaluates the numerical prediction of flow distortion and pressure recovery for a boundary-layer-ingesting offset inlet with active flow control devices. The numerical simulations are computed using a Reynolds-averaged Navier-Stokes code developed at NASA. The numerical results are validated by comparison to experimental wind tunnel tests conducted at NASA Langley Research Center at both low and high Mach numbers. Baseline comparisons showed good agreement between numerical and experimental results. Numerical simulations for the inlet with passive and active flow control also showed good agreement at low Mach numbers, where experimental data have already been acquired. Numerical simulations of the inlet at high Mach numbers with flow control jets showed an improvement in flow distortion. Studies on the location of the jet actuators for the high Mach number case were conducted to provide guidance for the design of a future experimental wind tunnel test.

  19. The Development of Rigorously Correct, Dynamical Pseudopotentials for Use in Mixed Quantum/Classical Molecular Dynamics Simulations in the Condensed Phase

    NASA Astrophysics Data System (ADS)

    Kahros, Argyris

    Incorporating quantum mechanics into an atomistic simulation necessarily involves solving the Schrodinger equation. Unfortunately, the computational expense associated with solving this equation scales miserably with the number of included quantum degrees of freedom (DOFs). The situation is so dire, in fact, that a molecular dynamics (MD) simulation cannot include more than a small number of quantum DOFs before it becomes computationally intractable. Thus, if one were to simulate a relatively large system, such as one containing several hundred atoms or molecules, it would be unreasonable to attempt to include the effects of all of the electrons associated with all of the components of the system. The mixed quantum/classical (MQC) approach provides a way to circumvent this issue. It treats the vast majority of the system classically, which incurs minimal computational expense, and reserves the consideration of quantum mechanical effects for only the few degrees of freedom more directly involved in the chemical phenomenon being studied. For example, if one were to study the bonding of a single diatomic molecule in the gas phase, one could employ a MQC approach by treating the nuclei of the molecule's two atoms classically (including the deeply bound, low-energy electrons that change relatively little) and solving the Schrodinger equation only for the high-energy electron(s) directly involved in the bonding of the classical cores. In such a way, one could study the bonding of this molecule in a rigorous fashion while treating only the directly related degrees of freedom quantum mechanically. Pseudopotentials are then responsible for dictating the interactions between the quantum and classical degrees of freedom. As these potentials are the sole link between the quantum and classical DOFs, their proper development is of the utmost importance. This Thesis is concerned primarily with my work on the development of novel, rigorous and dynamical pseudopotentials for use in mixed quantum/classical simulations in the condensed phase. The pseudopotentials discussed within are constructed in an ab initio fashion, without the introduction of any empiricism, and are able to exactly reproduce the results of higher-level, fully quantum mechanical Hartree-Fock calculations. A recurring theme in the following pages is overcoming the so-called frozen core approximation (FCA), which essentially comes down to creating pseudopotentials that are able to respond in some way to the local molecular environment in a rigorous fashion. The various methods and discussions that are part of this document are presented in the context of two particular systems. The first is the sodium dimer cation molecule, which serves as a proof of concept for the development of coordinate-dependent pseudopotentials and is the subject of Chapters 2 and 3. Next, the hydrated electron, the excess electron in liquid water, is tackled in an effort to address the recent controversy concerning its true structure; it is the subject of Chapters 4 and 5. In essence, the work in this Dissertation is concerned with finding new ways to overcome the problem of a lack of infinite computer processing power.
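
    A heavily simplified, one-dimensional cartoon of the mixed quantum/classical idea (not the pseudopotentials developed in the thesis) is sketched below: a single quantum "electron" is propagated adiabatically on a grid in the potential generated by two classical cores, the force on each core is taken from the Hellmann-Feynman expression, and the cores are advanced with velocity Verlet. The soft-Coulomb potential, masses, and units are placeholder choices.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# grid and placeholder parameters (atomic-style units, illustrative only)
x = np.linspace(-20.0, 20.0, 801)
dx = x[1] - x[0]
m_core, soft = 1836.0, 1.0                 # classical core mass, softening length

def pseudo(xe, R):
    """Toy soft-Coulomb 'pseudopotential' felt by the electron from a core at R."""
    return -1.0 / np.sqrt((xe - R) ** 2 + soft ** 2)

def ground_state(R1, R2):
    """Lowest eigenpair of the 1D electronic Hamiltonian on the grid
    (finite-difference kinetic term plus the two core pseudopotentials)."""
    diag = 1.0 / dx ** 2 + pseudo(x, R1) + pseudo(x, R2)
    off = -0.5 / dx ** 2 * np.ones(len(x) - 1)
    E, psi = eigh_tridiagonal(diag, off, select="i", select_range=(0, 0))
    return E[0], psi[:, 0] / np.sqrt(dx)   # normalize so sum |psi|^2 dx = 1

def forces(psi, R):
    """Hellmann-Feynman forces on the two cores, -<psi| dV/dR |psi>, plus
    their mutual soft-Coulomb repulsion (unit core charges assumed)."""
    F = np.empty(2)
    for k in range(2):
        dVdR = (R[k] - x) / ((x - R[k]) ** 2 + soft ** 2) ** 1.5
        F[k] = -np.sum(np.abs(psi) ** 2 * dVdR) * dx
    rep = (R[0] - R[1]) / ((R[0] - R[1]) ** 2 + soft ** 2) ** 1.5
    F[0] += rep
    F[1] -= rep
    return F

# a few velocity-Verlet steps for the two classical cores
R, V, dt = np.array([-1.5, 1.5]), np.zeros(2), 10.0
E, psi = ground_state(*R)
F = forces(psi, R)
for _ in range(50):
    V += 0.5 * dt * F / m_core
    R += dt * V
    E, psi = ground_state(*R)
    F = forces(psi, R)
    V += 0.5 * dt * F / m_core
print(R, E)
```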

  20. Numerical study of magnetic nanofluids flow in the round channel located in the constant magnetic field

    NASA Astrophysics Data System (ADS)

    Pryazhnikov, Maxim; Guzei, Dmitriy; Minakov, Andrey; Rodionova, Tatyana

    2017-10-01

    In this paper, the behaviour of ferromagnetic nanoparticles in a constant magnetic field is studied. For the numerical simulation we used an Euler-Lagrange two-component approach. Using numerical simulation, we studied the growth of nanoparticle deposition on the channel walls as a function of the Reynolds number and the position of the magnet. The flow pattern, the concentration field and the trajectories of the nanoparticles as functions of the Reynolds number were obtained. Good qualitative and quantitative agreement between the numerical simulations and experiments was shown.
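
    The Euler-Lagrange two-component approach mentioned here tracks individual particles through a resolved carrier flow. A heavily simplified sketch of the Lagrangian side is given below: each particle is relaxed toward the local fluid velocity by Stokes drag and pushed by a magnetophoretic force proportional to the gradient of the squared field magnitude. The carrier velocity, the field, and all parameter values are analytic placeholders, not the solver or conditions used in the paper.

```python
import numpy as np

# placeholder parameters (illustrative values, SI units; not from the paper)
rho_p, d_p = 5200.0, 20e-9               # magnetite-like density [kg/m^3], diameter [m]
mu_f = 1.0e-3                            # carrier fluid viscosity [Pa s]
chi, mu0 = 1.0, 4.0e-7 * np.pi           # effective susceptibility, vacuum permeability
V_p = np.pi * d_p ** 3 / 6.0             # particle volume
m_p = rho_p * V_p                        # particle mass
tau = m_p / (3.0 * np.pi * mu_f * d_p)   # Stokes relaxation time (very small here)

def fluid_velocity(pos):
    """Placeholder carrier flow: parabolic axial profile in a 2 mm channel."""
    y = pos[1]
    return np.array([0.1 * (1.0 - (y / 1e-3) ** 2), 0.0])

def magnetophoretic_force(pos):
    """F_m = (mu0 * chi * V_p / 2) * grad(|H|^2) for a field decaying away
    from a magnet placed below the wall at y = -1 mm (placeholder field)."""
    H0, L = 1.0e5, 2.0e-3
    H = H0 * np.exp(-(pos[1] + 1.0e-3) / L)
    grad_H2 = np.array([0.0, -2.0 * H ** 2 / L])
    return 0.5 * mu0 * chi * V_p * grad_H2

def particle_step(pos, vel, dt):
    """Advance one particle: exponential (stiffness-safe) relaxation of the
    velocity toward its local equilibrium value (fluid velocity plus the
    terminal velocity from the magnetic force), then a position update."""
    v_eq = fluid_velocity(pos) + tau * magnetophoretic_force(pos) / m_p
    vel = v_eq + (vel - v_eq) * np.exp(-dt / tau)
    pos = pos + dt * vel
    return pos, vel

# track one particle released on the channel axis for 1 ms
pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
for _ in range(1000):
    pos, vel = particle_step(pos, vel, dt=1.0e-6)
print(pos)   # axial advection dominates the small wall-ward magnetic drift here
```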
