Sample records for rigorous numerical modeling

  1. Forward modelling of global gravity fields with 3D density structures and an application to the high-resolution (~2 km) gravity fields of the Moon

    NASA Astrophysics Data System (ADS)

    Šprlák, M.; Han, S.-C.; Featherstone, W. E.

    2017-12-01

    Rigorous modelling of the spherical gravitational potential spectra from the volumetric density and geometry of an attracting body is discussed. Firstly, we derive mathematical formulas for the spatial analysis of spherical harmonic coefficients. Secondly, we present a numerically efficient algorithm for rigorous forward modelling. We consider the finite-amplitude topographic modelling methods as special cases, with additional postulates on the volumetric density and geometry. Thirdly, we implement our algorithm in the form of computer programs and test their correctness with respect to the finite-amplitude topography routines. For this purpose, synthetic and realistic numerical experiments, applied to the gravitational field and geometry of the Moon, are performed. We also investigate the optimal choice of input parameters for the finite-amplitude modelling methods. Fourthly, we exploit the rigorous forward modelling for the determination of the spherical gravitational potential spectra inferred by lunar crustal models with uniform, laterally variable, radially variable, and spatially (3D) variable bulk density. We also analyse these four crustal models in terms of their spectral characteristics and band-limited radial gravitation. We demonstrate the applicability of the rigorous forward modelling, using currently available computational resources, up to degree and order 2519 of the spherical harmonic expansion, which corresponds to a resolution of 2.2 km on the surface of the Moon. Computer codes, a user manual and scripts developed for the purposes of this study are publicly available to potential users.
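    The spherical-harmonic synthesis at the heart of this kind of forward modelling can be sketched in a few lines. The illustration below evaluates band-limited radial gravitation from zonal (m = 0) coefficients only; the function name, coefficient convention, and normalization are assumptions made for this sketch, not the authors' published code:

```python
import numpy as np

def radial_gravity_zonal(coeffs, GM, R, r, colat):
    """Band-limited radial gravitation from zonal (m = 0) spherical
    harmonic coefficients:
        g_r = GM/r^2 * sum_n (n+1) (R/r)^n C_n Y_n0,
    with Y_n0 = sqrt((2n+1)/(4*pi)) * P_n(cos(colat))."""
    t = np.cos(colat)
    g = 0.0
    for n, c in enumerate(coeffs):
        # P_n(t) via a coefficient vector selecting only the degree-n term
        p_n = np.polynomial.legendre.legval(t, [0.0] * n + [1.0])
        y_n0 = np.sqrt((2 * n + 1) / (4.0 * np.pi)) * p_n
        g += (n + 1) * (R / r) ** n * c * y_n0
    return GM / r**2 * g
```

    A convenient sanity check: with only a degree-0 coefficient chosen to cancel Y_00 (a point mass), the sum collapses to GM/r^2.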

  2. Rigorous numerical modeling of scattering-type scanning near-field optical microscopy and spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun

    2017-11-01

    Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and materials research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modelling, on the other hand, is mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation based on realistic experimental parameters and signal extraction procedures. By direct comparison with experiments as well as with other simulation efforts, our method offers a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.

  3. On the Modeling of Shells in Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Bauchau, Olivier A.; Choi, Jou-Young; Bottasso, Carlo L.

    2000-01-01

    Energy preserving/decaying schemes are presented for the simulation of nonlinear multibody systems involving shell components. The proposed schemes are designed to meet four specific requirements: unconditional nonlinear stability of the scheme, a rigorous treatment of both geometric and material nonlinearities, exact satisfaction of the constraints, and the presence of high-frequency numerical dissipation. The kinematic nonlinearities associated with arbitrarily large displacements and rotations of shells are treated in a rigorous manner, and the material nonlinearities can be handled when the constitutive laws stem from the existence of a strain energy density function. The efficiency and robustness of the proposed approach are illustrated with specific numerical examples that also demonstrate the need for integration schemes possessing high-frequency numerical dissipation.

  4. Mathematical and Numerical Analysis of Model Equations on Interactions of the HIV/AIDS Virus and the Immune System

    NASA Astrophysics Data System (ADS)

    Parumasur, N.; Willie, R.

    2008-09-01

    We consider a simple finite-dimensional HIV/AIDS mathematical model of the interactions between the blood cells, the HIV/AIDS virus and the immune system, and examine the consistency of the equations with the real biomedical situation that they model. A better understanding of a possible cure for the illness modeled by the finite-dimensional equations is provided. This is accomplished through rigorous mathematical analysis and is reinforced by numerical analysis of models developed for real-life cases.

  5. Rigorous simulations of a helical core fiber by the use of transformation optics formalism.

    PubMed

    Napiorkowski, Maciej; Urbanczyk, Waclaw

    2014-09-22

    We report, for the first time, on rigorous numerical simulations of a helical-core fiber using a full vectorial method based on the transformation optics formalism. We modeled the dependence of the circular birefringence of the fundamental mode on the helix pitch and analyzed the effect of the birefringence increase caused by the mode displacement induced by the core twist. Furthermore, we analyzed the complex field evolution versus the helix pitch in the first-order modes, including polarization and intensity distribution. Finally, we show that the use of the rigorous vectorial method allows the confinement loss of the guided modes to be predicted more accurately than with approximate methods based on equivalent in-plane bending models.

  6. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence

    PubMed Central

    Kelly, David; Majda, Andrew J.; Tong, Xin T.

    2015-01-01

    The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335
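    For readers unfamiliar with the method under study, a single analysis step of the stochastic (perturbed-observation) ensemble Kalman filter can be sketched as follows. This is a textbook formulation assuming a linear observation operator, not the specific forecast model constructed in the paper:

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_cov, rng):
    """One stochastic (perturbed-observation) EnKF analysis step.
    ensemble: (n_state, n_members) forecast ensemble,
    H: (n_obs, n_state) linear observation operator,
    y: (n_obs,) observation vector, obs_cov: (n_obs, n_obs)."""
    n_mem = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = anomalies @ anomalies.T / (n_mem - 1)   # sample forecast covariance
    S = H @ P @ H.T + obs_cov                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    # perturb the observation per member so the analysis spread is consistent
    y_pert = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), obs_cov, size=n_mem).T
    return ensemble + K @ (y_pert - H @ ensemble)
```

    In a scalar test the analysis mean lands between the forecast mean and the observation, and the ensemble spread shrinks; the catastrophic divergence discussed above arises from the interaction of such updates with particular forecast dynamics, not from this linear algebra alone.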

  8. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, a mathematical exercise aimed at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, aimed at assessing that the physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
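    A minimal illustration of code verification by the method of manufactured solutions, using a second-order 1D Poisson solver rather than the GBS equations: pick an analytic solution, derive the forcing it implies, and confirm that the observed convergence order matches the scheme's formal order. Everything here is a self-contained sketch, not the paper's methodology applied to a plasma code:

```python
import numpy as np

def solve_poisson(f, n):
    """Second-order finite-difference solve of u'' = f on (0, 1)
    with homogeneous Dirichlet boundary conditions, n uniform cells."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    A = (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u

def observed_order(u_exact, f, n):
    """Observed convergence order from grids n and 2n (Richardson-style)."""
    errors = []
    for m in (n, 2 * n):
        x, u = solve_poisson(f, m)
        errors.append(np.max(np.abs(u - u_exact(x))))
    return np.log2(errors[0] / errors[1])
```

    With the manufactured solution u = sin(pi x), the implied forcing is f = -pi^2 sin(pi x), and the observed order should come out close to the scheme's formal order of 2; a significant mismatch would signal an implementation bug.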

  9. On analyticity of linear waves scattered by a layered medium

    NASA Astrophysics Data System (ADS)

    Nicholls, David P.

    2017-10-01

    The scattering of linear waves by periodic structures is a crucial phenomenon in many branches of applied physics and engineering. In this paper we establish rigorous analytic results necessary for the proper numerical analysis of a class of High-Order Perturbation of Surfaces methods for simulating such waves. More specifically, we prove a theorem on the existence and uniqueness of solutions to a system of partial differential equations which models the interaction of linear waves with a multiply layered periodic structure in three dimensions. This result provides hypotheses under which a rigorous numerical analysis could be conducted for recent generalizations of the methods of Operator Expansions, Field Expansions, and Transformed Field Expansions.

  10. Resonant tunneling assisted propagation and amplification of plasmons in high electron mobility transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhardwaj, Shubhendu; Sensale-Rodriguez, Berardi; Xing, Huili Grace

    A rigorous theoretical and computational model is developed for plasma-wave propagation in high electron mobility transistor structures with electron injection from a resonant tunneling diode at the gate. We discuss the conditions in which low-loss and sustainable plasmon modes can be supported in such structures. The developed analytical model is used to derive the dispersion relation for these plasmon modes. A non-linear full-wave hydrodynamic numerical solver is also developed using a finite difference time domain algorithm. The developed analytical solutions are validated via the numerical solution. We also verify previous observations that were based on a simplified transmission line model. It is shown that at high levels of negative differential conductance, plasmon amplification is indeed possible. The proposed rigorous models can enable accurate design and optimization of practical resonant tunnel diode-based plasma-wave devices for terahertz sources, mixers, and detectors, by allowing a precise representation of their coupling when integrated with other electromagnetic structures.

  11. Immersed boundary lattice Boltzmann model based on multiple relaxation times

    NASA Astrophysics Data System (ADS)

    Lu, Jianhua; Han, Haifeng; Shi, Baochang; Guo, Zhaoli

    2012-01-01

    As an alternative version of the lattice Boltzmann models, the multiple relaxation time (MRT) lattice Boltzmann model introduces much less numerical boundary slip than the single relaxation time (SRT) lattice Boltzmann model if a special relationship between the relaxation time parameters is chosen. On the other hand, most current versions of the immersed boundary lattice Boltzmann method, which was first introduced by Feng and improved by many other authors, suffer from numerical boundary slip, as investigated by Le and Zhang. To reduce such numerical boundary slip, an immersed boundary lattice Boltzmann model based on multiple relaxation times is proposed in this paper. A special formula is given relating two relaxation time parameters in the model. A rigorous analysis and numerical experiments show that the numerical boundary slip is reduced dramatically by the present model compared with the single-relaxation-time-based model.

  12. Imaging 2D optical diffuse reflectance in skeletal muscle

    NASA Astrophysics Data System (ADS)

    Ranasinghesagara, Janaka; Yao, Gang

    2007-04-01

    We discovered a unique pattern of optical reflectance from fresh pre-rigor skeletal muscles, which cannot be described using existing theories. A numerical fitting function was developed to quantify the equi-intensity contours of acquired reflectance images. Using this model, we studied the changes of the reflectance profile during stretching and the rigor process. We found that the prominent anisotropic features diminished after rigor completion. These results suggest that muscle sarcomere structures play important roles in modulating light propagation in whole muscle. When incorporating sarcomere diffraction in a Monte Carlo model, we showed that the resulting reflectance profiles quantitatively resembled the experimental observations.

  13. Investigation of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian A.

    2005-01-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical model. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. Excellent agreement is achieved between the predicted and measured results, thereby quantitatively validating the numerical tool.

  14. Derivation of phase functions from multiply scattered sunlight transmitted through a hazy atmosphere

    NASA Technical Reports Server (NTRS)

    Weinman, J. A.; Twitty, J. T.; Browning, S. R.; Herman, B. M.

    1975-01-01

    The intensity of sunlight multiply scattered in model atmospheres is derived from the equation of radiative transfer by an analytical small-angle approximation. The approximate analytical solutions are compared to rigorous numerical solutions of the same problem. Results obtained from an aerosol-laden model atmosphere are presented. Agreement between the rigorous and the approximate solutions is found to be within a few per cent. The analytical solution to the problem which considers an aerosol-laden atmosphere is then inverted to yield a phase function which describes a single scattering event at small angles. The effect of noisy data on the derived phase function is discussed.

  15. Approximation Methods for Inverse Problems Governed by Nonlinear Parabolic Systems

    DTIC Science & Technology

    1999-12-17

    We present a rigorous theoretical framework for approximation of nonlinear parabolic systems with delays in the context of inverse least squares ... numerical results demonstrating the convergence are given for a model of dioxin uptake and elimination in a distributed liver model that is a special case of the general theoretical framework.

  16. A Rigorous Sharp Interface Limit of a Diffuse Interface Model Related to Tumor Growth

    NASA Astrophysics Data System (ADS)

    Rocca, Elisabetta; Scala, Riccardo

    2017-06-01

    In this paper, we study the rigorous sharp interface limit of a diffuse interface model related to the dynamics of tumor growth, as a parameter ɛ, representing the interface thickness between the tumorous and non-tumorous cells, tends to zero. More specifically, we analyze a gradient-flow-type model arising from a modification of the recently introduced model for tumor growth dynamics in Hawkins-Daruud et al. (Int J Numer Math Biomed Eng 28:3-24, 2011) (cf. also Hilhorst et al. Math Models Methods Appl Sci 25:1011-1043, 2015). Exploiting techniques related to both gradient flows and Gamma-convergence, we recover a condition on the interface Γ relating the chemical and double-well potentials, the mean curvature, and the normal velocity.

  17. Numerical Simulation of Partially-Coherent Broadband Optical Imaging Using the FDTD Method

    PubMed Central

    Çapoğlu, İlker R.; White, Craig A.; Rogers, Jeremy D.; Subramanian, Hariharan; Taflove, Allen; Backman, Vadim

    2012-01-01

    Rigorous numerical modeling of optical systems has attracted interest in diverse research areas ranging from biophotonics to photolithography. We report the full-vector electromagnetic numerical simulation of a broadband optical imaging system with partially-coherent and unpolarized illumination. The scattering of light from the sample is calculated using the finite-difference time-domain (FDTD) numerical method. Geometrical optics principles are applied to the scattered light to obtain the intensity distribution at the image plane. Multilayered object spaces are also supported by our algorithm. For the first time, numerical FDTD calculations are directly compared to and shown to agree well with broadband experimental microscopy results. PMID:21540939

  18. Bootstrapping the (A1, A2) Argyres-Douglas theory

    NASA Astrophysics Data System (ADS)

    Cornagliotto, Martina; Lemos, Madalena; Liendo, Pedro

    2018-03-01

    We apply bootstrap techniques in order to constrain the CFT data of the (A1, A2) Argyres-Douglas theory, which is arguably the simplest of the Argyres-Douglas models. We study the four-point function of its single Coulomb branch chiral ring generator and put numerical bounds on the low-lying spectrum of the theory. Of particular interest is an infinite family of semi-short multiplets labeled by the spin ℓ. Although the conformal dimensions of these multiplets are protected, their three-point functions are not. Using the numerical bootstrap we impose rigorous upper and lower bounds on their values for spins up to ℓ = 20. Through a recently obtained inversion formula, we also estimate them for sufficiently large ℓ, and the comparison of both approaches shows consistent results. We also give a rigorous numerical range for the OPE coefficient of the next operator in the chiral ring, and estimates for the dimension of the first R-symmetry neutral non-protected multiplet for small spin.

  19. Numerical parametric studies of spray combustion instability

    NASA Technical Reports Server (NTRS)

    Pindera, M. Z.

    1993-01-01

    A coupled numerical algorithm has been developed for studies of combustion instabilities in spray-driven liquid rocket engines. The model couples gas- and liquid-phase physics using the method of fractional steps. Also introduced is a novel, efficient methodology for accounting for spray formation through direct solution of the liquid-phase equations. Preliminary parametric studies show marked sensitivity of spray penetration and geometry to droplet diameter, consideration of the liquid core, and acoustic interactions. Less sensitivity was shown to the combustion model type, although more rigorous (multi-step) formulations may be needed for the differences to become apparent.

  20. Skill Assessment for Coupled Biological/Physical Models of Marine Systems.

    PubMed

    Stow, Craig A; Jolliff, Jason; McGillicuddy, Dennis J; Doney, Scott C; Allen, J Icarus; Friedrichs, Marjorie A M; Rose, Kenneth A; Wallhead, Philip

    2009-02-20

    Coupled biological/physical models of marine systems serve many purposes, including synthesizing information, generating hypotheses, and serving as tools for numerical experimentation. However, marine system models are increasingly used for prediction to support high-stakes decision-making. In such applications it is imperative that a rigorous model skill assessment be conducted so that the model's capabilities are tested and understood. Herein, we review several metrics and approaches useful for evaluating model skill. The definition of skill and the determination of the skill level necessary for a given application are context specific, and no single metric is likely to reveal all aspects of model skill. Thus, we recommend the use of several metrics, in concert, to provide a more thorough appraisal. The routine application and presentation of rigorous skill assessment metrics will also serve the broader interests of the modeling community, ultimately resulting in improved forecasting abilities as well as helping us recognize our limitations.
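    A handful of commonly used skill metrics can be computed directly from paired model-observation series. The selection below (bias, RMSE, Pearson correlation, and Nash-Sutcliffe model efficiency) is an illustrative subset of the kinds of metrics such reviews cover, with hypothetical function and key names:

```python
import numpy as np

def skill_metrics(obs, mod):
    """Basic skill metrics for a model (mod) against observations (obs).
    MEF = 1 means a perfect model; MEF <= 0 means the model predicts
    no better than the observed mean."""
    obs = np.asarray(obs, dtype=float)
    mod = np.asarray(mod, dtype=float)
    resid = mod - obs
    return {
        "bias": resid.mean(),
        "rmse": np.sqrt((resid**2).mean()),
        "corr": np.corrcoef(obs, mod)[0, 1],
        "mef": 1.0 - (resid**2).sum() / ((obs - obs.mean())**2).sum(),
    }
```

    Note how the metrics disagree by design: a model offset by a constant keeps perfect correlation while its bias, RMSE, and efficiency all degrade, which is exactly why several metrics in concert are recommended.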

  1. Rigorous Numerical Study of Low-Period Windows for the Quadratic Map

    NASA Astrophysics Data System (ADS)

    Galias, Zbigniew

    An efficient method to find all low-period windows for the quadratic map is proposed. The method is used to obtain very accurate rigorous bounds of positions of all periodic windows with periods p ≤ 32. The contribution of period-doubling windows on the total width of periodic windows is discussed. Properties of periodic windows are studied numerically.
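    A non-rigorous floating-point version of the underlying computation: iterate the quadratic (logistic) map past its transient and read off the attractor period. The rigorous study replaces this heuristic with validated interval-arithmetic bounds; the function below is only a sketch with illustrative parameter choices:

```python
def attractor_period(r, max_period=32, n_transient=100000, tol=1e-10):
    """Estimate the attractor period of the quadratic (logistic) map
    x -> r*x*(1-x) by iterating past the transient and finding the
    smallest p with |x_{k+p} - x_k| < tol.  A floating-point heuristic,
    not a rigorous interval-arithmetic bound."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None  # chaotic, or period larger than max_period
```

    Inside a periodic window the orbit is strongly attracting, so the transient decays quickly; parameters in the period-3 window near r = 3.83, for example, should report period 3.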

  2. Review of FD-TD numerical modeling of electromagnetic wave scattering and radar cross section

    NASA Technical Reports Server (NTRS)

    Taflove, Allen; Umashankar, Korada R.

    1989-01-01

    Applications of the finite-difference time-domain (FD-TD) method for numerical modeling of electromagnetic wave interactions with structures are reviewed, concentrating on scattering and radar cross section (RCS). A number of two- and three-dimensional examples of FD-TD modeling of scattering and penetration are provided. The objects modeled range in nature from simple geometric shapes to extremely complex aerospace and biological systems. Rigorous analytical or experimental validations are provided for the canonical shapes, and it is shown that FD-TD predictive data for near fields and RCS are in excellent agreement with the benchmark data. It is concluded that with continuing advances in FD-TD modeling theory for target features relevant to the RCS problems and in vector and concurrent supercomputer technology, it is likely that FD-TD numerical modeling will occupy an important place in RCS technology in the 1990s and beyond.
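    The core FD-TD update is compact. Below is a minimal 1D vacuum sketch in normalized units with the "magic" Courant number S = 1, a soft Gaussian source, and perfectly reflecting (PEC) ends; the grid size and source parameters are arbitrary choices for illustration, far from the complex 3D targets reviewed above:

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=900, src=100, probe=300):
    """Minimal 1D vacuum FD-TD (Yee) sketch: E and H staggered in space
    and time, Courant number S = 1 so pulses travel one cell per step.
    Returns the E field recorded at the probe cell."""
    Ez = np.zeros(n_cells)
    Hy = np.zeros(n_cells - 1)
    rec = []
    for t in range(n_steps):
        Hy += np.diff(Ez)                          # H update from curl of E
        Ez[1:-1] += np.diff(Hy)                    # E update from curl of H
        Ez[src] += np.exp(-((t - 40) / 12.0) ** 2) # soft Gaussian source
        rec.append(Ez[probe])
    return np.array(rec)
```

    At S = 1 the scheme is dispersionless, so the pulse injected at cell 100 (peaking at step 40) should arrive at the probe 200 cells away with its peak near step 240.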

  3. Numerical Modeling of HgCdTe Solidification: Effects of Phase Diagram, Double-Diffusion Convection and Microgravity Level

    NASA Technical Reports Server (NTRS)

    Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.

    1997-01-01

    Melt convection, along with species diffusion and segregation at the solidification interface, are the primary factors responsible for species redistribution during HgCdTe crystal growth from the melt. As no direct information about convection velocity is available, numerical modeling is a logical approach to estimate convection. Furthermore, the influence of microgravity level, double-diffusion convection and material properties should be taken into account. In the present study, HgCdTe is considered as a binary alloy with melting temperature available from a phase diagram. The numerical model of convection and solidification of a binary alloy is based on the general equations of heat and mass transfer in a two-dimensional region. Mathematical modeling of binary alloy solidification is still a challenging numerical problem: a rigorous mathematical approach to this problem is available only when convection is not considered at all. The proposed numerical model was developed using the finite element code FIDAP. In the present study, the numerical model is used to consider thermal and solutal convection and a double-diffusion source of mass transport.

  4. Numerical Modeling of Sub-Wavelength Anti-Reflective Structures for Solar Module Applications

    PubMed Central

    Han, Katherine; Chang, Chih-Hung

    2014-01-01

    This paper reviews the current progress in mathematical modeling of anti-reflective subwavelength structures. Methods covered include effective medium theory (EMT), finite-difference time-domain (FDTD), transfer matrix method (TMM), the Fourier modal method (FMM)/rigorous coupled-wave analysis (RCWA) and the finite element method (FEM). Time-based solutions to Maxwell’s equations, such as FDTD, have the benefits of calculating reflectance for multiple wavelengths of light per simulation, but are computationally intensive. Space-discretized methods such as FDTD and FEM output field strength results over the whole geometry and are capable of modeling arbitrary shapes. Frequency-based solutions such as RCWA/FMM and FEM model one wavelength per simulation and are thus able to handle dispersion for regular geometries. Analytical approaches such as TMM are appropriate for very simple thin films. Initial disadvantages such as neglect of dispersion (FDTD), inaccuracy in TM polarization (RCWA), inability to model aperiodic gratings (RCWA), and inaccuracy with metallic materials (FDTD) have been overcome by most modern software. All rigorous numerical methods have accurately predicted the broadband reflection of ideal, graded-index anti-reflective subwavelength structures; ideal structures are tapered nanostructures with periods smaller than the wavelengths of light of interest and lengths that are at least a large portion of the wavelengths considered. PMID:28348287
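    Of the methods listed, TMM is simple enough to sketch in full. The characteristic-matrix form below computes normal-incidence reflectance of a thin-film stack; the function and argument names are invented for this sketch, and oblique incidence, absorption, and dispersion are all ignored:

```python
import numpy as np

def tmm_reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic-matrix transfer matrix method (TMM).
    n_layers, d_layers: per-layer refractive indices and thicknesses;
    n_in, n_sub: incident-medium and substrate indices."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / wavelength   # layer phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2
```

    Two textbook checks: a bare air/glass interface (n = 1.5) gives R = 0.04, and a quarter-wave layer with n = sqrt(1.5) nulls the reflectance at the design wavelength, which is the single-layer limit of the graded-index anti-reflective structures discussed above.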

  5. A Rigorous Investigation on the Ground State of the Penson-Kolb Model

    NASA Astrophysics Data System (ADS)

    Yang, Kai-Hua; Tian, Guang-Shan; Han, Ru-Qi

    2003-05-01

    By using either numerical calculations or analytical methods, such as the bosonization technique, the ground state of the Penson-Kolb model has been previously studied by several groups. Some physicists argued that, as far as the existence of superconductivity in this model is concerned, it is canonically equivalent to the negative-U Hubbard model. However, others did not agree. In the present paper, we investigate this model by an independent and rigorous approach. We show that the ground state of the Penson-Kolb model is nondegenerate and has a nonvanishing overlap with the ground state of the negative-U Hubbard model. Furthermore, we also show that the ground states of both models have the same good quantum numbers and may have superconducting long-range order at the same momentum q = 0. Our results support the equivalence between these models. The project was partially supported by the Special Funds for Major State Basic Research Projects (G20000365) and the National Natural Science Foundation of China under Grant No. 10174002.

  6. Beyond the Quantitative and Qualitative Divide: Research in Art Education as Border Skirmish.

    ERIC Educational Resources Information Center

    Sullivan, Graeme

    1996-01-01

    Analyzes a research project that utilizes a coherent conceptual model of art education research incorporating the demand for empirical rigor and providing for diverse interpretive frameworks. Briefly profiles the NUD*IST (Non-numerical Unstructured Data Indexing Searching and Theorizing) software system that can organize and retrieve complex…

  7. Measurement and Prediction of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2008-01-01

    An experimental and numerical investigation into the static and dynamic responses of shape memory alloy hybrid composite (SMAHC) beams is performed to provide quantitative validation of a recently commercialized numerical analysis/design tool for SMAHC structures. The SMAHC beam specimens consist of a composite matrix with embedded pre-strained SMA actuators, which act against the mechanical boundaries of the structure when thermally activated to adaptively stiffen the structure. Numerical results are produced from the numerical model as implemented into the commercial finite element code ABAQUS. A rigorous experimental investigation is undertaken to acquire high fidelity measurements including infrared thermography and projection moire interferometry for full-field temperature and displacement measurements, respectively. High fidelity numerical results are also obtained from the numerical model and include measured parameters, such as geometric imperfection and thermal load. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  8. An explicit canopy BRDF model and inversion. [Bidirectional Reflectance Distribution Function

    NASA Technical Reports Server (NTRS)

    Liang, Shunlin; Strahler, Alan H.

    1992-01-01

    Based on a rigorous canopy radiative transfer equation, the multiple scattering radiance is approximated by asymptotic theory, and the single scattering radiance calculation, which requires a numerical integration owing to the hotspot effect, is simplified. A new formulation is presented to obtain a more exact angular dependence of the sky radiance distribution. The unscattered solar radiance and single scattering radiance are calculated exactly, and the multiple scattering is approximated by the delta two-stream atmospheric radiative transfer model. Numerical tests show that the parametric canopy model is very accurate, especially when the viewing angles are smaller than 55 deg. The Powell algorithm is used to retrieve biospheric parameters from ground-measured multiangle observations.

  9. A Study of the Behavior and Micromechanical Modelling of Granular Soil. Volume 3. A Numerical Investigation of the Behavior of Granular Media Using Nonlinear Discrete Element Simulation

    DTIC Science & Technology

    1991-05-22

    plasticity, including those of DiMaggio and Sandler (1971), Baladi and Rohani (1979), Lade (1977), Prevost (1978, 1985), Dafalias and Herrmann (1982). In...distribution can be achieved only if the behavior at the contact is fully understood and rigorously modelled. 18 REFERENCES Baladi, G.Y. and Rohani, B. (1979

  10. Two Novel Methods and Multi-Mode Periodic Solutions for the Fermi-Pasta-Ulam Model

    NASA Astrophysics Data System (ADS)

    Arioli, Gianni; Koch, Hans; Terracini, Susanna

    2005-04-01

    We introduce two novel methods for studying periodic solutions of the FPU β-model, both numerically and rigorously. One is a variational approach, based on the dual formulation of the problem, and the other involves computer-assisted proofs. These methods are used e.g. to construct a new type of solutions, whose energy is spread among several modes, associated with closely spaced resonances.
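
    The FPU β-model itself is a chain of unit masses coupled by springs with a quadratic-plus-quartic potential. As a plain illustration of the system under study (not the authors' variational or computer-assisted methods, and with illustrative parameter values), the sketch below integrates a short β-chain with the symplectic velocity Verlet scheme and checks energy conservation:

```python
import math

N, beta, dt, steps = 16, 0.1, 0.01, 5000  # illustrative values

def accel(q):
    # Force on each particle from nonlinear nearest-neighbour springs,
    # with fixed walls at both ends of the chain.
    ext = [0.0] + q + [0.0]
    a = [0.0] * N
    for i in range(N):
        dl = ext[i + 1] - ext[i]        # extension of the left spring
        dr = ext[i + 2] - ext[i + 1]    # extension of the right spring
        a[i] = (dr - dl) + beta * (dr ** 3 - dl ** 3)
    return a

def energy(q, p):
    ext = [0.0] + q + [0.0]
    e = sum(pi * pi / 2 for pi in p)
    for i in range(N + 1):
        d = ext[i + 1] - ext[i]
        e += d * d / 2 + beta * d ** 4 / 4
    return e

# excite the lowest linear mode
q = [0.5 * math.sin(math.pi * (i + 1) / (N + 1)) for i in range(N)]
p = [0.0] * N
e0 = energy(q, p)

a = accel(q)
for _ in range(steps):  # velocity Verlet
    q = [qi + dt * pi + 0.5 * dt * dt * ai for qi, pi, ai in zip(q, p, a)]
    a_new = accel(q)
    p = [pi + 0.5 * dt * (ai + ani) for pi, ai, ani in zip(p, a, a_new)]
    a = a_new

drift = abs(energy(q, p) - e0) / e0   # relative energy drift
```

    A symplectic integrator keeps the energy drift small over long runs, which is what makes numerical surveys of near-periodic FPU orbits meaningful in the first place.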

  11. Rigor or mortis: best practices for preclinical research in neuroscience.

    PubMed

    Steward, Oswald; Balice-Gordon, Rita

    2014-11-05

    Numerous recent reports document a lack of reproducibility of preclinical studies, raising concerns about potential lack of rigor. Examples of lack of rigor have been extensively documented and proposals for practices to improve rigor are appearing. Here, we discuss some of the details and implications of previously proposed best practices and consider some new ones, focusing on preclinical studies relevant to human neurological and psychiatric disorders.

  12. Investigating outliers to improve conceptual models of bedrock aquifers

    NASA Astrophysics Data System (ADS)

    Worthington, Stephen R. H.

    2018-06-01

    Numerical models play a prominent role in hydrogeology, with simplifying assumptions being inevitable when implementing these models. However, there is a risk of oversimplification, where important processes become neglected. Such processes may be associated with outliers, and consideration of outliers can lead to an improved scientific understanding of bedrock aquifers. Using rigorous logic to investigate outliers can help to explain fundamental scientific questions such as why there are large variations in permeability between different bedrock lithologies.

  13. Rigor "and" Relevance: Enhancing High School Students' Math Skills through Career and Technical Education

    ERIC Educational Resources Information Center

    Stone, James R., III; Alfeld, Corinne; Pearson, Donna

    2008-01-01

    Numerous high school students, including many who are enrolled in career and technical education (CTE) courses, do not have the math skills necessary for today's high-skill workplace or college entrance requirements. This study tests a model for enhancing mathematics instruction in five high school CTE programs (agriculture, auto technology,…

  14. Numerical proof of stability of roll waves in the small-amplitude limit for inclined thin film flow

    NASA Astrophysics Data System (ADS)

    Barker, Blake

    2014-10-01

    We present a rigorous numerical proof based on interval arithmetic computations categorizing the linearized and nonlinear stability of periodic viscous roll waves of the KdV-KS equation modeling weakly unstable flow of a thin fluid film on an incline in the small-amplitude KdV limit. The argument proceeds by verification of a stability condition derived by Bar-Nepomnyashchy and Johnson-Noble-Rodrigues-Zumbrun involving inner products of various elliptic functions arising through the KdV equation. One key point in the analysis is a bootstrap argument balancing the extremely poor sup norm bounds for these functions against the extremely good convergence properties for analytic interpolation in order to obtain a feasible computation time. Another is the way of handling analytic interpolation in several variables by a two-step process carving up the parameter space into manageable pieces for rigorous evaluation. These and other general aspects of the analysis should serve as blueprints for more general analyses of spectral stability.

  15. Numerical Modelling of Ground Penetrating Radar Antennas

    NASA Astrophysics Data System (ADS)

    Giannakis, Iraklis; Giannopoulos, Antonios; Pajewski, Lara

    2014-05-01

    Numerical methods are needed in order to solve Maxwell's equations in complicated and realistic problems. Over the years a number of numerical methods have been developed to do so. Amongst the most popular are the finite element method, implicit finite difference techniques, frequency domain solutions of the Helmholtz equation, the method of moments, and the transmission line matrix method. However, the finite-difference time-domain (FDTD) method is considered one of the most attractive choices, essentially because of its simplicity, speed and accuracy. FDTD was first introduced in 1966 by Kane Yee. Since then, it has been established and developed into a very rigorous and well-defined numerical method for solving Maxwell's equations, whose order, accuracy and limitations are rigorously and mathematically defined. This makes FDTD reliable and easy to use. Numerical modelling of Ground Penetrating Radar (GPR) is a very useful tool which can give us insight into the scattering mechanisms and can also be used as an alternative approach to aid data interpretation. Numerical modelling has been used in a wide range of GPR applications including archaeology, geophysics, forensics, landmine detection, etc. In engineering, applications of numerical modelling include estimating the effectiveness of GPR for detecting voids in bridges, detecting metal bars in concrete, estimating shielding effectiveness, etc. The main challenges in numerical modelling of GPR for engineering applications are A) implementing the dielectric properties of the media (soils, concrete, etc.) in a realistic way, B) implementing the geometry of the media (soil inhomogeneities, rough surfaces, vegetation, concrete features like fractures and rock fragments, etc.) and C) the detailed modelling of the antenna units. 
The main focus of this work (which is part of the COST Action TU1208) is the accurate and realistic implementation of GPR antenna units in the FDTD model. Accurate models based on general characteristics of the commercial antennas GSSI 1.5 GHz and MALA 1.2 GHz have already been incorporated into GprMax, free software which solves Maxwell's equations using an FDTD algorithm of second order in space and time. This work presents the implementation of horn antennas with different parameters, as well as ridged horn antennas, into this FDTD model; their effectiveness is tested in realistically modelled situations. Accurate models of soils and concrete are used to test and compare the different antenna units. Stochastic methods are used to realistically simulate the geometrical characteristics of the medium. Regarding the dielectric properties, Debye approximations are incorporated to simulate the dielectric properties of the medium realistically over the frequency range of interest.
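
    At its core, FDTD is a leapfrog update of electric and magnetic fields on a staggered grid. A minimal 1-D sketch (illustrative, not GprMax itself), in normalized units with Courant number S = 1, for which the 1-D scheme is dispersionless:

```python
import math

nx, steps, src = 200, 120, 20
ez = [0.0] * nx   # electric field at integer grid points
hy = [0.0] * nx   # magnetic field at half-integer grid points

for n in range(steps):
    # leapfrog: H is advanced half a time step ahead of E
    for i in range(nx - 1):
        hy[i] += ez[i + 1] - ez[i]
    for i in range(1, nx):
        ez[i] += hy[i] - hy[i - 1]
    # hard Gaussian source; the never-updated i = 0 node acts as a PEC wall
    ez[src] = math.exp(-((n - 30) / 10.0) ** 2)

peak = max(range(nx), key=lambda i: abs(ez[i]))   # location of strongest field
amp = max(abs(e) for e in ez)
```

    Realistic GPR models add two more space dimensions, dispersive (e.g. Debye) media, absorbing boundaries and a detailed antenna geometry on top of exactly this update loop.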

  16. Consistent Chemical Mechanism from Collaborative Data Processing

    DOE PAGES

    Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...

    2016-04-01

    The numerical tool of the Process Informatics Model (PrIMe) is a mathematically rigorous and numerically efficient approach to the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for the evaluation of shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.

  17. Developments in optical modeling methods for metrology

    NASA Astrophysics Data System (ADS)

    Davidson, Mark P.

    1999-06-01

    Despite the fact that in recent years the scanning electron microscope has come to dominate the linewidth measurement application for wafer manufacturing, there are still many applications for optical metrology and alignment. These include mask metrology, stepper alignment, and overlay metrology. Most advanced non-optical lithographic technologies are also considering using optics for alignment. In addition, a number of in-situ technologies have been proposed which use optical measurements to control one aspect or another of the semiconductor process. So optics is definitely not dying out in the semiconductor industry. In this paper a description of recent advances in optical metrology and alignment modeling is presented. The theory of high numerical aperture image simulation for partially coherent illumination is discussed. The implications of telecentric optics for the image simulation are also presented. Reciprocity tests are proposed as an important measure of numerical accuracy. Diffraction efficiencies for chrome gratings on reticles are one good way to test Kirchhoff's approximation against rigorous calculations. We find significant differences between the predictions of Kirchhoff's approximation and rigorous methods. The methods for simulating brightfield, confocal, and coherence probe microscope images are outlined, as are methods for describing aberrations such as coma, spherical aberration, and illumination aperture decentering.

  18. Numerical and Experimental Study on Hydrodynamic Performance of A Novel Semi-Submersible Concept

    NASA Astrophysics Data System (ADS)

    Gao, Song; Tao, Long-bin; Kou, Yu-feng; Lu, Chao; Sun, Jiang-long

    2018-04-01

    The Multiple Column Platform (MCP) semi-submersible is a newly proposed concept, differing from conventional semi-submersibles in its centre column and middle pontoon. It is paramount to ensure its structural reliability and safe operation at sea, so a rigorous investigation is conducted to examine the hydrodynamic and structural performance of the novel concept. In this paper, numerical and experimental studies on the hydrodynamic performance of the MCP are performed. Numerical simulations are conducted in both the frequency and time domains based on 3D potential theory. The numerical models are validated against experimental measurements obtained from extensive sets of model tests under both regular and irregular wave conditions. Moreover, a comparative study of the MCP and two conventional semi-submersibles is carried out using numerical simulation. Specifically, the hydrodynamic characteristics, including hydrodynamic coefficients, natural periods, motion response amplitude operators (RAOs), and mooring line tensions, are fully examined. The present study proves the feasibility of the novel MCP and demonstrates the potential for optimization in future studies.

  19. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering, we are able to construct a subshift of finite type as a topological factor of the original planar system and thus obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Hénon map are computed, which, to the best knowledge of the authors, yield the largest such estimates published so far.

  20. Linearly first- and second-order, unconditionally energy stable schemes for the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofeng, E-mail: xfyang@math.sc.edu; Han, Daozhi, E-mail: djhan@iu.edu

    2017-02-01

    In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formula (BDF2) and the second order Crank-Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We rigorously prove that all three schemes are unconditionally energy stable. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.
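
    The stabilization idea behind such linear schemes can be illustrated on a scalar surrogate problem (a double-well gradient flow, not the phase field crystal PDE itself, and with illustrative parameters): adding a linear stabilizing term keeps each update a linear solve, while the discrete energy decreases monotonically even for a very large time step.

```python
def E(u):                  # double-well energy E(u) = (u^2 - 1)^2 / 4
    return (u * u - 1.0) ** 2 / 4.0

def dE(u):                 # its derivative E'(u) = u^3 - u
    return u ** 3 - u

S, dt, u = 2.0, 10.0, 0.8  # stabilization constant, deliberately large step
energies = [E(u)]
for _ in range(50):
    # linear stabilized update: (u_new - u)/dt + S*(u_new - u) = -E'(u)
    u = u - dt * dE(u) / (1.0 + S * dt)
    energies.append(E(u))

# the discrete energy never increases, regardless of dt
monotone = all(a >= b - 1e-14 for a, b in zip(energies, energies[1:]))
```

    The same mechanism, with S chosen against a bound on the second derivative of the nonlinearity, is what makes first-order stabilized schemes for phase-field equations unconditionally energy stable.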

  1. SBEACH: Numerical Model for Simulating Storm-Induced Beach Change. Report 1. Empirical Foundation and Model Development

    DTIC Science & Technology

    1989-07-01

    such as the complex fluid motion over an irregular bottom and absence of rigorous descriptions of broken waves and sediment-sediment interaction, also...prototype-scale conditions. The tests were carried out with both monochromatic and irregular waves for a dunelike foreshore with and without a...significant surf zone. For one case starting from a beach without "foreshore," monochromatic waves produced a bar, whereas irregular waves of significant

  2. Complex dynamics of an SEIR epidemic model with saturated incidence rate and treatment

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Altaf; Khan, Yasir; Islam, Saeed

    2018-03-01

    In this paper, we describe the dynamics of an SEIR epidemic model with saturated incidence, a treatment function, and optimal control. Rigorous mathematical results have been established for the model. The stability analysis shows that the model is locally asymptotically stable when R0 < 1, and both locally and globally asymptotically stable at the endemic equilibrium when R0 > 1. The proposed model may possess a backward bifurcation. The optimal control problem is formulated and the necessary optimality conditions are obtained. Numerical results are presented in support of the theoretical results.
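
    A minimal numerical sketch of an SEIR system with saturated incidence beta*S*I/(1 + alpha*I) and a simple linear treatment term, integrated with a hand-rolled RK4 step; all parameter values are illustrative and not those of the paper:

```python
def rhs(state, beta=0.5, alpha=0.1, sigma=0.2, gamma=0.1, tau=0.05):
    S, E, I, R = state
    inc = beta * S * I / (1.0 + alpha * I)    # saturated incidence
    return (-inc,
            inc - sigma * E,                  # latent compartment
            sigma * E - gamma * I - tau * I,  # recovery + linear treatment
            gamma * I + tau * I)

def rk4_step(f, y, h):
    k1 = f(y)
    k2 = f([yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

y = [0.99, 0.0, 0.01, 0.0]      # S, E, I, R as population fractions
for _ in range(2000):           # integrate to t = 200
    y = rk4_step(rhs, y, 0.1)
```

    Because the right-hand side sums to zero, the total population S+E+I+R is conserved by the integrator up to rounding, which is a useful sanity check on any implementation.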

  3. Coordination and Data Management of the International Arctic Buoy Programme (IABP)

    DTIC Science & Technology

    2002-09-30

    for forcing, validation and assimilation into numerical climate models, and for forecasting weather and ice conditions. TRANSITIONS Using IABP...Coordination and Data Management of the International Arctic Buoy Programme (IABP) Ignatius G. Rigor 1013 NE 40th Street Polar Science Center...analyzed geophysical fields. APPROACH The IABP is a collaboration between 25 different institutions from 8 different countries, which work together

  4. Stable long-time semiclassical description of zero-point energy in high-dimensional molecular systems.

    PubMed

    Garashchuk, Sophya; Rassolov, Vitaly A

    2008-07-14

    Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.

  5. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design for the calibration of ASM Model No. 1, in order to estimate the maximum specific growth rate µH and the concentration of heterotrophic biomass XBH.
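
    The core idea, accumulating an information matrix from output sensitivities over a simulated experiment, can be sketched for a toy scalar state-space model (the model, noise level and input are illustrative; this is not the authors' tool):

```python
def simulate(theta, n=50):
    """Noise-free output of x[k+1] = a*x[k] + b*u with unit input."""
    a, b = theta
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + b * 1.0
        out.append(x)
    return out

def information_matrix(theta, sigma=0.1, eps=1e-6):
    # finite-difference output sensitivities, accumulated recursively
    base = simulate(theta)
    sens = []
    for j in range(len(theta)):
        pert = list(theta)
        pert[j] += eps
        yj = simulate(pert)
        sens.append([(b - a) / eps for a, b in zip(base, yj)])
    p = len(theta)
    return [[sum(sens[i][k] * sens[j][k] for k in range(len(base))) / sigma ** 2
             for j in range(p)] for i in range(p)]

F = information_matrix([0.8, 0.5])
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
# a strictly positive determinant indicates local identifiability of (a, b)
```

    A near-singular information matrix flags an experiment from which the parameters cannot be estimated reliably, which is exactly what optimal experimental design tries to avoid.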

  6. Coordination, Data Management and Enhancement of the International Arctic Buoy Programme (IABP) a US Interagency Arctic Buoy Programme (USIABP) contribution to the IABP

    DTIC Science & Technology

    2013-09-30

    data from the IABP); 2.) Forecasting weather and sea ice conditions; 3.) Forcing, assimilation and validation of global weather and climate models...International Arctic Buoy Programme (IABP) A US Interagency Arctic Buoy Programme (USIABP) contribution to the IABP Dr. Ignatius G. Rigor Polar...ice motion. These observations are assimilated into Numerical Weather Prediction (NWP) models that are used to forecast weather on synoptic time

  7. Many particle approximation of the Aw-Rascle-Zhang second order model for vehicular traffic.

    PubMed

    Francesco, Marco Di; Fagioli, Simone; Rosini, Massimiliano D

    2017-02-01

    We consider the follow-the-leader approximation of the Aw-Rascle-Zhang (ARZ) model for traffic flow in a multi population formulation. We prove rigorous convergence to weak solutions of the ARZ system in the many particle limit in presence of vacuum. The result is based on uniform BV estimates on the discrete particle velocity. We complement our result with numerical simulations of the particle method compared with some exact solutions to the Riemann problem of the ARZ system.
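
    A minimal sketch of a follow-the-leader particle scheme of this type (single population, explicit Euler stepping, illustrative parameters; not the authors' convergence proof apparatus): each vehicle's speed is its Lagrangian marker w minus a pressure of the discrete density computed from the gap to the vehicle ahead.

```python
l, gamma, dt = 1.0, 2.0, 0.01      # vehicle length, pressure exponent, step

x = [2.0 * i for i in range(20)]   # initial positions; x[-1] is the leader
w = [2.0] * 20                     # Lagrangian marker w = v + p(rho)

def step(x):
    v = []
    for i in range(len(x) - 1):
        rho = l / (x[i + 1] - x[i])          # discrete density from the gap
        v.append(max(w[i] - rho ** gamma, 0.0))
    v.append(1.0)                            # leader drives at constant speed
    return [xi + dt * vi for xi, vi in zip(x, v)]

for _ in range(1000):
    x = step(x)

gaps = [b - a for a, b in zip(x, x[1:])]     # ordering must be preserved
```

    Because the pressure blows up as a gap closes, followers slow down before colliding; the discrete ordering of vehicles is preserved, which is the particle-level analogue of the BV bounds used in the convergence result.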

  8. The Pricing of European Options Under the Constant Elasticity of Variance with Stochastic Volatility

    NASA Astrophysics Data System (ADS)

    Bock, Bounghun; Choi, Sun-Yong; Kim, Jeong-Hoon

    This paper considers a hybrid risky asset price model given by a constant elasticity of variance multiplied by a stochastic volatility factor. A multiscale analysis leads to an asymptotic pricing formula for both European vanilla options and barrier options near zero elasticity of variance. The accuracy of the approximation is established in a rigorous manner. A numerical experiment on implied volatilities shows that the hybrid model improves on some well-known models in fitting the data for different maturities.

  9. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca; Palmer, Kevin; Deutsch, Clayton V.

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scales, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper-molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  10. Verifying and Validating Simulation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M.

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that "validation" is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the "credibility" of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
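
    The statistical-sampling point can be illustrated in a few lines of Monte Carlo (a purely illustrative toy model, not taken from the presentation): variability in two inputs is propagated through a response function and summarized as an output mean and standard deviation.

```python
import random
import statistics

random.seed(0)

def model(E, L):
    # illustrative response, e.g. a deflection-like quantity ~ L^3 / E
    return L ** 3 / E

# aleatoric input variability modeled as Gaussian spread about nominal values
samples = [model(random.gauss(200.0, 10.0), random.gauss(2.0, 0.05))
           for _ in range(20000)]
mean = statistics.mean(samples)
std = statistics.stdev(samples)
```

    The output spread quantifies how input variability propagates; numerical and model-form uncertainty would have to be assessed separately, by grid refinement and by comparing competing model forms.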

  11. Shear-induced opening of the coronal magnetic field

    NASA Technical Reports Server (NTRS)

    Wolfson, Richard

    1995-01-01

    This work describes the evolution of a model solar corona in response to motions of the footpoints of its magnetic field. The mathematics involved is semianalytic, with the only numerical solution being that of an ordinary differential equation. This approach, while lacking the flexibility and physical details of full MHD simulations, allows for very rapid computation along with complete and rigorous exploration of the model's implications. We find that the model coronal field bulges upward, at first slowly and then more dramatically, in response to footpoint displacements. The energy in the field rises monotonically from that of the initial potential state, and the field configuration and energy approach asymptotically those of a fully open field. Concurrently, electric currents develop and concentrate into a current sheet as the limiting case of the open field is approached. Examination of the equations shows rigorously that in the asymptotic limit of the fully open field, the current layer becomes a true ideal MHD singularity.

  12. Diffraction-based overlay measurement on dedicated mark using rigorous modeling method

    NASA Astrophysics Data System (ADS)

    Lu, Hailiang; Wang, Fan; Zhang, Qingyun; Chen, Yonghui; Zhou, Chang

    2012-03-01

    Diffraction Based Overlay (DBO) has been widely evaluated by numerous authors, and results show that DBO can provide better performance than Imaging Based Overlay (IBO). However, DBO has its own problems. As is well known, modeling-based DBO (mDBO) faces challenges of low measurement sensitivity and crosstalk between various structure parameters, which may result in poor accuracy and precision. Meanwhile, the main obstacle encountered by empirical DBO (eDBO) is that several pads must be employed to gain sufficient information on overlay-induced diffraction signature variations, which consumes more wafer space and costs more measuring time. Also, eDBO may suffer from mark profile asymmetry caused by processes. In this paper, we propose an alternative DBO technology that employs a dedicated overlay mark and takes a rigorous modeling approach. This technology needs only two or three pads for each direction, which is economical and time saving. While reducing the overlay measurement error induced by mark profile asymmetry, this technology is expected to be as accurate and precise as scatterometry technologies.

  13. Well-balanced high-order solver for blood flow in networks of vessels with variable properties.

    PubMed

    Müller, Lucas O; Toro, Eleuterio F

    2013-12-01

    We present a well-balanced, high-order non-linear numerical scheme for solving a hyperbolic system that models one-dimensional flow in blood vessels with variable mechanical and geometrical properties along their length. Using a suitable set of test problems with exact solution, we rigorously assess the performance of the scheme. In particular, we assess the well-balanced property and the effective order of accuracy through an empirical convergence rate study. Schemes of up to fifth order of accuracy in both space and time are implemented and assessed. The numerical methodology is then extended to realistic networks of elastic vessels and is validated against published state-of-the-art numerical solutions and experimental measurements. It is envisaged that the present scheme will constitute the building block for a closed, global model for the human circulation system involving arteries, veins, capillaries and cerebrospinal fluid.
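
    An empirical convergence-rate study of the kind mentioned follows a standard recipe: halve the step size, compare errors against an exact solution, and report the observed order p = log2(e_h / e_(h/2)). A sketch on a scalar ODE with a second-order Heun step (an illustrative stand-in, not the blood flow solver itself):

```python
import math

def solve(n):
    """Integrate y' = y, y(0) = 1 up to t = 1 with n Heun steps."""
    h, y = 1.0 / n, 1.0
    for _ in range(n):
        k1 = y
        k2 = y + h * k1
        y += h * (k1 + k2) / 2.0   # Heun / explicit trapezoidal step
    return y

e1 = abs(solve(100) - math.e)      # error with step h
e2 = abs(solve(200) - math.e)      # error with step h/2
order = math.log(e1 / e2, 2)       # observed order of accuracy
```

    For a scheme advertised as p-th order, the observed order should approach p as the grid is refined; a shortfall usually points at a boundary treatment or a non-smooth feature of the test problem.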

  14. Interface-Resolving Simulation of Collision Efficiency of Cloud Droplets

    NASA Astrophysics Data System (ADS)

    Wang, Lian-Ping; Peng, Cheng; Rosa, Bogdan; Onishi, Ryo

    2017-11-01

    Small-scale air turbulence could enhance the geometric collision rate of cloud droplets while large-scale air turbulence could augment the diffusional growth of cloud droplets. Air turbulence could also enhance the collision efficiency of cloud droplets. Accurate simulation of collision efficiency, however, requires capturing the multi-scale droplet-turbulence and droplet-droplet interactions, which has only been partially achieved in the recent past using the hybrid direct numerical simulation (HDNS) approach, in which a Stokes disturbance flow is assumed. The HDNS approach has two major drawbacks: (1) the short-range droplet-droplet interaction is not treated rigorously; (2) the finite-Reynolds-number correction to the collision efficiency is not included. In this talk, using two independent numerical methods, we will develop an interface-resolved simulation approach in which the disturbance flows are directly resolved numerically, combined with a rigorous lubrication correction model for near-field droplet-droplet interaction. This multi-scale approach is first used to study the effect of finite flow Reynolds numbers on the droplet collision efficiency in still air. Our simulation results show a significant finite-Re effect on collision efficiency when the droplets are of similar sizes. Preliminary results on integrating this approach in a turbulent flow laden with droplets will also be presented. This work is partially supported by the National Science Foundation.

  15. A simple model for indentation creep

    NASA Astrophysics Data System (ADS)

    Ginder, Ryan S.; Nix, William D.; Pharr, George M.

    2018-03-01

    A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.

  16. Generalized Ordinary Differential Equation Models

    PubMed Central

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-01-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787

  18. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    NASA Astrophysics Data System (ADS)

    Meyers, M. D.; Huang, C.-K.; Zeng, Y.; Yi, S. A.; Albright, B. J.

    2015-09-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.
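
    The flavor of such dispersion analyses can be seen in the standard 1-D Yee relation sin(w*dt/2)/(c*dt) = sin(k*dx/2)/dx (a textbook special case, not the paper's full 3-D PIC derivation), solved here for the numerical phase velocity at two grid resolutions:

```python
import math

c, dx = 1.0, 1.0
S = 0.5                    # Courant number c*dt/dx
dt = S * dx / c

def phase_velocity(cells_per_wavelength):
    # numerical frequency from the 1-D Yee dispersion relation
    k = 2 * math.pi / (cells_per_wavelength * dx)
    w = (2.0 / dt) * math.asin(S * math.sin(k * dx / 2.0))
    return w / k

v20 = phase_velocity(20.0)   # well-resolved wave: speed very close to c
v4 = phase_velocity(4.0)     # coarsely resolved wave: noticeably slower than c
```

    This grid-induced slowdown of short wavelengths is precisely what lets fast particles outrun their own numerical radiation, seeding the numerical Cherenkov and finite grid instabilities analyzed in the paper.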

  19. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyers, M.D., E-mail: mdmeyers@physics.ucla.edu; Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095; Huang, C.-K., E-mail: huangck@lanl.gov

The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.

  20. On making cuts for magnetic scalar potentials in multiply connected regions

    NASA Astrophysics Data System (ADS)

    Kotiuga, P. R.

    1987-04-01

The problem of making cuts is of importance to scalar potential formulations of three-dimensional eddy current problems. Its heuristic solution has been known for a century [J. C. Maxwell, A Treatise on Electricity and Magnetism, 3rd ed. (Clarendon, Oxford, 1891), Chap. 1, Article 20] and in the last decade, with the use of finite element methods, a restricted combinatorial variant has been proposed and solved [M. L. Brown, Int. J. Numer. Methods Eng. 20, 665 (1984)]. This problem, in its full generality, has never received a rigorous mathematical formulation. This paper presents such a formulation and outlines a rigorous proof of existence. The techniques used in the proof expose the incredible intricacy of the general problem and the restrictive assumptions of Brown [Int. J. Numer. Methods Eng. 20, 665 (1984)]. Finally, the results make rigorous Kotiuga's (Ph.D. thesis, McGill University, Montreal, 1984) heuristic interpretation of cuts and duality theorems via intersection matrices.

  1. Effective grating theory for resonance domain surface-relief diffraction gratings.

    PubMed

    Golub, Michael A; Friesem, Asher A

    2005-06-01

    An effective grating model, which generalizes effective-medium theory to the case of resonance domain surface-relief gratings, is presented. In addition to the zero order, it takes into account the first diffraction order, which obeys the Bragg condition. Modeling the surface-relief grating as an effective grating with two diffraction orders provides closed-form analytical relationships between efficiency and grating parameters. The aspect ratio, the grating period, and the required incidence angle that would lead to high diffraction efficiencies are predicted for TE and TM polarization and verified by rigorous numerical calculations.
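The Bragg condition obeyed by the first diffraction order can be illustrated with the scalar grating equation. This is a minimal sketch assuming a transmission grating in air, with arbitrary wavelength and period values; it is not the effective grating model itself.

```python
import math

def bragg_angle(wavelength, period):
    """Incidence angle (rad) at which the -1st order is Bragg-matched:
    sin(theta_B) = wavelength / (2 * period)."""
    return math.asin(wavelength / (2.0 * period))

def diffraction_angle(theta_i, wavelength, period, m):
    """Transmission grating equation: sin(theta_m) = sin(theta_i) + m*lam/period."""
    s = math.sin(theta_i) + m * wavelength / period
    if abs(s) > 1.0:
        return None   # evanescent order
    return math.asin(s)

lam, period = 0.633, 1.0   # micrometres; resonance domain: period ~ wavelength
tb = bragg_angle(lam, period)
print(math.degrees(tb))
# At Bragg incidence the -1st order leaves symmetrically at -theta_B:
print(math.degrees(diffraction_angle(tb, lam, period, -1)))
```

Note that for a resonance-domain period of about one wavelength, only the 0th and -1st orders propagate at Bragg incidence, which is the two-order situation the effective grating model exploits.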

  2. Thermochemical nonequilibrium in atomic hydrogen at elevated temperatures

    NASA Technical Reports Server (NTRS)

    Scott, R. K.

    1972-01-01

A numerical study of the nonequilibrium flow of atomic hydrogen in a cascade arc was performed to obtain insight into the physics of the hydrogen cascade arc. A rigorous mathematical model of the flow problem was formulated, incorporating the important nonequilibrium transport phenomena and atomic processes which occur in atomic hydrogen. Realistic boundary conditions, including consideration of the wall electrostatic sheath phenomenon, were included in the model. The governing equations of the asymptotic region of the cascade arc were obtained by writing conservation of mass and energy equations for the electron subgas, an energy conservation equation for heavy particles, and an equation of state. Finite-difference operators for variable grid spacing were applied to the governing equations, and the resulting system of strongly coupled, stiff equations was solved numerically by the Newton-Raphson method.
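The Newton-Raphson solution of a strongly coupled nonlinear system can be sketched for a 2x2 case with an analytic Jacobian. The system below is illustrative only, not the cascade-arc conservation equations.

```python
import math

def newton2(f, jac, x0, tol=1e-12, maxit=50):
    """Newton-Raphson for a 2x2 nonlinear system with analytic Jacobian."""
    x, y = x0
    for _ in range(maxit):
        f1, f2 = f(x, y)
        if max(abs(f1), abs(f2)) < tol:
            break
        a, b, c, d = jac(x, y)          # J = [[a, b], [c, d]]
        det = a * d - b * c
        dx = (d * f1 - b * f2) / det    # solve J * delta = f by Cramer's rule
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
    return x, y

# Illustrative coupled system (not the cascade-arc equations):
f = lambda x, y: (x * x + y * y - 4.0, math.exp(x) + y - 1.0)
jac = lambda x, y: (2.0 * x, 2.0 * y, math.exp(x), 1.0)

x, y = newton2(f, jac, (-1.8, 0.8))
print(x, y)   # residuals near machine precision after a few iterations
```

For stiff systems like the one in the paper, the quadratic convergence of Newton-Raphson near the solution is what makes the strongly coupled formulation tractable, provided the initial guess is reasonable.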

  3. Measurement and prediction of the thermomechanical response of shape memory alloy hybrid composite beams

    NASA Astrophysics Data System (ADS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2005-05-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons within the beam structures. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical analysis/design tool. The experimental investigation is refined by a more thorough test procedure and incorporation of higher fidelity measurements such as infrared thermography and projection moire interferometry. The numerical results are produced by a recently commercialized version of the constitutive model as implemented in ABAQUS and are refined by incorporation of additional measured parameters such as geometric imperfection. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. The results demonstrate the effectiveness of SMAHC structures in controlling static and dynamic responses by adaptive stiffening. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  4. Measurement and Prediction of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2005-01-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons within the beam structures. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical analysis/design tool. The experimental investigation is refined by a more thorough test procedure and incorporation of higher fidelity measurements such as infrared thermography and projection moire interferometry. The numerical results are produced by a recently commercialized version of the constitutive model as implemented in ABAQUS and are refined by incorporation of additional measured parameters such as geometric imperfection. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. The results demonstrate the effectiveness of SMAHC structures in controlling static and dynamic responses by adaptive stiffening. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  5. Rigorous Results for the Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas; Reed, Stephanie

    2018-05-01

    This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
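The uniform reshuffling model, the simplest of the three, is easy to simulate: Monte Carlo runs of the kind the original conjectures were based on take only a few lines. This is a mean-field sketch (global interactions, arbitrary parameters), checking one signature of the exponential limit: roughly 1 - 1/e of agents hold less than the mean.

```python
import random

def uniform_reshuffling(n_agents=2000, mean_money=1.0, steps=200000, seed=1):
    """Monte Carlo of the uniform reshuffling model: at each step two agents
    pool their money and split the pot uniformly at random."""
    rng = random.Random(seed)
    money = [mean_money] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]
        eps = rng.random()
        money[i], money[j] = eps * pot, (1.0 - eps) * pot
    return money

money = uniform_reshuffling()
mean = sum(money) / len(money)
below = sum(m < mean for m in money) / len(money)
print(mean, below)   # total money is conserved; ~63% of agents fall below
                     # the mean, consistent with an exponential distribution
```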

  6. Rigorous evaluation of chemical measurement uncertainty: liquid chromatographic analysis methods using detector response factor calibration

    NASA Astrophysics Data System (ADS)

    Toman, Blaza; Nelson, Michael A.; Bedner, Mary

    2017-06-01

    Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random effects meta analysis yields similar results to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
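A GUM Supplement 1 style Monte Carlo evaluation can be sketched for a response-factor measurement equation. All distributions and numbers below are illustrative assumptions, not values from the study, and the Bayesian hierarchical layer is omitted.

```python
import random
import statistics

def mc_uncertainty(n=100000, seed=7):
    """GUM Supplement 1 (MC) style propagation: draw the inputs from their
    assigned distributions and push them through the measurement equation
    c = (A_smp / A_std) * c_std. All numbers are illustrative only."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        a_smp = rng.gauss(1.05, 0.004)   # sample peak area (arbitrary units)
        a_std = rng.gauss(1.00, 0.004)   # calibrant peak area
        c_std = rng.gauss(10.0, 0.02)    # calibrant concentration, ug/g
        draws.append(a_smp / a_std * c_std)
    return statistics.mean(draws), statistics.stdev(draws)

m, u = mc_uncertainty()
print(m, u)   # estimate and standard uncertainty of the measurand
```

In the full analysis described above, repeated samples and calibrants would add random-effects (top-down) terms on top of this bottom-up propagation.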

  7. Tensile Properties of Dyneema SK76 Single Fibers at Multiple Loading Rates Using a Direct Gripping Method

    DTIC Science & Technology

    2014-06-01

    lower density compared with aramid fibers such as Kevlar and Twaron. Numerical modeling is used to design more effective fiber-based composite armor...in measuring fibers and doing experiments. vi INTENTIONALLY LEFT BLANK. 1 1. Introduction Aramid fibers such as Kevlar (DuPont) and Twaron...methyl methacrylate blocks. The efficacy of this method to grip Kevlar fibers has been rigorously studied using a variety of statistical methods at

  8. Comparison between PVI2D and Abreu–Johnson’s Model for Petroleum Vapor Intrusion Assessment

    PubMed Central

    Yao, Yijun; Wang, Yue; Verginelli, Iason; Suuberg, Eric M.; Ye, Jianfeng

    2018-01-01

Recently, we have developed a two-dimensional analytical petroleum vapor intrusion model, PVI2D (petroleum vapor intrusion, two-dimensional), which can help users to easily visualize soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, reaction rate constant, soil characteristics, and building features. In this study, we made a full comparison of the results returned by PVI2D and those obtained using Abreu and Johnson's three-dimensional numerical model (AJM). These comparisons, examined as a function of the source strength, source depth, and reaction rate constant, show that PVI2D provides soil gas concentration profiles and source-to-indoor air attenuation factors similar (within one order of magnitude) to those given by the AJM. The differences between the two models can be ascribed to some simplifying assumptions used in PVI2D and to some numerical limitations of the AJM in simulating strictly piecewise aerobic biodegradation and no-flux boundary conditions. Overall, the obtained results show that for cases involving homogeneous source and soil, PVI2D can represent a valid alternative to more rigorous three-dimensional numerical models. PMID:29398981

  9. Estimation of the breaking of rigor mortis by myotonometry.

    PubMed

    Vain, A; Kauppila, R; Vuori, E

    1996-05-31

Myotonometry was used to detect breaking of rigor mortis. The myotonometer is a new instrument which measures the decaying oscillations of a muscle after a brief mechanical impact. The method gives two numerical parameters for rigor mortis, namely the period and decrement of the oscillations, both of which depend on the time elapsed after death. When rigor mortis was broken by lengthening the muscle, both the oscillation period and decrement decreased, whereas shortening the muscle caused the opposite changes. Fourteen hours after breaking, the stiffness characteristics (oscillation periods) of the right and left m. biceps brachii had assimilated. However, the values for the decrement of the muscle, reflecting the dissipation of mechanical energy, maintained their differences.
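The two myotonometric parameters can be illustrated on synthetic data: the period from successive peak times, and the logarithmic decrement from successive envelope amplitudes. A minimal sketch with made-up values, not the instrument's actual signal processing.

```python
import math

def period_and_decrement(t_peaks, a_peaks):
    """Oscillation period and logarithmic decrement from successive
    (time, amplitude) samples of a decaying oscillation."""
    periods = [t2 - t1 for t1, t2 in zip(t_peaks, t_peaks[1:])]
    decs = [math.log(a1 / a2) for a1, a2 in zip(a_peaks, a_peaks[1:])]
    return sum(periods) / len(periods), sum(decs) / len(decs)

# Envelope of a damped oscillation exp(-g*t), sampled at successive periods:
g, T = 1.2, 0.050                       # damping rate (1/s), period 50 ms
t_peaks = [k * T for k in range(5)]
a_peaks = [math.exp(-g * t) for t in t_peaks]
per, dec = period_and_decrement(t_peaks, a_peaks)
print(per, dec)   # recovers T and the decrement g*T
```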

  10. The space-dependent model and output characteristics of intra-cavity pumped dual-wavelength lasers

    NASA Astrophysics Data System (ADS)

    He, Jin-Qi; Dong, Yuan; Zhang, Feng-Dong; Yu, Yong-Ji; Jin, Guang-Yong; Liu, Li-Da

    2016-01-01

The intra-cavity pumping scheme for simultaneously generating dual-wavelength lasers was previously proposed and published by us, and a space-independent model of quasi-three-level and four-level intra-cavity pumped dual-wavelength lasers was constructed based on this scheme. In this paper, to make the previous study more rigorous, a space-dependent model is adopted. As an example, the output characteristics of 946 nm and 1064 nm dual-wavelength lasers under different output mirror transmittances are numerically simulated using the derived formulas, and the results are nearly identical to those previously reported.

  11. Analysis of Well-Clear Boundary Models for the Integration of UAS in the NAS

    NASA Technical Reports Server (NTRS)

    Upchurch, Jason M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Chamberlain, James P.; Consiglio, Maria C.

    2014-01-01

The FAA-sponsored Sense and Avoid Workshop for Unmanned Aircraft Systems (UAS) defines the concept of sense and avoid for remote pilots as "the capability of a UAS to remain well clear from and avoid collisions with other airborne traffic." Hence, a rigorous definition of well clear is fundamental to any separation assurance concept for the integration of UAS into civil airspace. This paper presents a family of well-clear boundary models based on the TCAS II Resolution Advisory logic. Analytical techniques are used to study the properties and relationships satisfied by the models. Some of these properties are numerically quantified using statistical methods.
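A much-simplified distance/time well-clear predicate based on the horizontal closest point of approach conveys the flavor of such boundary models. This is an illustrative sketch only; the paper's models are derived from the TCAS II Resolution Advisory logic, and all thresholds below are arbitrary.

```python
import math

def cpa(sx, sy, vx, vy):
    """Time and distance of horizontal closest point of approach for
    relative position (sx, sy) and relative velocity (vx, vy)."""
    v2 = vx * vx + vy * vy
    t = 0.0 if v2 == 0.0 else max(0.0, -(sx * vx + sy * vy) / v2)
    dx, dy = sx + t * vx, sy + t * vy
    return t, math.hypot(dx, dy)

def well_clear(sx, sy, vx, vy, dist_thresh=1.2, time_thresh=35.0):
    """Simplified distance/time well-clear predicate (illustrative only,
    not the TCAS II RA logic used in the paper)."""
    t, d = cpa(sx, sy, vx, vy)
    return d > dist_thresh or t > time_thresh

# Head-on encounter 5 nmi apart, closing at 0.2 nmi/s:
print(well_clear(5.0, 0.0, -0.2, 0.0))   # zero miss distance predicted in 25 s
```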

  12. Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas

    2017-04-01

    Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.

  13. A surface spherical harmonic expansion of gravity anomalies on the ellipsoid

    NASA Astrophysics Data System (ADS)

    Claessens, S. J.; Hirt, C.

    2015-10-01

    A surface spherical harmonic expansion of gravity anomalies with respect to a geodetic reference ellipsoid can be used to model the global gravity field and reveal its spectral properties. In this paper, a direct and rigorous transformation between solid spherical harmonic coefficients of the Earth's disturbing potential and surface spherical harmonic coefficients of gravity anomalies in ellipsoidal approximation with respect to a reference ellipsoid is derived. This transformation cannot rigorously be achieved by the Hotine-Jekeli transformation between spherical and ellipsoidal harmonic coefficients. The method derived here is used to create a surface spherical harmonic model of gravity anomalies with respect to the GRS80 ellipsoid from the EGM2008 global gravity model. Internal validation of the model shows a global RMS precision of 1 nGal. This is significantly more precise than previous solutions based on spherical approximation or approximations to order or , which are shown to be insufficient for the generation of surface spherical harmonic coefficients with respect to a geodetic reference ellipsoid. Numerical results of two applications of the new method (the computation of ellipsoidal corrections to gravimetric geoid computation, and area means of gravity anomalies in ellipsoidal approximation) are provided.
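For contrast with the rigorous ellipsoidal transformation derived in the paper, the classical spherical-approximation mapping from disturbing-potential coefficients to surface gravity-anomaly coefficients is a one-line degree-wise scaling. The coefficient value below is illustrative.

```python
GM = 3.986004418e14     # geocentric gravitational constant, m^3/s^2
R = 6378137.0           # reference radius, m

def anomaly_coeff(n, c_nm):
    """Spherical-approximation mapping from a disturbing-potential
    coefficient to a surface gravity-anomaly coefficient:
    dg_nm = GM/R^2 * (n - 1) * c_nm  (units: m/s^2).
    The paper replaces this with a rigorous ellipsoidal transformation."""
    return GM / (R * R) * (n - 1) * c_nm

# Degree-2 example with an illustrative coefficient value:
dg = anomaly_coeff(2, 1.0e-6)
print(dg * 1e5, "mGal")   # 1 m/s^2 = 1e5 mGal
```

The paper shows that this spherical approximation is insufficient at the nGal level when coefficients with respect to a geodetic reference ellipsoid are required.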

  14. Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.

    PubMed

    Yuan, Lijun; Lu, Ya Yan

    2013-05-20

    Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.
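The bistability itself can be illustrated with a lumped dispersive Kerr-cavity relation, which admits one or three steady states depending on the input power. This is a standard textbook model used here as a hedged sketch, not the paper's waveguide-cavity solver.

```python
def cavity_states(x, theta=2.0):
    """Real solutions y of the dispersive Kerr-bistability relation
    y * (1 + (theta - y)**2) = x, found by sign-scanning plus bisection.
    An illustrative lumped model, not the photonic-crystal solver itself."""
    f = lambda y: y * (1.0 + (theta - y) ** 2) - x
    roots, ys = [], [i * 1e-3 for i in range(6001)]   # scan y in [0, 6]
    for a, b in zip(ys, ys[1:]):
        if f(a) == 0.0:
            roots.append(a)
        elif f(a) * f(b) < 0.0:
            lo, hi = a, b
            for _ in range(60):                        # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

print(len(cavity_states(1.9)))   # input power in the bistable region: 3 states
print(len(cavity_states(8.0)))   # far above the knee: a single state
```

The frequency-domain method in the paper resolves exactly this kind of solution multiplicity, which time-domain simulation reaches only one branch at a time.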

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Qiang

The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics.
One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.

  16. Theory of multicolor lattice gas - A cellular automaton Poisson solver

    NASA Technical Reports Server (NTRS)

    Chen, H.; Matthaeus, W. H.; Klein, L. W.

    1990-01-01

In the present class of cellular automaton models, involving a quiescent hydrodynamic lattice gas with multiple-valued passive labels termed 'colors', lattice collisions change individual particle colors while preserving net color. The rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is thus equivalent to a Poisson equation solver.
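As a deterministic continuum point of comparison for the lattice-gas solver, a Jacobi relaxation of the Poisson equation with Dirichlet boundaries takes only a few lines. The grid size and sweep count below are arbitrary.

```python
def jacobi_poisson(n=20, sweeps=1000):
    """Jacobi relaxation for u_xx + u_yy = -rho on the unit square with
    homogeneous Dirichlet boundaries; a deterministic continuum analogue
    of the stochastic lattice-gas Poisson solver, not the model itself."""
    h = 1.0 / (n - 1)
    u = [[0.0] * n for _ in range(n)]
    rho = [[1.0] * n for _ in range(n)]   # uniform unit source
    for _ in range(sweeps):
        v = [row[:] for row in u]         # previous iterate
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i][j] = 0.25 * (v[i + 1][j] + v[i - 1][j]
                                  + v[i][j + 1] + v[i][j - 1]
                                  + h * h * rho[i][j])
        # boundary values stay at zero (Dirichlet)
    return u

u = jacobi_poisson()
print(u[10][10])   # interior value near the centre of the square
```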

  17. Reduced and Validated Kinetic Mechanisms for Hydrogen-CO-Air Combustion in Gas Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yiguang Ju; Frederick Dryer

    2009-02-07

Rigorous experimental, theoretical, and numerical investigation of various issues relevant to the development of reduced, validated kinetic mechanisms for synthetic gas combustion in gas turbines was carried out, including the construction of new radiation models for combusting flows, improvement of flame speed measurement techniques, measurements and chemical kinetic analysis of H2/CO/CO2/O2/diluent mixtures, revision of the H2/O2 kinetic model to improve flame speed prediction capabilities, and development of a multi-time-scale algorithm to improve computational efficiency in reacting flow simulations.
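The idea behind a multi-time-scale algorithm, advancing stiff fast modes with small sub-steps inside each slow step, can be shown with a toy two-variable system. This is a generic sketch, not the algorithm developed in the project; all rates and step sizes are made up.

```python
def multiscale_integrate(t_end=1.0, dt_slow=0.01, subcycles=100):
    """Toy multi-time-scale integration: the slow variable z takes one
    explicit Euler step per dt_slow, while the stiff fast variable y is
    sub-cycled with step dt_slow/subcycles. Illustrative only; the project
    developed a multi-time-scale algorithm for reacting-flow chemistry."""
    k_fast, k_slow = 1000.0, 1.0
    y, z = 0.0, 1.0
    t = 0.0
    while t < t_end - 1e-12:
        z_new = z + dt_slow * (-k_slow * z)     # one slow explicit Euler step
        dt_f = dt_slow / subcycles
        for _ in range(subcycles):
            y += dt_f * (-k_fast * (y - z))     # y relaxes quickly toward z
        z = z_new
        t += dt_slow
    return y, z

y, z = multiscale_integrate()
print(y, z)   # y tracks the slowly decaying z
```

Sub-cycling keeps the fast update stable (k_fast * dt_f = 0.1 here) without forcing the whole system onto the fast time step.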

  18. Accuracy and performance of 3D mask models in optical projection lithography

    NASA Astrophysics Data System (ADS)

    Agudelo, Viviana; Evanschitzky, Peter; Erdmann, Andreas; Fühner, Tim; Shao, Feng; Limmer, Steffen; Fey, Dietmar

    2011-04-01

    Different mask models have been compared: rigorous electromagnetic field (EMF) modeling, rigorous EMF modeling with decomposition techniques and the thin mask approach (Kirchhoff approach) to simulate optical diffraction from different mask patterns in projection systems for lithography. In addition, each rigorous model was tested for two different formulations for partially coherent imaging: The Hopkins assumption and rigorous simulation of mask diffraction orders for multiple illumination angles. The aim of this work is to closely approximate results of the rigorous EMF method by the thin mask model enhanced with pupil filtering techniques. The validity of this approach for different feature sizes, shapes and illumination conditions is investigated.

  19. Coordination, Data Management and Enhancement of the International Arctic Buoy Programme (IABP), A US Interagency Arctic Buoy Programme (USIABP) Contribution to the IABP

    DTIC Science & Technology

    2012-09-30

International Arctic Buoy Programme (IABP). A US Interagency Arctic Buoy Programme (USIABP) contribution to the IABP. Dr. Ignatius G. Rigor. Polar...observations of surface meteorology and ice motion. These observations are assimilated into Numerical Weather Prediction (NWP) models that are used to...distribution of sea ice. Over the Arctic Ocean, this fundamental observing network is maintained by the IABP, and is a critical component of the

  20. A rigorous approach to the formulation of extended Born-Oppenheimer equation for a three-state system

    NASA Astrophysics Data System (ADS)

    Sarkar, Biplab; Adhikari, Satrajit

    If a coupled three-state electronic manifold forms a sub-Hilbert space, it is possible to express the non-adiabatic coupling (NAC) elements in terms of adiabatic-diabatic transformation (ADT) angles. Consequently, we demonstrate: (a) Those explicit forms of the NAC terms satisfy the Curl conditions with non-zero Divergences; (b) The formulation of extended Born-Oppenheimer (EBO) equation for any three-state BO system is possible only when there exists coordinate independent ratio of the gradients for each pair of ADT angles leading to zero Curls at and around the conical intersection(s). With these analytic advancements, we formulate a rigorous EBO equation and explore its validity as well as necessity with respect to the approximate one (Sarkar and Adhikari, J Chem Phys 2006, 124, 074101) by performing numerical calculations on two different models constructed with different chosen forms of the NAC elements.

  1. Impact of topographic mask models on scanner matching solutions

    NASA Astrophysics Data System (ADS)

    Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.

    2014-03-01

Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions and, as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, simplifying the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for, by numerically solving Maxwell's Equations. The simulators used to predict the image formation in hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute time tradeoffs. Additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly non-linear response to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on first-principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not very well understood. 
To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's Equations. In both sets of trials, we used large numbers of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin mask models vs. the topographic OPEM solutions. We present various examples representative of scanner image matching for patterns representative of the current generation of IC designs.

  2. Light scattering and absorption by space weathered planetary bodies: Novel numerical solution

    NASA Astrophysics Data System (ADS)

    Markkanen, Johannes; Väisänen, Timo; Penttilä, Antti; Muinonen, Karri

    2017-10-01

    Airless planetary bodies are exposed to space weathering, i.e., energetic electromagnetic and particle radiation, implantation and sputtering from solar wind particles, and micrometeorite bombardment. Space weathering is known to alter the physical and chemical composition of the surface of an airless body (C. Pieters et al., J. Geophys. Res. Planets, 121, 2016). From the light-scattering perspective, one of the key effects is the production of nanophase iron (npFe0) near the exposed surfaces (B. Hapke, J. Geophys. Res., 106, E5, 2001). At visible and ultraviolet wavelengths these particles have a strong electromagnetic response, which has a major impact on scattering and absorption features. Thus, to interpret the spectroscopic observations of space-weathered asteroids, the model should treat the contributions of the npFe0 particles rigorously. Our numerical approach is based on hierarchical geometric optics (GO) and radiative transfer (RT). The modelled asteroid is assumed to consist of densely packed silicate grains with npFe0 inclusions. We employ our recently developed RT method for dense random media (K. Muinonen et al., Radio Science, submitted, 2017) to compute the contributions of the npFe0 particles embedded in silicate grains. The dense-media RT method requires computing interactions of the npFe0 particles in the volume element, for which we use the exact fast superposition T-matrix method (J. Markkanen and A. J. Yuffa, JQSRT 189, 2017). Reflections and refractions on the grain surface and propagation in the grain are addressed by the GO. Finally, the standard RT is applied to compute scattering by the entire asteroid. Our numerical method allows for a quantitative interpretation of the spectroscopic observations of space-weathered asteroids. In addition, it may be an important step towards a more rigorous thermophysical model of asteroids when coupled with radiative and conductive heat transfer techniques.
Acknowledgments: Research supported by the European Research Council with Advanced Grant No. 320773 SAEMPL. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.

  3. Parallel numerical modeling of hybrid-dimensional compositional non-isothermal Darcy flows in fractured porous media

    NASA Astrophysics Data System (ADS)

    Xing, F.; Masson, R.; Lopez, S.

    2017-09-01

    This paper introduces a new discrete fracture model accounting for non-isothermal compositional multiphase Darcy flows and complex networks of fractures with intersecting, immersed and non-immersed fractures. The so-called hybrid-dimensional model, using a 2D model in the fractures coupled with a 3D model in the matrix, is first derived rigorously starting from the equi-dimensional matrix-fracture model. Then, it is discretized using a fully implicit time integration combined with the Vertex Approximate Gradient (VAG) finite volume scheme, which is adapted to polyhedral meshes and anisotropic heterogeneous media. The fully coupled systems are assembled and solved in parallel using the Single Program Multiple Data (SPMD) paradigm with one layer of ghost cells. This strategy allows for a local assembly of the discrete systems. An efficient preconditioner is implemented to solve the linear systems at each time step and each Newton-type iteration of the simulation. The numerical efficiency of our approach is assessed on different meshes, fracture networks, and physical settings in terms of parallel scalability, nonlinear convergence and linear convergence.

  4. A novel equivalent definition of Caputo fractional derivative without singular kernel and superconvergent analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zhengguang; Li, Xiaoli

    2018-05-01

    In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation removes the singular kernel, making the integral calculation more efficient. Furthermore, this definition remains valid when α is a positive integer. Besides, the T-Caputo derivative also helps us increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For the numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
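As a point of reference for the convergence-rate claim, the classical L1 discretization of the Caputo derivative of order α ∈ (0, 1), which is the standard O(τ^(2-α)) baseline the T-Caputo scheme improves upon, can be sketched as follows. This is an illustrative sketch of the well-known L1 formula only, not the authors' transformative scheme:

```python
import math

def caputo_l1(u, tau, alpha):
    """Classical L1 approximation of the Caputo derivative of order
    alpha in (0, 1) at t_n = n*tau; accuracy O(tau^(2-alpha))."""
    n = len(u) - 1
    coef = tau ** (-alpha) / math.gamma(2.0 - alpha)
    acc = 0.0
    for k in range(n):
        # L1 weights: b_k = (k+1)^(1-alpha) - k^(1-alpha)
        b_k = (k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha)
        acc += b_k * (u[n - k] - u[n - k - 1])
    return coef * acc

alpha, tau, n = 0.5, 1.0e-3, 1000
u = [j * tau for j in range(n + 1)]                   # sample u(t) = t
t = n * tau
exact = t ** (1.0 - alpha) / math.gamma(2.0 - alpha)  # Caputo derivative of t
approx = caputo_l1(u, tau, alpha)
```

For a linear function the L1 scheme is exact up to rounding, which makes this a convenient self-check before applying the formula to less trivial data.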

  5. The evolution of stable magnetic fields in stars: an analytical approach

    NASA Astrophysics Data System (ADS)

    Mestel, Leon; Moss, David

    2010-07-01

    The absence of a rigorous proof of the existence of dynamically stable, large-scale magnetic fields in radiative stars has been for many years a missing element in the fossil field theory for the magnetic Ap/Bp stars. Recent numerical simulations, by Braithwaite & Spruit and Braithwaite & Nordlund, have largely filled this gap, demonstrating convincingly that coherent global scale fields can survive for times of the order of the main-sequence lifetimes of A stars. These dynamically stable configurations take the form of magnetic tori, with linked poloidal and toroidal fields, that slowly rise towards the stellar surface. This paper studies a simple analytical model of such a torus, designed to elucidate the physical processes that govern its evolution. It is found that one-dimensional numerical calculations reproduce some key features of the numerical simulations, with radiative heat transfer, Archimedes' principle, Lorentz force and Ohmic decay all playing significant roles.

  6. Rigorous investigation of the reduced density matrix for the ideal Bose gas in harmonic traps by a loop-gas-like approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beau, Mathieu, E-mail: mbeau@stp.dias.ie; Savoie, Baptiste, E-mail: baptiste.savoie@gmail.com

    2014-05-15

    In this paper, we rigorously investigate the reduced density matrix (RDM) associated to the ideal Bose gas in harmonic traps. We present a method based on a sum-decomposition of the RDM allowing to treat not only the isotropic trap, but also general anisotropic traps. When focusing on the isotropic trap, the method is analogous to the loop-gas approach developed by Mullin [“The loop-gas approach to Bose-Einstein condensation for trapped particles,” Am. J. Phys. 68(2), 120 (2000)]. Turning to the case of anisotropic traps, we examine the RDM for some anisotropic trap models corresponding to some quasi-1D and quasi-2D regimes. For such models, we bring out an additional contribution in the local density of particles which arises from the mesoscopic loops. The close connection with the occurrence of generalized Bose-Einstein condensation is discussed. Our loop-gas-like approach provides relevant information which can help guide numerical investigations on highly anisotropic systems based on the Path Integral Monte Carlo method.

  7. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Havu, V.; Fritz Haber Institute of the Max Planck Society, Berlin; Blum, V.

    2009-12-01

    We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.

  8. Mathematical Basis and Test Cases for Colloid-Facilitated Radionuclide Transport Modeling in GDSA-PFLOTRAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimus, Paul William

    This report provides documentation of the mathematical basis for a colloid-facilitated radionuclide transport modeling capability that can be incorporated into GDSA-PFLOTRAN. It also provides numerous test cases against which the modeling capability can be benchmarked once the model is implemented numerically in GDSA-PFLOTRAN. The test cases were run using a 1-D numerical model developed by the author, and the inputs and outputs from the 1-D model are provided in an electronic spreadsheet supplement to this report so that all cases can be reproduced in GDSA-PFLOTRAN, and the outputs can be directly compared with the 1-D model. The cases include examples of all potential scenarios in which colloid-facilitated transport could result in the accelerated transport of a radionuclide relative to its transport in the absence of colloids. Although it cannot be claimed that all the model features that are described in the mathematical basis were rigorously exercised in the test cases, the goal was to test the features that matter the most for colloid-facilitated transport; i.e., slow desorption of radionuclides from colloids, slow filtration of colloids, and equilibrium radionuclide partitioning to colloids that is strongly favored over partitioning to immobile surfaces, resulting in a substantial fraction of radionuclide mass being associated with mobile colloids.

  9. Steady-state and dynamic models for particle engulfment during solidification

    NASA Astrophysics Data System (ADS)

    Tao, Yutao; Yeckel, Andrew; Derby, Jeffrey J.

    2016-06-01

    Steady-state and dynamic models are developed to study the physical mechanisms that determine the pushing or engulfment of a solid particle at a moving solid-liquid interface. The mathematical model formulation rigorously accounts for energy and momentum conservation, while faithfully representing the interfacial phenomena affecting solidification phase change and particle motion. A numerical solution approach is developed using the Galerkin finite element method and elliptic mesh generation in an arbitrary Lagrangian-Eulerian implementation, thus allowing for a rigorous representation of forces and dynamics previously inaccessible by approaches using analytical approximations. We demonstrate that this model accurately computes the solidification interface shape while simultaneously resolving thin fluid layers around the particle that arise from premelting during particle engulfment. We reinterpret the significance of premelting via the definition of an unambiguous critical velocity for engulfment from steady-state analysis and bifurcation theory. We also explore the complicated transient behaviors that underlie the steady states of this system and posit the significance of dynamical behavior on engulfment events for many systems. We critically examine the onset of engulfment by comparing our computational predictions to those obtained using the analytical model of Rempel and Worster [29]. We assert that, while the accurate calculation of van der Waals repulsive forces remains an open issue, the computational model developed here provides a clear benefit over prior models for computing particle drag forces and other phenomena needed for the faithful simulation of particle engulfment.

  10. Regularized variational theories of fracture: A unified approach

    NASA Astrophysics Data System (ADS)

    Freddi, Francesco; Royer-Carfagni, Gianni

    2010-08-01

    The fracture pattern in stressed bodies is defined through the minimization of a two-field pseudo-spatial-dependent functional, with a structure similar to that proposed by Bourdin-Francfort-Marigo (2000) as a regularized approximation of a parent free-discontinuity problem, but now considered as an autonomous model per se. Here, this formulation is altered by combining it with structured deformation theory, to model that when the material microstructure is loosened and damaged, peculiar inelastic (structured) deformations may occur in the representative volume element at the price of surface energy consumption. This approach unifies various theories of failure because, by simply varying the form of the class of admissible structured deformations, different-in-type responses can be captured, incorporating the idea of cleavage, deviatoric, combined cleavage-deviatoric and masonry-like fractures. Remarkably, this latter formulation rigorously avoids material overlapping in the cracked zones. The model is numerically implemented using a standard finite-element discretization and adopts an alternate minimization algorithm, adding an inequality constraint to impose crack irreversibility (fixed crack model). Numerical experiments for some paradigmatic examples are presented and compared for various possible versions of the model.

  11. Predictability of the geospace variations and measuring the capability to model the state of the system

    NASA Astrophysics Data System (ADS)

    Pulkkinen, A.

    2012-12-01

    Empirical modeling has been the workhorse of the past decades in predicting the state of the geospace. For example, numerous empirical studies have shown that global geoeffectiveness indices such as Kp and Dst are generally well predictable from the solar wind input. These successes have been facilitated partly by the strongly externally driven nature of the system. Although characterizing the general state of the system is valuable and empirical modeling will continue playing an important role, refined physics-based quantification of the state of the system has been the obvious next step in moving toward more mature science. Importantly, more refined and localized products are also needed for space weather purposes. Predictions of local physical quantities are necessary to make physics-based links to the impacts on specific systems. As we introduce more localized predictions of the geospace state, one central question is how predictable these local quantities are. This complex question can be addressed by rigorously measuring model performance against the observed data. The space sciences community has made great advances on this topic over the past few years, and there are ongoing efforts in SHINE, CEDAR and GEM to carry out community-wide evaluations of the state-of-the-art solar and heliospheric, ionosphere-thermosphere and geospace models, respectively. These efforts will help establish benchmarks and thus provide a means to measure progress in the field, analogous to the monitoring of improvements in lower-atmospheric weather prediction carried out rigorously since the 1980s. In this paper we discuss some of the latest advancements in predicting local geospace parameters and give an overview of some of the community efforts to rigorously measure model performance. We also briefly discuss some future opportunities for advancing the geospace modeling capability, including further development in data assimilation and ensemble modeling (e.g. taking into account uncertainty in the inflow boundary conditions).

  12. Deffuant model of opinion formation in one-dimensional multiplex networks

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2015-10-01

    Complex systems in the real world often operate through multiple kinds of links connecting their constituents. In this paper we propose an opinion formation model under bounded confidence over multiplex networks, consisting of edges at different topological and temporal scales. We determine rigorously the critical confidence threshold by exploiting probability theory and network science when the nodes are arranged on the integers, Z, evolving in continuous time. It is found that the existence of ‘multiplexity’ impedes the convergence, and that working with the aggregated or summarized simplex network is inaccurate since it misses vital information. Analytical calculations are confirmed by extensive numerical simulations.
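The single-layer ("simplex") version of the bounded-confidence dynamics underlying this model can be sketched in a few lines. This is only an illustrative Monte Carlo sketch of classic Deffuant dynamics on a finite 1D ring (parameters hypothetical); the paper's multiplex coupling and continuous-time analysis are not modelled here:

```python
import random

# Deffuant bounded-confidence dynamics on a 1D ring of n agents:
# a random adjacent pair compromises only if their opinions differ
# by less than the confidence bound eps.
n, eps, mu, steps = 200, 0.3, 0.5, 50000
rng = random.Random(1)
x = [rng.random() for _ in range(n)]     # opinions in [0, 1]
mean0 = sum(x) / n

for _ in range(steps):
    i = rng.randrange(n)
    j = (i + 1) % n                      # nearest neighbour on the ring
    d = x[j] - x[i]
    if abs(d) < eps:
        x[i] += mu * d                   # symmetric compromise step
        x[j] -= mu * d

mean1 = sum(x) / n                       # symmetric updates conserve the mean
```

Because each compromise moves the pair symmetrically, the mean opinion is an invariant of the dynamics, which is a useful sanity check on any implementation.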

  13. Explicit formulation of second and third order optical nonlinearity in the FDTD framework

    NASA Astrophysics Data System (ADS)

    Varin, Charles; Emms, Rhys; Bart, Graeme; Fennel, Thomas; Brabec, Thomas

    2018-01-01

    The finite-difference time-domain (FDTD) method is a flexible and powerful technique for rigorously solving Maxwell's equations. However, three-dimensional optical nonlinearity in current commercial and research FDTD software requires iteratively solving an implicit form of Maxwell's equations over the entire numerical space at each time step. Reaching numerical convergence demands significant computational resources, and practical implementation often requires major modifications to the core FDTD engine. In this paper, we present an explicit method to include second and third order optical nonlinearity in the FDTD framework based on a nonlinear generalization of the Lorentz dispersion model. A formal derivation of the nonlinear Lorentz dispersion equation is equally provided, starting from the quantum mechanical equations describing nonlinear optics in the two-level approximation. With the proposed approach, numerical integration of optical nonlinearity and dispersion in FDTD is intuitive, transparent, and fully explicit. A strong-field formulation is also proposed, which opens an interesting avenue for FDTD-based modelling of the extreme nonlinear optics phenomena involved in laser filamentation and femtosecond micromachining of dielectrics.
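The auxiliary-differential-equation (ADE) machinery that such explicit schemes build on can be illustrated with the plain linear Lorentz model. The sketch below is a minimal 1D FDTD loop with a linear Lorentz polarization updated explicitly alongside the fields; it is not the paper's nonlinear generalization, and all grid and material parameters are hypothetical, in normalized units:

```python
import math

# 1D FDTD with a linear Lorentz medium via an auxiliary differential
# equation: P'' + gamma*P' + w0^2*P = d_eps*w0^2*E, leapfrogged with E, H.
nx, nt = 400, 1200
dx, dt = 1.0, 0.5                    # normalized units (c = 1), Courant 0.5
eps_inf, d_eps = 1.0, 2.0            # background permittivity, oscillator strength
w0, gamma = 0.05, 0.002              # Lorentz resonance and damping
med_start = 250                      # medium occupies cells i >= med_start

ez = [0.0] * nx
hy = [0.0] * (nx - 1)
p = [0.0] * nx                       # Lorentz polarization
p_old = [0.0] * nx

for n in range(nt):
    for i in range(nx - 1):          # H update
        hy[i] += dt / dx * (ez[i + 1] - ez[i])
    p_new = p[:]                     # explicit second-order ADE update for P
    for i in range(med_start, nx):
        p_new[i] = ((2.0 - (w0 * dt) ** 2) * p[i]
                    - (1.0 - 0.5 * gamma * dt) * p_old[i]
                    + d_eps * (w0 * dt) ** 2 * ez[i]) / (1.0 + 0.5 * gamma * dt)
    for i in range(1, nx - 1):       # E update, driven by curl H minus dP/dt
        jp = (p_new[i] - p[i]) / dt if i >= med_start else 0.0
        ez[i] += dt / eps_inf * ((hy[i] - hy[i - 1]) / dx - jp)
    p_old, p = p, p_new
    ez[50] += math.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source
```

The point of the explicit formulation is visible in the loop structure: the polarization update uses only known past values, so no iterative solve over the grid is needed at any time step.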

  14. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

    This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, which involves time-dependent shocks and vortex shedding, design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a two-equation RANS (Reynolds-Averaged Navier-Stokes) model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.

  15. Probabilistic Space Weather Forecasting: a Bayesian Perspective

    NASA Astrophysics Data System (ADS)

    Camporeale, E.; Chandorkar, M.; Borovsky, J.; Care', A.

    2017-12-01

    Most Space Weather forecasts, at both the operational and research level, are not probabilistic in nature. Unfortunately, a prediction that does not provide a confidence level is not very useful in a decision-making scenario. Nowadays, forecast models range from purely data-driven machine learning algorithms to physics-based approximations of first-principle equations (and everything that sits in between). Uncertainties pervade all such models, at every level: from the raw data to finite-precision implementations of numerical methods. The most rigorous way of quantifying the propagation of uncertainties is by embracing a Bayesian probabilistic approach. One of the simplest and most robust machine learning techniques in the Bayesian framework is Gaussian Process (GP) regression and classification. Here, we present the application of Gaussian Processes to the problems of Dst geomagnetic index forecasting, solar wind type classification, and the estimation of diffusion parameters in radiation belt modeling. In each of these very diverse problems, the GP approach rigorously provides forecasts in the form of predictive distributions. In turn, these distributions can be used as input for ensemble simulations in order to quantify the amplification of uncertainties. We show that we have achieved excellent results in all of the standard metrics used to evaluate our models, with very modest computational cost.
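The core computation behind GP regression, a predictive mean and variance at a new input, fits in a few lines of linear algebra. The sketch below is a generic RBF-kernel GP regressor on toy data (kernel hyperparameters and the test function are hypothetical), not the authors' Dst or radiation-belt models:

```python
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential (RBF) kernel matrix between 1D input arrays."""
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x, y, xs, noise=1e-4):
    """Predictive mean and variance of GP regression at test inputs xs."""
    K = rbf(x, x) + noise * np.eye(len(x))   # training covariance + noise
    Ks = rbf(x, xs)                          # train/test cross-covariance
    Kss = rbf(xs, xs)                        # test covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)             # predictive variance
    return mean, var

x = np.linspace(0.0, 5.0, 20)
y = np.sin(x)                                # toy training data
xs = np.array([2.5])
mean, var = gp_predict(x, y, xs)
```

The predictive distribution (mean plus variance) is exactly the kind of output that can seed the ensemble simulations mentioned above, since each forecast comes with its own uncertainty.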

  16. Energy Landscape of Social Balance

    NASA Astrophysics Data System (ADS)

    Marvel, Seth A.; Strogatz, Steven H.; Kleinberg, Jon M.

    2009-11-01

    We model a close-knit community of friends and enemies as a fully connected network with positive and negative signs on its edges. Theories from social psychology suggest that certain sign patterns are more stable than others. This notion of social “balance” allows us to define an energy landscape for such networks. Its structure is complex: numerical experiments reveal a landscape dimpled with local minima of widely varying energy levels. We derive rigorous bounds on the energies of these local minima and prove that they have a modular structure that can be used to classify them.
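The energy in question is conventionally the normalized sum of triad products, so that balanced networks sit at the global minimum. A minimal sketch of that energy function, with the normalization assumed from the standard social-balance literature rather than taken from this abstract:

```python
from itertools import combinations

def balance_energy(s):
    """Energy of a signed complete network:
    E = -(1 / C(n,3)) * sum over triads of s_ij * s_jk * s_ik,
    so E lies in [-1, 1] and balanced states reach E = -1."""
    n = len(s)
    triads = list(combinations(range(n), 3))
    return -sum(s[i][j] * s[j][k] * s[i][k] for i, j, k in triads) / len(triads)

n = 7
# utopia: everyone friends with everyone
all_friends = [[1] * n for _ in range(n)]

# two mutually hostile factions {0,1,2,3} and {4,5,6}: also balanced
def faction(i):
    return 0 if i < 4 else 1
two_factions = [[1 if faction(i) == faction(j) else -1 for j in range(n)]
                for i in range(n)]

e_all = balance_energy(all_friends)
e_two = balance_energy(two_factions)
```

Both configurations are balanced (every triad product is +1), so both evaluate to the global minimum energy of -1; the local minima studied in the paper are sign patterns from which no single edge flip lowers this energy.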

  17. Cymatics for the cloaking of flexural vibrations in a structured plate

    PubMed Central

    Misseroni, D.; Colquitt, D. J.; Movchan, A. B.; Movchan, N. V.; Jones, I. S.

    2016-01-01

    Based on rigorous theoretical findings, we present a proof-of-concept design for a structured square cloak enclosing a void in an elastic lattice. We implement high-precision fabrication and experimental testing of an elastic invisibility cloak for flexural waves in a mechanical lattice. This is accompanied by verifications and numerical modelling performed through finite element simulations. The primary advantage of our square lattice cloak, over other designs, is the straightforward implementation and the ease of construction. The elastic lattice cloak, implemented experimentally, shows high efficiency. PMID:27068339

  18. Spontaneous oscillations in microfluidic networks

    NASA Astrophysics Data System (ADS)

    Case, Daniel; Angilella, Jean-Regis; Motter, Adilson

    2017-11-01

    Precisely controlling flows within microfluidic systems is often difficult, and systems are typically heavily reliant on numerous external pumps and computers. Here, I present a simple microfluidic network that exhibits flow-rate switching, bistability, and spontaneous oscillations controlled by a single pressure. That is, solely by changing the driving pressure, it is possible to switch between an oscillating and a steady flow state. Such functionality does not rely on external hardware and may even serve as an on-chip memory or timing mechanism. I use an analytic model and rigorous fluid dynamics simulations to show these results.

  19. Energy landscape of social balance.

    PubMed

    Marvel, Seth A; Strogatz, Steven H; Kleinberg, Jon M

    2009-11-06

    We model a close-knit community of friends and enemies as a fully connected network with positive and negative signs on its edges. Theories from social psychology suggest that certain sign patterns are more stable than others. This notion of social "balance" allows us to define an energy landscape for such networks. Its structure is complex: numerical experiments reveal a landscape dimpled with local minima of widely varying energy levels. We derive rigorous bounds on the energies of these local minima and prove that they have a modular structure that can be used to classify them.

  20. Bifurcations in a discrete time model composed of Beverton-Holt function and Ricker function.

    PubMed

    Shang, Jin; Li, Bingtuan; Barnard, Michael R

    2015-05-01

    We provide rigorous analysis for a discrete-time model composed of the Ricker function and Beverton-Holt function. This model was proposed by Lewis and Li [Bull. Math. Biol. 74 (2012) 2383-2402] in the study of a population in which reproduction occurs at a discrete instant of time whereas death and competition take place continuously during the season. We show analytically that there exists a period-doubling bifurcation curve in the model. The bifurcation curve divides the parameter space into the region of stability and the region of instability. We demonstrate through numerical bifurcation diagrams that the regions of periodic cycles are intermixed with the regions of chaos. We also study the global stability of the model. Copyright © 2015 Elsevier Inc. All rights reserved.
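The qualitative behavior described above, a stable fixed point below the period-doubling threshold and sustained oscillation beyond it, can be reproduced with an illustrative composition of the two maps. The parameterization below (Beverton-Holt step with a = 2, b = 1, so that x* = 1 is a fixed point) is hypothetical and chosen for clarity, not the exact form analyzed by Lewis and Li:

```python
import math

def composed_map(x, r, a=2.0, b=1.0):
    """One season: Beverton-Holt survival followed by Ricker reproduction."""
    y = a * x / (1.0 + b * x)              # Beverton-Holt step
    return y * math.exp(r * (1.0 - y))     # Ricker step

def orbit_tail(x0, r, n=600, keep=16):
    """Iterate the composed map and return the last `keep` iterates."""
    x, tail = x0, []
    for step in range(n):
        x = composed_map(x, r)
        if step >= n - keep:
            tail.append(x)
    return tail

low = orbit_tail(0.5, r=0.5)   # small growth rate: converges to x* = 1
high = orbit_tail(0.5, r=3.5)  # large growth rate: sustained oscillation
```

With these parameters the derivative of the composed map at x* = 1 is (1 - r)/2, so the fixed point is stable for r = 0.5 and loses stability through a flip (period-doubling) bifurcation as r grows, which is what the contrasting tails illustrate.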

  1. Numerical Analysis of the Dynamics of Nonlinear Solids and Structures

    DTIC Science & Technology

    2008-08-01

    to arrive at a new numerical scheme that rigorously exhibits the dissipative character of the so-called canonical free energy characteristic of...UCLA), February 14 2006. 5. "Numerical Integration of the Nonlinear Dynamics of Elastoplastic Solids," keynote lecture, 3rd European Conference on...Computational Mechanics (ECCM 3), Lisbon, Portugal, June 5-9 2006. 6. "Energy-Momentum Schemes for Finite Strain Plasticity," keynote lecture, 7th

  2. Characterization of anisotropically shaped silver nanoparticle arrays via spectroscopic ellipsometry supported by numerical optical modeling

    NASA Astrophysics Data System (ADS)

    Gkogkou, Dimitra; Shaykhutdinov, Timur; Oates, Thomas W. H.; Gernert, Ulrich; Schreiber, Benjamin; Facsko, Stefan; Hildebrandt, Peter; Weidinger, Inez M.; Esser, Norbert; Hinrichs, Karsten

    2017-11-01

    The present investigation aims to study the optical response of anisotropic Ag nanoparticle arrays deposited on rippled silicon substrates by performing a qualitative comparison between experimental and theoretical results. Spectroscopic ellipsometry was used along with numerical calculations using the finite-difference time-domain (FDTD) method and rigorous coupled wave analysis (RCWA) to reveal trends in the optical and geometrical properties of the nanoparticle array. Ellipsometric data show two resonances, in the orthogonal x and y directions, that originate from localized plasmon resonances as demonstrated by the calculated near-fields from FDTD calculations. The far-field calculations by RCWA point to decoupled resonances in the x direction and possible coupling effects in the y direction, corresponding to the short and long axes of the anisotropic nanoparticles, respectively.

  3. From model conception to verification and validation, a global approach to multiphase Navier-Stoke models with an emphasis on volcanic explosive phenomenology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dartevelle, Sebastian

    2007-10-01

    Large-scale volcanic eruptions are hazardous events that cannot be described by detailed and accurate in situ measurement: hence, little to no real-time data exists to rigorously validate current computer models of these events. In addition, such phenomenology involves highly complex, nonlinear, and unsteady physical behaviors upon many spatial and time scales. As a result, volcanic explosive phenomenology is poorly understood in terms of its physics, and inadequately constrained in terms of initial, boundary, and inflow conditions. Nevertheless, code verification and validation become even more critical because more and more volcanologists use numerical data for assessment and mitigation of volcanic hazards. In this report, we evaluate the process of model and code development in the context of geophysical multiphase flows. We describe: (1) the conception of a theoretical, multiphase, Navier-Stokes model, (2) its implementation into a numerical code, (3) the verification of the code, and (4) the validation of such a model within the context of turbulent and underexpanded jet physics. Within the validation framework, we suggest focusing on the key physics that control volcanic clouds—namely, the momentum-driven supersonic jet and the buoyancy-driven turbulent plume. For instance, we propose to compare numerical results against a set of simple and well-constrained analog experiments, which uniquely and unambiguously represent each of the key phenomena.

  4. Model of dissolution in the framework of tissue engineering and drug delivery.

    PubMed

    Sanz-Herrera, J A; Soria, L; Reina-Romo, E; Torres, Y; Boccaccini, A R

    2018-05-22

    Dissolution phenomena are ubiquitously present in biomaterials across many different fields. Despite the advantages of simulation-based design of biomaterials in medical applications, additional efforts are needed to derive reliable models which describe the process of dissolution. A phenomenologically based model, available for simulation of dissolution in biomaterials, is introduced in this paper. The model results in a set of reaction-diffusion equations implemented in a finite element numerical framework. First, a parametric analysis is conducted in order to explore the role of model parameters on the overall dissolution process. Then, the model is calibrated and validated against a straightforward but rigorous experimental setup. Results show that the mathematical model macroscopically reproduces the main physicochemical phenomena that take place in the tests, corroborating its usefulness for the design of biomaterials in the tissue engineering and drug delivery research areas.
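The reaction-diffusion character of such dissolution models can be conveyed with a toy 1D analogue: solute diffusing from a dissolving surface held at saturation into the bulk. This is a hypothetical finite-difference sketch with made-up parameters, not the paper's finite element formulation:

```python
# Explicit 1D diffusion from a dissolving surface: c_t = D * c_xx,
# c(0) held at saturation (c_sat = 1), zero-flux far boundary.
D, dx, dt = 1.0, 0.1, 1.0e-3         # stability: D*dt/dx**2 = 0.1 <= 0.5
nx, nt = 50, 2000

c = [0.0] * nx
c[0] = 1.0                           # saturated dissolving surface
for _ in range(nt):
    new = c[:]
    for i in range(1, nx - 1):
        new[i] = c[i] + D * dt / dx ** 2 * (c[i + 1] - 2 * c[i] + c[i - 1])
    new[0] = 1.0                     # Dirichlet: surface stays saturated
    new[-1] = new[-2]                # zero-flux far boundary
    c = new
```

Because the explicit update is a convex combination of neighboring values at this time-step ratio, the concentration profile stays within [0, 1] and decreases monotonically away from the surface, which mirrors the physical picture of a dissolution front.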

  5. Phenomenological Models and Animations of Welding and their Impact

    NASA Astrophysics Data System (ADS)

    DebRoy, Tarasankar

    Professor Robertson's recognized research on metallurgical thermodynamics and kinetics over more than 40 years facilitated the emergence of a rigorous quantitative understanding of many complex metallurgical processes. The author had the opportunity to work with Professor Robertson on liquid metals in the 1970s. This paper is intended to review the advances in the quantitative understanding of welding processes and weld metal attributes in recent decades. Over this period, phenomenological models have been developed to better understand and control various welding processes and the structure and properties of welded materials. Numerical models and animations of melting, solidification and the evolution of micro- and macrostructural features will be presented to critically examine their impact on the practice of welding and the underlying science.

  6. The International Arctic Buoy Programme (IABP)

    NASA Astrophysics Data System (ADS)

    Rigor, I. G.; Ortmeyer, M.

    2003-12-01

    The Arctic has undergone dramatic changes in weather, climate and environment. It should be noted that many of these changes were first observed and studied using data from the International Arctic Buoy Programme (IABP). For example, IABP data were fundamental to Walsh et al. (1996) showing that atmospheric pressure has decreased, Rigor et al. (2000) showing that air temperatures have increased, and to Proshutinsky and Johnson (1997); Steele and Boyd, (1998); Kwok, (2000); and Rigor et al. (2002) showing that the clockwise circulation of sea ice and the ocean has weakened. All these results relied heavily on data from the IABP. In addition to supporting these studies of climate change, the IABP observations are also used to forecast weather and ice conditions, validate satellite retrievals of environmental variables, to force, validate and initialize numerical models. Over 350 papers have been written using data from the IABP. The observations and datasets of the IABP data are one of the cornerstones for environmental forecasting and research in the Arctic.

  7. Rigorous vector wave propagation for arbitrary flat media

    NASA Astrophysics Data System (ADS)

    Bos, Steven P.; Haffert, Sebastiaan Y.; Keller, Christoph U.

    2017-08-01

    Precise modelling of the (off-axis) point spread function (PSF) to identify geometrical and polarization aberrations is important for many optical systems. In order to characterise the PSF of the system in all Stokes parameters, an end-to-end simulation of the system has to be performed in which Maxwell's equations are rigorously solved. We present the first results of a Python code that we are developing to perform multiscale end-to-end wave propagation simulations that include all relevant physics. Currently we can handle plane-parallel near- and far-field vector diffraction effects of propagating waves in homogeneous isotropic and anisotropic materials, refraction and reflection at flat parallel surfaces, interference effects in thin films and unpolarized light. We show that the code has a numerical precision on the order of 10^-16 for non-absorbing isotropic and anisotropic materials. For absorbing materials the precision is on the order of 10^-8. The capabilities of the code are demonstrated by simulating a converging beam reflecting from a flat aluminium mirror at normal incidence.

  8. Validation of vibration-dissociation coupling models in hypersonic non-equilibrium separated flows

    NASA Astrophysics Data System (ADS)

    Shoev, G.; Oblapenko, G.; Kunova, O.; Mekhonoshina, M.; Kustova, E.

    2018-03-01

    The validation of recently developed models of vibration-dissociation coupling is discussed in application to numerical solutions of the Navier-Stokes equations in a two-temperature approximation for a binary N2/N flow. Vibrational-translational relaxation rates are computed using the Landau-Teller formula generalized for strongly non-equilibrium flows, obtained in the framework of the Chapman-Enskog method. Dissociation rates are calculated using the modified Treanor-Marrone model, taking into account the dependence of the model parameter on the vibrational state. The solutions are compared to those obtained using the traditional Landau-Teller and Treanor-Marrone models, and it is shown that for high-enthalpy flows, the traditional and recently developed models can give significantly different results. The computed heat flux and pressure on the surface of a double cone are in good agreement with experimental data available in the literature for low-enthalpy flow with strong thermal non-equilibrium. The computed heat flux on a double wedge qualitatively agrees with available data for high-enthalpy non-equilibrium flows. Different contributions to the heat flux calculated using rigorous kinetic theory methods are evaluated. Quantitative discrepancies between numerical and experimental data are discussed.
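
    The classical Landau-Teller relaxation that the paper's generalized rates extend can be sketched numerically. In the minimal example below, the characteristic vibrational temperature, relaxation time and post-shock conditions are illustrative assumptions, not values from the study:

```python
import math

THETA_V = 3371.0  # characteristic vibrational temperature of N2, K (assumed value)

def ev_eq(T):
    """Harmonic-oscillator equilibrium vibrational energy per molecule (in K units)."""
    return THETA_V / (math.exp(THETA_V / T) - 1.0)

def relax(Ev, T, tau, dt, steps):
    """Forward-Euler integration of the classical Landau-Teller equation
    dEv/dt = (Ev_eq(T) - Ev) / tau.  The paper's generalized model adds
    corrections for strongly non-equilibrium flows that are not shown here."""
    for _ in range(steps):
        Ev += dt * (ev_eq(T) - Ev) / tau
    return Ev

# Vibrationally cold gas suddenly heated to T = 6000 K (illustrative conditions):
Ev0 = ev_eq(300.0)
Ev_end = relax(Ev0, T=6000.0, tau=1e-6, dt=1e-8, steps=1000)
```

    With dt much smaller than tau, the vibrational energy relaxes monotonically toward its equilibrium value at the translational temperature.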

  9. On the relation between phase-field crack approximation and gradient damage modelling

    NASA Astrophysics Data System (ADS)

    Steinke, Christian; Zreid, Imadeddin; Kaliske, Michael

    2017-05-01

    The finite element implementation of a gradient enhanced microplane damage model is compared to a phase-field model for brittle fracture. Phase-field models and implicit gradient damage models share many similarities despite being conceived from very different standpoints. In both approaches, an additional differential equation and a length scale are introduced. However, while the phase-field method is formulated starting from the description of a crack in fracture mechanics, the gradient method starts from a continuum mechanics point of view. At first, the scope of application for both models is discussed to point out intersections. Then, the analysis of the employed mathematical methods and their rigorous comparison are presented. Finally, numerical examples are introduced to illustrate the findings of the comparison which are summarized in a conclusion at the end of the paper.

  10. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
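
    The deterministic-stochastic operator split at the heart of such hybrid methods can be illustrated with a toy one-dimensional model (explicit finite-difference diffusion followed by per-molecule stochastic degradation). This is only a sketch of the splitting idea, not the PDE-solver/Smoldyn coupling described in the paper:

```python
import math
import random

def hybrid_step(counts, D, k_deg, dt, dx, rng):
    """One operator-splitting step on a 1D grid of molecule counts:
    (1) deterministic diffusion via explicit finite differences with
        no-flux boundaries (stable for D*dt/dx**2 <= 0.5);
    (2) stochastic first-order degradation, each molecule surviving
        the step with probability exp(-k_deg*dt)."""
    n = len(counts)
    lam = D * dt / dx ** 2
    diffused = []
    for i in range(n):
        left = counts[i - 1] if i > 0 else counts[i]
        right = counts[i + 1] if i < n - 1 else counts[i]
        diffused.append(counts[i] + lam * (left - 2 * counts[i] + right))
    p_survive = math.exp(-k_deg * dt)
    # Round to whole molecules, then thin each cell stochastically.
    return [sum(1 for _ in range(round(x)) if rng.random() < p_survive)
            for x in diffused]

rng = random.Random(0)
state = [0] * 10 + [1000] + [0] * 10   # a single 'spark' of 1000 molecules
state = hybrid_step(state, D=1.0, k_deg=10.0, dt=0.01, dx=1.0, rng=rng)
```

    Repeating the step spreads and depletes the spark; a production hybrid solver would instead hand the stochastic species to a particle simulator each time step.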

  11. Geomorphology and landscape organization of a northern peatland complex

    NASA Astrophysics Data System (ADS)

    Richardson, M. C.

    2012-12-01

    The geomorphic evolution of northern peatlands is governed by complex ecohydrological feedback mechanisms and associated hydro-climatic drivers. For example, prevailing models of bog development (i.e. Ingram's groundwater mounding hypothesis and variants) attempt to explicitly link bog dome characteristics to the regional climate based on analytical and numerical models of lateral groundwater flow and the first-order control of water table position on rates of peat accumulation. In this talk I will present new results from quantitative geomorphic analyses of a northern peatland complex at the De Beers Victor diamond mine site in the Hudson Bay Lowlands of northern Ontario. This work capitalizes on spatially-extensive, high-resolution topographic (LiDAR) data to rigorously test analytical and numerical models of bog dome development in this landscape. The analysis and discussion are then expanded beyond individual bog formations to more broadly consider ecohydrological drivers of landscape organization, with implications for understanding and modeling catchment-scale runoff response. Results show that in this landscape, drainage patterns exhibit relatively well-organized characteristics consistent with observed runoff responses in six gauged research catchments. Interpreted together, the results of these geomorphic and hydrologic analyses help refine our understanding of water balance partitioning among different landcover types within northern peatland complexes. These findings can be used to help guide the development of appropriate numerical model structures for hydrologic prediction in ungauged peatland basins of northern Canada.

  12. MI-Sim: A MATLAB package for the numerical analysis of microbial ecological interactions.

    PubMed

    Wade, Matthew J; Oakley, Jordan; Harbisher, Sophie; Parker, Nicholas G; Dolfing, Jan

    2017-01-01

    Food-webs and other classes of ecological network motifs are a means of describing feeding relationships between consumers and producers in an ecosystem. They have application across scales, where they differ only in the underlying characteristics of the organisms and substrates describing the system. Mathematical modelling, using mechanistic approaches to describe the dynamic behaviour and properties of the system through sets of ordinary differential equations, has been used extensively in ecology. Models allow simulation of the dynamics of the various motifs, and their numerical analysis provides a greater understanding of the interplay between the system components and their intrinsic properties. We have developed the MI-Sim software for use with MATLAB to allow a rigorous and rapid numerical analysis of several common ecological motifs. MI-Sim contains a series of the most commonly used motifs such as cooperation, competition and predation. It does not require detailed knowledge of mathematical analytical techniques and is offered as a single graphical user interface containing all input and output options. The tools available in the current version of MI-Sim include model simulation, steady-state existence and stability analysis, and basin of attraction analysis. The software includes seven ecological interaction motifs and seven growth function models. Unlike other system analysis tools, MI-Sim is designed as a simple and user-friendly tool specific to ecological population-type models, allowing for rapid assessment of their dynamical and behavioural properties.

  13. Van Driest transformation and compressible wall-bounded flows

    NASA Technical Reports Server (NTRS)

    Huang, P. G.; Coleman, G. N.

    1994-01-01

    The validity of the Van Driest transformation was investigated using data from direct numerical simulations (DNS) of supersonic channel flow with isothermal cold walls. The DNS results covered a wide range of parameters and were well suited to examining the generality of the transformation. The Van Driest law of the wall can be obtained from inner-layer similarity arguments. It was demonstrated that the Van Driest transformation cannot collapse the sublayer and log-layer velocity profiles simultaneously. Velocity and temperature predictions based on a composite mixing-length model were presented. Despite satisfactory agreement with the DNS data, the model should be regarded as an engineering guide rather than a rigorous analysis.
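
    The Van Driest transformation maps a compressible velocity profile onto an equivalent incompressible one via u_vd = ∫ sqrt(rho/rho_w) du. A minimal numerical sketch, using trapezoidal quadrature over sampled profiles (the sample data below are made up for illustration), is:

```python
import math

def van_driest(u, rho, rho_w):
    """Van Driest effective velocity u_vd = integral of sqrt(rho/rho_w) du,
    evaluated with the trapezoidal rule.  u and rho are profiles sampled
    at the same wall-normal stations; rho_w is the wall density."""
    u_vd = [0.0]
    for i in range(1, len(u)):
        w = 0.5 * (math.sqrt(rho[i] / rho_w) + math.sqrt(rho[i - 1] / rho_w))
        u_vd.append(u_vd[-1] + w * (u[i] - u[i - 1]))
    return u_vd

# Illustrative sampled profiles; with uniform density the transform
# reduces to the identity.
u = [0.0, 5.0, 10.0, 15.0]
rho = [1.0, 1.2, 1.5, 1.8]
u_transformed = van_driest(u, rho, rho_w=1.0)
```

    Plotting u_vd in wall units against y+ is what allows compressible profiles to be compared with the incompressible law of the wall.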

  14. Rigorous analysis of an electric-field-driven liquid crystal lens for 3D displays

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik; Lee, Seung-Chul; Park, Woo-Sang

    2014-08-01

    We numerically analyzed the optical performance of an electric-field-driven liquid crystal (ELC) lens adopted for 3-dimensional liquid crystal displays (3D-LCDs) through rigorous ray tracing. For the calculation, we first obtain the director distribution profile of the liquid crystals by using the Ericksen-Leslie motional equation; then, we calculate the transmission of light through the ELC lens by using the extended Jones matrix method. The simulation was carried out for a 9-view 3D-LCD with a diagonal of 17.1 inches, where the ELC lens was slanted to achieve natural stereoscopic images. The results show that each view exists separately according to the viewing position at an optimum viewing distance of 80 cm. In addition, our simulation results provide a quantitative explanation for the ghost or blurred images between views observed from a 3D-LCD with an ELC lens. The numerical simulations are also shown to be in good agreement with the experimental results. The present simulation method is expected to provide optimum design conditions for obtaining natural 3D images by rigorously analyzing the optical functionalities of an ELC lens.

  15. Approaching the investigation of plasma turbulence through a rigorous verification and validation procedure: A practical example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.

    In the present work, a Verification and Validation procedure is presented and applied showing, through a practical example, how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular, the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.

  16. The MINERVA Software Development Process

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.

    2017-01-01

    This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.

  17. Numerical simulation of crevice corrosion of titanium: Effect of the bold surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evitts, R.W.; Postlethwaite, J.; Watson, M.K.

    1996-12-01

    A rigorous crevice corrosion model has been developed that accounts for the bold metal surfaces exterior to the crevice. The model predicts the time change in concentration of all specified chemical species in the crevice and bulk solution, and has the ability to predict active corrosion. It is applied to the crevice corrosion of a small titanium crevice in both oxygenated and anaerobic sodium chloride solutions. The numerical predictions confirm that oxygen is the driving force for crevice corrosion. During the simulations where oxygen is initially present in both the crevice and bulk solution, an acidic chloride solution is developed; this is the precursor required for crevice corrosion. The anaerobic case displays no tendency to form such a solution. It is also confirmed that those areas in the crevice that are deoxygenated become anodic and the bold metal surface becomes cathodic. As expected, active corrosion is not attained as the simulations are based on electrochemical and chemical parameters at 25 C.

  18. Diffraction efficiency calculations of polarization diffraction gratings with surface relief

    NASA Astrophysics Data System (ADS)

    Nazarova, D.; Sharlandjiev, P.; Berberova, N.; Blagoeva, B.; Stoykova, E.; Nedelchev, L.

    2018-03-01

    In this paper, we evaluate the optical response of a stack of two diffraction gratings of equal one-dimensional periodicity. The first one is a surface-relief grating structure; the second, a volume polarization grating. This model is based on our experimental results from polarization holographic recordings in azopolymer films. We used films of commercially available azopolymer (poly[1-[4-(3-carboxy-4-hydroxyphenylazo) benzenesulfonamido]-1,2-ethanediyl, sodium salt]), shortly denoted as PAZO. During the recording process, a polarization grating in the volume of the material and a relief grating on the film surface are formed simultaneously. In order to evaluate numerically the optical response of this “hybrid” diffraction structure, we used the rigorous coupled-wave approach (RCWA). It yields stable numerical solutions of Maxwell’s vector equations using the algebraic eigenvalue method.

  19. Meso-beta scale numerical simulation studies of terrain-induced jet streak mass/momentum perturbations

    NASA Technical Reports Server (NTRS)

    Lin, Yuh-Lang; Kaplan, Michael L.

    1994-01-01

    An in-depth analysis of observed gravity waves and their relationship to precipitation bands over the Montana mesonetwork during the 1981 CCOPE case study indicates that there were two episodes of coherent internal gravity waves. One of the fundamental unanswered questions from this research, however, concerns the dynamical processes which generated the observed waves, all of which originated from the region encompassing the borders of Montana, Idaho, and Wyoming. While geostrophic adjustment, shearing instability, and terrain were all implicated, separately or in concert, as possible wave generation mechanisms, the lack of upper-air data within the wave genesis region made it difficult to rigorously define the genesis processes from observations alone. In this report we employ a mesoscale numerical model to help diagnose the intricate early wave generation mechanisms during the first observed wave episode.

  20. Internal field distribution of a radially inhomogeneous droplet illuminated by an arbitrary shaped beam

    NASA Astrophysics Data System (ADS)

    Wang, Jia Jie; Wriedt, Thomas; Han, Yi Ping; Mädler, Lutz; Jiao, Yong Chang

    2018-05-01

    Light scattering of a radially inhomogeneous droplet, which is modeled by a multilayered sphere, is investigated within the framework of Generalized Lorenz-Mie Theory (GLMT), with particular efforts devoted to the analysis of the internal field distribution in the cases of shaped beam illumination. To circumvent numerical difficulties in the computation of the internal field for an absorbing/non-absorbing droplet with very large size parameters, a recursive algorithm is proposed by reformulation of the equations for the expansion coefficients. Two approaches are proposed for the prediction of the internal field distribution, namely a rigorous method and an approximation method. The developed computer code is shown to be stable over a wide range of size parameters. Numerical computations are implemented to simulate the internal field distributions of a radially inhomogeneous droplet illuminated by a focused Gaussian beam.

  1. Fully vectorial laser resonator modeling of continuous-wave solid-state lasers including rate equations, thermal lensing and stress-induced birefringence.

    PubMed

    Asoubar, Daniel; Wyrowski, Frank

    2015-07-27

    The computer-aided design of high-quality mono-mode, continuous-wave solid-state lasers requires fast, flexible and accurate simulation algorithms. Therefore in this work a model for the calculation of the transversal dominant mode structure is introduced. It is based on the generalization of the scalar Fox and Li algorithm to a fully-vectorial light representation. To provide a flexible modeling concept for different resonator geometries containing various optical elements, rigorous and approximative solutions of Maxwell's equations are combined in different subdomains of the resonator. This approach allows the simulation of a wide variety of passive intracavity components as well as active media. For the numerically efficient simulation of nonlinear gain, thermal lensing and stress-induced birefringence effects in solid-state active crystals, a semi-analytical vectorial beam propagation method is discussed in detail. As a numerical example, the beam quality and output power of a flash-lamp-pumped Nd:YAG laser are improved. To that end we compensate the influence of stress-induced birefringence and thermal lensing by an aspherical mirror and a 90° quartz polarization rotator.

  2. Highly efficient all-dielectric optical tensor impedance metasurfaces for chiral polarization control.

    PubMed

    Kim, Minseok; Eleftheriades, George V

    2016-10-15

    We propose a highly efficient (nearly lossless and impedance-matched) all-dielectric optical tensor impedance metasurface that mimics chiral effects at optical wavelengths. By cascading an array of rotated crossed silicon nanoblocks, we realize chiral optical tensor impedance metasurfaces that operate as circular polarization selective surfaces. Their efficiencies are maximized through a nonlinear numerical optimization process in which the tensor impedance metasurfaces are modeled via multi-conductor transmission line theory. From rigorous full-wave simulations that include all material losses, we show field transmission efficiencies of 94% for right- and left-handed circular polarization selective surfaces at 800 nm.

  3. Mathematical Analysis of a Coarsening Model with Local Interactions

    NASA Astrophysics Data System (ADS)

    Helmers, Michael; Niethammer, Barbara; Velázquez, Juan J. L.

    2016-10-01

    We consider particles on a one-dimensional lattice whose evolution is governed by nearest-neighbor interactions where particles that have reached size zero are removed from the system. Concentrating on configurations with infinitely many particles, we prove existence of solutions under a reasonable density assumption on the initial data and show that the vanishing of particles and the localized interactions can lead to non-uniqueness. Moreover, we provide a rigorous upper coarsening estimate and discuss generic statistical properties as well as some non-generic behavior of the evolution by means of heuristic arguments and numerical observations.

  4. Existence and Stability of Viscoelastic Shock Profiles

    NASA Astrophysics Data System (ADS)

    Barker, Blake; Lewicka, Marta; Zumbrun, Kevin

    2011-05-01

    We investigate existence and stability of viscoelastic shock profiles for a class of planar models including the incompressible shear case studied by Antman and Malek-Madani. We establish that the resulting equations fall into the class of symmetrizable hyperbolic-parabolic systems, hence spectral stability implies linearized and nonlinear stability with sharp rates of decay. The new contributions are treatment of the compressible case, formulation of a rigorous nonlinear stability theory, including verification of stability of small-amplitude Lax shocks, and the systematic incorporation in our investigations of numerical Evans function computations determining stability of large-amplitude and nonclassical type shock profiles.

  5. The Osher scheme for real gases

    NASA Technical Reports Server (NTRS)

    Suresh, Ambady; Liou, Meng-Sing

    1990-01-01

    An extension of Osher's approximate Riemann solver to include gases with an arbitrary equation of state is presented. By a judicious choice of thermodynamic variables, the Riemann invariants are reduced to quadratures which are then approximated numerically. The extension is rigorous and does not involve any further assumptions or approximations over the ideal gas case. Numerical results are presented to demonstrate the feasibility and accuracy of the proposed method.

  6. Community-wide Validation of Geospace Model Ground Magnetic Field Perturbation Predictions to Support Model Transition to Operations

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Rastaetter, L.; Kuznetsova, M.; Singer, H.; Balch, C.; Weimer, D.; Toth, G.; Ridley, A.; Gombosi, T.; Wiltberger, M.; et al.

    2013-01-01

    In this paper we continue the community-wide rigorous modern space weather model validation efforts carried out within GEM, CEDAR and SHINE programs. In this particular effort, in coordination among the Community Coordinated Modeling Center (CCMC), NOAA Space Weather Prediction Center (SWPC), modelers, and science community, we focus on studying the models' capability to reproduce observed ground magnetic field fluctuations, which are closely related to the geomagnetically induced current phenomenon. One of the primary motivations of the work is to support NOAA SWPC in their selection of the next numerical model that will be transitioned into operations. Six geomagnetic events and 12 geomagnetic observatories were selected for validation. While modeled and observed magnetic field time series are available for all 12 stations, the primary metrics analysis is based on six stations that were selected to represent the high-latitude and mid-latitude locations. Events-based analysis and the corresponding contingency tables were built for each event and each station. The elements in the contingency table were then used to calculate Probability of Detection (POD), Probability of False Detection (POFD) and Heidke Skill Score (HSS) for rigorous quantification of the models' performance. In this paper the summary results of the metrics analyses are reported in terms of POD, POFD and HSS. More detailed analyses can be carried out using the event by event contingency tables provided as an online appendix. An online interface built at CCMC and described in the supporting information is also available for more detailed time series analyses.
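
    The contingency-table metrics used in this validation follow standard forecast-verification definitions; a minimal sketch (the variable names and example counts are ours, not the paper's) is:

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Probability of Detection, Probability of False Detection and
    Heidke Skill Score from a 2x2 contingency table, using the standard
    forecast-verification definitions."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)                 # fraction of observed events detected
    pofd = b / (b + d)                # fraction of non-events falsely flagged
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, pofd, hss

# Illustrative table: 38 hits, 9 misses, 14 false alarms, 60 correct negatives.
pod, pofd, hss = skill_scores(38, 9, 14, 60)
```

    A perfect forecast gives POD = 1, POFD = 0 and HSS = 1, while HSS = 0 indicates no skill beyond chance.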

  7. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data, because measurements of the necessary detail are rare in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  8. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  9. Nonlinear analysis of a model of vascular tumour growth and treatment

    NASA Astrophysics Data System (ADS)

    Tao, Youshan; Yoshida, Norio; Guo, Qian

    2004-05-01

    We consider a mathematical model describing the evolution of a vascular tumour in response to traditional chemotherapy. The model is a free boundary problem for a system of partial differential equations governing intratumoural drug concentration, cancer cell density and blood vessel density. Tumour cells consist of two types of competitive cells that have different proliferation rates and different sensitivities to drugs. The balance between cell proliferation and death generates a velocity field that drives tumour cell movement. The tumour surface is a moving boundary. The purpose of this paper is to establish a rigorous mathematical analysis of the model for studying the dynamics of intratumoural blood vessels and to explore drug dosage for the successful treatment of a tumour. We also study numerically the competitive effects of the two cell types on tumour growth.

  10. Resonant Spectra of Malignant Breast Cancer Tumors Using the Three-Dimensional Electromagnetic Fast Multipole Model. Part 1

    NASA Technical Reports Server (NTRS)

    El-Shenawee, Magda

    2003-01-01

    An intensive numerical study of the resonance scattering of malignant breast cancer tumors is presented. The rigorous three-dimensional electromagnetic model, based on the equivalence theorem, is used to obtain the induced electric and magnetic currents on the breast and tumor surfaces. The results show that a non-spherical malignant tumor can be characterized based on its spectra regardless of its orientation, the incident polarization, or the incident or scattered directions. The tumor's spectra depend solely on its physical characteristics (i.e., the shape and the electrical properties), and the locations of the spectral resonances are not functions of its burial depth. This work provides useful guidance for selecting the appropriate frequency range for the tumor's size.

  11. Boundary control for a constrained two-link rigid-flexible manipulator with prescribed performance

    NASA Astrophysics Data System (ADS)

    Cao, Fangfei; Liu, Jinkun

    2018-05-01

    In this paper, we consider a boundary control problem for a constrained two-link rigid-flexible manipulator. The nonlinear system is described by a hybrid ordinary differential equation-partial differential equation (ODE-PDE) dynamic model. Based on the coupled ODE-PDE model, boundary control is proposed to regulate the joint positions and eliminate the elastic vibration simultaneously. With the help of prescribed performance functions, the tracking error can converge to an arbitrarily small residual set and the convergence rate is no less than a certain pre-specified value. Asymptotic stability of the closed-loop system is rigorously proved by LaSalle's Invariance Principle extended to infinite-dimensional systems. Numerical simulations are provided to demonstrate the effectiveness of the proposed controller.

  12. Near infrared spectroscopy as an on-line method to quantitatively determine glycogen and predict ultimate pH in pre rigor bovine M. longissimus dorsi.

    PubMed

    Lomiwes, D; Reis, M M; Wiklund, E; Young, O A; North, M

    2010-12-01

    The potential of near infrared (NIR) spectroscopy as an on-line method to quantify glycogen and predict ultimate pH (pH(u)) of pre rigor beef M. longissimus dorsi (LD) was assessed. NIR spectra (538 to 1677 nm) of pre rigor LD from steers, cows and bulls were collected early post mortem, and measurements were made of pre rigor glycogen concentration and pH(u). Spectral and measured data were combined to develop models to quantify glycogen and predict the pH(u) of pre rigor LD. Predicted values obtained from the quantitative models were poorly correlated with measured glycogen and pH(u) (r(2)=0.23 and 0.20, respectively). Qualitative models developed to categorize each muscle according to its pH(u) were able to correctly categorize 42% of high pH(u) samples. Optimum qualitative and quantitative models derived from NIR spectra found low correlation between predicted values and reference measurements. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
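
    Calibration quality of the kind reported above (e.g. r(2)=0.23) can be computed as a coefficient of determination between reference measurements and model predictions. One common definition is sketched below; note that some chemometrics software instead reports the squared Pearson correlation, and the paper does not state which convention it used:

```python
def r_squared(reference, predicted):
    """Coefficient of determination, 1 - SS_res / SS_tot, between
    reference measurements and model predictions."""
    n = len(reference)
    mean = sum(reference) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(reference, predicted))
    ss_tot = sum((t - mean) ** 2 for t in reference)
    return 1.0 - ss_res / ss_tot

# Made-up glycogen values (mg/g) purely to illustrate the call:
reference = [10.0, 14.0, 18.0, 22.0]
predicted = [11.0, 13.5, 19.0, 21.0]
r2 = r_squared(reference, predicted)
```

    A value near 1 indicates a useful calibration; values as low as 0.2 signal that the spectra carry little predictive information about the trait.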

  13. Normalization, bias correction, and peak calling for ChIP-seq

    PubMed Central

    Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.

    2012-01-01

    Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases, which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706
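    The signal/background separation described here can be illustrated, in highly simplified form, as a two-component Poisson mixture over binned read counts, fitted by EM. The paper's actual method adds cell type-specific null models and bias corrections; the counts and rates below are synthetic:

```python
import numpy as np

# Synthetic genome bins: mostly background reads, a minority of enriched bins.
rng = np.random.default_rng(1)
counts = np.concatenate([rng.poisson(2.0, 9000),    # background bins
                         rng.poisson(20.0, 1000)])  # enriched (signal) bins

lam = np.array([1.0, 10.0])     # initial Poisson rates for the two components
pi = np.array([0.5, 0.5])       # initial mixing weights
for _ in range(200):
    # E-step: per-bin responsibilities (log-space; k! cancels between components)
    logp = counts[:, None] * np.log(lam) - lam + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights and rates
    pi = resp.mean(axis=0)
    lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)

signal = resp[:, 1] > 0.5       # bins assigned to the signal component
print(f"estimated rates {lam.round(1)}, flagged {signal.sum()} enriched bins")
```

    The mixture recovers the background and signal rates and flags roughly the planted 1000 enriched bins; real ChIP-seq data require the additional bias terms the abstract describes.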

  14. Hydro-geophysical observations integration in numerical model: case study in Mediterranean karstic unsaturated zone (Larzac, france)

    NASA Astrophysics Data System (ADS)

    Champollion, Cédric; Fores, Benjamin; Le Moigne, Nicolas; Chéry, Jean

    2016-04-01

    Karstic hydro-systems are highly non-linear and heterogeneous, yet they are one of the main water resources in the Mediterranean area. Neither local measurements in boreholes nor analysis at the spring can capture the variability of water storage. In recent years, ground-based geophysical measurements (such as gravity, electrical resistivity or seismological data) have made it possible to follow water storage in heterogeneous hydrosystems at an intermediate scale between boreholes and basin. Beyond classical rigorous monitoring, the integration of geophysical data into hydrological numerical models is needed for both process interpretation and quantification. A karstic geophysical observatory (GEK: Géodésie de l'Environnement Karstique, OSU OREME, SNO H+) has been set up in the Mediterranean area in the south of France. The observatory overlies more than 250 m of karstified dolomite, with an unsaturated zone about 150 m thick. At the observatory, water level in boreholes, evapotranspiration and rainfall are classical hydro-meteorological observations, complemented by continuous gravity, resistivity and seismological measurements. The main objective of the study is the modelling of the whole observation dataset with an explicit one-dimensional unsaturated numerical model. The Hydrus software is used for explicit modelling of water storage and transfer, and links the different observations (geophysics, water level, evapotranspiration) to water saturation. Unknown hydrological parameters (permeability, porosity) are retrieved from stochastic inversions. The scales of investigation of the different observations are discussed in light of the modelling results. A sensitivity study of the measurements against the model is performed and the key hydro-geological processes of the site are presented.

  15. A Flux-Corrected Transport Based Hydrodynamic Model for the Plasmasphere Refilling Problem following Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Chatterjee, K.; Schunk, R. W.

    2017-12-01

    The refilling of the plasmasphere following a geomagnetic storm remains one of the longstanding problems in the area of ionosphere-magnetosphere coupling. Both diffusion and hydrodynamic approximations have been adopted for the modeling and solution of this problem. The diffusion approximation neglects the nonlinear inertial term in the momentum equation and so this approximation is not rigorously valid immediately after the storm. Over the last few years, we have developed a hydrodynamic refilling model using the flux-corrected transport method, a numerical method that is extremely well suited to handling nonlinear problems with shocks and discontinuities. The plasma transport equations are solved along 1D closed magnetic field lines that connect conjugate ionospheres and the model currently includes three ion (H+, O+, He+) and two neutral (O, H) species. In this work, each ion species under consideration has been modeled as two separate streams emanating from the conjugate hemispheres and the model correctly predicts supersonic ion speeds and the presence of high levels of helium during the early hours of refilling. The ultimate objective of this research is the development of a 3D model for the plasmasphere refilling problem and, with additional development, the same methodology can potentially be applied to the study of other complex space plasma coupling problems in closed flux tube geometries. Index Terms: 2447 Modeling and forecasting [IONOSPHERE] 2753 Numerical modeling [MAGNETOSPHERIC PHYSICS] 7959 Models [SPACE WEATHER]
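    Flux-corrected transport, the numerical method named above, can be sketched in one dimension: a monotone upwind step plus a limited antidiffusive correction (the classic Boris-Book limiter). This is a toy advection example of the method class, not the plasmasphere model itself:

```python
import numpy as np

# Advect a square pulse with 1D FCT on a periodic grid.
nx, c = 200, 0.5                      # cells, Courant number u*dt/dx
u = np.zeros(nx)
u[40:80] = 1.0                        # square pulse

for _ in range(100):
    um = np.roll(u, 1)                # u[i-1]
    # low-order (upwind, monotone) transported solution
    utd = u - c * (u - um)
    # antidiffusive flux at i+1/2: Lax-Wendroff minus upwind
    a = 0.5 * c * (1.0 - c) * (np.roll(u, -1) - u)
    # Boris-Book limiter: the correction must not create new extrema in utd
    s = np.sign(a)
    d_right = np.roll(utd, -2) - np.roll(utd, -1)   # utd[i+2] - utd[i+1]
    d_left = utd - np.roll(utd, 1)                  # utd[i]   - utd[i-1]
    a_lim = s * np.maximum(0.0, np.minimum.reduce([np.abs(a), s * d_right, s * d_left]))
    # conservative update with limited antidiffusive fluxes
    u = utd - (a_lim - np.roll(a_lim, 1))

print(f"min={u.min():.3f} max={u.max():.3f}")  # stays within [0, 1]: no spurious ripples
```

    The limited correction keeps the pulse sharp without the over- and undershoots a plain high-order scheme would produce at the discontinuities, which is why FCT suits shock-bearing refilling flows.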

  16. Rigorous Electromagnetic Analysis of the Focusing Action of Refractive Cylindrical Microlens

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Gu, Ben-Yuan; Dong, Bi-Zhen; Yang, Guo-Zhen

    The focusing action of refractive cylindrical microlens is investigated based on the rigorous electromagnetic theory with the use of the boundary element method. The focusing behaviors of these refractive microlenses with continuous and multilevel surface-envelope are characterized in terms of total electric-field patterns, the electric-field intensity distributions on the focal plane, and their diffractive efficiencies at the focal spots. The obtained results are also compared with the ones obtained by Kirchhoff's scalar diffraction theory. The present numerical and graphical results may provide useful information for the analysis and design of refractive elements in micro-optics.

  17. Necromechanics: Death-induced changes in the mechanical properties of human tissues.

    PubMed

    Martins, Pedro A L S; Ferreira, Francisca; Natal Jorge, Renato; Parente, Marco; Santos, Agostinho

    2015-05-01

    After the death phenomenon, the development of rigor mortis, characterized by body stiffening, is one of the most evident changes that occur in the body. In this work, the development of rigor mortis was assessed using a skinfold caliper in human cadavers and in live people to measure the deformation in the biceps brachii muscle in response to the force applied by the device. Additionally, to simulate the measurements with the finite element method, a two-dimensional model of an arm section was used. As a result of the experimental procedure, a decrease in deformation with increasing postmortem time was observed, which corresponds to an increase in rigidity. As expected, the deformations for the live subjects were higher. The finite element method analysis showed a correlation between the c1 parameter of the neo-Hookean model and postmortem time in the 4- to 8-h postmortem interval. This was accomplished by adjusting the c1 material parameter in order to simulate the measured experimental displacement. Despite being a preliminary study, the obtained results show that combining the proposed experimental procedure with a numerical technique can be very useful in the study of the postmortem mechanical modifications of human tissues. Moreover, the use of data from living subjects allows us to estimate the time of death, paving the way to establishing this process as an alternative to the existing techniques. This solution constitutes a portable, non-invasive method of estimating the postmortem interval with direct quantitative measurements using a skinfold caliper. The tools and methods described can be used to investigate the subject and to gain epidemiologic knowledge on the rigor mortis phenomenon. © IMechE 2015.

  18. Boundary-layer effects in composite laminates: Free-edge stress singularities, part 6

    NASA Technical Reports Server (NTRS)

    Wang, S. S.; Choi, I.

    1981-01-01

    A rigorous mathematical model was obtained for the boundary-layer free-edge stress singularity in angleplied and crossplied fiber composite laminates. The solution was obtained using a method consisting of complex-variable stress function potentials and eigenfunction expansions. The required order of the boundary-layer stress singularity is determined by solving the transcendental characteristic equation obtained from the homogeneous solution of the partial differential equations. Numerical results obtained show that the boundary-layer stress singularity depends only upon material elastic constants and fiber orientation of the adjacent plies. For angleplied and crossplied laminates the order of the singularity is weak in general.

  19. On the Concept of Random Orientation in Far-Field Electromagnetic Scattering by Nonspherical Particles

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Yurkin, Maxim A.

    2017-01-01

    Although the model of randomly oriented nonspherical particles has been used in a great variety of applications of far-field electromagnetic scattering, it has never been defined in strict mathematical terms. In this Letter we use the formalism of Euler rigid-body rotations to clarify the concept of statistically random particle orientations and derive its immediate corollaries in the form of most general mathematical properties of the orientation-averaged extinction and scattering matrices. Our results serve to provide a rigorous mathematical foundation for numerous publications in which the notion of randomly oriented particles and its light-scattering implications have been considered intuitively obvious.

  20. Global adaptive control for uncertain nonaffine nonlinear hysteretic systems.

    PubMed

    Liu, Yong-Hua; Huang, Liangpei; Xiao, Dongming; Guo, Yong

    2015-09-01

    In this paper, the global output tracking is investigated for a class of uncertain nonlinear hysteretic systems with nonaffine structures. By combining the solution properties of the hysteresis model with the novel backstepping approach, a robust adaptive control algorithm is developed without constructing a hysteresis inverse. The proposed control scheme is further modified to tackle the bounded disturbances by adaptively estimating their bounds. It is rigorously proven that the designed adaptive controllers can guarantee global stability of the closed-loop system. Two numerical examples are provided to show the effectiveness of the proposed control schemes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
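    The idea of tackling bounded disturbances by adaptively estimating their bounds can be illustrated on a toy scalar plant. This is not the paper's nonaffine hysteretic backstepping design; the plant, gains, reference, and disturbance below are invented for illustration only:

```python
import numpy as np

# Scalar plant x' = a*x + u + d(t), with unknown a and unknown bound |d| <= D.
# The controller adapts estimates a_hat and d_hat online while tracking r(t) = sin(t).
dt, T = 1e-3, 30.0
a_true = 1.5                               # unknown to the controller
x, a_hat, d_hat = 0.0, 0.0, 0.0            # state and adaptive estimates
k, gamma = 5.0, 2.0                        # feedback and adaptation gains
for step in range(int(T / dt)):
    t = step * dt
    r, r_dot = np.sin(t), np.cos(t)
    e = x - r
    # control: cancel estimated dynamics plus a robust term using the estimated bound
    # (tanh smooths the sign function to avoid chattering)
    u = -k * e + r_dot - a_hat * x - d_hat * np.tanh(e / 0.01)
    # adaptation laws (gradient-type, from a standard Lyapunov argument)
    a_hat += gamma * e * x * dt
    d_hat += gamma * abs(e) * dt
    d = 0.8 * np.sin(3.0 * t)              # bounded disturbance, |d| <= 0.8
    x += (a_true * x + u + d) * dt         # forward-Euler plant step

print(f"final tracking error |e| = {abs(x - np.sin(T)):.4f}")
```

    The bound estimate d_hat grows only while the tracking error is nonzero, so it settles once the robust term dominates the disturbance, giving a small residual error without any knowledge of the true bound.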

  1. High-order FDTD methods for transverse electromagnetic systems in dispersive inhomogeneous media.

    PubMed

    Zhao, Shan

    2011-08-15

    This Letter introduces a novel finite-difference time-domain (FDTD) formulation for solving transverse electromagnetic systems in dispersive media. Based on the auxiliary differential equation approach, the Debye dispersion model is coupled with Maxwell's equations to derive a supplementary ordinary differential equation for describing the regularity changes in electromagnetic fields at the dispersive interface. The resulting time-dependent jump conditions are rigorously enforced in the FDTD discretization by means of the matched interface and boundary scheme. High-order convergences are numerically achieved for the first time in the literature in the FDTD simulations of dispersive inhomogeneous media. © 2011 Optical Society of America
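    The auxiliary differential equation (ADE) approach mentioned above couples a polarization ODE to the Yee update. A minimal 1D sketch with an explicit relaxation update for a Debye half-space follows; it uses normalized units and invented parameters, and illustrates only the ADE-FDTD coupling, not the paper's matched interface and boundary scheme:

```python
import numpy as np

# 1D Yee-grid FDTD with a Debye medium via an ADE for the polarization P:
#   tau dP/dt + P = eps0 * d_eps * E,   D = eps0*eps_inf*E + P.
# Normalized units (eps0 = mu0 = c = 1); right half of the grid is dispersive.
nx, nt = 400, 800
dx, dt = 1.0, 0.5                     # Courant number 0.5
eps_inf, d_eps, tau = 1.0, 2.0, 25.0  # illustrative Debye parameters
ez = np.zeros(nx); dz = np.zeros(nx); hy = np.zeros(nx); p = np.zeros(nx)
debye = np.zeros(nx); debye[nx // 2:] = 1.0   # mask for the dispersive half-space

for n in range(nt):
    hy[:-1] += dt / dx * (ez[1:] - ez[:-1])        # update H from curl E
    dz[1:] += dt / dx * (hy[1:] - hy[:-1])         # update D from curl H
    dz[50] += np.exp(-((n - 60) / 20.0) ** 2)      # soft Gaussian source
    # ADE update for the Debye polarization (explicit relaxation; stable for dt << tau)
    p += dt / tau * (debye * d_eps * ez - p)
    ez = (dz - p) / eps_inf                        # recover E from D and P

print(f"max |Ez| = {np.abs(ez).max():.3f}")
```

    The pulse launched in the vacuum half partially reflects at the interface and relaxes inside the Debye half, all without storing convolution history, which is the practical appeal of the ADE formulation.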

  2. Optical near-field analysis of spherical metals: Application of the FDTD method combined with the ADE method.

    PubMed

    Yamaguchi, Takashi; Hinata, Takashi

    2007-09-03

    The time-average energy density of the optical near-field generated around a metallic sphere is computed using the finite-difference time-domain method. To check the accuracy, the numerical results are compared with the rigorous solutions from Mie theory. The Lorentz-Drude model, which is coupled with Maxwell's equations via the equations of motion of an electron, is applied to simulate the dispersion relation of metallic materials. The distributions of the optical near-field generated around a metallic hemisphere and a metallic spheroid are also computed, and strong optical near-fields are obtained at their rims.

  3. Observations and Numerical Modeling of the 2012 Haida Gwaii Tsunami off the Coast of British Columbia

    NASA Astrophysics Data System (ADS)

    Fine, Isaac V.; Cherniawsky, Josef Y.; Thomson, Richard E.; Rabinovich, Alexander B.; Krassovski, Maxim V.

    2015-03-01

    A major (Mw 7.7) earthquake occurred on October 28, 2012 along the Queen Charlotte Fault Zone off the west coast of Haida Gwaii (formerly the Queen Charlotte Islands). The earthquake was the second strongest instrumentally recorded earthquake in Canadian history and generated the largest local tsunami ever recorded on the coast of British Columbia. A field survey on the Pacific side of Haida Gwaii revealed maximum runup heights of up to 7.6 m at sites sheltered from storm waves and 13 m in a small inlet that is less sheltered from storms (Leonard and Bednarski 2014). The tsunami was recorded by tide gauges along the coast of British Columbia, by open-ocean bottom pressure sensors of the NEPTUNE facility at Ocean Networks Canada's cabled observatory located seaward of southwestern Vancouver Island, and by several DART stations located in the northeast Pacific. The tsunami observations, in combination with rigorous numerical modeling, enabled us to determine the physical properties of this event and to correct the location of the tsunami source with respect to the initial geophysical estimates. The initial model results were used to specify sites of particular interest for post-tsunami field surveys on the coast of Moresby Island (Haida Gwaii), while field survey observations (Leonard and Bednarski 2014) were used, in turn, to verify the numerical simulations based on the corrected source region.

  4. Troyer Syndrome

    MedlinePlus


  5. Network-based stochastic semisupervised learning.

    PubMed

    Silva, Thiago Christiano; Zhao, Liang

    2012-03-01

    Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
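    A much-reduced sketch of network-based semisupervised classification: clamped label propagation over a similarity graph built from the input data. The paper's particle competition-cooperation dynamics are richer than this; the data and kernel below are synthetic:

```python
import numpy as np

# Two Gaussian blobs, one labeled point in each; labels diffuse over a
# Gaussian-kernel similarity graph (a random-walk view of label propagation).
rng = np.random.default_rng(2)
x = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
n = len(x)
d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
w = np.exp(-d2)                                       # similarity weights
np.fill_diagonal(w, 0.0)
p = w / w.sum(axis=1, keepdims=True)                  # row-stochastic walk matrix

labels = np.zeros((n, 2)); labels[0, 0] = 1.0; labels[30, 1] = 1.0
f = labels.copy()
for _ in range(200):
    f = p @ f                                         # propagate label mass
    f[0] = labels[0]; f[30] = labels[30]              # clamp the labeled nodes

pred = f.argmax(axis=1)
print(f"class-0 share in first blob: {(pred[:30] == 0).mean():.2f}")
```

    With well-separated clusters, two labeled samples suffice to classify nearly everything, which is the regime where network-based semisupervised methods shine.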

  6. Determination of Diffusion Coefficients in Cement-Based Materials: An Inverse Problem for the Nernst-Planck and Poisson Models

    NASA Astrophysics Data System (ADS)

    Szyszkiewicz-Warzecha, Krzysztof; Jasielec, Jerzy J.; Fausek, Janusz; Filipek, Robert

    2016-08-01

    Transport properties of ions have a significant impact on the possibility of rebar corrosion; thus, knowledge of the diffusion coefficient is important for reinforced concrete durability. Numerous tests for the determination of diffusion coefficients have been proposed, but analysis of some of these tests shows that they are too simplistic or even not valid. Hence, more rigorous models to calculate the coefficients should be employed. Here we propose the Nernst-Planck and Poisson equations, which take into account the concentration and electric potential fields. Based on this model a special inverse method is presented for determination of a chloride diffusion coefficient. It requires the measurement of concentration profiles or flux on the boundary and solution of the NPP model to define the goal function. Finding the global minimum is equivalent to the determination of diffusion coefficients. Typical examples of the application of the presented method are given.
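    The inverse method can be sketched with plain Fickian diffusion standing in for the full Nernst-Planck-Poisson system: a forward model generates a chloride profile, and a scan over candidate diffusion coefficients minimizes the misfit (the "goal function"). All values below are synthetic:

```python
import math
import numpy as np

# Forward model: semi-infinite diffusion, C(x,t) = C0 * erfc(x / (2*sqrt(D*t))).
D_true, C0, t = 5e-12, 1.0, 86400.0 * 90          # m^2/s, surface conc., 90 days
x = np.linspace(0.0, 0.03, 40)                    # depth, m
erfc = np.vectorize(math.erfc)
rng = np.random.default_rng(3)
measured = C0 * erfc(x / (2.0 * np.sqrt(D_true * t))) + rng.normal(0, 0.01, x.size)

def misfit(d):
    """Goal function: sum of squared residuals between model and 'measured' profile."""
    model = C0 * erfc(x / (2.0 * np.sqrt(d * t)))
    return float(np.sum((model - measured) ** 2))

# crude global search over candidate diffusion coefficients
candidates = np.logspace(-13, -10, 400)
d_est = candidates[np.argmin([misfit(d) for d in candidates])]
print(f"estimated D = {d_est:.2e} m^2/s (true {D_true:.0e})")
```

    The NPP-based goal function in the abstract plays the same role as misfit() here, but with a numerically solved coupled PDE system instead of a closed-form profile.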

  7. Symbolic Number Comparison Is Not Processed by the Analog Number System: Different Symbolic and Non-symbolic Numerical Distance and Size Effects

    PubMed Central

    Krajcsi, Attila; Lengyel, Gábor; Kojouharova, Petia

    2018-01-01

    HIGHLIGHTS: We test whether symbolic number comparison is handled by an analog noisy system. The analog system model shows systematic biases in describing symbolic number comparison. This suggests that symbolic and non-symbolic numbers are processed by different systems. Dominant numerical cognition models suppose that both symbolic and non-symbolic numbers are processed by the Analog Number System (ANS) working according to Weber's law. It was proposed that in a number comparison task the numerical distance and size effects reflect a ratio-based performance, which is the sign of ANS activation. However, an increasing number of findings and alternative models propose that symbolic and non-symbolic numbers might be processed by different representations. Importantly, alternative explanations may offer predictions similar to the ANS prediction; therefore, former evidence usually utilizing only the goodness of fit of the ANS prediction is not sufficient to support the ANS account. To test the ANS model more rigorously, a more extensive test is offered here. Several properties of the ANS predictions for the error rates, reaction times, and diffusion model drift rates were systematically analyzed in both non-symbolic dot comparison and symbolic Indo-Arabic comparison tasks. It was consistently found that while the ANS model's prediction is relatively good for the non-symbolic dot comparison, its prediction is poorer and systematically biased for the symbolic Indo-Arabic comparison. We conclude that only non-symbolic comparison is supported by the ANS, and symbolic number comparisons are processed by another representation. PMID:29491845
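    The ratio-based ANS prediction tested in this record follows from Weber's law: comparison performance should depend only on the ratio of the two numbers, not their absolute sizes. A small sketch of that prediction under a log-Gaussian representation; the Weber fraction w = 0.15 is an arbitrary illustrative value:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ans_error_rate(n1, n2, w=0.15):
    """Predicted error rate for comparing n1 vs n2 under a log-Gaussian ANS
    with Weber fraction w: P(error) = Phi(-|ln(n2/n1)| / (sqrt(2)*w))."""
    return phi(-abs(math.log(n2 / n1)) / (math.sqrt(2.0) * w))

# size invariance: same ratio -> same predicted error rate
for n1, n2 in [(2, 3), (4, 6), (8, 12)]:
    print(n1, n2, round(ans_error_rate(n1, n2), 4))

# distance effect: larger ratio difference -> fewer errors
print(round(ans_error_rate(5, 6), 4), ">", round(ans_error_rate(5, 9), 4))
```

    The paper's test asks whether observed error rates, reaction times, and drift rates actually follow this ratio-only pattern; for symbolic digits they systematically do not.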

  8. Collisional damping rates for plasma waves

    NASA Astrophysics Data System (ADS)

    Tigik, S. F.; Ziebell, L. F.; Yoon, P. H.

    2016-06-01

    The distinction between the plasma dynamics dominated by collisional transport versus collective processes has never been rigorously addressed until recently. A recent paper [P. H. Yoon et al., Phys. Rev. E 93, 033203 (2016)] formulates, for the first time, a unified kinetic theory in which collective processes and collisional dynamics are systematically incorporated from first principles. One of the outcomes of such a formalism is the rigorous derivation of collisional damping rates for Langmuir and ion-acoustic waves, which can be contrasted to the heuristic customary approach. However, the results are given only in formal mathematical expressions. The present brief communication numerically evaluates the rigorous collisional damping rates by considering the case of plasma particles with a Maxwellian velocity distribution function, so as to assess the consequence of the rigorous formalism in a quantitative manner. Comparison with the heuristic ("Spitzer") formula shows that the accurate damping rates are much lower in magnitude than the conventional expression, which implies that the traditional approach overestimates the importance of attenuation of plasma waves by collisional relaxation processes. Such a finding may have a wide applicability ranging from laboratory to space and astrophysical plasmas.

  9. Transient Ischemic Attack

    MedlinePlus


  10. Morphing continuum theory for turbulence: Theory, computation, and visualization.

    PubMed

    Chen, James

    2017-10-01

    A high-order morphing continuum theory (MCT) is introduced to model highly compressible turbulence. The theory is formulated under the rigorous framework of rational continuum mechanics. A set of linear constitutive equations and balance laws are deduced and presented from the Coleman-Noll procedure and Onsager's reciprocal relations. The governing equations are then arranged in conservation form and solved through the finite volume method with a second-order Lax-Friedrichs scheme for shock preservation. A numerical example of transonic flow over a three-dimensional bump is presented using MCT and the finite volume method. The comparison shows that MCT-based direct numerical simulation (DNS) provides a better prediction than Navier-Stokes (NS)-based DNS with less than 10% of the mesh count when compared with experiments. An MCT-based and frame-indifferent Q criterion is also derived to show the coherent eddy structure of the downstream turbulence in the numerical example. It should be emphasized that unlike the NS-based Q criterion, the MCT-based Q criterion is objective without the limitation of Galilean invariance.
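    For contrast with the frame-indifferent MCT-based criterion derived in the abstract, the classical NS-based Q criterion is easy to state: Q = ½(‖Ω‖² − ‖S‖²) from the velocity gradient tensor, positive where rotation dominates strain. A minimal sketch on a single solid-body-rotation sample:

```python
import numpy as np

# Velocity gradient for solid-body rotation about the z-axis (an invented sample).
grad_u = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])
s = 0.5 * (grad_u + grad_u.T)       # strain-rate tensor (symmetric part)
omega = 0.5 * (grad_u - grad_u.T)   # rotation-rate tensor (antisymmetric part)
q = 0.5 * (np.sum(omega**2) - np.sum(s**2))
print(f"Q = {q:.2f}")               # positive: vortex-dominated region
```

    This classical form is Galilean- but not frame-indifferent, since the rotation-rate tensor changes under a rotating observer; that is exactly the limitation the MCT-based criterion in the abstract is designed to remove.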

  11. Stochastic Ocean Predictions with Dynamically-Orthogonal Primitive Equations

    NASA Astrophysics Data System (ADS)

    Subramani, D. N.; Haley, P., Jr.; Lermusiaux, P. F. J.

    2017-12-01

    The coastal ocean is a prime example of multiscale nonlinear fluid dynamics. Ocean fields in such regions are complex and intermittent with unstationary heterogeneous statistics. Due to the limited measurements, there are multiple sources of uncertainties, including the initial conditions, boundary conditions, forcing, parameters, and even the model parameterizations and equations themselves. For efficient and rigorous quantification and prediction of these uncertainties, the stochastic Dynamically Orthogonal (DO) PDEs for a primitive equation ocean modeling system with a nonlinear free-surface are derived and numerical schemes for their space-time integration are obtained. Detailed numerical studies with idealized-to-realistic regional ocean dynamics are completed. These include consistency checks for the numerical schemes and comparisons with ensemble realizations. As an illustrative example, we simulate the 4-d multiscale uncertainty in the Middle Atlantic/New York Bight region during the months of Jan to Mar 2017. To provide initial conditions for the uncertainty subspace, uncertainties in the region were objectively analyzed using historical data. The DO primitive equations were subsequently integrated in space and time. The probability distribution function (pdf) of the ocean fields is compared to in-situ, remote sensing, and opportunity data collected during the coincident POSYDON experiment. Results show that our probabilistic predictions had skill and are 3 to 4 orders of magnitude faster than classic ensemble schemes.

  12. Numerical and Experimental Investigation of the Effects of Acceleration Disturbances on Microgravity Experiments

    NASA Technical Reports Server (NTRS)

    Ramachandran, Narayanan

    2000-01-01

    Normal vibrational modes on large spacecraft are excited by crew activity, operating machinery, and other mechanical disturbances. Periodic engine burns for maintaining vehicle attitude and random impulse type disturbances also contribute to the acceleration environment of a spacecraft. Accelerations from these vibrations (often referred to as g-jitter) are several orders of magnitude larger than the residual accelerations from atmospheric drag and gravity gradient effects. Naturally, the effects of such accelerations have been a concern to prospective experimenters wishing to take advantage of the microgravity environment offered by spacecraft operating in low Earth orbit, and the topic has been studied extensively, both numerically and analytically. However, these studies have not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments, nor have they produced scaling laws that a prospective experimenter could use to assess how his/her experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling, thereby providing comparative theoretical and experimental data. The modeling, it is hoped, will provide a predictive tool that can be used for assessing experiment response to spacecraft vibrations.

  13. Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models

    NASA Astrophysics Data System (ADS)

    Candy, Adam S.; Pietrzak, Julie D.

    2018-01-01

    The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure a provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to carefully describe, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural language based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated to expected discrete characteristics and metrics. Library code, verification tests, and examples available in the repository at https://github.com/shingleproject/Shingle. Further details of the project presented at http://shingleproject.org.

  14. Differential geometry based solvation model I: Eulerian formulation

    NASA Astrophysics Data System (ADS)

    Chen, Zhan; Baker, Nathan A.; Wei, G. W.

    2010-11-01

    This paper presents a differential geometry based model for the analysis and computation of the equilibrium property of solvation. Differential geometry theory of surfaces is utilized to define and construct smooth interfaces with good stability and differentiability for use in characterizing the solvent-solute boundaries and in generating continuous dielectric functions across the computational domain. A total free energy functional is constructed to couple polar and nonpolar contributions to the solvation process. Geometric measure theory is employed to rigorously convert a Lagrangian formulation of the surface energy into an Eulerian formulation so as to bring all energy terms into an equal footing. By optimizing the total free energy functional, we derive coupled generalized Poisson-Boltzmann equation (GPBE) and generalized geometric flow equation (GGFE) for the electrostatic potential and the construction of realistic solvent-solute boundaries, respectively. By solving the coupled GPBE and GGFE, we obtain the electrostatic potential, the solvent-solute boundary profile, and the smooth dielectric function, and thereby improve the accuracy and stability of implicit solvation calculations. We also design efficient second-order numerical schemes for the solution of the GPBE and GGFE. The matrix resulting from the discretization of the GPBE is accelerated with appropriate preconditioners. An alternative direct implicit (ADI) scheme is designed to improve the stability of solving the GGFE. Two iterative approaches are designed to solve the coupled system of nonlinear partial differential equations. Extensive numerical experiments are designed to validate the present theoretical model, test computational methods, and optimize numerical algorithms. Example solvation analyses of both small compounds and proteins are carried out to further demonstrate the accuracy, stability, efficiency and robustness of the present new model and numerical approaches. Comparison is given to both experimental and theoretical results in the literature.

  15. Differential geometry based solvation model I: Eulerian formulation

    PubMed Central

    Chen, Zhan; Baker, Nathan A.; Wei, G. W.

    2010-01-01

    This paper presents a differential geometry based model for the analysis and computation of the equilibrium property of solvation. Differential geometry theory of surfaces is utilized to define and construct smooth interfaces with good stability and differentiability for use in characterizing the solvent-solute boundaries and in generating continuous dielectric functions across the computational domain. A total free energy functional is constructed to couple polar and nonpolar contributions to the solvation process. Geometric measure theory is employed to rigorously convert a Lagrangian formulation of the surface energy into an Eulerian formulation so as to put all energy terms on an equal footing. By minimizing the total free energy functional, we derive a coupled generalized Poisson-Boltzmann equation (GPBE) and generalized geometric flow equation (GGFE) for the electrostatic potential and for the construction of realistic solvent-solute boundaries, respectively. By solving the coupled GPBE and GGFE, we obtain the electrostatic potential, the solvent-solute boundary profile, and the smooth dielectric function, and thereby improve the accuracy and stability of implicit solvation calculations. We also design efficient second-order numerical schemes for the solution of the GPBE and GGFE. Solution of the linear system resulting from the discretization of the GPBE is accelerated with appropriate preconditioners. An alternating direction implicit (ADI) scheme is designed to improve the stability of solving the GGFE. Two iterative approaches are designed to solve the coupled system of nonlinear partial differential equations. Extensive numerical experiments are designed to validate the present theoretical model, test computational methods, and optimize numerical algorithms. Example solvation analyses of both small compounds and proteins are carried out to further demonstrate the accuracy, stability, efficiency and robustness of the present new model and numerical approaches. Comparison is given to both experimental and theoretical results in the literature. PMID:20938489
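
    As a hedged illustration of the implicit solves underlying alternating direction implicit (ADI) schemes like the one mentioned above (not the authors' GGFE solver), here is a single backward-Euler diffusion step in one dimension, solved with the Thomas tridiagonal algorithm:

```python
def implicit_diffusion_step(u, alpha):
    """One backward-Euler step of u_t = u_xx on a 1D grid with fixed
    (Dirichlet) end values, solved with the Thomas tridiagonal algorithm.
    alpha = dt / dx**2; the step is stable for any alpha."""
    n = len(u)
    # Tridiagonal system: -alpha*x[i-1] + (1+2*alpha)*x[i] - alpha*x[i+1] = u[i]
    a = [-alpha] * n               # sub-diagonal
    b = [1.0 + 2.0 * alpha] * n    # main diagonal
    c = [-alpha] * n               # super-diagonal
    d = list(u)                    # right-hand side
    b[0] = b[-1] = 1.0             # pin the boundary values
    c[0] = a[-1] = 0.0
    for i in range(1, n):          # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n                  # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```

    In an ADI scheme, sweeps of this kind are applied along one coordinate direction at a time, so each sub-step stays unconditionally stable at tridiagonal cost.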

  16. A private DNA motif finding algorithm.

    PubMed

    Chen, Rui; Peng, Yun; Choi, Byron; Xu, Jianliang; Hu, Haibo

    2014-08-01

    With the increasing availability of genomic sequence data, numerous methods have been proposed for finding DNA motifs. The discovery of DNA motifs serves as a critical step in many biological applications. However, the privacy implication of DNA analysis is normally neglected in the existing methods. In this work, we propose a private DNA motif finding algorithm in which a DNA owner's privacy is protected by a rigorous privacy model, known as ∊-differential privacy. It provides provable privacy guarantees that are independent of adversaries' background knowledge. Our algorithm makes use of the n-gram model and is optimized for processing large-scale DNA sequences. We evaluate the performance of our algorithm over real-life genomic data and demonstrate the promise of integrating privacy into DNA motif finding. Copyright © 2014 Elsevier Inc. All rights reserved.
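
    The Laplace mechanism underlying ∊-differential privacy can be sketched for n-gram counts as follows. This is a simplified illustration that treats each n-gram occurrence as one record (so each count has L1 sensitivity 1); it is not the paper's optimized algorithm:

```python
import random
from collections import Counter

def laplace_noise(scale):
    # The difference of two Exp(1/scale) draws is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_ngram_counts(sequence, n, epsilon):
    """Release noisy n-gram counts via the Laplace mechanism.

    Simplifying assumption (not the paper's algorithm): each n-gram
    occurrence is one record, so Laplace(1/epsilon) noise per count
    suffices for epsilon-differential privacy of a single occurrence.
    """
    counts = Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))
    return {g: c + laplace_noise(1.0 / epsilon) for g, c in counts.items()}
```

    In the realistic setting, one whole DNA sequence is the unit of privacy, so the sensitivity (and hence the noise) is much larger; controlling that blow-up is exactly what the paper's n-gram construction is for.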

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, R.M.

    A new statistical model (the quantum-statistical model (QSM)) was recently introduced by Kalitkin and Kuzmina for the calculation of thermodynamic properties of compressed matter. This paper examines the QSM and gives (i) a numerical QSM calculation of pressure and energy for aluminum and comparison to existing augmented-plane-wave data; (ii) display of separate kinetic, exchange, and quantum pressure terms; (iii) a study of electron density at the nucleus; (iv) a study of the effects of the Kirzhnitz-Weizsacker parameter controlling the gradient terms; (v) an analytic expansion for very high densities; and (vi) rigorous pressure theorems including a general version of the virial theorem which applies to an arbitrary microscopic volume. It is concluded that the QSM represents the most accurate and consistent theory of the Thomas-Fermi type.

  18. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filter estimates for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous purposes. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  19. Hybrid Rocket Performance Prediction with Coupling Method of CFD and Thermal Conduction Calculation

    NASA Astrophysics Data System (ADS)

    Funami, Yuki; Shimada, Toru

    The final purpose of this study is to develop a design tool for hybrid rocket engines. This tool is a computer code which will be used to investigate rocket performance characteristics and unsteady phenomena lasting through the burning time, such as fuel regression or combustion oscillation. Rigorous models of the phenomena inside the combustion chamber, namely boundary-layer combustion, are impractical for this purpose because their computational cost is too high, so simpler models are required. In this study, quasi-one-dimensional compressible Euler equations for the flowfield inside the chamber and the equation for thermal conduction inside the solid fuel are numerically solved. The energy balance equation at the solid fuel surface is solved to estimate the fuel regression rate. The heat-feedback model is Karabeyoglu's model, which depends on the total mass flux. The combustion model is either a global single-step reaction model with 4 chemical species or a chemical equilibrium model with 9 chemical species. As a first step, steady-state solutions are reported.
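
    A minimal sketch of a mass-flux-dependent regression law of the kind referenced above, with illustrative HTPB-class constants (the abstract gives no coefficient values, so a and n below are assumptions):

```python
import math

def regression_rate(G, a=2.0e-5, n=0.62):
    """Power-law regression rate rdot = a * G**n (m/s) for total mass flux
    G (kg m^-2 s^-1). The constants a and n are illustrative HTPB-class
    values, not values taken from the abstract."""
    return a * G ** n

def burn_port(mdot_ox, r0, rho_fuel, length, dt, t_end):
    """Explicit-Euler march of a circular fuel-port radius, feeding the
    fuel mass flow back into the total mass flux, as mass-flux-dependent
    regression models require."""
    r, t, mdot_fuel, history = r0, 0.0, 0.0, []
    while t < t_end:
        G = (mdot_ox + mdot_fuel) / (math.pi * r * r)   # total mass flux
        rdot = regression_rate(G)
        mdot_fuel = rho_fuel * rdot * 2.0 * math.pi * r * length
        r += rdot * dt
        t += dt
        history.append((t, r, rdot))
    return history
```

    The coupling shown here (fuel flow raises G, which raises the regression rate, while the widening port lowers G) is the basic feedback the quasi-one-dimensional design tool must resolve along the port.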

  20. Computational Analyses of Pressurization in Cryogenic Tanks

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chun P.; Field, Robert E.; Ryan, Harry

    2010-01-01

    A comprehensive numerical framework utilizing multi-element unstructured CFD and rigorous real fluid property routines has been developed to carry out analyses of propellant tank and delivery systems at NASA SSC. Traditionally, CFD modeling of pressurization and mixing in cryogenic tanks has been difficult, primarily because the fluids in the tank co-exist in different sub-critical and supercritical states with largely varying properties that have to be accurately accounted for in order to predict the correct mixing and phase change between the ullage and the propellant. For example, during tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant, including heat transfer and phase change effects, and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. In our modeling framework, we incorporated two different approaches to real fluids modeling: (a) the first approach is based on the HBMS model developed by Hirschfelder, Buehler, McGee and Sutton, and (b) the second approach is based on the cubic equation of state developed by Soave, Redlich and Kwong (SRK). Both approaches cover fluid properties and property variation spanning sub-critical gas and liquid states as well as the supercritical states. Both models were rigorously tested, and properties for common fluids such as oxygen, nitrogen and hydrogen were compared against NIST data in both the sub-critical and supercritical regimes.
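
    The SRK branch of such a framework can be illustrated with the standard Soave-Redlich-Kwong pressure relation. The parameterization below is the textbook form, with critical constants supplied by the caller; it is a sketch of the equation of state, not the authors' property routines:

```python
import math

R_GAS = 8.314462618  # universal gas constant, J / (mol K)

def srk_pressure(T, v, Tc, Pc, omega):
    """Soave-Redlich-Kwong pressure P(T, v) in Pa for molar volume v
    (m^3/mol), given critical temperature Tc (K), critical pressure
    Pc (Pa), and acentric factor omega."""
    a = 0.42748 * R_GAS ** 2 * Tc ** 2 / Pc
    b = 0.08664 * R_GAS * Tc / Pc
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc))) ** 2
    return R_GAS * T / (v - b) - a * alpha / (v * (v + b))
```

    At low density the result approaches the ideal gas law, which provides a simple sanity check of the kind used when validating such routines against NIST reference data.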

  1. Television camera as a scientific instrument

    NASA Technical Reports Server (NTRS)

    Smokler, M. I.

    1970-01-01

    A rigorous calibration program, coupled with a sophisticated data-processing program that introduced compensation for system response to correct photometry, geometric linearity, and resolution, converted a television camera into a quantitative measuring instrument. The output data are in the form of both numeric printout records and photographs.

  2. Components of Students' Grade Expectations for Public Speaking Assignments

    ERIC Educational Resources Information Center

    Larseingue, Matt; Sawyer, Chris R.; Finn, Amber N.

    2012-01-01

    Although previous research has linked students' expected grades to numerous pedagogical variables, this factor has been all but ignored by instructional communication scholars. In the present study, 315 undergraduates were presented with grading scenarios representing differing combinations of course rigor, teacher immediacy, and student…

  3. Snoring and its management.

    PubMed

    Calhoun, Karen H; Templer, Jerry; Patenaude, Bart

    2006-01-01

    There are numerous strategies, devices and procedures available to treat snoring. The surgical procedures have an overall success rate of 60-70%, but this probably decreases over time, especially if there is weight gain. There are no long-term, rigorously designed studies comparing the various procedures for decreasing snoring.

  4. On generic obstructions to recovering correct statistics from climate simulations: Homogenization for deterministic maps and multiplicative noise

    NASA Astrophysics Data System (ADS)

    Gottwald, Georg; Melbourne, Ian

    2013-04-01

    Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results of convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly, the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type in the case of maps. It is shown that the limit system of a numerical discretisation is different from the associated continuous time system. This has important consequences when interpreting the statistics of long time simulations of multi-scale systems - they may be very different from those of the original continuous time system which we set out to study.

  5. The estimation of uniaxial compressive strength conversion factor of trona and interbeds from point load tests and numerical modeling

    NASA Astrophysics Data System (ADS)

    Ozturk, H.; Altinpinar, M.

    2017-07-01

    The point load (PL) test is generally used for estimation of the uniaxial compressive strength (UCS) of rocks because of its economic advantages and simplicity in testing. If the PL index of a specimen is known, the UCS can be estimated using conversion factors. Several conversion factors have been proposed by various researchers, and they are dependent upon the rock type. In the literature, conversion factors for different sedimentary, igneous and metamorphic rocks can be found, but no study exists on trona. In this study, laboratory UCS and field PL tests were carried out on trona and interbeds of volcano-sedimentary rocks. Based on these tests, PL to UCS conversion factors for trona and interbeds are proposed. The tests were modeled numerically using distinct element method (DEM) software, the particle flow code (PFC), in an attempt to guide researchers facing various types of modeling problems (excavation, cavern design, hydraulic fracturing, etc.) in the abovementioned rock types. Average PFC parallel bond contact model micro properties for the trona and interbeds were determined within this study so that future researchers can use them and avoid the rigorous PFC calibration procedure. It was observed that PFC overestimates the tensile strength of the rocks by a factor that ranges from 22 to 106.
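
    The PL-to-UCS estimation chain described above can be sketched as follows. The size-correction exponent follows the common ISRM convention, and the conversion factor k below is a placeholder rather than the study's derived value:

```python
def point_load_index(P, De):
    """Size-corrected point load index Is(50) in MPa, ISRM-style.
    P = failure load (N), De = equivalent core diameter (mm)."""
    Is = P / (De * De)           # uncorrected index (N/mm^2 = MPa)
    F = (De / 50.0) ** 0.45      # size correction to the 50 mm reference
    return F * Is

def ucs_estimate(is50, k):
    """UCS ~= k * Is(50). The conversion factor k is rock-type dependent
    and must come from calibration tests such as those in this study;
    the value used in the test below is a placeholder."""
    return k * is50
```

    For a 50 mm specimen the size-correction factor is exactly 1, which is why Is(50) is the standard reporting basis for conversion factors.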

  6. Welding arc plasma physics

    NASA Technical Reports Server (NTRS)

    Cain, Bruce L.

    1990-01-01

    The problems of weld quality control and weld process dependability continue to be relevant issues in modern metal welding technology. These become especially important for NASA missions which may require the assembly or repair of larger orbiting platforms using automatic welding techniques. To extend present welding technologies for such applications, NASA/MSFC's Materials and Processes Lab is developing physical models of the arc welding process with the goal of providing both a basis for improved design of weld control systems, and a better understanding of how arc welding variables influence final weld properties. The physics of the plasma arc discharge is reasonably well established in terms of transport processes occurring in the arc column itself, although recourse to sophisticated numerical treatments is normally required to obtain quantitative results. Unfortunately the rigor of these numerical computations often obscures the physics of the underlying model due to its inherent complexity. In contrast, this work has focused on a relatively simple physical model of the arc discharge to describe the gross features observed in welding arcs. Emphasis was placed on deriving analytic expressions for the voltage along the arc axis as a function of known or measurable arc parameters. The model retains the essential physics for a straight polarity, diffusion dominated free burning arc in argon, with major simplifications of collisionless sheaths and simple energy balances at the electrodes.

  7. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD) which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of Fitzhugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.

  8. VLF Trimpi modelling on the path NWC-Dunedin using both finite element and 3D Born modelling

    NASA Astrophysics Data System (ADS)

    Nunn, D.; Hayakawa, K. B. M.

    1998-10-01

    This paper investigates the numerical modelling of VLF Trimpis, produced by a D region inhomogeneity on the great circle path. Two different codes are used to model Trimpis on the path NWC-Dunedin. The first is a 2D Finite Element Method Code (FEM), whose solutions are rigorous and valid in the strong scattering or non-Born limit. The second code is a 3D model that invokes the Born approximation. The predicted Trimpis from these codes compare very closely, thus confirming the validity of both models. The modal scattering matrices for both codes are analysed in some detail and are found to have a comparable structure. They indicate strong scattering between the dominant TM modes. Analysis of the scattering matrix from the FEM code shows that departure from linear Born behaviour occurs when the inhomogeneity has a horizontal scale size of about 100 km and a maximum electron density enhancement at 75 km altitude of about 6 electrons.

  9. Matter Gravitates, but Does Gravity Matter?

    ERIC Educational Resources Information Center

    Groetsch, C. W.

    2011-01-01

    The interplay of physical intuition, computational evidence, and mathematical rigor in a simple trajectory model is explored. A thought experiment based on the model is used to elicit student conjectures on the influence of a physical parameter; a mathematical model suggests a computational investigation of the conjectures, and rigorous analysis…

  10. Three-Dimensional Dynamic Deformation Measurements Using Stereoscopic Imaging and Digital Speckle Photography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prentice, H. J.; Proud, W. G.

    2006-07-28

    A technique has been developed to determine experimentally the three-dimensional displacement field on the rear surface of a dynamically deforming plate. The technique combines speckle analysis with stereoscopy, using a modified angular-lens method: this incorporates split-frame photography and a simple method by which the effective lens separation can be adjusted and calibrated in situ. Whilst several analytical models exist to predict deformation in extended or semi-infinite targets, the non-trivial nature of the wave interactions complicates the generation and development of analytical models for targets of finite depth. By interrogating specimens experimentally to acquire three-dimensional strain data points, both analytical and numerical model predictions can be verified more rigorously. The technique is applied to the quasi-static deformation of a rubber sheet and dynamically to mild steel sheets of various thicknesses.

  11. Transonic flow about a thick circular-arc airfoil

    NASA Technical Reports Server (NTRS)

    Mcdevitt, J. B.; Levy, L. L., Jr.; Deiwert, G. S.

    1975-01-01

    An experimental and theoretical study of transonic flow over a thick airfoil, prompted by a need for adequately documented experiments that could provide rigorous verification of viscous flow simulation computer codes, is reported. Special attention is given to the shock-induced separation phenomenon in the turbulent regime. Measurements presented include surface pressures, streamline and flow separation patterns, and shadowgraphs. For a limited range of free-stream Mach numbers the airfoil flow field is found to be unsteady. Dynamic pressure measurements and high-speed shadowgraph movies were taken to investigate this phenomenon. Comparisons of experimentally determined and numerically simulated steady flows using a new viscous-turbulent code are also included. The comparisons show the importance of including an accurate turbulence model. When the shock-boundary layer interaction is weak the turbulence model employed appears adequate, but when the interaction is strong, and extensive regions of separation are present, the model is inadequate and needs further development.

  12. Pattern Formation in Keller-Segel Chemotaxis Models with Logistic Growth

    NASA Astrophysics Data System (ADS)

    Jin, Ling; Wang, Qi; Zhang, Zengyan

    In this paper, we investigate pattern formation in Keller-Segel chemotaxis models over a multidimensional bounded domain subject to homogeneous Neumann boundary conditions. It is shown that the positive homogeneous steady state loses its stability as chemoattraction rate χ increases. Then using Crandall-Rabinowitz local theory with χ being the bifurcation parameter, we obtain the existence of nonhomogeneous steady states of the system which bifurcate from this homogeneous steady state. Stability of the bifurcating solutions is also established through rigorous and detailed calculations. Our results provide a selection mechanism of stable wavemode which states that the only stable bifurcation branch must have a wavemode number that minimizes the bifurcation value. Finally, we perform extensive numerical simulations on the formation of stable steady states with striking structures such as boundary spikes, interior spikes, stripes, etc. These nontrivial patterns can model cellular aggregation that develop through chemotactic movements in biological systems.

  13. A Note on Spatial Averaging and Shear Stresses Within Urban Canopies

    NASA Astrophysics Data System (ADS)

    Xie, Zheng-Tong; Fuka, Vladimir

    2018-04-01

    One-dimensional urban models embedded in mesoscale numerical models may place several grid points within the urban canopy. This requires an accurate parametrization for shear stresses (i.e. vertical momentum fluxes), including the dispersive stress and momentum sinks at these points. We used a case study with a packing density of 33% and rigorously checked the vertical variation of the spatially-averaged total shear stress, which can be used in a one-dimensional column urban model. We found that the intrinsic spatial average, in which the volume or area of the solid parts is not included in the averaging, yields a greater time- and space-averaged total stress within the canopy, and a more evident abrupt change at the top of the buildings, than the comprehensive spatial average, in which the volume or area of the solid parts is included.
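
    The two averaging conventions can be illustrated with a minimal sketch: solid (building) cells carry no fluid stress, but only the comprehensive average counts them in the denominator, which is why it comes out smaller inside the canopy:

```python
def spatial_averages(stress, is_fluid):
    """Intrinsic vs. comprehensive horizontal average of a stress field
    sampled on a grid of fluid and solid (building) cells.

    intrinsic:      sum over fluid cells / number of fluid cells
    comprehensive:  sum over fluid cells / total number of cells
    """
    fluid = [s for s, f in zip(stress, is_fluid) if f]
    intrinsic = sum(fluid) / len(fluid)
    comprehensive = sum(fluid) / len(stress)
    return intrinsic, comprehensive
```

    Above roof level every cell is fluid and the two averages coincide, so the intrinsic convention also produces the sharper jump at the building tops noted in the abstract.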

  14. Optical proximity correction for anamorphic extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas

    2017-10-01

    The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.
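
    A minimal sketch of the per-axis scaling that drives these flow changes, assuming the published high-NA EUV anamorphic magnifications of 4x in X and 8x in Y (the abstract does not state the values, so treat them as assumptions):

```python
def wafer_to_mask(dx_wafer, dy_wafer, mag_x=4.0, mag_y=8.0):
    """Scale a wafer-plane dimension (e.g. nm) to the mask plane under
    anamorphic optics. The defaults 4x/8x are the published high-NA EUV
    magnifications, assumed here rather than taken from the abstract."""
    return dx_wafer * mag_x, dy_wafer * mag_y

def mask_error_at_wafer(ex_mask, ey_mask, mag_x=4.0, mag_y=8.0):
    """A mask error demagnifies differently per axis, which is why OPC
    verification must test different wafer-scale mask-error ranges in
    the horizontal and vertical directions."""
    return ex_mask / mag_x, ey_mask / mag_y
```

    The second function also shows why the larger Y-direction mask features tolerate mask errors better: the same absolute mask error prints half as large at the wafer in Y as in X.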

  15. Rise and fall of political complexity in island South-East Asia and the Pacific.

    PubMed

    Currie, Thomas E; Greenhill, Simon J; Gray, Russell D; Hasegawa, Toshikazu; Mace, Ruth

    2010-10-14

    There is disagreement about whether human political evolution has proceeded through a sequence of incremental increases in complexity, or whether larger, non-sequential increases have occurred. The extent to which societies have decreased in complexity is also unclear. These debates have continued largely in the absence of rigorous, quantitative tests. We evaluated six competing models of political evolution in Austronesian-speaking societies using phylogenetic methods. Here we show that in the best-fitting model political complexity rises and falls in a sequence of small steps. This is closely followed by another model in which increases are sequential but decreases can be either sequential or in bigger drops. The results indicate that large, non-sequential jumps in political complexity have not occurred during the evolutionary history of these societies. This suggests that, despite the numerous contingent pathways of human history, there are regularities in cultural evolution that can be detected using computational phylogenetic methods.

  16. Horseshoes in a Chaotic System with Only One Stable Equilibrium

    NASA Astrophysics Data System (ADS)

    Huan, Songmei; Li, Qingdu; Yang, Xiao-Song

    To confirm the numerically demonstrated chaotic behavior in a chaotic system with only one stable equilibrium reported by Wang and Chen, we resort to the Poincaré map technique and present a rigorous computer-assisted verification of horseshoe chaos by virtue of topological horseshoe theory.

  17. Academic Rigor in General Education, Introductory Astronomy Courses for Nonscience Majors

    ERIC Educational Resources Information Center

    Brogt, Erik; Draeger, John D.

    2015-01-01

    We discuss a model of academic rigor and apply this to a general education introductory astronomy course. We argue that even without one of the central tenets of professional astronomy, the use of mathematics, the course can still be considered academically rigorous when expectations, goals, assessments, and curriculum are properly aligned.

  18. Invariant Tori in the Secular Motions of the Three-body Planetary Systems

    NASA Astrophysics Data System (ADS)

    Locatelli, Ugo; Giorgilli, Antonio

    We consider the problem of the applicability of the KAM theorem to a realistic problem of three bodies. In the framework of the dynamics averaged over the fast angles for the Sun-Jupiter-Saturn system, we can prove the perpetual stability of the orbit. The proof is based on semi-numerical algorithms requiring both explicit algebraic manipulations of series and analytical estimates. The proof is made rigorous by using interval arithmetic in order to control the numerical errors.
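
    A minimal sketch of the outward-rounded interval arithmetic idea (requires Python 3.9+ for math.nextafter). This illustrates the technique of enclosing every rounding error, not the authors' actual library:

```python
import math

class Interval:
    """Minimal outward-rounded interval arithmetic: each operation widens
    the result by one ulp per endpoint, so the true real value is always
    enclosed despite floating-point rounding."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(p), -math.inf),
                        math.nextafter(max(p), math.inf))

    def contains(self, x):
        return self.lo <= x <= self.hi
```

    Because every arithmetic step returns a guaranteed enclosure, a chain of such operations yields a computer-assisted bound that is rigorous in the mathematical sense, at the cost of intervals that widen as the computation proceeds.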

  19. Orbital, Rotational and Climatic Interactions: Energy Dissipation and Angular Momentum Exchange in the Earth-Moon System

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.

    2001-01-01

    A numerical ocean tide model has been developed and tested using highly accurate TOPEX/Poseidon (T/P) tidal solutions. The hydrodynamic model is based on time stepping a finite difference approximation to the non-linear shallow water equations. Two novel features of our implementation are a rigorous treatment of self attraction and loading (SAL), and a physically based parameterization for internal tide (IT) radiation drag. The model was run for a range of grid resolutions, and with variations in model parameters and bathymetry. For a rational treatment of SAL and IT drag, the model run at high resolution (1/12 degree) fits the T/P solutions to within 5 cm RMS in the open ocean. Both the rigorous SAL treatment and the IT drag parameterization are required to obtain solutions of this quality. The sensitivity of the solution to perturbations in bathymetry suggest that the fit to T/P is probably now limited by errors in this critical input. Since the model is not constrained by any data, we can test the effect of dropping sea-level to match estimated bathymetry from the last glacial maximum (LGM). Our results suggest that the 100 m drop in sea-level in the LGM would have significantly increased tidal amplitudes in the North Atlantic, and increased overall tidal dissipation by about 40%. However, details in tidal solutions for the past 20 ka are sensitive to the assumed stratification. IT drag accounts for a significant fraction of dissipation, especially in the LGM when large areas of present day shallow sea were exposed, and this parameter is poorly constrained at present.

  20. On the possibility of observing bound soliton pairs in a wave-breaking-free mode-locked fiber laser

    NASA Astrophysics Data System (ADS)

    Martel, G.; Chédot, C.; Réglier, V.; Hideur, A.; Ortaç, B.; Grelu, Ph.

    2007-02-01

    On the basis of numerical simulations, we explain the formation of the stable bound soliton pairs that were experimentally reported in a high-power mode-locked ytterbium fiber laser [Opt. Express 14, 6075 (2006)], in a regime where wave-breaking-free operation is expected. A fully vectorial model allows one to rigorously reproduce the nonmonotonic nature of the nonlinear polarization effect that generally limits the power scalability of a single-pulse self-similar regime. Simulations show that a self-similar regime is not fully obtained, although positive linear chirps and parabolic spectra are always reported. As a consequence, nonvanishing pulse tails allow distant stable binding of highly-chirped pulses.

  1. TAMDAR Sensor Validation in 2003 AIRS II

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.; Murray, John J.; Anderson, Mark V.; Mulally, Daniel J.; Jensen, Kristopher R.; Grainger, Cedric A.; Delene, David J.

    2005-01-01

    This study entails an assessment of TAMDAR in situ temperature, relative humidity and winds sensor data from seven flights of the UND Citation II. These data are undergoing rigorous assessment to determine their viability to significantly augment domestic Meteorological Data Communications Reporting System (MDCRS) and the international Aircraft Meteorological Data Reporting (AMDAR) system observational databases to improve the performance of regional and global numerical weather prediction models. NASA Langley Research Center participated in the Second Alliance Icing Research Study from November 17 to December 17, 2003. TAMDAR data taken during this period are compared with validation data from the UND Citation. The data indicate acceptable performance of the TAMDAR sensor when compared to measurements from the UND Citation research instruments.

  2. Quantum nature of the big bang.

    PubMed

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.

  3. Gaps, Pseudogaps, and the Nature of Charge in Holographic Fermion Models

    NASA Astrophysics Data System (ADS)

    Vanacore, Garrett; Phillips, Philip

    Building on prior holographic constructions of Fermi arcs and Mott physics, we investigate the landscape of gapped and gapless strongly-correlated phases resulting from bulk fermion interactions in gauge/gravity duality. We test a proposed connection between bulk chiral symmetry and gapless boundary states, and discuss implications for discrete symmetry breaking in pseudogapped systems like the cuprate superconductors. Numerical methods are used to treat gravitational backreaction of bulk fermions, allowing more rigorous investigation of the existence of holographic Fermi surfaces and their adherence to Luttinger's rule. We use these techniques to study deviations from Luttinger's rule in holography, testing a recent claim that momentum-deconfined charges are at the heart of the Mott state.

  4. PROPOSED SIAM PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAILEY, DAVID H.; BORWEIN, JONATHAN M.

    A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫₀^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.

  5. Tri-critical behavior of the Blume-Emery-Griffiths model on a Kagomé lattice: Effective-field theory and Rigorous bounds

    NASA Astrophysics Data System (ADS)

    Santos, Jander P.; Sá Barreto, F. C.

    2016-01-01

    Spin correlation identities for the Blume-Emery-Griffiths model on the Kagomé lattice are derived and, combined with rigorous correlation inequalities, lead to upper bounds on the critical temperature. From the spin correlation identities, the mean-field and effective-field approximation results for the magnetization, the critical frontiers and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve on the results of those effective-field-type theories.
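
    As an illustration of the mean-field step mentioned above, here is the self-consistency equation for a spin-1 model in the K = 0 (Blume-Capel) limit of the BEG Hamiltonian, solved by fixed-point iteration. The coordination number z = 4 for the Kagomé lattice is the only lattice input at this level of approximation; this sketch is not the paper's effective-field calculation:

```python
import math

def mf_magnetization(T, J=1.0, D=0.0, z=4, tol=1e-12, max_iter=10000):
    """Fixed-point solve of the spin-1 mean-field equation
        m = 2 sinh(beta z J m) / (2 cosh(beta z J m) + exp(beta D)),
    the K = 0 (Blume-Capel) limit of the BEG model with single-ion
    anisotropy D. z = 4 is the Kagome coordination number."""
    beta = 1.0 / T
    m = 0.5  # symmetry-broken initial guess
    for _ in range(max_iter):
        x = beta * z * J * m
        m_new = 2.0 * math.sinh(x) / (2.0 * math.cosh(x) + math.exp(beta * D))
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m
```

    Expanding the right-hand side for small m gives the mean-field critical temperature Tc = 2zJ/3 at D = 0, which is the kind of estimate the paper's rigorous upper bounds improve upon.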

  6. Prospect of Using Numerical Dynamo Model for Prediction of Geomagnetic Secular Variation

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2003-01-01

    Modeling of the Earth's core has reached a level of maturity where the incorporation of observations into the simulations through data assimilation has become feasible. Data assimilation is a method by which observations of a system are combined with a model output (or forecast) to obtain a best guess of the state of the system, called the analysis. The analysis is then used as an initial condition for the next forecast. By doing assimilation, not only shall we be able to partially predict the secular variation of the core field, we could also use observations to further our understanding of the dynamical state of the Earth's core. One of the first steps in the development of an assimilation system is a comparison between the observations and the model solution. The highly turbulent nature of core dynamics, along with the absence of any regular external forcing and constraint (which occur in atmospheric dynamics, for example), means that short-time comparisons (approx. 1000 years) cannot be made between model and observations. In order to make sensible comparisons, a direct insertion assimilation method has been implemented. In this approach, magnetic field observations at the Earth's surface are substituted into the numerical model, such that the ratio of the multipole components to the dipole component from observation is adjusted at the core-mantle boundary and extended to the interior of the core, while the total magnetic energy remains unchanged. This adjusted magnetic field is then used as the initial field for a new simulation. In this way, a time-tagged simulation is created which can then be compared directly with observations. We present numerical solutions with and without data insertion and discuss their implications for the development of a more rigorous assimilation system.

  7. A Phenomenological Analysis of Division III Student-Athletes' Transition out of College

    ERIC Educational Resources Information Center

    Covington, Sim Jonathan, Jr.

    2017-01-01

    Intercollegiate athletics is a major segment of numerous college and university communities across America today. Student-athletes participate in strenuous training and competition throughout their college years while managing to balance the rigorous academic curriculum of the higher education environment. This research aims to explore the…

  8. Predicting Observer Training Satisfaction and Certification

    ERIC Educational Resources Information Center

    Bell, Courtney A.; Jones, Nathan D.; Lewis, Jennifer M.; Liu, Shuangshuang

    2013-01-01

    The last decade produced numerous studies that show that students learn more from high-quality teachers than they do from lower quality teachers. If instruction is to improve through the use of more rigorous teacher evaluation systems, the implementation of these systems must provide consistent and interpretable information about which aspects of…

  9. A Practical Guide to Regression Discontinuity

    ERIC Educational Resources Information Center

    Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard

    2012-01-01

    Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…
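
    The core of a sharp RD estimate can be sketched in a few lines: fit separate regressions on each side of the cut-point and take the difference of their predictions at the cutoff. The data below are simulated for illustration; none of the numbers come from the guide.

```python
# Sharp regression-discontinuity sketch on simulated data: units with
# rating >= cutoff receive treatment; the impact estimate is the jump
# in the fitted outcome at the cutoff.
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, effect = 4000, 0.0, 2.0
rating = rng.uniform(-1, 1, n)
treated = rating >= cutoff
y = 1.0 + 0.8 * rating + effect * treated + rng.normal(0, 0.5, n)

left = rating < cutoff
b_left = np.polyfit(rating[left], y[left], 1)     # [slope, intercept]
b_right = np.polyfit(rating[~left], y[~left], 1)
impact = np.polyval(b_right, cutoff) - np.polyval(b_left, cutoff)
print(impact)  # close to the true effect of 2.0
```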

  10. Randomized Trial of Hyperbaric Oxygen Therapy for Children with Autism

    ERIC Educational Resources Information Center

    Granpeesheh, Doreen; Tarbox, Jonathan; Dixon, Dennis R.; Wilke, Arthur E.; Allen, Michael S.; Bradstreet, James Jeffrey

    2010-01-01

    Autism Spectrum Disorders (ASDs) are characterized by the presence of impaired development in social interaction and communication and the presence of a restricted repertoire of activity and interests. While numerous treatments for ASDs have been proposed, very few have been subjected to rigorous scientific investigation. Hyperbaric oxygen therapy…

  11. How to Teach Hicksian Compensation and Duality Using a Spreadsheet Optimizer

    ERIC Educational Resources Information Center

    Ghosh, Satyajit; Ghosh, Sarah

    2007-01-01

    The principle of duality and the numerical calculation of income and substitution effects under Hicksian compensation are often left out of intermediate microeconomics courses because they require a rigorous calculus-based analysis. But these topics are critically important for understanding consumer behavior. In this paper we use Excel Solver--a…

  12. Testing Intelligently Includes Double-Checking Wechsler IQ Scores

    ERIC Educational Resources Information Center

    Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas

    2011-01-01

    The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…

  13. Rigorous modal analysis of plasmonic nanoresonators

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Faggiani, Rémi; Lalanne, Philippe

    2018-05-01

    The specificity of modal-expansion formalisms is their capability to model the physical properties in the natural resonance-state basis of the system in question, leading to a transparent interpretation of the numerical results. In electromagnetism, modal-expansion formalisms are routinely used for optical waveguides. In contrast, they are much less mature for analyzing open non-Hermitian systems, such as micro- and nanoresonators. Here, by accounting for material dispersion with auxiliary fields, we considerably extend the capabilities of these formalisms, in terms of computational effectiveness, number of states handled, and range of validity. We implement an efficient finite-element solver to compute the resonance states, and derive closed-form expressions of the modal excitation coefficients for reconstructing the scattered fields. Together, these two achievements allow us to perform rigorous modal analysis of complicated plasmonic resonators without being limited to a few resonance states, with straightforward physical interpretations and remarkable computation speeds. We particularly show that, as the number of states retained in the expansion increases, convergence toward accurate predictions is achieved, offering a solid theoretical foundation for analyzing important issues, e.g., Fano interference, quenching, and coupling with the continuum, which are critical in nanophotonic research.

  14. Improving the ideal and human observer consistency: a demonstration of principles

    NASA Astrophysics Data System (ADS)

    He, Xin

    2017-03-01

    In addition to being rigorous and realistic, the usefulness of ideal observer computational tools may also depend on whether they serve the empirical purpose for which they are created, e.g., to identify desirable imaging systems to be used by human observers. In SPIE 10136-35, I have shown that the ideal and the human observers do not necessarily prefer the same system as the optimal or better one, due to their different objectives in both hardware and software optimization. In this work, I attempt to identify a necessary but insufficient condition under which the human and the ideal observer may rank systems consistently. If corroborated, such a condition allows a numerical test of ideal/human consistency without routine human observer studies. I reproduced data from Abbey et al., JOSA 2001, to verify the proposed condition (not a rigorous falsification study, due to the lack of specificity in the proposed conjecture; a roadmap toward more falsifiable conditions is proposed). Via this work, I would like to emphasize the reality of practical decision making in addition to the realism in mathematical modeling. (Disclaimer: the views expressed in this work do not necessarily represent those of the FDA.)

  15. Developing a Student Conception of Academic Rigor

    ERIC Educational Resources Information Center

    Draeger, John; del Prado Hill, Pixita; Mahler, Ronnie

    2015-01-01

    In this article we describe models of academic rigor from the student point of view. Drawing on a campus-wide survey, focus groups, and interviews with students, we found that students explained academic rigor in terms of workload, grading standards, level of difficulty, level of interest, and perceived relevance to future goals. These findings…

  16. Magnetic Local Time dependency in modeling of the Earth radiation belts

    NASA Astrophysics Data System (ADS)

    Herrera, Damien; Maget, Vincent; Bourdarie, Sébastien; Rolland, Guy

    2017-04-01

    For many years, ONERA has been at the forefront of the modeling of the Earth radiation belts thanks to the Salammbô model, which accurately reproduces their dynamics over a time scale of the particles' drift period. This implies that we implicitly assume a homogeneous repartition of the trapped particles along a given drift shell. However, the radiation belts are inhomogeneous in Magnetic Local Time (MLT). So we need to take this new coordinate into account to model rigorously the dynamical structures, particularly those induced during a geomagnetic storm. For this purpose, we are working on both the numerical resolution of the Fokker-Planck diffusion equation included in the model and on the MLT dependency of the physics-based processes acting in the Earth radiation belts. The aim of this talk is first to present the 4D equation used and the different steps taken to build the Salammbô 4D model, before focusing on the physical processes taken into account in the Salammbô code, especially transport due to the convection electric field. Firstly, we will briefly introduce the Salammbô 4D code developed, describing its numerical scheme and the physics-based processes modeled. Then, we will focus our attention on the impact of the outer boundary condition (localisation and spectrum) at lower L∗ shells by comparing the modeling with geosynchronous data from LANL-GEO satellites. Finally, we will discuss the prime importance of the convection electric field to the radial and drift transport of low-energy particles around the Earth.

  17. Scaling effects in resonant coupling phenomena between fundamental and cladding modes in twisted microstructured optical fibers.

    PubMed

    Napiorkowski, Maciej; Urbanczyk, Waclaw

    2018-04-30

    We show that in twisted microstructured optical fibers (MOFs) the coupling between the core and cladding modes can be obtained for a helix pitch much greater than previously considered. We provide an analytical model describing the scaling properties of twisted MOFs, which relates the coupling conditions to dimensionless ratios between the wavelength, the lattice pitch, and the helix pitch of the twisted fiber. Furthermore, we verify our model using a rigorous numerical method based on the transformation-optics formalism and study its limitations. The obtained results show that for appropriately designed twisted MOFs, distinct, high-loss resonance peaks can be obtained in a broad wavelength range already for a fiber with a 9 mm helix pitch, thus allowing for the fabrication of coupling-based devices using a less demanding method involving preform spinning.

  18. Simulation of Plasma Jet Merger and Liner Formation within the PLX-α Project

    NASA Astrophysics Data System (ADS)

    Samulyak, Roman; Chen, Hsin-Chiang; Shih, Wen; Hsu, Scott

    2015-11-01

    Detailed numerical studies of the propagation and merger of high-Mach-number argon plasma jets and the formation of plasma liners have been performed using the newly developed method of Lagrangian particles (LP). The LP method significantly improves the accuracy and mathematical rigor of common particle-based numerical methods, such as smoothed particle hydrodynamics, while preserving their main advantages compared to grid-based methods. A brief overview of the LP method will be presented. The Lagrangian particle code implements the main relevant physics models, such as an equation of state for argon undergoing atomic physics transformations, radiation losses in the optically thin limit, and heat conduction. Simulations of the merger of two plasma jets are compared with experimental data from past PLX experiments. Simulations quantify the effect of oblique shock waves, ionization, and radiation processes on the jet merger process. Results of preliminary simulations of future PLX-α experiments involving the ~π/2-solid-angle plasma-liner configuration with 9 guns will also be presented. Partially supported by ARPA-E's ALPHA program.

  19. Second-order Poisson Nernst-Planck solver for ion channel transport

    PubMed Central

    Zheng, Qiong; Chen, Duan; Wei, Guo-Wei

    2010-01-01

    The Poisson Nernst-Planck (PNP) theory is a simplified continuum model for a wide variety of chemical, physical and biological applications. Its ability to provide quantitative explanations and increasingly qualitative predictions of experimental measurements has earned it much recognition in the research community. Numerous computational algorithms have been constructed for the solution of the PNP equations. However, in the realistic ion-channel context, no second-order convergent PNP algorithm has ever been reported in the literature, due to many numerical obstacles, including discontinuous coefficients, singular charges, geometric singularities, and nonlinear couplings. The present work introduces a number of numerical algorithms to overcome the abovementioned numerical challenges and constructs the first second-order convergent PNP solver in the ion-channel context. First, a Dirichlet to Neumann mapping (DNM) algorithm is designed to alleviate the charge singularity due to the protein structure. Additionally, the matched interface and boundary (MIB) method is reformulated for solving the PNP equations. The MIB method systematically enforces the interface jump conditions and achieves second-order accuracy in the presence of complex geometry and geometric singularities of molecular surfaces. Moreover, two iterative schemes are utilized to deal with the coupled nonlinear equations. Furthermore, extensive and rigorous numerical validations are carried out over a number of geometries, including a sphere, two proteins and an ion channel, to examine the numerical accuracy and convergence order of the present numerical algorithms. Finally, the method is applied to a real transmembrane protein, the Gramicidin A channel protein. The performance of the proposed numerical techniques is tested against a number of factors, including mesh sizes, diffusion coefficient profiles, iterative schemes, ion concentrations, and applied voltages. Numerical predictions are compared with experimental measurements. PMID:21552336
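
    The notion of a second-order convergence check can be illustrated on a one-dimensional toy problem (a plain Poisson equation, not the coupled PNP system): halving the mesh size should cut the maximum error by roughly a factor of four.

```python
# Second-order convergence test for -u'' = pi^2 sin(pi x) on (0, 1) with
# u(0) = u(1) = 0 and exact solution u = sin(pi x), using central differences.
import numpy as np

def max_error(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = (np.diag(2.0 * np.ones(n - 1))
         + np.diag(-np.ones(n - 2), 1)
         + np.diag(-np.ones(n - 2), -1))          # tridiagonal Laplacian
    f = np.pi**2 * np.sin(np.pi * x[1:-1]) * h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

e1, e2 = max_error(32), max_error(64)
print(e1 / e2)  # ratio near 4 indicates second-order convergence
```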

  20. Using Computational and Mechanical Models to Study Animal Locomotion

    PubMed Central

    Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas

    2012-01-01

    Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026

  1. Fully implicit moving mesh adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Serazio, C.; Chacon, L.; Lapenta, G.

    2006-10-01

    In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best handled with fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid-adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and considerably difficult to treat numerically. Not surprisingly, fully coupled, implicit approaches, where the grid and the physics equations are solved simultaneously, are rare in the literature and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving-mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse-grid-correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving-mesh approach is competitive vs. uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys. 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006).

  2. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data

    PubMed Central

    He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei

    2017-01-01

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and the phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0-mode wave package. The model parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material, and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions. PMID:28902148
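
    The Bayesian-updating step admits a compact conjugate sketch. Everything below is hypothetical (a linear response surface with made-up coefficients and measurements, not the paper's model): a Gaussian prior on the baseline coefficients is updated in closed form by a few noisy measurements from the target structure.

```python
# Conjugate Gaussian update of a linear baseline model y = theta0 + theta1 * x.
# Prior (from simulations): theta ~ N(mu0, S0); measurement noise sigma^2.
import numpy as np

mu0 = np.array([0.0, 10.0])                          # hypothetical baseline coefficients
S0 = np.diag([1.0, 4.0])
X = np.array([[1.0, 0.1], [1.0, 0.3], [1.0, 0.5]])   # design rows [1, feature]
y = np.array([2.1, 4.6, 7.2])                        # target-structure measurements
noise = 0.2**2

S0_inv = np.linalg.inv(S0)
Sn = np.linalg.inv(S0_inv + X.T @ X / noise)         # posterior covariance
mun = Sn @ (S0_inv @ mu0 + X.T @ y / noise)          # posterior mean
print(mun)  # pulled from the prior toward the measurements
```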

  3. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.

  4. A Rigorous Solution for Finite-State Inflow throughout the Flowfield

    NASA Astrophysics Data System (ADS)

    Fei, Zhongyang

    In this research, the Hsieh/Duffy model is extended to all three velocity components of inflow across the rotor disk in a mathematically rigorous way, so that it can be used to calculate the inflow below the rotor disk plane. This establishes a complete dynamic inflow model for the entire flow field with the finite-state method. The derivation is for the case of a general skew angle. The cost of the new method is that one needs to compute the co-states of the inflow equations in the upper hemisphere along with the normal states. Numerical comparisons for the z-component of flow in axial and skewed flow demonstrate excellent correlation with closed-form solutions. The simulations also illustrate that the model is valid in both the frequency domain and the time domain. Meanwhile, in order to accelerate the convergence, an optimization of even terms is used to minimize the error in the axial component of the induced velocity in the on-disk and off-disk regions. A novel method for calculating the associated Legendre function of the second kind is also developed to solve the problem of divergence of Q̄ₙᵐ(iη) for large η with the iterative method. An application of the new model is also conducted to compute the inflow in the wake of a rotor with a finite number of blades. The velocities are plotted at different distances from the rotor disk and are compared with the Glauert prediction for axial flow and wake swirl. In the finite-state model, the angular momentum does not jump instantaneously across the disk, but it does transition rapidly across the disk to the correct Glauert value.

  5. On a viscous critical-stress model of martensitic phase transitions

    NASA Astrophysics Data System (ADS)

    Weatherwax, John; Vaynblat, Dimitri; Bruno, Oscar; Rosales, Ruben

    2007-09-01

    The solid-to-solid phase transitions that result from shock loading of certain materials, such as the graphite-to-diamond transition and the α-ɛ transition in iron, have long been subjects of a substantial theoretical and experimental literature. Recently a model for such transitions was introduced which, based on a critical-stress (CS) condition and without use of fitting parameters, accounts quantitatively for existing observations in a number of systems [Bruno and Vaynblat, Proc. R. Soc. London, Ser. A 457, 2871 (2001)]. While the results of the CS model match the main features of the available experimental data, disagreements in some details between the predictions of this model and experiment, attributable to the idealized character of the CS model, do exist. In this article we present a version of the CS model, the viscous CS (vCS) model, as well as a numerical method for its solution. This model and the corresponding solver result in a much improved overall CS modeling capability. The innovations we introduce include (1) enhancement of the model by inclusion of viscous phase-transition effects, as well as a numerical solver that allows for a fully rigorous treatment of both (2) the rarefaction fans (which had previously been approximated by "rarefaction discontinuities") and (3) the viscous phase-transition effects that are part of the vCS model. In particular we show that the vCS model accounts accurately for the well-known "gradual" rises in the α-ɛ transition which, in the original CS model, were somewhat crudely approximated as jump discontinuities.

  6. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    PubMed Central

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-01-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
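
    The gap between exact evidence and brute-force integration can be seen in the simplest conjugate case (a toy chosen here for illustration, not one of the study's hydrological models): data y_i ~ N(μ, σ²) with known σ and prior μ ~ N(0, τ²), for which BME is available analytically.

```python
# Bayesian model evidence: analytic value vs. brute-force Monte Carlo
# integration over the prior, for a conjugate Gaussian toy model.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(1)
sigma, tau = 1.0, 2.0
y = rng.normal(0.7, sigma, size=8)

# exact: marginally y ~ N(0, sigma^2 I + tau^2 11^T)
cov = sigma**2 * np.eye(8) + tau**2 * np.ones((8, 8))
log_bme_exact = multivariate_normal(mean=np.zeros(8), cov=cov).logpdf(y)

# Monte Carlo: average the likelihood over draws from the prior
mu_draws = rng.normal(0.0, tau, size=200_000)
loglik = norm.logpdf(y[:, None], loc=mu_draws[None, :], scale=sigma).sum(axis=0)
log_bme_mc = logsumexp(loglik) - np.log(mu_draws.size)
print(log_bme_exact, log_bme_mc)
```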

  7. Rigorous Numerics for ill-posed PDEs: Periodic Orbits in the Boussinesq Equation

    NASA Astrophysics Data System (ADS)

    Castelli, Roberto; Gameiro, Marcio; Lessard, Jean-Philippe

    2018-04-01

    In this paper, we develop computer-assisted techniques for the analysis of periodic orbits of ill-posed partial differential equations. As a case study, our proposed method is applied to the Boussinesq equation, which has been investigated extensively because of its role in the theory of shallow water waves. The idea is to use the symmetry of the solutions and a Newton-Kantorovich type argument (the radii polynomial approach) to obtain rigorous proofs of existence of the periodic orbits in a weighted ℓ1 Banach space of space-time Fourier coefficients with exponential decay. We present several computer-assisted proofs of the existence of periodic orbits at different parameter values.
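
    The shape of the radii polynomial argument can be conveyed on a scalar toy problem. The sketch below runs the Newton-Kantorovich estimate for f(x) = x² − 2 in floating point; a genuine computer-assisted proof would replace every float by an interval enclosure, and the paper works in a weighted ℓ1 space of Fourier coefficients rather than with scalars.

```python
# Radii-polynomial (Newton-Kantorovich) bound for f(x) = x^2 - 2 around a
# numerical approximation xbar: find r > 0 with Z2*r^2 - (1 - Z1)*r + Y < 0,
# certifying a unique zero within distance r of xbar.
import math

xbar = 1.41421                  # approximate zero
A = 1.0 / (2.0 * xbar)          # approximate inverse of Df(xbar)

Y = abs(A * (xbar**2 - 2.0))    # defect bound |A f(xbar)|
Z1 = abs(1.0 - A * 2.0 * xbar)  # 0 here, since A inverts Df(xbar) exactly
Z2 = abs(A) * 2.0               # bound on |A (Df(x) - Df(xbar))| / |x - xbar|

disc = (1.0 - Z1)**2 - 4.0 * Z2 * Y
r = ((1.0 - Z1) - math.sqrt(disc)) / (2.0 * Z2)   # smallest admissible radius
print(r, abs(math.sqrt(2.0) - xbar) <= r)
```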

  8. A Rigorous Framework for Optimization of Expensive Functions by Surrogates

    NASA Technical Reports Server (NTRS)

    Booker, Andrew J.; Dennis, J. E., Jr.; Frank, Paul D.; Serafini, David B.; Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which direct application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.
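
    The derivative-free backbone that such frameworks manage can be sketched as a plain coordinate pattern search (the surrogate-management layer is omitted for brevity, so this is only the polling skeleton, not the authors' full framework): poll the coordinate directions, accept any improving step, and shrink the mesh otherwise.

```python
# Coordinate pattern search: derivative-free minimization by polling
# +/- delta along each coordinate and halving delta when no poll improves.
import numpy as np

def pattern_search(f, x0, delta=1.0, tol=1e-8, max_iter=10000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:
            break
        improved = False
        for i in range(x.size):
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * delta
                f_trial = f(trial)
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
        if not improved:
            delta *= 0.5              # refine the mesh
    return x, fx

xmin, fmin = pattern_search(lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2, [0.0, 0.0])
print(xmin, fmin)  # converges to (1, -2) with value 0
```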

  9. Numerical emulation of Thru-Reflection-Line calibration for the de-embedding of Surface Acoustic Wave devices.

    PubMed

    Mencarelli, D; Djafari-Rouhani, B; Pennec, Y; Pitanti, A; Zanotto, S; Stocchi, M; Pierantoni, L

    2018-06-18

    In this contribution, a rigorous numerical calibration is proposed to characterize the excitation of propagating mechanical waves by interdigitated transducers (IDTs). The transition from IDT terminals to phonon waveguides is modeled by means of a general circuit representation that makes use of the Scattering Matrix (SM) formalism. In particular, the three-step calibration approach called Thru-Reflection-Line (TRL), which is a well-established technique in microwave engineering, has been successfully applied to emulate typical experimental conditions. The proposed procedure is suitable for the synthesis/optimization of surface-acoustic-wave (SAW) based devices: the TRL calibration makes it possible to extract/de-embed the acoustic component, namely a resonator or filter, from the outer IDT structure, regardless of the complexity and size of the latter. We report, as a result, the hybrid scattering parameters of the IDT transition to a mechanical waveguide formed by a phononic crystal patterned on a piezoelectric AlN membrane, where the effect of a discontinuity from a periodic to a uniform mechanical waveguide is also characterized. In addition, to ensure the correctness of our numerical calculations, the proposed method has been validated by independent calculations.
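
    The de-embedding step itself is ordinary cascade algebra, sketched below with hypothetical numbers (generic 2x2 transfer matrices, not the paper's IDT data): once calibration has produced the transfer matrices T_A and T_B of the two transitions, the device is recovered by inverting them out of the measured cascade.

```python
# De-embedding with transfer (T) matrices: the measured cascade is
# T_total = T_A @ T_dut @ T_B, so T_dut = inv(T_A) @ T_total @ inv(T_B).
import numpy as np

def t_line(gamma_l):
    """Transfer matrix of a matched line of electrical length gamma*l."""
    return np.array([[np.exp(-gamma_l), 0.0], [0.0, np.exp(gamma_l)]])

T_A = np.array([[1.2 + 0.1j, 0.2j], [-0.2j, 0.9 - 0.1j]])  # hypothetical transition
T_B = np.array([[1.1 - 0.2j, 0.1], [0.1, 1.0 + 0.3j]])     # hypothetical transition
T_dut = t_line(0.5j * np.pi)                               # quarter-wave "device"

T_total = T_A @ T_dut @ T_B                    # what the experiment sees
T_rec = np.linalg.inv(T_A) @ T_total @ np.linalg.inv(T_B)
print(np.allclose(T_rec, T_dut))               # device recovered
```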

  10. Computing Generalized Matrix Inverse on Spiking Neural Substrate.

    PubMed

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
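
    One classical iteration that fits this mold is the Ben-Israel-Cohen scheme, shown below as a plain NumPy sketch (an illustration of the mathematics, not IBM's TrueNorth implementation, which must additionally normalize inputs and quantize weights): X_{k+1} = X_k (2I − A X_k) converges to the Moore-Penrose inverse from a suitably scaled starting point.

```python
# Ben-Israel-Cohen iteration for the Moore-Penrose generalized inverse:
# X_{k+1} = X_k (2I - A X_k), started from alpha * A^T with
# alpha <= 1 / sigma_max(A)^2 (the norm product below guarantees this).
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(5, 3))

alpha = 1.0 / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
X = alpha * A.T
for _ in range(60):                      # quadratic convergence once close
    X = X @ (2.0 * np.eye(5) - A @ X)

print(np.max(np.abs(X - np.linalg.pinv(A))))  # near machine precision
```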

  11. Determination of thermal wave reflection coefficient to better estimate defect depth using pulsed thermography

    NASA Astrophysics Data System (ADS)

    Sirikham, Adisorn; Zhao, Yifan; Mehnen, Jörn

    2017-11-01

    Thermography is a promising method for detecting subsurface defects, but accurate measurement of defect depth remains a challenge because thermographic signals are typically corrupted by imaging noise and affected by 3D heat conduction. Existing methods based on numerical models are susceptible to signal noise, and methods based on analytical models require rigorous assumptions that usually cannot be satisfied in practical applications. This paper presents a new method to improve the measurement accuracy of subsurface defect depth by determining the thermal wave reflection coefficient, usually assumed to be known a priori, directly from the observed data. This is achieved by introducing a new heat transfer model that includes multiple physical parameters to better describe the observed thermal behaviour in pulsed thermographic inspection. Numerical simulations are used to evaluate the performance of the proposed method against four selected state-of-the-art methods. Results show that the accuracy of depth measurement is improved by up to 10% when the noise level is high and the thermal wave reflection coefficient is low. The feasibility of the proposed method on real data is also validated through a case study on characterising flat-bottom holes in carbon fibre reinforced polymer (CFRP) laminates, which have wide application across industry.
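
    The role of the thermal wave reflection coefficient can be sketched with the classical 1D response of a plate of thickness L to a Dirac heat pulse, T(t) ∝ t^(-1/2) [1 + 2 Σ_n R^n exp(-n²L²/(αt))], where R is the back-wall reflection coefficient and α the thermal diffusivity. The snippet below fits R and L to synthetic data with SciPy; it is a hedged illustration of treating R as a free parameter, not the paper's multi-parameter model, and the diffusivity and amplitude (pulse energy and effusivity lumped into Q) are assumed values.

```python
import numpy as np
from scipy.optimize import curve_fit

ALPHA = 1e-7   # assumed thermal diffusivity, m^2/s (illustrative)

def surface_temp(t, Q, L, R, n_terms=50):
    """1D surface temperature after a Dirac heat pulse on a plate of
    thickness L with back-wall thermal reflection coefficient R."""
    s = np.ones_like(t)
    for n in range(1, n_terms + 1):
        s += 2.0 * R**n * np.exp(-(n * L) ** 2 / (ALPHA * t))
    return Q / np.sqrt(np.pi * t) * s

# Synthetic "measurement": defect at 1 mm depth, R = 0.6, plus noise
t = np.linspace(0.05, 20.0, 400)
y = surface_temp(t, Q=1.0, L=1e-3, R=0.6)
y += np.random.default_rng(1).normal(scale=1e-3, size=t.size)

popt, _ = curve_fit(surface_temp, t, y, p0=(0.5, 2e-3, 0.8),
                    bounds=([0, 1e-4, 0], [10, 1e-2, 1]))
Q_fit, L_fit, R_fit = popt
print(L_fit, R_fit)
```

    Early times pin the amplitude Q (the exponential terms vanish), while the late-time shape determines L and R jointly, which is why R cannot simply be assumed when it is low.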

  12. Toward a physics-based rate and state friction law for earthquake nucleation processes in fault zones with granular gouge

    NASA Astrophysics Data System (ADS)

    Ferdowsi, B.; Rubin, A. M.

    2017-12-01

    Numerical simulations of earthquake nucleation rely on constitutive rate and state evolution laws to model earthquake initiation and propagation processes. The response of different state evolution laws to large velocity increases is an important feature of these constitutive relations that can significantly change the style of earthquake nucleation in numerical models. However, there is currently no rigorous understanding of the physical origins of the response of bare rock or gouge-filled fault zones to large velocity increases. This in turn hinders our ability to design physics-based friction laws that can appropriately describe those responses. We here argue that most fault zones form a granular gouge after an initial shearing phase and that it is the behavior of the gouge layer that controls the fault friction. We perform numerical experiments of a confined sheared granular gouge under a range of confining stresses and driving velocities relevant to fault zones and apply velocity steps of 1-3 orders of magnitude to explore the dynamical behavior of the system from grain to macro scales. We compare our numerical observations with experimental data from biaxial double-direct-shear fault gouge experiments under equivalent loading and driving conditions. Our intention is to first investigate the degree to which these numerical experiments, with Hertzian normal and Coulomb friction laws at the grain-grain contact scale and without any time-dependent plasticity, can reproduce experimental fault gouge behavior. We next compare the behavior observed in numerical experiments with predictions of the Dieterich (Aging) and Ruina (Slip) friction laws. Finally, the numerical observations at the grain and meso scales will be used to design a rate and state evolution law that takes into account recent advances in the rheology of granular systems, including local and non-local effects, for a wide range of shear rates and for slow and fast deformation regimes of the fault gouge.
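
    For context, the state evolution laws the paper compares act on the state variable θ in the rate-and-state friction law μ = μ₀ + a ln(V/V₀) + b ln(V₀θ/D_c). A minimal sketch of a velocity-step test under the Dieterich (aging) law, dθ/dt = 1 − Vθ/D_c, with illustrative parameter values (not the paper's gouge simulations):

```python
import numpy as np

# Illustrative rate-and-state parameters (not tied to any experiment)
a, b = 0.010, 0.015            # direct-effect and evolution parameters
mu0, V0, Dc = 0.6, 1e-6, 1e-5  # ref. friction, velocity (m/s), slip distance (m)

def friction(V, theta):
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def run_step(V1, V2, dt=1e-3, t_total=200.0):
    """Slide at V1, then step to V2 halfway; integrate the aging law."""
    theta = Dc / V1                              # steady state at V1
    mus, t = [], 0.0
    while t < t_total:
        V = V1 if t < t_total / 2 else V2
        theta += dt * (1.0 - V * theta / Dc)     # Dieterich (aging) law
        mus.append(friction(V, theta))
        t += dt
    return np.array(mus)

mu = run_step(1e-6, 1e-5)
# Direct effect a*ln(10) at the step, then decay to a steady-state
# friction change of (a - b)*ln(10) over a slip distance ~ Dc.
print(mu.max() - mu[0], mu[-1] - mu[0])
```

    The instantaneous jump scales with a and the eventual weakening with (a − b); how θ evolves between those limits is exactly where the aging and slip laws differ.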

  13. Optical properties of electrohydrodynamic convection patterns: rigorous and approximate methods.

    PubMed

    Bohley, Christian; Heuer, Jana; Stannarius, Ralf

    2005-12-01

    We analyze the optical behavior of two-dimensionally periodic structures that occur in electrohydrodynamic convection (EHC) patterns in nematic sandwich cells. These structures are anisotropic, locally uniaxial, and periodic on the scale of micrometers. For the first time, the optics of these structures is investigated with a rigorous method. The method used for the description of the electromagnetic waves interacting with EHC director patterns is a numerical approach that discretizes directly the Maxwell equations. It works as a space-grid-time-domain method and computes electric and magnetic fields in time steps. This so-called finite-difference-time-domain (FDTD) method is able to generate the fields with arbitrary accuracy. We compare this rigorous method with earlier attempts based on ray-tracing and analytical approximations. Results of optical studies of EHC structures made earlier based on ray-tracing methods are confirmed for thin cells, when the spatial periods of the pattern are sufficiently large. For the treatment of small-scale convection structures, the FDTD method is without alternatives.
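
    The FDTD idea described above, discretizing the Maxwell equations directly on a space grid and stepping electric and magnetic fields in time, can be sketched in one dimension. This is a generic vacuum Yee scheme in normalized units, not the anisotropic 3D solver that EHC director patterns require:

```python
import numpy as np

nx, nt = 400, 350
S = 0.5                      # Courant number (stable for S <= 1 in 1D)
ez = np.zeros(nx)            # electric field at integer grid points
hy = np.zeros(nx - 1)        # magnetic field at half-integer points

for n in range(nt):
    hy += S * np.diff(ez)                      # update H from the curl of E
    ez[1:-1] += S * np.diff(hy)                # update E from the curl of H
    ez[50] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

# The injected pulse splits and propagates ~ S cells per time step.
print(np.abs(ez).max())
```

    In the rigorous 2D/3D EHC case the scalar update coefficients become position-dependent tensors encoding the locally uniaxial director field, but the leapfrog time stepping is the same.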

  14. Inspiration & Insight - a tribute to Niels Reeh

    NASA Astrophysics Data System (ADS)

    Ahlstrom, A. P.; Vieli, A.

    2009-12-01

    Niels Reeh was highly regarded for his contributions to glaciology, specifically through his rigorous combination of numerical modelling and field observations. In 1966 he began his work on the application of beam mechanics to floating glaciers and ice shelves and throughout his life, Niels retained a strong interest in modelling glacier dynamics. In the early 1980s Niels developed a 3D-model for ice sheets and in the late 1980s an advanced flow-line model. Niels Reeh also took part in the early ice-core drilling efforts in Greenland and later pioneered the concept of retrieving similar records from the surface of the ice-sheet margin. Mass balance of glaciers and ice sheets was another theme in Niels Reeh’s research, with a number of important contributions and insights still used when teaching the subject to students. Niels developed elegant models for ablation and snow densification, notable for their applicability in large-scale ice-sheet models and studied the impact of climate change on ice sheets and glaciers. Niels also took his interest in ice-dynamics and mass balance into remote sensing and worked successfully on methods to utilize radar and laser data from airborne surveys and satellites in glaciology. In this, he pioneered the combination of field experiments, satellite observations and numerical modelling to solve problems on the Greenland Ice Sheet. In this presentation we will attempt to provide an overview of Niels Reeh’s many-facetted career in acknowledgement of his contributions to the field of glaciology.

  15. Maximization of permanent trapping of CO{sub 2} and co-contaminants in the highest-porosity formations of the Rock Springs Uplift (Southwest Wyoming): experimentation and multi-scale modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piri, Mohammad

    2014-03-31

    Under this project, a multidisciplinary team of researchers at the University of Wyoming combined state-of-the-art experimental studies, numerical pore- and reservoir-scale modeling, and high performance computing to investigate trapping mechanisms relevant to geologic storage of mixed scCO{sub 2} in deep saline aquifers. The research included investigations in three fundamental areas: (i) the experimental determination of two-phase flow relative permeability functions, relative permeability hysteresis, and residual trapping under reservoir conditions for mixed scCO{sub 2}-brine systems; (ii) improved understanding of permanent trapping mechanisms; and (iii) scientifically correct, fine grid numerical simulations of CO{sub 2} storage in deep saline aquifers taking into account the underlying rock heterogeneity. The specific activities included: (1) Measurement of reservoir-conditions drainage and imbibition relative permeabilities, irreducible brine and residual mixed scCO{sub 2} saturations, and relative permeability scanning curves (hysteresis) in rock samples from RSU; (2) Characterization of wettability through measurements of contact angles and interfacial tensions under reservoir conditions; (3) Development of a physically-based dynamic core-scale pore network model; (4) Development of new, improved high-performance modules for the UW-team simulator to provide new capabilities to the existing model, including hysteresis in the relative permeability functions, geomechanical deformation and an equilibrium calculation (both pore- and core-scale models were rigorously validated against well-characterized core-flooding experiments); and (5) An analysis of long-term permanent trapping of mixed scCO{sub 2} through high-resolution numerical experiments and analytical solutions. The analysis takes into account formation heterogeneity, capillary trapping, and relative permeability hysteresis.

  16. Sampling large random knots in a confined space

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}) . We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
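
    The uniform random polygon model itself is simple to sample: n vertices drawn uniformly in a cube, joined in order and closed up. The sketch below samples such polygons and counts crossings in a planar projection, consistent with the O(n^2) crossing-number scaling mentioned above; it does not attempt the knot determinant or coloring computations of the paper.

```python
import numpy as np

def uniform_random_polygon(n, rng):
    """n vertices uniform in the unit cube; consecutive vertices (and the
    last and first) are understood to be joined -- the URP model."""
    return rng.random((n, 3))

def projected_crossings(verts):
    """Count crossings of the xy-projection of the closed polygon."""
    n = len(verts)
    p = verts[:, :2]
    edges = [(p[i], p[(i + 1) % n]) for i in range(n)]

    def orient(u, v, w):
        return np.sign((v[0] - u[0]) * (w[1] - u[1])
                       - (v[1] - u[1]) * (w[0] - u[0]))

    def intersects(a, b, c, d):
        return (orient(a, b, c) != orient(a, b, d)
                and orient(c, d, a) != orient(c, d, b))

    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue                 # edges adjacent through closure
            a, b = edges[i]
            c, d = edges[j]
            if intersects(a, b, c, d):
                count += 1
    return count

rng = np.random.default_rng(2)
counts = [projected_crossings(uniform_random_polygon(50, rng))
          for _ in range(20)]
print(np.mean(counts))   # grows on the order of n^2 with the vertex count
```

    Because URP edges are long chords of the cube, a constant fraction of edge pairs cross in projection, which is what drives the quadratic growth.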

  17. Non-Maxwellian fast particle effects in gyrokinetic GENE simulations

    NASA Astrophysics Data System (ADS)

    Di Siena, A.; Görler, T.; Doerk, H.; Bilato, R.; Citrin, J.; Johnson, T.; Schneider, M.; Poli, E.; JET Contributors

    2018-04-01

    Fast ions have recently been found to significantly impact and partially suppress plasma turbulence both in experimental and numerical studies in a number of scenarios. Understanding the underlying physics and identifying the range of their beneficial effect is an essential task for future fusion reactors, where highly energetic ions are generated through fusion reactions and external heating schemes. However, in many of the gyrokinetic codes fast ions are, for simplicity, treated as equivalent-Maxwellian-distributed particle species, although it is well known that to rigorously model highly non-thermalised particles, a non-Maxwellian background distribution function is needed. To study the impact of this assumption, the gyrokinetic code GENE has recently been extended to support arbitrary background distribution functions which might be either analytical, e.g., slowing down and bi-Maxwellian, or obtained from numerical fast ion models. A particular JET plasma with strong fast-ion related turbulence suppression is revised with these new code capabilities both with linear and nonlinear gyrokinetic simulations. It appears that the fast ion stabilization tends to be less strong but still substantial with more realistic distributions, and this improves the quantitative power balance agreement with experiments.

  18. Uncertainty Analysis of OC5-DeepCwind Floating Semisubmersible Offshore Wind Test Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N

    This paper examines how to assess the uncertainty levels for test measurements of the Offshore Code Comparison, Continued, with Correlation (OC5)-DeepCwind floating offshore wind system, examined within the OC5 project. The goal of the OC5 project was to validate the accuracy of ultimate and fatigue load estimates from a numerical model of the floating semisubmersible using data measured during scaled tank testing of the system under wind and wave loading. The examination of uncertainty was done after the test, and it was found that the limited amount of data available did not allow for an acceptable uncertainty assessment. Therefore, this paper instead qualitatively examines the sources of uncertainty associated with this test to start a discussion of how to assess uncertainty for these types of experiments and to summarize what should be done during future testing to acquire the information needed for a proper uncertainty assessment. Foremost, future validation campaigns should initiate numerical modeling before testing to guide the test campaign, which should include a rigorous assessment of uncertainty, and perform validation during testing to ensure that the tests address all of the validation needs.

  19. Uncertainty Analysis of OC5-DeepCwind Floating Semisubmersible Offshore Wind Test Campaign: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N

    This paper examines how to assess the uncertainty levels for test measurements of the Offshore Code Comparison, Continued, with Correlation (OC5)-DeepCwind floating offshore wind system, examined within the OC5 project. The goal of the OC5 project was to validate the accuracy of ultimate and fatigue load estimates from a numerical model of the floating semisubmersible using data measured during scaled tank testing of the system under wind and wave loading. The examination of uncertainty was done after the test, and it was found that the limited amount of data available did not allow for an acceptable uncertainty assessment. Therefore, this paper instead qualitatively examines the sources of uncertainty associated with this test to start a discussion of how to assess uncertainty for these types of experiments and to summarize what should be done during future testing to acquire the information needed for a proper uncertainty assessment. Foremost, future validation campaigns should initiate numerical modeling before testing to guide the test campaign, which should include a rigorous assessment of uncertainty, and perform validation during testing to ensure that the tests address all of the validation needs.

  20. Performance assessment of Large Eddy Simulation (LES) for modeling dispersion in an urban street canyon with tree planting

    NASA Astrophysics Data System (ADS)

    Moonen, P.; Gromke, C.; Dorer, V.

    2013-08-01

    The potential of a Large Eddy Simulation (LES) model to reliably predict near-field pollutant dispersion is assessed. To that end, detailed time-resolved numerical simulations of coupled flow and dispersion are conducted for a street canyon with tree planting. Different crown porosities are considered. The model performance is assessed in several steps, ranging from a qualitative comparison with measured concentrations, through statistical data analysis by means of scatter plots and box plots, to the calculation of objective validation metrics. The extensive validation effort highlights and quantifies notable features and shortcomings of the model which would otherwise remain unnoticed. The model performance is found to be spatially non-uniform: closer agreement with measurement data is achieved near the canyon ends than in the central part of the canyon, and typical model acceptance criteria are satisfied more easily for the leeward than for the windward canyon wall. This demonstrates the need for rigorous model evaluation. Only quality-assured models can be used with confidence to support the assessment, planning and implementation of pollutant mitigation strategies.

  1. User's manual for the generalized computer program system. Open-channel flow and sedimentation, TABS-2. Main text

    NASA Astrophysics Data System (ADS)

    Thomas, W. A.; McAnally, W. H., Jr.

    1985-07-01

    TABS-2 is a generalized numerical modeling system for open-channel flows, sedimentation, and constituent transport. It consists of more than 40 computer programs to perform modeling and related tasks. The major modeling components--RMA-2V, STUDH, and RMA-4--calculate two-dimensional, depth-averaged flows, sedimentation, and dispersive transport, respectively. The other programs in the system perform digitizing, mesh generation, data management, graphical display, output analysis, and model interfacing tasks. Utilities include file management and automatic generation of computer job control instructions. TABS-2 has been applied to a variety of waterways, including rivers, estuaries, bays, and marshes. It is designed for use by engineers and scientists who may not have a rigorous computer background. Use of the various components is described in Appendices A-O. The bound version of the report does not include the appendices. A looseleaf form with Appendices A-O is distributed to system users.

  2. Fermionic topological quantum states as tensor networks

    NASA Astrophysics Data System (ADS)

    Wille, C.; Buerschaper, O.; Eisert, J.

    2017-06-01

    Tensor network states, and in particular projected entangled pair states, play an important role in the description of strongly correlated quantum lattice systems. They not only serve as variational states in numerical simulation methods, but also provide a framework for classifying phases of quantum matter and capture notions of topological order in a stringent and rigorous language. The rapid development in this field for spin models and bosonic systems has not yet been mirrored by an analogous development for fermionic models. In this work, we introduce a tensor network formalism capable of capturing notions of topological order for quantum systems with fermionic components. At the heart of the formalism are axioms of fermionic matrix-product operator injectivity, stable under concatenation. Building upon that, we formulate a Grassmann number tensor network ansatz for the ground state of fermionic twisted quantum double models. A specific focus is put on the paradigmatic example of the fermionic toric code. This work shows that the program of describing topologically ordered systems using tensor networks carries over to fermionic models.

  3. Dual RBFNNs-Based Model-Free Adaptive Control With Aspen HYSYS Simulation.

    PubMed

    Zhu, Yuanming; Hou, Zhongsheng; Qian, Feng; Du, Wenli

    2017-03-01

    In this brief, we propose a new data-driven model-free adaptive control (MFAC) method with dual radial basis function neural networks (RBFNNs) for a class of discrete-time nonlinear systems. The main novelty is that it provides a systematic design method for the controller structure through the direct use of I/O data, rather than relying on a first-principles model or an offline identified plant model. The controller structure is determined by an equivalent-dynamic-linearization representation of the ideal nonlinear controller, and the controller parameters are tuned using the pseudogradient information extracted from the I/O data of the plant, which can deal with the unknown nonlinear system. The stability of the closed-loop control system and of the training process for the RBFNNs is guaranteed by rigorous theoretical analysis. Meanwhile, the effectiveness and applicability of the proposed method are further demonstrated by a numerical example and an Aspen HYSYS simulation of a distillation column in the crude styrene production process.
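
    For orientation, the basic compact-form MFAC scheme that such data-driven controllers build on estimates a pseudo-partial derivative φ from I/O increments and uses it in the control update. The sketch below implements that basic scheme on a made-up plant; the paper's dual-RBFNN parameterization and stability machinery are not reproduced, and all gains are illustrative.

```python
import numpy as np

eta, mu = 1.0, 1.0       # pseudo-gradient estimator gains
rho, lam = 0.6, 1.0      # controller gains
y_ref = 1.0              # setpoint

def plant(y, u):
    """'Unknown' discrete-time nonlinear plant (illustrative)."""
    return 0.6 * y + 0.5 * np.tanh(u)

y, u, phi = 0.0, 0.0, 1.0
y_prev, u_prev = 0.0, 0.0
for k in range(300):
    du, dy = u - u_prev, y - y_prev
    # Update the pseudo-partial-derivative estimate from I/O increments
    phi += eta * du / (mu + du**2) * (dy - phi * du)
    phi = max(phi, 1e-3)                 # keep the estimate away from zero
    u_prev, y_prev = u, y
    # Compact-form MFAC control update driven only by measured data
    u = u + rho * phi / (lam + phi**2) * (y_ref - y)
    y = plant(y, u)

print(y)   # settles near the setpoint without any plant model
```

    The controller never sees the plant equations; everything it needs is extracted from the measured input/output increments, which is the defining feature of MFAC.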

  4. Integrating Pharmacology Topics in High School Biology and Chemistry Classes Improves Performance

    ERIC Educational Resources Information Center

    Schwartz-Bloom, Rochelle D.; Halpin, Myra J.

    2003-01-01

    Although numerous programs have been developed for Grade Kindergarten through 12 science education, evaluation has been difficult owing to the inherent problems conducting controlled experiments in the typical classroom. Using a rigorous experimental design, we developed and tested a novel program containing a series of pharmacology modules (e.g.,…

  5. Exploring the Role of Executive Functioning Measures for Social Competence Research

    ERIC Educational Resources Information Center

    Stichter, Janine P.; Christ, Shawn E.; Herzog, Melissa J.; O'Donnell, Rose M.; O'Connor, Karen V.

    2016-01-01

    Numerous research groups have consistently called for increased rigor within the evaluation of social programming to better understand pivotal factors to treatment outcomes. The underwhelming data on the essential features of social competence programs for students with behavior challenges may, in part, be attributed to the manner by which…

  6. Rigorous mathematical modelling for a Fast Corrector Power Supply in TPS

    NASA Astrophysics Data System (ADS)

    Liu, K.-B.; Liu, C.-Y.; Chien, Y.-C.; Wang, B.-S.; Wong, Y. S.

    2017-04-01

    To enhance the stability of the beam orbit, a Fast Orbit Feedback System (FOFB) eliminating undesired disturbances was installed and tested in the third-generation synchrotron light source of the Taiwan Photon Source (TPS) at the National Synchrotron Radiation Research Center (NSRRC). The effectiveness of the FOFB greatly depends on the output performance of the Fast Corrector Power Supply (FCPS); therefore, the design and implementation of an accurate FCPS is essential. A rigorous mathematical model is very useful for shortening the design time and improving the performance of an FCPS. A rigorous mathematical model of an FCPS in the FOFB of TPS, composed of a full-bridge topology and derived by the state-space averaging method, is therefore proposed in this paper. The MATLAB/SIMULINK software is used to construct the proposed mathematical model and to conduct simulations of the FCPS. The effects of different ADC resolutions on the output accuracy of the FCPS are investigated through simulation. An FCPS prototype was realized to demonstrate the effectiveness of the proposed rigorous mathematical model. Simulation and experimental results show that the proposed mathematical model is helpful for selecting appropriate components to meet the accuracy requirements of an FCPS.
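
    The essence of state-space averaging is to replace the switched converter with its duty-cycle-weighted average over one switching period. For a full-bridge stage with bipolar PWM driving an inductive corrector-magnet load, the averaged state equation is L di/dt = (2d - 1)V_in - R i; a sketch with illustrative component values (not those of the TPS FCPS):

```python
import numpy as np

Vin, L, R = 48.0, 2e-3, 0.5        # bus voltage (V), load inductance (H), resistance (ohm)
dt, t_end = 1e-6, 0.05             # integration step and horizon (s)

def averaged_model(d):
    """Integrate the averaged state equation L di/dt = (2d-1)Vin - R i,
    the bipolar-PWM full-bridge averaged over one switching period."""
    i = 0.0
    for _ in range(int(t_end / dt)):
        i += dt / L * ((2 * d - 1) * Vin - R * i)
    return i

# The averaged model predicts a steady-state current (2d-1) * Vin / R
d = 0.6
i_final = averaged_model(d)
print(i_final, (2 * d - 1) * Vin / R)
```

    Such an averaged model is what makes small-signal transfer functions, and hence controller and ADC-resolution studies, tractable without simulating every switching edge.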

  7. Higher-order compositional modeling of three-phase flow in 3D fractured porous media based on cross-flow equilibrium

    NASA Astrophysics Data System (ADS)

    Moortgat, Joachim; Firoozabadi, Abbas

    2013-10-01

    Numerical simulation of multiphase compositional flow in fractured porous media, when all the species can transfer between the phases, is a real challenge. Despite the broad applications in hydrocarbon reservoir engineering and hydrology, a compositional numerical simulator for three-phase flow in fractured media has not appeared in the literature, to the best of our knowledge. In this work, we present a three-phase fully compositional simulator for fractured media, based on higher-order finite element methods. To achieve computational efficiency, we invoke the cross-flow equilibrium (CFE) concept between discrete fractures and a small neighborhood in the matrix blocks. We adopt the mixed hybrid finite element (MHFE) method to approximate convective Darcy fluxes and the pressure equation. This approach is the most natural choice for flow in fractured media. The mass balance equations are discretized by the discontinuous Galerkin (DG) method, which is perhaps the most efficient approach to capture physical discontinuities in phase properties at the matrix-fracture interfaces and at phase boundaries. In this work, we account for gravity and Fickian diffusion. The modeling of capillary effects is discussed in a separate paper. We present the mathematical framework, using the implicit-pressure-explicit-composition (IMPEC) scheme, which facilitates rigorous thermodynamic stability analyses and the computation of phase behavior effects to account for transfer of species between the phases. A deceptively simple CFL condition is implemented to improve numerical stability and accuracy. We provide six numerical examples at both small and larger scales and in two and three dimensions, to demonstrate powerful features of the formulation.
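
    The CFL restriction mentioned for the IMPEC scheme can be illustrated generically: the explicit composition update limits the time step so that no cell exchanges more than a fraction of its pore volume per step. A hedged sketch of that generic form (not the paper's specific criterion; all values are made up):

```python
import numpy as np

porosity = 0.2
cell_volumes = np.array([1.0, 1.0, 0.5, 0.25])   # m^3 (locally refined cells)
outflux = np.array([2e-4, 1e-4, 3e-4, 3e-4])     # total outgoing flux per cell, m^3/s
cfl = 0.5                                        # safety factor

# No cell may be flushed more than cfl pore volumes per explicit step
dt = cfl * np.min(porosity * cell_volumes / outflux)
print(dt)   # the smallest, most refined cell limits the global step
```

    This is why small matrix cells adjacent to fractures, where fluxes are large and volumes small, dominate the admissible time step in fractured-media simulation.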

  8. The effect of temperature on the mechanical aspects of rigor mortis in a liquid paraffin model.

    PubMed

    Ozawa, Masayoshi; Iwadate, Kimiharu; Matsumoto, Sari; Asakura, Kumiko; Ochiai, Eriko; Maebashi, Kyoko

    2013-11-01

    Rigor mortis is an important phenomenon to estimate the postmortem interval in forensic medicine. Rigor mortis is affected by temperature. We measured stiffness of rat muscles using a liquid paraffin model to monitor the mechanical aspects of rigor mortis at five temperatures (37, 25, 10, 5 and 0°C). At 37, 25 and 10°C, the progression of stiffness was slower in cooler conditions. At 5 and 0°C, the muscle stiffness increased immediately after the muscles were soaked in cooled liquid paraffin and then muscles gradually became rigid without going through a relaxed state. This phenomenon suggests that it is important to be careful when estimating the postmortem interval in cold seasons.

  9. Numerical computation of orbits and rigorous verification of existence of snapback repellers.

    PubMed

    Peng, Chen-Chang

    2007-03-01

    In this paper we show how analysis from numerical computation of orbits can be applied to prove the existence of snapback repellers in discrete dynamical systems. That is, we present a computer-assisted method to prove the existence of a snapback repeller of a specific map. The existence of a snapback repeller of a dynamical system implies that it has chaotic behavior [F. R. Marotto, J. Math. Anal. Appl. 63, 199 (1978)]. The method is applied to the logistic map and the discrete predator-prey system.
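
    The logistic-map case can be sketched numerically (this is only an illustration of a snapback orbit, not the paper's rigorous computer-assisted verification): for f(x) = 4x(1-x), the fixed point x* = 3/4 is repelling since |f'(x*)| = 2 > 1, and pulling preimages back toward x* produces a nearby point x0 ≠ x* whose forward orbit lands exactly on x*.

```python
from math import sqrt

def f(x):
    return 4.0 * x * (1.0 - x)

def pre_plus(y):
    """The '+' preimage branch of f; it contracts toward x* = 3/4."""
    return (1.0 + sqrt(1.0 - y)) / 2.0

x_star = 0.75
orbit = [0.25]                # f(0.25) = x*, so this orbit "snaps back"
for _ in range(20):           # pull preimages into a small
    orbit.append(pre_plus(orbit[-1]))   # neighborhood of x*

x0 = orbit[-1]                # near x*, with f^(len(orbit))(x0) = x*
x = x0
for _ in range(len(orbit)):   # iterate forward along the orbit, then f(0.25) = x*
    x = f(x)

print(abs(x0 - x_star), abs(x - x_star))
```

    The existence of such a homoclinic-like point in a repelling neighborhood is exactly Marotto's snapback condition implying chaos; the paper's contribution is making this verification rigorous against rounding error.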

  10. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
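
    A toy analogue of the stochastic inverse problem can be sketched with Manning's formula V = (1/n) R^(2/3) S^(1/2) in place of the hydrodynamic model: push prior samples of n forward and accept-reject them against an observed velocity distribution. This illustrates mapping a probability measure on outputs back to one on the parameter; it is not the paper's measure-theoretic algorithm or ADCIRC, and all values are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
R_h, S = 2.0, 1e-4                      # hydraulic radius (m), slope

def model(n):
    """Manning's formula as the forward map from roughness to velocity."""
    return (1.0 / n) * R_h ** (2.0 / 3.0) * np.sqrt(S)

# Observed output: velocities concentrated around 0.5 m/s
def obs_density(v):                     # unnormalized Gaussian, sigma = 0.03
    return np.exp(-0.5 * ((v - 0.5) / 0.03) ** 2)

# Prior samples of n pushed through the model, then accept-reject
# weighted by how well the predicted velocity matches the observations
n_prior = rng.uniform(0.02, 0.08, size=200_000)
v_pred = model(n_prior)
accept = rng.random(n_prior.size) < obs_density(v_pred)
n_post = n_prior[accept]

print(n_post.mean())   # concentrates near the n solving model(n) = 0.5
```

    The surviving samples approximate a probability measure on Manning's n consistent with the output measure, the same "uncertainty as a probability measure" framing the paper formalizes rigorously.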

  11. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y. B.; Zhu, X. W., E-mail: xiaowuzhu1026@znufe.edu.cn; Dai, H. H.

    Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and a recent study has demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation under consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.

  13. Traveling waves for the mass in mass model of granular chains

    DOE PAGES

    Kevrekidis, Panayotis G.; Stefanov, Atanas G.; Xu, Haitao

    2016-06-03

    In this work, we consider the mass in mass (or mass with mass) system of granular chains, namely, a granular chain involving additionally an internal (or, respectively, external) resonator. For these chains, we rigorously establish that under suitable “anti-resonance” conditions connecting the mass of the resonator and the speed of the wave, bell-shaped traveling-wave solutions continue to exist in the system, in a way reminiscent of the results proven for the standard granular chain of elastic Hertzian contacts. Finally, we also numerically touch upon settings, where the conditions do not hold, illustrating, in line also with recent experimental work, that non-monotonic waves bearing non-vanishing tails may exist in the latter case.

  15. Model calibration for ice sheets and glaciers dynamics: a general theory of inverse problems in glaciology

    NASA Astrophysics Data System (ADS)

    Giudici, Mauro; Baratelli, Fulvia; Vassena, Chiara; Cattaneo, Laura

    2014-05-01

    Numerical modelling of the dynamic evolution of ice sheets and glaciers requires the solution of discrete equations which are based on physical principles (e.g. conservation of mass, linear momentum and energy) and phenomenological constitutive laws (e.g. Glen's and Fourier's laws). These equations must be accompanied by information on the forcing term and by initial and boundary conditions (IBC) on ice velocity, stress and temperature; on the other hand, the constitutive laws involve many physical parameters, which possibly depend on the ice thermodynamic state. The proper forecast of the dynamics of ice sheets and glaciers (forward problem, FP) requires precise knowledge of several quantities which appear in the IBCs, in the forcing terms and in the phenomenological laws and which cannot be easily measured at the study scale in the field. Therefore, these quantities can be obtained through model calibration, i.e. by the solution of an inverse problem (IP). Roughly speaking, the IP aims at finding the optimal values of the model parameters that yield the best agreement of the model output with the field observations and data. The practical application of IPs is usually formulated as a generalised least squares approach, which can be cast in the framework of Bayesian inference. IPs are well developed in several areas of science and geophysics, and several applications have also been proposed in glaciology. The objective of this paper is to provide a further step towards a thorough and rigorous theoretical framework in cryospheric studies. Although the IP is often claimed to be ill-posed, this is rigorously true for continuous domain models, whereas for numerical models, which require the solution of algebraic equations, the properties of the IP must be analysed with more care. 
First of all, it is necessary to clarify the role of experimental and monitoring data to determine the calibration targets and the values of the parameters that can be considered to be fixed, whereas only the model output should depend on the subset of the parameters that can be identified with the calibration procedure and the solution to the IP. It is actually difficult to guarantee the existence and uniqueness of a solution to the IP for complex non-linear models. Also identifiability, a property related to the solution to the FP, and resolution should be carefully considered. Moreover, instability of the IP should not be confused with ill-conditioning and with the properties of the method applied to compute a solution. Finally, sensitivity analysis is of paramount importance to assess the reliability of the estimated parameters and of the model output, but it is often based on the one-at-a-time approach, through the application of the adjoint-state method, to compute local sensitivity, i.e. the uncertainty on the model output due to small variations of the input parameters, whereas first-order approaches that consider the whole possible variability of the model parameters should be considered. This theoretical framework and the relevant properties are illustrated by means of a simple numerical example of isothermal ice flow, based on the shallow ice approximation.
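    The generalized least squares / Bayesian formulation mentioned above has a closed-form solution for a linear forward model, which can be sketched as follows. All matrices and numbers below are a toy illustration, not a glaciological model:

```python
import numpy as np

def gls_estimate(G, y, R, m0, B):
    """MAP / generalized least-squares estimate for a linear forward model
    y = G m + noise, with prior m ~ N(m0, B) and noise ~ N(0, R)."""
    Ri = np.linalg.inv(R)
    Bi = np.linalg.inv(B)
    A = G.T @ Ri @ G + Bi                  # posterior precision matrix
    b = G.T @ Ri @ (y - G @ m0)
    return m0 + np.linalg.solve(A, b)

# Toy example: two parameters, three (noise-free) observations
G = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
m_true = np.array([2.0, -1.0])
y = G @ m_true
R = 0.01 * np.eye(3)                       # observation-error covariance
m0 = np.zeros(2)                           # prior mean
B = 100.0 * np.eye(2)                      # weak prior covariance
m_hat = gls_estimate(G, y, R, m0, B)       # ~ recovers m_true
```

    With a weak prior and noise-free data the estimate essentially recovers the true parameters; identifiability problems show up as near-singular posterior precision matrices.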

  16. What's with all this peer-review stuff anyway?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warner, J. S.

    2010-01-01

    The Journal of Physical Security was ostensibly started to deal with a perceived lack of peer-reviewed journals related to the field of physical security. In fact, concerns have been expressed that the field of physical security is scarcely a field at all. A typical, well-developed field might include the following: multiple peer-reviewed journals devoted to the subject, rigor and critical thinking, metrics, fundamental principles, models and theories, effective standards and guidelines, R and D conferences, professional societies, certifications, its own academic department (or at least numerous academic experts), widespread granting of degrees in the field from 4-year research universities, mechanismsmore » for easily spotting 'snake oil' products and services, and the practice of professionals organizing to police themselves, provide quality control, and determine best practices. Physical Security seems to come up short in a number of these areas. Many of these attributes are difficult to quantify. This paper seeks to focus on one area that is quantifiable: the number of peer-reviewed journals dedicated to the field of Physical Security. In addition, I want to examine the number of overall periodicals (peer-reviewed and non-peer-reviewed) dedicated to physical security, as well as the number of papers published each year about physical security. These are potentially useful analyses because one can often infer how healthy or active a given field is by its publishing activity. For example, there are 2,754 periodicals dedicated to the (very healthy and active) field of physics. This paper concentrates on trade journal versus peer-reviewed journals. Trade journals typically focus on practice-related topics. A paper appropriate for a trade journal is usually based more on practical experience than rigorous studies or research. Models, theories, or rigorous experimental research results will usually not be included. 
A trade journal typically targets a specific market in an industry or trade. Such journals are often considered to be news magazines and may contain industry specific advertisements and/or job ads. A peer-reviewed journal, a.k.a 'referred journal', in contrast, contains peer-reviewed papers. A peer-reviewed paper is one that has been vetted by the peer review process. In this process, the paper is typically sent to independent experts for review and consideration. A peer-reviewed paper might cover experimental results, and/or a rigorous study, analyses, research efforts, theory, models, or one of many other scholarly endeavors.« less

  17. Peer Assessment with Online Tools to Improve Student Modeling

    ERIC Educational Resources Information Center

    Atkins, Leslie J.

    2012-01-01

    Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to…

  18. Spatial scaling and multi-model inference in landscape genetics: Martes americana in northern Idaho

    Treesearch

    Tzeidle N. Wasserman; Samuel A. Cushman; Michael K. Schwartz; David O. Wallin

    2010-01-01

    Individual-based analyses relating landscape structure to genetic distances across complex landscapes enable rigorous evaluation of multiple alternative hypotheses linking landscape structure to gene flow. We utilize two extensions to increase the rigor of the individual-based causal modeling approach to inferring relationships between landscape patterns and gene flow...

  19. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.

  20. Buckling of a stiff thin film on an elastic graded compliant substrate.

    PubMed

    Chen, Zhou; Chen, Weiqiu; Song, Jizhou

    2017-12-01

    The buckling of a stiff film on a compliant substrate has attracted much attention due to its wide applications such as thin-film metrology, surface patterning and stretchable electronics. An analytical model is established for the buckling of a stiff thin film on a semi-infinite elastic graded compliant substrate subjected to in-plane compression. The critical compressive strain and buckling wavelength for the sinusoidal mode are obtained analytically for the case with the substrate modulus decaying exponentially. The rigorous finite element analysis (FEA) is performed to validate the analytical model and investigate the postbuckling behaviour of the system. The critical buckling strain for the period-doubling mode is obtained numerically. The influences of various material parameters on the results are investigated. These results are helpful to provide physical insights on the buckling of elastic graded substrate-supported thin film.
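    For orientation, the classical sinusoidal-wrinkling formulas for a stiff film on a HOMOGENEOUS compliant substrate, a standard reference point for graded-substrate results, can be sketched as follows. The material values are illustrative assumptions only:

```python
import math

def plane_strain_modulus(E, nu):
    """Ebar = E / (1 - nu^2)."""
    return E / (1.0 - nu**2)

def critical_strain(Ef, nuf, Es, nus):
    """Critical compressive strain for sinusoidal wrinkling:
    eps_c = (1/4) * (3 Ebar_s / Ebar_f)^(2/3)."""
    return 0.25 * (3.0 * plane_strain_modulus(Es, nus) /
                   plane_strain_modulus(Ef, nuf)) ** (2.0 / 3.0)

def wrinkle_wavelength(h, Ef, nuf, Es, nus):
    """Wrinkle wavelength lambda = 2 pi h (Ebar_f / (3 Ebar_s))^(1/3); h is the film thickness."""
    return 2.0 * math.pi * h * (plane_strain_modulus(Ef, nuf) /
                                (3.0 * plane_strain_modulus(Es, nus))) ** (1.0 / 3.0)

# Example: 100 nm stiff film (130 GPa) on a PDMS-like substrate (1.8 MPa)
eps_c = critical_strain(130e9, 0.27, 1.8e6, 0.48)
lam = wrinkle_wavelength(100e-9, 130e9, 0.27, 1.8e6, 0.48)
```

    The point of the exponentially graded substrate in the paper is that the effective substrate stiffness entering these expressions changes with depth, shifting both the critical strain and the wavelength.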

  1. DEEP-SaM - Energy-Efficient Provisioning Policies for Computing Environments

    NASA Astrophysics Data System (ADS)

    Bodenstein, Christian; Püschel, Tim; Hedwig, Markus; Neumann, Dirk

    The cost of electricity for datacenters is a substantial operational cost that can and should be managed, not only for saving energy, but also due to the ecologic commitment inherent to power consumption. Often, pursuing this goal results in chronic underutilization of resources, a luxury most resource providers do not have in light of their corporate commitments. This work proposes, formalizes and numerically evaluates DEEP-SaM, a policy for clearing provisioning markets based on the maximization of welfare, subject to utility-level-dependent energy costs and customer satisfaction levels. We focus specifically on linear power models, and the implications of the inherent fixed costs related to energy consumption of modern datacenters and cloud environments. We rigorously test the model by running multiple simulation scenarios and evaluate the results critically. We conclude with positive results and implications for long-term sustainable management of modern datacenters.
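    A linear power model of the kind the paper focuses on can be written in a few lines; the wattages and electricity price below are illustrative assumptions, not values from the study:

```python
def power_draw(u, p_idle, p_max):
    """Linear power model: draw (watts) at utilization u in [0, 1].
    The fixed idle cost p_idle is exactly the fixed-cost term discussed above."""
    assert 0.0 <= u <= 1.0
    return p_idle + (p_max - p_idle) * u

def energy_cost(u, hours, p_idle, p_max, price_per_kwh):
    """Energy cost of running one server at constant utilization u."""
    kwh = power_draw(u, p_idle, p_max) * hours / 1000.0
    return kwh * price_per_kwh

# A server idling at 120 W and peaking at 300 W, half-utilized for a day:
cost = energy_cost(u=0.5, hours=24.0, p_idle=120.0, p_max=300.0, price_per_kwh=0.25)
```

    The nonzero idle draw is why chronic underutilization is expensive: halving utilization does not halve the bill.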

  2. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    One of the main modelling paradigms for complex physical systems are networks. When estimating the network structure from measured signals, typically several assumptions such as stationarity are made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments of transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understand and diagnose numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.

  3. Toroidal gyrofluid equations for simulations of tokamak turbulence

    NASA Astrophysics Data System (ADS)

    Beer, M. A.; Hammett, G. W.

    1996-11-01

    A set of nonlinear gyrofluid equations for simulations of tokamak turbulence is derived by taking moments of the nonlinear toroidal gyrokinetic equation. The moment hierarchy is closed with approximations that model the kinetic effects of parallel Landau damping, toroidal drift resonances, and finite Larmor radius effects. These equations generalize the work of Dorland and Hammett [Phys. Fluids B 5, 812 (1993)] to toroidal geometry by including essential toroidal effects. The closures for phase mixing from toroidal ∇B and curvature drifts take the basic form presented in Waltz et al. [Phys. Fluids B 4, 3138 (1992)], but here a more rigorous procedure is used, including an extension to higher moments, which provides significantly improved accuracy. In addition, trapped ion effects and collisions are incorporated. This reduced set of nonlinear equations accurately models most of the physics considered important for ion dynamics in core tokamak turbulence, and is simple enough to be used in high resolution direct numerical simulations.

  4. Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il

    2014-08-14

    We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, not finite-size corrections. We do this by introducing a new numerical algorithm, which requires negligible computer memory since, contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10^7 × 10^7, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.
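    The lazy, memory-light site generation described above can be illustrated with a simplified percolation-style sketch. The real Kob-Andersen/Fredrickson-Andersen kinetic constraints are more involved; this only shows the "generate states on first visit" idea, with site states drawn independently:

```python
import random
from collections import deque

def unfrozen_cluster_size(p_frozen, seed=0, max_sites=10000):
    """Grow the cluster of 'unfrozen' sites reachable from the origin on an
    effectively unbounded square lattice. Each site's state is generated
    lazily on first visit and cached, so memory scales with the cluster,
    not the lattice size."""
    rng = random.Random(seed)
    state = {(0, 0): False}            # False = unfrozen; the origin is the seed
    queue = deque([(0, 0)])
    size = 0
    while queue and size < max_sites:
        x, y = queue.popleft()
        size += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in state:
                state[(nx, ny)] = rng.random() < p_frozen   # frozen?
                if not state[(nx, ny)]:
                    queue.append((nx, ny))
    return size

small = unfrozen_cluster_size(p_frozen=0.9, seed=1)
```

    Because only visited sites are ever stored, the same idea scales to the enormous effective system sizes quoted in the abstract.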

  5. Statistical mechanics of an ideal active fluid confined in a channel

    NASA Astrophysics Data System (ADS)

    Wagner, Caleb; Baskaran, Aparna; Hagan, Michael

    The statistical mechanics of ideal active Brownian particles (ABPs) confined in a channel is studied by obtaining the exact solution of the steady-state Smoluchowski equation for the 1-particle distribution function. The solution is derived using results from the theory of two-way diffusion equations, combined with an iterative procedure that is justified by numerical results. Using this solution, we quantify the effects of confinement on the spatial and orientational order of the ensemble. Moreover, we rigorously show that both the bulk density and the fraction of particles on the channel walls obey simple scaling relations as a function of channel width. By considering a constant-flux steady state, an effective diffusivity for ABPs is derived which shows signatures of the persistent motion that characterizes ABP trajectories. Finally, we discuss how our techniques generalize to other active models, including systems whose activity is modeled in terms of an Ornstein-Uhlenbeck process.

  6. Buckling of a stiff thin film on an elastic graded compliant substrate

    NASA Astrophysics Data System (ADS)

    Chen, Zhou; Chen, Weiqiu; Song, Jizhou

    2017-12-01

    The buckling of a stiff film on a compliant substrate has attracted much attention due to its wide applications such as thin-film metrology, surface patterning and stretchable electronics. An analytical model is established for the buckling of a stiff thin film on a semi-infinite elastic graded compliant substrate subjected to in-plane compression. The critical compressive strain and buckling wavelength for the sinusoidal mode are obtained analytically for the case with the substrate modulus decaying exponentially. The rigorous finite element analysis (FEA) is performed to validate the analytical model and investigate the postbuckling behaviour of the system. The critical buckling strain for the period-doubling mode is obtained numerically. The influences of various material parameters on the results are investigated. These results are helpful to provide physical insights on the buckling of elastic graded substrate-supported thin film.

  7. Mathematical analysis of the boundary-integral based electrostatics estimation approximation for molecular solvation: exact results for spherical inclusions.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G

    2011-09-28

    We analyze the mathematically rigorous BIBEE (boundary-integral based electrostatics estimation) approximation of the mixed-dielectric continuum model of molecular electrostatics, using the analytically solvable case of a spherical solute containing an arbitrary charge distribution. Our analysis, which builds on Kirkwood's solution using spherical harmonics, clarifies important aspects of the approximation and its relationship to generalized Born models. First, our results suggest a new perspective for analyzing fast electrostatic models: the separation of variables between material properties (the dielectric constants) and geometry (the solute dielectric boundary and charge distribution). Second, we find that the eigenfunctions of the reaction-potential operator are exactly preserved in the BIBEE model for the sphere, which supports the use of this approximation for analyzing charge-charge interactions in molecular binding. Third, a comparison of BIBEE to the recent GBε theory suggests a modified BIBEE model capable of predicting electrostatic solvation free energies to within 4% of a full numerical Poisson calculation. This modified model leads to a projection-framework understanding of BIBEE and suggests opportunities for future improvements. © 2011 American Institute of Physics
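    For the special case of a charge at the center of the sphere, Kirkwood's series reduces to the Born formula (only the l = 0 term survives), which gives a feel for the quantities being approximated. The numbers below are illustrative:

```python
def born_solvation_energy(q, a, eps_in, eps_out):
    """Electrostatic solvation free energy (kcal/mol) of a point charge q (in e)
    at the center of a sphere of radius a (in angstroms), with interior and
    exterior dielectric constants eps_in and eps_out. The Coulomb factor
    332.0636 kcal*angstrom/(mol*e^2) is the standard molecular-modeling value."""
    coulomb = 332.0636
    return -0.5 * coulomb * q**2 * (1.0 / eps_in - 1.0 / eps_out) / a

# Unit charge, 2-angstrom cavity, vacuum interior, water-like exterior
dG = born_solvation_energy(q=1.0, a=2.0, eps_in=1.0, eps_out=80.0)
```

    Off-center charges bring in the higher-l terms of Kirkwood's solution, which is where approximations such as BIBEE and generalized Born start to differ.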

  8. Quantum theory of multiscale coarse-graining.

    PubMed

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A

    2018-03-14

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.
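    In its simplest classical linear form, the variational force-matching step mentioned above is a least-squares problem. The basis, reference force law, and ranges below are illustrative assumptions, not the qMS-CG implementation:

```python
import numpy as np

def fit_pair_force(r_samples, f_ref, n_basis=10):
    """Least-squares coefficients c_k of an expansion f(r) = sum_k c_k r^k
    minimizing sum_i |f(r_i) - F_ref,i|^2 (the force-matching functional)."""
    Phi = np.vander(r_samples, n_basis, increasing=True)   # design matrix
    c, *_ = np.linalg.lstsq(Phi, f_ref, rcond=None)
    return c

def eval_force(c, r):
    return np.polyval(c[::-1], r)          # polyval wants highest degree first

# Toy reference data: forces from a known Lennard-Jones-like pair law,
# which the linear fit should recover to good accuracy on [1.0, 2.5]
r = np.linspace(1.0, 2.5, 200)
f_true = 24.0 * (2.0 / r**13 - 1.0 / r**7)
c = fit_pair_force(r, f_true)
f_fit = eval_force(c, r)
```

    The quantum generalization in the paper changes the reference ensemble (quantum Boltzmann statistics via path integrals), not this linear-algebraic variational step.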

  9. Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix

    NASA Astrophysics Data System (ADS)

    Pastor, Franck; Pastor, Joseph; Kondo, Djimedo

    2012-03-01

    Recent theoretical studies of the literature are concerned by the hollow sphere or spheroid (confocal) problems with orthotropic Hill type matrix. They have been developed in the framework of the limit analysis kinematical approach by using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been easily extended to the orthotropic case. Conversely, instead of the non-obvious extension of the existing kinematic codes, a new original mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code better conditioned and notably more rapid than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results to the analytical upper bounds of Benzerga and Besson (2001) or Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bounds results for the hollow spheroid with the Hill matrix which are compared to those of Monchiet et al. (2008).

  10. Numerical Approach for Goaf-Side Entry Layout and Yield Pillar Design in Fractured Ground Conditions

    NASA Astrophysics Data System (ADS)

    Jiang, Lishuai; Zhang, Peipeng; Chen, Lianjun; Hao, Zhen; Sainoki, Atsushi; Mitri, Hani S.; Wang, Qingbiao

    2017-11-01

    Entry driven along goaf-side (EDG), which is the development of an entry of the next longwall panel along the goaf-side and the isolation of the entry from the goaf with a small-width yield pillar, has been widely employed in China over the past several decades. The width of such a yield pillar has a crucial effect on EDG layout in terms of the ground control, isolation effect and resource recovery rate. Based on a case study, this paper presents an approach for evaluating, designing and optimizing EDG and yield pillar by considering the results from numerical simulations and field practice. To rigorously analyze the ground stability, the numerical study begins with the simulation of goaf-side stress and ground conditions. Four global models with identical conditions, except for the width of the yield pillar, are built, and the effect of pillar width on ground stability is investigated by comparing aspects of stress distribution, failure propagation, and displacement evolution during the entire service life of the entry. Based on simulation results, the isolation effect of the pillar acquired from field practice is also considered. The suggested optimal yield pillar design is validated using a field test in the same mine. Thus, the presented numerical approach provides references and can be utilized for the evaluation, design and optimization of EDG and yield pillars under similar geological and geotechnical circumstances.

  11. Comment on “Symplectic integration of magnetic systems”: A proof that the Boris algorithm is not variational

    DOE PAGES

    Ellison, C. L.; Burby, J. W.; Qin, H.

    2015-11-01

    One popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1], [2], [3] and [4] is the Boris algorithm. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1] and [5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Moreover, without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6], [7], [8], [9], [10] and [11].
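    The Boris update itself is short; a standard non-relativistic sketch (our variable names, illustrative field values) makes one conservation property easy to check: in a pure magnetic field the rotation step preserves kinetic energy exactly.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One Boris step: half electric kick, magnetic rotation,
    half electric kick, then a position update."""
    v_minus = v + 0.5 * q_over_m * dt * E
    t = 0.5 * q_over_m * dt * B                  # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * dt * E
    return x + dt * v_new, v_new

# Gyration in a uniform magnetic field, no electric field
x = np.array([0.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 0.0])
E = np.zeros(3)
B = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_over_m=1.0, dt=0.1)
speed = np.linalg.norm(v)
```

    The magnetic substep is a pure rotation of the velocity, so the speed is preserved to rounding error over arbitrarily many steps, which is the long-term fidelity discussed above.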

  12. Stability of Viscous St. Venant Roll Waves: From Onset to Infinite Froude Number Limit

    NASA Astrophysics Data System (ADS)

    Barker, Blake; Johnson, Mathew A.; Noble, Pascal; Rodrigues, L. Miguel; Zumbrun, Kevin

    2017-02-01

    We study the spectral stability of roll wave solutions of the viscous St. Venant equations modeling inclined shallow water flow, both at onset in the small Froude number or "weakly unstable" limit F → 2^+ and for general values of the Froude number F, including the limit F → +∞. In the former, F → 2^+, limit, the shallow water equations are formally approximated by a Korteweg-de Vries/Kuramoto-Sivashinsky (KdV-KS) equation that is a singular perturbation of the standard Korteweg-de Vries (KdV) equation modeling horizontal shallow water flow. Our main analytical result is to rigorously validate this formal limit, showing that stability as F → 2^+ is equivalent to stability of the corresponding KdV-KS waves in the KdV limit. Together with recent results obtained for KdV-KS by Johnson-Noble-Rodrigues-Zumbrun and Barker, this gives not only the first rigorous verification of stability for any single viscous St. Venant roll wave, but a complete classification of stability in the weakly unstable limit. In the remainder of the paper, we investigate numerically and analytically the evolution of the stability diagram as the Froude number increases to infinity. Notably, we find a transition at around F = 2.3 from weakly unstable to different, large-F behavior, with stability determined by simple power-law relations. The latter stability criteria are potentially useful in hydraulic engineering applications, for which typically 2.5 ≤ F ≤ 6.0.

  13. Advantages and Disadvantages of Weighted Grading. Research Brief

    ERIC Educational Resources Information Center

    Walker, Karen

    2004-01-01

    What are the advantages and disadvantages of weighted grading? The primary purpose of weighted grading has been to encourage high school students to take more rigorous courses. This effort is then acknowledged by more weight being given to the grade for a specified class. There are numerous systems of weighted grading cited in the literature from…

  14. Hertzian Dipole Radiation over Isotropic Magnetodielectric Substrates

    DTIC Science & Technology

    2015-03-01

    Analytical and numerical techniques in the Green's function treatment of microstrip antennas and scatterers. IEE Proceedings, March 1983, 130(2). … Approved for public release; distribution unlimited. This report investigates dipole antennas printed on grounded … engineering of thin planar antennas. Since these materials often require complicated constitutive equations to describe their properties rigorously, the…

  15. Can High School Assessments Predict Developmental Education Enrollment in New Mexico?

    ERIC Educational Resources Information Center

    Weldon, Tyler L.

    2013-01-01

    Thousands of Americans enter postsecondary institutions every year and many are underprepared for college-level work. Subsequently, students enroll in or are placed in remedial courses in preparation for the rigor of college-level classes. Numerous studies have looked at the impact of developmental course work on student outcomes, but few focus…

  16. Weathering the Storms: Acknowledging Challenges to Learning in Times of Stress

    ERIC Educational Resources Information Center

    Hubschman, Betty; Lutz, Marilyn; King, Christine; Wang, Jia; Kopp, David

    2006-01-01

    Students and faculty have had numerous disruptions this academic year with Hurricanes Katrina, Rita, and Wilma developing into major stressors. During this innovative session, we will examine some of the challenges and strategies used by faculty to work with students to maintain empathy and academic rigor in times of stress and disruption, and…

  17. Near Identifiability of Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

    Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.

  18. Understanding the seismic wave propagation inside and around an underground cavity from a 3D numerical survey

    NASA Astrophysics Data System (ADS)

    Esterhazy, Sofi; Schneider, Felix; Perugia, Ilaria; Bokelmann, Götz

    2017-04-01

    Motivated by the need to detect an underground cavity, which might be caused by a nuclear explosion or weapon test, within the procedure of an On-Site Inspection (OSI) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), we aim to provide a basic numerical study of the wave propagation around and inside such an underground cavity. One method to investigate the geophysical properties of an underground cavity allowed by the Comprehensive Nuclear-Test-Ban Treaty is referred to as "resonance seismometry" - a resonance method that uses passive or active seismic techniques, relying on seismic cavity vibrations. This method is in fact not yet entirely determined by the Treaty and so far, there are only very few experimental examples that have been suitably documented to build a proper scientific groundwork. This motivates investigating the problem on a purely numerical level and simulating these events based on recent advances in numerical modeling of wave propagation problems. Our numerical study includes the full elastic wave field in three dimensions. We consider the effects from an incoming plane wave as well as a point source located in the surroundings of the cavity at the surface. While the former can be considered a passive source, like a tele-seismic earthquake, the latter represents a man-made explosion or a vibroseis as used in active seismic techniques. Further, we want to demonstrate the specific characteristics of the scattered wave field from P-waves and S-waves separately. For our simulations in 3D we use the discontinuous Galerkin spectral element code SPEED developed by MOX (The Laboratory for Modeling and Scientific Computing, Department of Mathematics) and DICA (Department of Civil and Environmental Engineering) at the Politecnico di Milano. The computations are carried out on the Vienna Scientific Cluster (VSC). 
The accurate numerical modeling can facilitate the development of proper analysis techniques to detect the remnants of an underground nuclear test, help to set a rigorous scientific base of OSI and contribute to bringing the Treaty into force.

  19. Intermittent Fasting: Is the Wait Worth the Weight?

    PubMed

    Stockman, Mary-Catherine; Thomas, Dylan; Burke, Jacquelyn; Apovian, Caroline M

    2018-06-01

    We review the underlying mechanisms and potential benefits of intermittent fasting (IF) from animal models and recent clinical trials. Numerous variations of IF exist, and study protocols vary greatly in their interpretations of this weight loss trend. Most human IF studies result in minimal weight loss and marginal improvements in metabolic biomarkers, though outcomes vary. Some animal models have found that IF reduces oxidative stress, improves cognition, and delays aging. Additionally, IF has anti-inflammatory effects, promotes autophagy, and benefits the gut microbiome. The benefit-to-harm ratio varies by model, IF protocol, age at initiation, and duration. We provide an integrated perspective on potential benefits of IF as well as key areas for future investigation. In clinical trials, caloric restriction and IF result in similar degrees of weight loss and improvement in insulin sensitivity. Although these data suggest that IF may be a promising weight loss method, IF trials have been of moderate sample size and limited duration. More rigorous research is needed.

  20. An operational global-scale ocean thermal analysis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, R. M.; Pollak, K.D.; Phoebus, P.A.

    1990-04-01

    The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum interpolation data assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented, and results of operational tests of global-scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existing operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer. 37 refs.
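    The optimum interpolation update at the heart of such a system can be sketched in a few lines. This is the generic textbook form with invented toy numbers, not the operational FNOC/OTIS code:

    ```python
    import numpy as np

    # Optimum interpolation blends a background field x_b (e.g., climatology or
    # a model forecast) with observations y, weighted by the background (B) and
    # observation (R) error covariances via a Kalman-style gain.
    def oi_analysis(x_b, y, H, B, R):
        # Gain: K = B H^T (H B H^T + R)^-1
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return x_b + K @ (y - H @ x_b)

    # Two-point temperature field, one observation at the first point.
    x_b = np.array([10.0, 12.0])             # background temperatures (deg C)
    y = np.array([11.0])                     # observed temperature at point 1
    H = np.array([[1.0, 0.0]])               # observation operator
    B = np.array([[1.0, 0.5], [0.5, 1.0]])   # background error covariance
    R = np.array([[1.0]])                    # observation error covariance

    x_a = oi_analysis(x_b, y, H, B, R)
    # The analysis moves toward the observation at the observed point and
    # spreads part of the increment to the unobserved point via B.
    ```

    With these numbers the analysis is [10.5, 12.25]: half of the innovation is applied at the observed point, and the covariance carries a quarter of it to the neighboring point.
    
    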

  1. Optical proximity correction for anamorphic extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas

    2017-10-01

    The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.
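    The anamorphic bookkeeping can be illustrated with a minimal sketch, assuming the usual high-NA EUV demagnifications of 4x in X and 8x in Y (values not stated in the abstract itself):

    ```python
    # Assumed anamorphic demagnifications for a high-NA EUV scanner.
    DEMAG_X, DEMAG_Y = 4.0, 8.0

    # The same mask-scale edge error maps to different wafer-scale errors
    # depending on the axis, which OPC and MRC must account for.
    def mask_to_wafer(dx_mask_nm, dy_mask_nm):
        return dx_mask_nm / DEMAG_X, dy_mask_nm / DEMAG_Y

    dx_w, dy_w = mask_to_wafer(8.0, 8.0)
    # An 8 nm mask error is 2 nm at wafer scale in X but only 1 nm in Y,
    # which is why the larger Y-direction mask scale tolerates mask errors better.
    ```
    
    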

  2. Estimating gravitational radiation from super-emitting compact binary systems

    NASA Astrophysics Data System (ADS)

    Hanna, Chad; Johnson, Matthew C.; Lehner, Luis

    2017-06-01

    Binary black hole mergers are among the most violent events in the Universe, leading to extreme warping of spacetime and copious emission of gravitational radiation. Even though black holes are the most compact objects, they are not necessarily the most efficient emitters of gravitational radiation in binary systems. The final black hole resulting from a binary black hole merger retains a significant fraction of the premerger orbital energy and angular momentum. A nonvacuum system can in principle shed more of this energy than a black hole merger of equivalent mass. We study these super-emitters through a toy model that accounts for the possibility that the merger creates a compact object that retains a long-lived time-varying quadrupole moment. This toy model may capture the merger of (low mass) neutron stars, but it may also be used to consider more exotic compact binaries. We hope that this toy model can serve as a guide to more rigorous numerical investigations into these systems.

  3. A Riemann solver for single-phase and two-phase shallow flow models based on relaxation. Relations with Roe and VFRoe solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelanti, Marica, E-mail: Marica.Pelanti@ens.f; Bouchut, Francois, E-mail: francois.bouchut@univ-mlv.f; Mangeney, Anne, E-mail: mangeney@ipgp.jussieu.f

    2011-02-01

    We present a Riemann solver derived by a relaxation technique for classical single-phase shallow flow equations and for a two-phase shallow flow model describing a mixture of solid granular material and fluid. Our primary interest is the numerical approximation of this two-phase solid/fluid model, whose complexity poses numerical difficulties that cannot be efficiently addressed by existing solvers. In particular, we are concerned with ensuring a robust treatment of dry bed states. The relaxation system used by the proposed solver is formulated by introducing auxiliary variables that replace the momenta in the spatial gradients of the original model systems. The resulting relaxation solver is related to the Roe solver in that its Riemann solution for the flow height and relaxation variables is formally computed as Roe's Riemann solution. The relaxation solver has the advantage of a certain degree of freedom in the specification of the wave structure through the choice of the relaxation parameters. This flexibility can be exploited to robustly handle vacuum states, a well-known difficulty of the standard Roe method, while maintaining Roe's low diffusivity. For the single-phase model, positivity of the flow height is rigorously preserved. For the two-phase model, positivity of the volume fractions is in general not ensured, and a suitable restriction on the CFL number might be needed. Nonetheless, numerical experiments suggest that the proposed two-phase flow solver efficiently models wet/dry fronts and vacuum formation for a large range of flow conditions. As a corollary of our study, we show that for single-phase shallow flow equations the relaxation solver is formally equivalent to the VFRoe solver with conservative variables of Gallouet and Masella [T. Gallouet, J.-M. Masella, Un schéma de Godunov approché, C. R. Acad. Sci. Paris, Série I, 323 (1996) 77-84]. The relaxation interpretation allows establishing positivity conditions for this VFRoe method.

  4. On Discontinuous Piecewise Linear Models for Memristor Oscillators

    NASA Astrophysics Data System (ADS)

    Amador, Andrés; Freire, Emilio; Ponce, Enrique; Ros, Javier

    2017-06-01

    In this paper, we provide for the first time rigorous mathematical results regarding the rich dynamics of piecewise linear memristor oscillators. In particular, for each nonlinear oscillator given in [Itoh & Chua, 2008], we show the existence of an infinite family of invariant manifolds and that the dynamics on such manifolds can be modeled without resorting to discontinuous models. Our approach provides topologically equivalent continuous models with one dimension less but with one extra parameter associated with the initial conditions. It is possible to justify the periodic behavior exhibited by three-dimensional memristor oscillators by taking advantage of known results for planar continuous piecewise linear systems. The analysis developed not only confirms the numerical results contained in previous works [Messias et al., 2010; Scarabello & Messias, 2014] but also goes much further by showing the existence of closed surfaces in the state space which are foliated by periodic orbits. The important role of the initial conditions in justifying the infinite number of periodic orbits exhibited by these models is stressed. The possibility of unsuspected bistable regimes under specific configurations of parameters is also emphasized.

  5. Modeling the depth-sectioning effect in reflection-mode dynamic speckle-field interferometric microscopy

    PubMed Central

    Zhou, Renjie; Jin, Di; Hosseini, Poorya; Singh, Vijay Raj; Kim, Yang-hyo; Kuang, Cuifang; Dasari, Ramachandra R.; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Unlike most optical coherence microscopy (OCM) systems, dynamic speckle-field interferometric microscopy (DSIM) achieves depth sectioning through the spatial-coherence gating effect. Under high numerical aperture (NA) speckle-field illumination, our previous experiments have demonstrated less than 1 μm depth resolution in reflection-mode DSIM, while doubling the diffraction-limited resolution as under structured illumination. However, there has not been a physical model that rigorously describes the speckle imaging process, in particular one explaining the sectioning effect under high illumination and imaging NA settings in DSIM. In this paper, we develop such a model based on diffraction tomography theory and speckle statistics. Using this model, we calculate the system response function, which is used to further obtain the depth resolution limit in reflection-mode DSIM. The theoretically calculated depth resolution limit is in excellent agreement with experimental results. We envision that our physical model will not only help in understanding the imaging process in DSIM, but also enable better design of such systems for depth-resolved measurements in biological cells and tissues. PMID:28085800

  6. Modeling and Analysis of the Reverse Water Gas Shift Process for In-Situ Propellant Production

    NASA Technical Reports Server (NTRS)

    Whitlow, Jonathan E.

    2000-01-01

    This report focuses on the development of mathematical models and simulation tools for the Reverse Water Gas Shift (RWGS) process. This process is a candidate technology for oxygen production on Mars under the In-Situ Propellant Production (ISPP) project. An analysis of the RWGS process was performed using a material balance for the system. The material balance is very complex due to the downstream separations and subsequent recycle inherent in the process. A numerical simulation was developed for the RWGS process to provide a tool for analysis and optimization of experimental hardware, which will be constructed later this year at Kennedy Space Center (KSC). Attempts to solve the material balance for the system, which can be defined by 27 nonlinear equations, initially failed. A convergence scheme was developed which led to successful solution of the material balance; however, the simplified equations used for the gas separation membrane were found insufficient. Additional, more rigorous models were successfully developed and solved for the membrane separation. Sample results from these models are included in this report, with recommendations for the experimental work needed for model validation.
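    A toy version of the recycle-convergence issue the report describes can be sketched as follows. The single tear-stream balance, the conversion c, and the separator split s are invented for illustration and stand in for the actual 27-equation system:

    ```python
    # Damped successive substitution on a "tear stream": guess the recycle flow R,
    # propagate it through reactor conversion c and separator split s, and update
    # until the material balance closes. Damping stabilizes the iteration.
    def solve_recycle(F, c, s, damping=0.5, tol=1e-10, max_iter=1000):
        R = 0.0  # initial tear-stream guess
        for _ in range(max_iter):
            R_new = s * (1.0 - c) * (F + R)  # unconverted material sent back
            if abs(R_new - R) < tol:
                return R_new
            R = (1.0 - damping) * R + damping * R_new  # damped update
        raise RuntimeError("recycle balance did not converge")

    R = solve_recycle(F=100.0, c=0.4, s=0.9)
    # Analytic fixed point for comparison: R* = s(1-c)F / (1 - s(1-c))
    ```
    
    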

  7. Crystal Growth and Fluid Mechanics Problems in Directional Solidification

    NASA Technical Reports Server (NTRS)

    Tanveer, Saleh A.; Baker, Gregory R.; Foster, Michael R.

    2001-01-01

    Our work in directional solidification has been in the following areas: (1) Dynamics of dendrites, including rigorous mathematical analysis of the resulting equations; (2) Examination of the near-structurally-unstable features of the mathematically related Hele-Shaw dynamics; (3) Numerical studies of the steady temperature distribution in a vertical Bridgman device; (4) Numerical study of transient effects in a vertical Bridgman device; (5) Asymptotic treatment of quasi-steady operation of a vertical Bridgman furnace for large Rayleigh numbers and small Biot number in 3D; and (6) Understanding of the Mullins-Sekerka transition in a Bridgman device when fluid dynamics is accounted for.

  8. Numerical Inverse Scattering for the Toda Lattice

    NASA Astrophysics Data System (ADS)

    Bilman, Deniz; Trogdon, Thomas

    2017-06-01

    We present a method to compute the inverse scattering transform (IST) for the famed Toda lattice by solving the associated Riemann-Hilbert (RH) problem numerically. Deformations of the RH problem are incorporated so that the IST can be evaluated in O(1) operations for arbitrary points in the (n, t)-domain, including short- and long-time regimes. No time-stepping is required to compute the solution because (n, t) appear as parameters in the associated RH problem. The solution of the Toda lattice is computed in long-time asymptotic regions where the asymptotics are not known rigorously.
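    For contrast, here is a minimal sketch of the conventional alternative that the Riemann-Hilbert approach avoids: direct symplectic time-stepping of the Toda lattice, whose cost grows with the time horizon. The initial data are arbitrary illustrative values:

    ```python
    import math

    # Toda lattice in (q, p) variables with free ends:
    #   dq_n/dt = p_n,  dp_n/dt = exp(q_{n-1} - q_n) - exp(q_n - q_{n+1})
    def force(q):
        N = len(q)
        F = []
        for n in range(N):
            left = math.exp(q[n - 1] - q[n]) if n > 0 else 0.0
            right = math.exp(q[n] - q[n + 1]) if n < N - 1 else 0.0
            F.append(left - right)
        return F

    def energy(q, p):
        kin = sum(v * v for v in p) / 2.0
        pot = sum(math.exp(q[n] - q[n + 1]) for n in range(len(q) - 1))
        return kin + pot

    # Leapfrog (velocity Verlet) works here because the Hamiltonian is separable.
    def leapfrog(q, p, dt, steps):
        for _ in range(steps):
            f = force(q)
            p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
            q = [qi + dt * pi for qi, pi in zip(q, p)]
            f = force(q)
            p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
        return q, p

    q0, p0 = [0.0, 1.0, 2.0, 3.0], [0.5, 0.0, 0.0, -0.5]
    E0 = energy(q0, p0)
    q1, p1 = leapfrog(q0, p0, dt=0.01, steps=1000)
    # The symplectic integrator keeps the Toda energy nearly conserved,
    # but reaching time t costs O(t) steps, unlike the O(1) RH evaluation.
    ```
    
    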

  9. An efficient numerical procedure for thermohydrodynamic analysis of cavitating bearings

    NASA Technical Reports Server (NTRS)

    Vijayaraghavan, D.

    1995-01-01

    An efficient and accurate numerical procedure to determine the thermo-hydrodynamic performance of cavitating bearings is described. This procedure is based on the earlier development of Elrod for lubricating films, in which the properties across the film thickness are determined at Lobatto points and their distributions are expressed by collocated polynomials. The cavitated regions and their boundaries are rigorously treated. Thermal boundary conditions at the surfaces, including heat dissipation through the metal to the ambient, are incorporated. Numerical examples are presented comparing the predictions using this procedure with earlier theoretical predictions and experimental data. With a few points across the film thickness and across the journal and the bearing in the radial direction, the temperature profile is very well predicted.

  10. Two-Phase Flow Model and Experimental Validation for Bubble Augmented Waterjet Propulsion Nozzle

    NASA Astrophysics Data System (ADS)

    Choi, J.-K.; Hsiao, C.-T.; Wu, X.; Singh, S.; Jayaprakash, A.; Chahine, G.

    2011-11-01

    The concept of thrust augmentation through bubble injection into a waterjet has been the subject of many patents and publications over the past several decades, and there is simplified computational and experimental evidence of thrust increase. In this work, we present more rigorous numerical and experimental studies which aim at investigating two-phase waterjet propulsion systems. The numerical model is based on a Lagrangian-Eulerian method, which considers the bubbly mixture flow both at the microscopic level, where individual bubble dynamics are tracked, and at the macroscopic level, where bubbles are collectively described by the local void fraction of the mixture. DYNAFLOW's unsteady RANS solver, 3DYNAFS-Vis, is used to solve the macro-level variable-density mixture medium, and a fully unsteady two-way coupling between this and the bubble dynamics/tracking code 3DYNAFS-DSM is utilized. Validation studies using measurements in a half-3D experimental setup composed of divergent and convergent sections are presented. The bubbles are visualized, PIV measurements of the flow are taken, bubble size and behavior are observed, and the measured flow field data are used to validate the models. Thrust augmentation as high as 50% could be confirmed both by predictions and by experiments. This work was supported by the Office of Naval Research under contract N00014-07-C-0427, monitored by Dr. Ki-Han Kim.

  11. Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases

    NASA Astrophysics Data System (ADS)

    Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.

    2018-01-01

    We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of the charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with Monte Carlo type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have been benchmarked with each other in some cases. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as this model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via including specific surface reaction coefficients (electron yields, sticking coefficients, etc). In order to test the models rigorously, comparison with experimental ‘benchmark data’ is necessary. Examples will be given regarding the studies of electron power absorption modes in O2, and CF4-Ar discharges, as well as on the effect of modifications of the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.

  12. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    PubMed Central

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
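    The range/precision concern can be illustrated with a small sketch, assuming a generic uniform quantization of the coefficient matrix rather than the paper's actual TrueNorth weight encoding:

    ```python
    import numpy as np

    # Quantize "synaptic weights": normalize to the matrix's max magnitude and
    # round to a fixed number of levels, mimicking a substrate with limited
    # numeric range and precision.
    def quantize(A, levels=255):
        scale = np.max(np.abs(A))
        half = levels // 2
        return np.round(A / scale * half) / half * scale

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 4))   # overdetermined linear system
    b = rng.standard_normal(6)

    x_exact = np.linalg.pinv(A) @ b            # full-precision least-squares solution
    x_quant = np.linalg.pinv(quantize(A)) @ b  # solution from quantized weights

    err = np.linalg.norm(x_quant - x_exact) / np.linalg.norm(x_exact)
    # With ~8-bit quantization the relative error is nonzero but remains modest;
    # bounding it rigorously is the kind of guarantee the paper's framework provides.
    ```
    
    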

  13. Optimization of nonbinary slanted surface-relief gratings as high-efficiency broadband couplers for light guides.

    PubMed

    Bai, Benfeng; Laukkanen, Janne; Kuittinen, Markku; Siitonen, Samuli

    2010-10-01

    We propose and investigate the use of slanted surface-relief gratings with nonbinary profiles as high-efficiency broadband couplers for light guides. First, a Chandezon-method-based rigorous numerical formulation is presented for modeling slanted gratings with overhanging profiles. Then, two typical types of slanted grating couplers--a sinusoidal one and a trapezoidal one--are studied and optimized numerically, both exhibiting a high coupling efficiency of over 50% over the full band of a white LED under normal illumination with unpolarized light. Reasonable structural parameters with good tolerance have been obtained for the optimized designs. It is found that the performance of the couplers depends little on the grating profile shape, but primarily on the grating period and the slant angle of the ridge. The underlying mechanism is analyzed with the equivalence rules of gratings, which provide useful guidelines for the design and fabrication of the couplers. A preliminary investigation has been performed on the fabrication and replication of the slanted overhanging grating couplers, which shows the feasibility of fabrication with mature microfabrication techniques and the prospect of mass production.

  14. A 2D multi-term time and space fractional Bloch-Torrey model based on bilinear rectangular finite elements

    NASA Astrophysics Data System (ADS)

    Qin, Shanlin; Liu, Fawang; Turner, Ian W.

    2018-03-01

    The consideration of diffusion processes in magnetic resonance imaging (MRI) signal attenuation is classically described by the Bloch-Torrey equation. However, many recent works highlight the distinct deviation in MRI signal decay due to anomalous diffusion, which motivates the fractional order generalization of the Bloch-Torrey equation. In this work, we study the two-dimensional multi-term time and space fractional diffusion equation generalized from the time and space fractional Bloch-Torrey equation. By using the Galerkin finite element method with a structured mesh consisting of rectangular elements to discretize in space and the L1 approximation of the Caputo fractional derivative in time, a fully discrete numerical scheme is derived. A rigorous analysis of stability and error estimation is provided. Numerical experiments in the square and L-shaped domains are performed to give an insight into the efficiency and reliability of our method. Then the scheme is applied to solve the multi-term time and space fractional Bloch-Torrey equation, which shows that the extra time derivative terms impact the relaxation process.
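    The L1 approximation of the Caputo derivative used in the time discretization has a compact closed form. The sketch below states the standard L1 weights (notation assumed, not necessarily the paper's) and checks them against the known Caputo derivative of u(t) = t²:

    ```python
    import math

    # L1 approximation of the Caputo fractional derivative of order 0 < a < 1:
    #   D^a u(t_n) ≈ dt^{-a}/Γ(2-a) * Σ_{k=0}^{n-1} b_k [u(t_{n-k}) - u(t_{n-k-1})],
    #   b_k = (k+1)^{1-a} - k^{1-a}
    def caputo_l1(u_vals, dt, alpha):
        n = len(u_vals) - 1
        coef = dt ** (-alpha) / math.gamma(2.0 - alpha)
        total = 0.0
        for k in range(n):
            b_k = (k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha)
            total += b_k * (u_vals[n - k] - u_vals[n - k - 1])
        return coef * total

    # Exact Caputo derivative of u(t) = t^2 is  D^a t^2 = 2 t^{2-a} / Γ(3-a).
    alpha, dt, T = 0.5, 0.01, 1.0
    ts = [i * dt for i in range(int(T / dt) + 1)]
    u = [t * t for t in ts]
    approx = caputo_l1(u, dt, alpha)
    exact = 2.0 * T ** (2.0 - alpha) / math.gamma(3.0 - alpha)
    # The L1 scheme converges at rate O(dt^{2-a}) for smooth u.
    ```
    
    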

  15. Combined Numerical/Analytical Perturbation Solutions of the Navier-Stokes Equations for Aerodynamic Ejector/Mixer Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DeChant, Lawrence Justin

    1998-01-01

    In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development is reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary-layer methods but provides important two-dimensional information not available using quasi-1D approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast that it may be used either as a subroutine or called by a design optimization routine. The models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from the open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M.
These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are good overall. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements, and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.

  16. The sympathy of two pendulum clocks: beyond Huygens' observations.

    PubMed

    Peña Ramirez, Jonatan; Olvera, Luis Alberto; Nijmeijer, Henk; Alvarez, Joaquin

    2016-03-29

    This paper introduces a modern version of the classical Huygens' experiment on synchronization of pendulum clocks. The version presented here consists of two monumental pendulum clocks--ad hoc designed and fabricated--which are coupled through a wooden structure. It is demonstrated that the coupled clocks exhibit 'sympathetic' motion, i.e. the pendula of the clocks oscillate in consonance and in the same direction. Interestingly, when the clocks are synchronized, the common oscillation frequency decreases, i.e. the clocks become slow and inaccurate. In order to rigorously explain these findings, a mathematical model for the coupled clocks is obtained by using well-established physical and mechanical laws and likewise, a theoretical analysis is conducted. Ultimately, the sympathy of two monumental pendulum clocks, interacting via a flexible coupling structure, is experimentally, numerically, and analytically demonstrated.
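    A heavily simplified, assumed illustration of the phase-locking phenomenon (Kuramoto-style phase oscillators, far simpler than the paper's full mechanical model of the coupled clocks): two oscillators with slightly different natural frequencies lock to a constant phase difference once the coupling exceeds half the detuning:

    ```python
    import math

    # Two phase oscillators coupled symmetrically:
    #   dθ1/dt = ω1 + K sin(θ2 - θ1),  dθ2/dt = ω2 - K sin(θ2 - θ1)
    # Their difference d = θ2 - θ1 obeys dd/dt = Δω - 2K sin(d), which has a
    # stable fixed point sin(d*) = Δω / (2K) whenever 2K > Δω.
    def simulate(w1, w2, K, dt=0.001, steps=200_000):
        th1, th2 = 0.0, 1.0  # arbitrary initial phases
        for _ in range(steps):
            d = th2 - th1
            th1 += dt * (w1 + K * math.sin(d))
            th2 += dt * (w2 - K * math.sin(d))
        return (th2 - th1) % (2.0 * math.pi)

    # Detuning 0.02 rad/s, coupling 0.05: locking condition 2K = 0.1 > 0.02 holds.
    diff = simulate(w1=1.00, w2=1.02, K=0.05)
    locked = math.asin(0.02 / (2.0 * 0.05))
    # diff settles at the predicted locked phase difference asin(0.2) ≈ 0.201
    ```
    
    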

  17. Simulations of NLC formation using a microphysical model driven by three-dimensional dynamics

    NASA Astrophysics Data System (ADS)

    Kirsch, Annekatrin; Becker, Erich; Rapp, Markus; Megner, Linda; Wilms, Henrike

    2014-05-01

    Noctilucent clouds (NLCs) are an optical phenomenon occurring in the polar summer mesopause region. These clouds have been known since the late 19th century. Current physical understanding of NLCs is based on numerous observational and theoretical studies, in recent years especially observations from satellites and by lidars from the ground. Theoretical studies based on numerical models that simulate NLCs with the underlying microphysical processes are uncommon. To date, no three-dimensional numerical simulations of NLCs exist that take all relevant dynamical scales into account, i.e., from the planetary scale down to gravity waves and turbulence; rather, modeling is usually restricted to certain flow regimes. In this study we make a more rigorous attempt and simulate NLC formation in the environment of the general circulation of the mesopause region by explicitly including gravity-wave motions. For this purpose we couple the Community Aerosol and Radiation Model for Atmospheres (CARMA) to gravity-wave-resolving dynamical fields simulated beforehand with the Kuehlungsborn Mechanistic Circulation Model (KMCM). In our case, the KMCM is run with a horizontal resolution of T120, which corresponds to a minimum horizontal wavelength of 350 km. This restriction causes the resolved gravity waves to be somewhat biased to larger scales. The simulated general circulation is dynamically controlled by these waves in a self-consistent fashion and provides realistic temperatures and wind fields for July conditions. Assuming a water vapor mixing ratio profile in agreement with current observations results in reasonable supersaturations of up to 100. In a first step, CARMA is applied to a horizontal section covering the Northern Hemisphere. The vertical resolution is 120 levels ranging from 72 to 101 km.
In this paper we present initial results of this coupled dynamical-microphysical model, focusing on the interaction of waves and turbulent diffusion with NLC microphysics.

  18. A new framework for climate sensitivity and prediction: a modelling perspective

    NASA Astrophysics Data System (ADS)

    Ragone, Francesco; Lucarini, Valerio; Lunkeit, Frank

    2016-03-01

    The sensitivity of climate models to increasing CO2 concentration and the climate response at decadal time scales are still major factors of uncertainty for the assessment of the long- and short-term effects of anthropogenic climate change. While the relatively slow progress on these issues is partly due to the inherent inaccuracies of numerical climate models, it also hints at the need for stronger theoretical foundations to the problem of studying climate sensitivity and performing climate change predictions with numerical models. Here we demonstrate that it is possible to use Ruelle's response theory to predict the impact of an arbitrary CO2 forcing scenario on the global surface temperature of a general circulation model. Response theory puts the concept of climate sensitivity on firm theoretical grounds and rigorously addresses the problem of predictability at different time scales. Conceptually, these results show that performing climate change experiments with general circulation models is a well-defined problem from a physical and mathematical point of view. Practically, they show that considering one single CO2 forcing scenario is enough to construct operators able to predict the response of climatic observables to any other CO2 forcing scenario, without the need to perform additional numerical simulations. We also introduce a general relationship between climate sensitivity and climate response at different time scales, thus providing an explicit definition of the inertia of the system at different time scales. This technique also allows for systematically studying, for a large variety of forcing scenarios, the time horizon at which the climate change signal (in an ensemble sense) becomes statistically significant. While what we report here refers to the linear response, the general theory allows for treating nonlinear effects as well.
These results pave the way for redesigning and interpreting climate change experiments from a radically new perspective.
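    The core linear-response idea, that one experiment determines an operator predicting the response to any forcing, can be sketched on a toy linear system (illustrative only; the paper works with a full general circulation model):

    ```python
    import math

    # For a linear system dx/dt = -lam*x + f(t), the Green's function
    # G(t) = exp(-lam*t), obtainable from a single impulse/step experiment,
    # predicts the response to ANY forcing via x(t) = ∫ G(t-s) f(s) ds,
    # with no new simulation of the system itself.
    lam, dt, N = 0.5, 0.01, 2000
    G = [math.exp(-lam * i * dt) for i in range(N)]

    def forcing(t):          # an arbitrary "scenario", here a linear ramp
        return 0.1 * t

    # Prediction by discrete convolution of the known G with the new forcing
    pred = sum(G[N - 1 - j] * forcing(j * dt) for j in range(N)) * dt

    # Direct numerical integration of the same system, for comparison
    x = 0.0
    for j in range(N):
        x += dt * (-lam * x + forcing(j * dt))
    # pred and x agree to discretization accuracy
    ```
    
    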

  19. High School Opportunities for STEM: Comparing Inclusive STEM-Focused and Comprehensive High Schools in Two US Cities

    ERIC Educational Resources Information Center

    Eisenhart, Margaret; Weis, Lois; Allen, Carrie D.; Cipollone, Kristin; Stich, Amy; Dominguez, Rachel

    2015-01-01

    In response to numerous calls for more rigorous STEM (science, technology, engineering, and mathematics) education to improve US competitiveness and the job prospects of next-generation workers, especially those from low-income and minority groups, a growing number of schools emphasizing STEM have been established in the US over the past decade.…

  20. Moment method analysis of linearly tapered slot antennas

    NASA Technical Reports Server (NTRS)

    Koeksal, Adnan

    1993-01-01

    A method of moments (MOM) model for the analysis of the Linearly Tapered Slot Antenna (LTSA) is developed and implemented. The model employs unequal-size rectangular sectioning for the conducting parts of the antenna. Piecewise sinusoidal basis functions are used for the expansion of the conductor current. The effect of the dielectric is incorporated in the model by using an equivalent volume polarization current density and solving the equivalent problem in free space. The feed section of the antenna, including the microstripline, is handled rigorously in the MOM model by including the slotline short circuit and microstripline currents among the unknowns. Comparison with measurements is made to demonstrate the validity of the model for both the air case and the dielectric case. Validity of the model is also verified by extending it to handle the analysis of the skew-plate antenna and comparing the results to skew-segmentation modeling results for the same structure and to available data in the literature. The variation of the radiation pattern of the air LTSA with length, height, and taper angle is investigated, and the results are tabulated. Numerical results for the effect of the dielectric thickness and permittivity are presented.

  1. Rate decline curves analysis of multiple-fractured horizontal wells in heterogeneous reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Jiahang; Wang, Xiaodong; Dong, Wenxiu

    2017-10-01

    In heterogeneous reservoirs with multiple-fractured horizontal wells (MFHWs), the dense network of artificial hydraulic fractures causes the fluid flow around fracture tips to behave as non-linear flow. Moreover, the production behaviors of the individual artificial hydraulic fractures differ. A rigorous semi-analytical model for MFHWs in heterogeneous reservoirs is presented by combining source functions with the boundary element method. The model is first validated against both an analytical model and a numerical simulation model. Then new Blasingame type curves are established. Finally, the effects of critical parameters on the rate decline characteristics of MFHWs are discussed. The results show that heterogeneity has a significant influence on the rate decline characteristics of MFHWs; parameters related to the MFHWs themselves, such as fracture conductivity and length, can also affect the rate characteristics. One novelty of this model is that it considers the elliptical flow around artificial hydraulic fracture tips, so it can predict rate performance more accurately for MFHWs in heterogeneous reservoirs. The other novelty is its ability to model the different production behaviors at different fracture stages. Compared with numerical and analytical methods, this model not only avoids extensive computation but also maintains high accuracy.
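    The abstract does not state how the semi-analytical solution is evaluated, but models of this type are typically formulated in Laplace space and inverted numerically; the Gaver-Stehfest algorithm is a common choice in well-test and rate-decline work. A minimal sketch (the transform F below is generic, not the paper's solution):

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s),
    evaluated at real time t.  N must be even; N = 12 is a typical
    choice balancing accuracy against round-off in the weights."""
    a = math.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        # Standard Stehfest weight V_k.
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j) *
                math.factorial(j - 1) * math.factorial(k - j) *
                math.factorial(2 * j - k))
        Vk *= (-1) ** (k + N // 2)
        total += Vk * F(k * a)
    return a * total

# Sanity check against a known pair: F(s) = 1/s  <->  f(t) = 1.
value = stehfest_invert(lambda s: 1.0 / s, 2.0)
```

    The scheme works well for the smooth, monotone responses typical of rate-decline solutions, which is why it is a standard companion to Blasingame-style type-curve generation.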

  2. Exact solutions for the static bending of Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model

    NASA Astrophysics Data System (ADS)

    Wang, Y. B.; Zhu, X. W.; Dai, H. H.

    2016-08-01

    Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and a recent study has demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze the static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, the once controversial nonlocal bar problem in the literature appears to be well resolved by the reduction method.
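    For reference, the two-phase local/nonlocal constitutive relation underlying this class of models can be written as follows (in the generic form commonly attributed to Eringen; the symbols are standard notation, not taken from this paper):

```latex
% Two-phase stress-strain law for a 1-D structure of length L:
% a convex combination of the local law and a nonlocal integral average.
\sigma(x) = \xi_1 E\,\varepsilon(x)
          + \xi_2 E \int_0^L K\!\left(|x - x'|,\kappa\right)\,\varepsilon(x')\,\mathrm{d}x',
\qquad \xi_1 + \xi_2 = 1,\quad \xi_1,\xi_2 \ge 0,
```

    with an attenuation kernel such as \(K(|x|,\kappa) = e^{-|x|/\kappa}/(2\kappa)\), where \(\kappa\) is the nonlocal length scale. The pure Eringen integral model is recovered in the limit \(\xi_1 \to 0\).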

  3. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE PAGES

    Liu, Jianfeng; Laird, Carl Damon

    2017-09-22

    Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real-data problem, and the encouraging numerical results indicate that our solution framework is promising for solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.
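    The abstract gives no formulation details, but a hypothetical brute-force version illustrates how nonuniform unavailabilities and backup detection levels enter the objective (the paper itself solves a large MINLP with a multitree method; all names and data below are illustrative):

```python
import itertools

def expected_detection_time(placement, det_times, unavail, t_undetected=1000.0):
    """Expected detection time of one leak scenario for a given sensor
    placement, with independent, sensor-specific failure probabilities
    (nonuniform unavailabilities).  Every backup level is counted: the
    j-th fastest sensor matters only if the j-1 faster ones all failed."""
    ordered = sorted(placement, key=lambda s: det_times[s])
    expected, p_all_failed = 0.0, 1.0
    for s in ordered:
        expected += p_all_failed * (1.0 - unavail[s]) * det_times[s]
        p_all_failed *= unavail[s]
    return expected + p_all_failed * t_undetected  # penalty if nothing detects

def best_placement(scenarios, unavail, budget):
    """Enumerate all placements of `budget` sensors and minimize the
    scenario-probability-weighted expected detection time."""
    best, best_cost = None, float("inf")
    for placement in itertools.combinations(range(len(unavail)), budget):
        cost = sum(p * expected_detection_time(placement, times, unavail)
                   for p, times in scenarios)
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost
```

    Here `scenarios` is a list of `(probability, detection_times)` pairs obtained from dispersion simulations; the exhaustive search stands in for the paper's upper/lower-bounding multitree decomposition, which is what makes realistic instance sizes tractable.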

  4. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jianfeng; Laird, Carl Damon

    Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real-data problem, and the encouraging numerical results indicate that our solution framework is promising for solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Tianfeng

    The goal of the proposed research is to create computational flame diagnostics (CFLD): rigorous numerical algorithms for the systematic detection of critical flame features, such as ignition, extinction, and premixed and non-premixed flamelets, and to understand the underlying physicochemical processes controlling limit flame phenomena, flame stabilization, turbulence-chemistry interactions, pollutant emissions, etc. The goal has been accomplished through an integrated effort on mechanism reduction, direct numerical simulations (DNS) of flames at engine conditions and of a variety of turbulent flames with transport fuels, computational diagnostics, turbulence modeling, and DNS data mining and data reduction. The computational diagnostics are primarily based on the chemical explosive mode analysis (CEMA) and a recently developed bifurcation analysis using datasets from first-principles simulations of 0-D reactors, 1-D laminar flames, and 2-D and 3-D DNS (collaboration with J.H. Chen and S. Som at Argonne, and C.S. Yoo at UNIST). Non-stiff reduced mechanisms for transportation fuels amenable to 3-D DNS are developed through graph-based methods and timescale analysis. The flame structures, stabilization mechanisms, local ignition and extinction, and the rate-controlling chemical processes are unambiguously identified through CFLD. CEMA is further employed to segment complex turbulent flames based on the critical flame features, such as premixed reaction fronts, and to enable zone-adaptive turbulent combustion modeling.
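    CEMA operates on the full chemical Jacobian of a detailed mechanism; its core operation, an eigen-analysis in which a positive-real-part eigenvalue flags a chemically explosive mode, can be sketched with a toy 2x2 Jacobian in place of real chemistry:

```python
import numpy as np

def explosive_mode(jac):
    """Return the eigenvalue of the chemical Jacobian with the largest
    real part; a positive real part flags an explosive mixture state
    (the basic CEMA criterion)."""
    eigvals = np.linalg.eigvals(jac)
    lam = eigvals[np.argmax(eigvals.real)]
    return lam, lam.real > 0.0

# Toy upper-triangular Jacobian: eigenvalues 0.5 (explosive) and -2.0.
J = np.array([[0.5, 0.1],
              [0.0, -2.0]])
lam, is_explosive = explosive_mode(J)
```

    In a real application the Jacobian is evaluated at each grid point of a DNS snapshot, and the sign of the explosive eigenvalue segments the flow into pre- and post-ignition zones.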

  6. Evaluation of the fast orthogonal search method for forecasting chloride levels in the Deltona groundwater supply (Florida, USA)

    NASA Astrophysics Data System (ADS)

    El-Jaat, Majda; Hulley, Michael; Tétreault, Michel

    2018-02-01

    Despite the broad impact and importance of saltwater intrusion in coastal aquifers, little research has been directed towards forecasting saltwater intrusion in areas where the source of saltwater is uncertain. Saline contamination of inland groundwater supplies is a concern for numerous communities in the southern US, including the city of Deltona, Florida. Furthermore, conventional numerical tools for forecasting saltwater contamination are heavily dependent on reliable characterization of the physical properties of the underlying aquifers, information that is often absent or challenging to obtain. To overcome these limitations, a reliable alternative data-driven model for forecasting salinity in a groundwater supply was developed for Deltona using the fast orthogonal search (FOS) method. FOS was applied to monthly water-demand data and corresponding chloride concentrations at water supply wells. Groundwater salinity measurements from Deltona water supply wells were used to evaluate the forecasting capability and accuracy of the FOS model. Accurate and reliable groundwater salinity forecasting is necessary to support effective and sustainable coastal-water resource planning and management. The 27 available water supply wells for Deltona were randomly split into three test groups for the purposes of FOS model development and performance assessment. Based on four performance indices (RMSE, RSR, NSEC, and R), the FOS model proved to be a reliable and robust forecaster of groundwater salinity. FOS is relatively inexpensive to apply, does not rely on rigorous physical characterization of the water supply aquifer, and yields reliable estimates of groundwater salinity in active water supply wells.
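    The greedy idea at the core of orthogonal-search methods like FOS — at each step, pick the candidate basis function that most reduces the residual energy after orthogonalizing against the terms already chosen — can be sketched as follows (the candidate set and data here are illustrative, not Deltona's):

```python
import numpy as np

def orthogonal_search(candidates, y, n_terms):
    """Greedy orthogonal search in the spirit of FOS: repeatedly add the
    candidate basis vector giving the largest drop in residual energy,
    Gram-Schmidt-orthogonalized against the terms already selected."""
    residual = y.astype(float).copy()
    chosen, basis = [], []
    for _ in range(n_terms):
        best_j, best_v, best_gain = None, None, 0.0
        for j, c in enumerate(candidates):
            if j in chosen:
                continue
            v = c.astype(float).copy()
            for b in basis:              # orthogonalize against selected terms
                v -= (v @ b) * b
            norm = np.linalg.norm(v)
            if norm < 1e-12:             # candidate adds nothing new
                continue
            v /= norm
            gain = (residual @ v) ** 2   # residual energy removed by this term
            if gain > best_gain:
                best_j, best_v, best_gain = j, v, gain
        if best_j is None:
            break
        chosen.append(best_j)
        basis.append(best_v)
        residual -= (residual @ best_v) * best_v
    return chosen, residual
```

    Because the search is greedy and the basis is orthogonalized incrementally, the method is cheap compared with refitting a full regression at every step, which is one reason FOS is attractive when physical aquifer characterization is unavailable.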

  7. Rotation and anisotropy of galaxies revisited

    NASA Astrophysics Data System (ADS)

    Binney, James

    2005-11-01

    The use of the tensor virial theorem (TVT) as a diagnostic of anisotropic velocity distributions in galaxies is revisited. The TVT provides a rigorous global link between velocity anisotropy, rotation and shape, but the quantities appearing in it are not easily estimated observationally. Traditionally, use has been made of a centrally averaged velocity dispersion and the peak rotation velocity. Although this procedure cannot be rigorously justified, tests on model galaxies show that it works surprisingly well. With the advent of integral-field spectroscopy it is now possible to establish a rigorous connection between the TVT and observations. The TVT is reformulated in terms of sky-averages, and the new formulation is tested on model galaxies.
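    In its standard stellar-dynamics form (generic notation, not specific to this paper), the TVT balances the ordered-motion, random-motion, and potential-energy tensors of a steady-state system:

```latex
% Tensor virial theorem for a collisionless system in a steady state.
2T_{jk} + \Pi_{jk} + W_{jk} = 0,
\qquad
T_{jk} = \tfrac{1}{2}\int \rho\,\bar{v}_j\,\bar{v}_k\,\mathrm{d}^3x,
\quad
\Pi_{jk} = \int \rho\,\sigma_{jk}^2\,\mathrm{d}^3x,
\quad
W_{jk} = -\int \rho\, x_j \frac{\partial \Phi}{\partial x_k}\,\mathrm{d}^3x.
```

    Ratios of the diagonal components tie the ordered-to-random motion ratio \(v/\sigma\) to the shape and anisotropy of the system, which is what makes the theorem usable as an observational diagnostic once the tensors can be estimated from data.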

  8. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter, and the effect on model predictions is analyzed.
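    A hypothetical sample-based sketch conveys the measure-theoretic idea: push prior parameter samples through the model's quantity-of-interest map, estimate the pushforward density, and reweight each sample by the ratio of the observed output density to that pushforward (the function names and data are illustrative; a real ADCIRC study needs far more machinery):

```python
import numpy as np

def inverse_weights(prior_samples, qoi_map, observed_pdf, bins=20):
    """Sample-based sketch of a measure-theoretic stochastic inverse:
    weight each prior sample by observed density / pushforward density,
    with the pushforward estimated by a histogram of mapped samples."""
    q = np.array([qoi_map(s) for s in prior_samples])
    hist, edges = np.histogram(q, bins=bins, density=True)
    idx = np.clip(np.digitize(q, edges) - 1, 0, len(hist) - 1)
    w = observed_pdf(q) / np.maximum(hist[idx], 1e-12)
    return w / w.sum()  # normalized weights define the updated measure
```

    The "condition" notion discussed in the paper corresponds, roughly, to how well-behaved this density ratio is for a given choice of output quantity of interest.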

  9. Fire Suppression in Low Gravity Using a Cup Burner

    NASA Technical Reports Server (NTRS)

    Takahashi, Fumiaki; Linteris, Gregory T.; Katta, Viswanath R.

    2004-01-01

    Longer duration missions to the moon, to Mars, and on the International Space Station increase the likelihood of accidental fires. The goal of the present investigation is to: (1) understand the physical and chemical processes of fire suppression in various gravity and O2 levels simulating spacecraft, Mars, and moon missions; (2) provide rigorous testing of numerical models, which include detailed combustion suppression chemistry and radiation sub-models; and (3) provide basic research results useful for advances in space fire safety technology, including new fire-extinguishing agents and approaches. The structure and extinguishment of enclosed, laminar, methane-air co-flow diffusion flames formed on a cup burner have been studied experimentally and numerically using various fire-extinguishing agents (CO2, N2, He, Ar, CF3H, and Fe(CO)5). The experiments involve both 1g laboratory testing and low-g testing (in drop towers and the KC-135 aircraft). The computation uses a direct numerical simulation with detailed chemistry and radiative heat-loss models. An agent was introduced into a low-speed coflowing oxidizing stream until extinguishment occurred under a fixed minimal fuel velocity, and thus, the extinguishing agent concentrations were determined. The extinguishment of cup-burner flames, which resemble real fires, occurred via a blowoff process (in which the flame base drifted downstream) rather than the global extinction phenomenon typical of counterflow diffusion flames. The computation revealed that the peak reactivity spot (the reaction kernel) formed in the flame base was responsible for attachment and blowoff of the trailing diffusion flame. Furthermore, the buoyancy-induced flame flickering in 1g and thermal and transport properties of the agents affected the flame extinguishment limits.

  10. Fire Suppression in Low Gravity Using a Cup Burner

    NASA Technical Reports Server (NTRS)

    Takahashi, Fumiaki; Linteris, Gregory T.; Katta, Viswanath R.

    2004-01-01

    Longer duration missions to the moon, to Mars, and on the International Space Station increase the likelihood of accidental fires. The goal of the present investigation is to: (1) understand the physical and chemical processes of fire suppression in various gravity and O2 levels simulating spacecraft, Mars, and moon missions; (2) provide rigorous testing of numerical models, which include detailed combustion-suppression chemistry and radiation sub-models; and (3) provide basic research results useful for advances in space fire safety technology, including new fire-extinguishing agents and approaches. The structure and extinguishment of enclosed, laminar, methane-air co-flow diffusion flames formed on a cup burner have been studied experimentally and numerically using various fire-extinguishing agents (CO2, N2, He, Ar, CF3H, and Fe(CO)5). The experiments involve both 1g laboratory testing and low-g testing (in drop towers and the KC-135 aircraft). The computation uses a direct numerical simulation with detailed chemistry and radiative heat-loss models. An agent was introduced into a low-speed coflowing oxidizing stream until extinguishment occurred under a fixed minimal fuel velocity, and thus, the extinguishing agent concentrations were determined. The extinguishment of cup-burner flames, which resemble real fires, occurred via a blowoff process (in which the flame base drifted downstream) rather than the global extinction phenomenon typical of counterflow diffusion flames. The computation revealed that the peak reactivity spot (the reaction kernel) formed in the flame base was responsible for attachment and blowoff of the trailing diffusion flame. Furthermore, the buoyancy-induced flame flickering in 1g and thermal and transport properties of the agents affected the flame extinguishment limits.

  11. Rigor of cell fate decision by variable p53 pulses and roles of cooperative gene expression by p53

    PubMed Central

    Murakami, Yohei; Takada, Shoji

    2012-01-01

    Upon DNA damage, the cell fate decision between survival and apoptosis is largely regulated by p53-related networks. Recent experiments found a series of discrete p53 pulses in individual cells, which led to the hypothesis that the cell fate decision upon DNA damage is controlled by counting the number of p53 pulses. Under this hypothesis, Sun et al. (2009) modeled the Bax activation switch in the apoptosis signal transduction pathway that can rigorously “count” the number of uniform p53 pulses. Based on experimental evidence, here we use variable p53 pulses with Sun et al.’s model to investigate how the variability in p53 pulses affects the rigor of the cell fate decision by the pulse number. Our calculations showed that the experimentally anticipated variability in the pulse sizes reduces the rigor of the cell fate decision. In addition, we tested the roles of cooperativity in PUMA expression by p53, finding that lower cooperativity is plausible for a more rigorous cell fate decision. This is because the variability in the p53 pulse height is amplified more in PUMA expression in the more cooperative cases. PMID:27857606

  12. A Penalty Method for the Numerical Solution of Hamilton-Jacobi-Bellman (HJB) Equations in Finance

    NASA Astrophysics Data System (ADS)

    Witte, J. H.; Reisinger, C.

    2010-09-01

    We present a simple and easy-to-implement method for the numerical solution of a rather general class of Hamilton-Jacobi-Bellman (HJB) equations. In many cases, the considered problems have only a viscosity solution, to which, fortunately, many intuitive (e.g. finite difference based) discretisations can be shown to converge. However, especially when using fully implicit time stepping schemes with their desirable stability properties, one is still faced with the considerable task of solving the resulting nonlinear discrete system. In this paper, we introduce a penalty method which approximates the nonlinear discrete system to an order of O(1/ρ), where ρ>0 is the penalty parameter, and we show that an iterative scheme can be used to solve the penalised discrete problem in finitely many steps. We include a number of examples from mathematical finance for which the described approach yields a rigorous numerical scheme and present numerical results.
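    As a hedged illustration of the idea, a discrete obstacle problem (the simplest HJB-type nonlinear system, as arises e.g. for American options) can be solved by a penalty iteration; the O(1/ρ) penalisation and the finitely terminating iteration follow the abstract's description, but the specific toy problem below is made up:

```python
import numpy as np

def penalty_solve(A, b, g, rho=1e6, tol=1e-10, max_iter=50):
    """Penalty iteration for the discrete obstacle problem
    min(A u - b, u - g) = 0, approximated by the penalised system
    A u - b - rho * max(g - u, 0) = 0, whose error is O(1/rho).
    Each sweep re-solves a linear system with the currently active
    penalty terms; for M-matrices the iteration terminates finitely."""
    u = np.maximum(np.linalg.solve(A, b), g)       # feasible starting guess
    for _ in range(max_iter):
        active = (u < g).astype(float)             # nodes where the penalty bites
        M = A + rho * np.diag(active)
        u_new = np.linalg.solve(M, b + rho * active * g)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# Toy example: 5-node discrete Laplacian, zero source, obstacle u >= 0.5;
# the solution simply sits on the obstacle.
A = 2.0 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
u = penalty_solve(A, np.zeros(5), 0.5 * np.ones(5))
```

    The attraction of the penalty formulation is exactly what the abstract highlights: the nonsmooth complementarity system is replaced by a sequence of ordinary linear solves, with an error controlled by a single parameter ρ.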

  13. Design and analysis of a fast, two-mirror soft-x-ray microscope

    NASA Technical Reports Server (NTRS)

    Shealy, D. L.; Wang, C.; Jiang, W.; Jin, L.; Hoover, R. B.

    1992-01-01

    During the past several years, a number of investigators have addressed the design, analysis, fabrication, and testing of spherical Schwarzschild microscopes for soft-x-ray applications using multilayer coatings. Some of these systems have demonstrated diffraction limited resolution for small numerical apertures. Rigorously aplanatic, two-aspherical mirror Head microscopes can provide near diffraction limited resolution for very large numerical apertures. The relationships between the numerical aperture, mirror radii and diameters, magnifications, and total system length for Schwarzschild microscope configurations are summarized. Also, an analysis of the characteristics of the Head-Schwarzschild surfaces will be reported. The numerical surface data predicted by the Head equations were fit by a variety of functions and analyzed by conventional optical design codes. Efforts have been made to determine whether current optical substrate and multilayer coating technologies will permit construction of a very fast Head microscope which can provide resolution approaching that of the wavelength of the incident radiation.

  14. Three-dimensional geoelectric modelling with optimal work/accuracy rate using an adaptive wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.

    2010-08-01

    Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. 
Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh that conforms to subsurface boundaries. Such algorithms represent the current state of the art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence between the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
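    The adaptive machinery in the paper is far more sophisticated, but the basic wavelet mechanism it exploits — represent a field in an orthonormal wavelet basis and keep only the coefficients above a tolerance, so the number of degrees of freedom tracks the local complexity of the model — can be illustrated with a plain Haar transform:

```python
import numpy as np

def haar_forward(x):
    """Full orthonormal Haar decomposition of a length-2^k signal."""
    coeffs, a = [], x.astype(float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # smooth (average) part
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail part
        coeffs.append(d)
        a = s
    coeffs.append(a)                              # coarsest average
    return coeffs

def haar_inverse(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        s = np.empty(2 * len(a))
        s[0::2] = (a + d) / np.sqrt(2.0)
        s[1::2] = (a - d) / np.sqrt(2.0)
        a = s
    return a

def compress(x, tol):
    """Drop detail coefficients below tol; by orthonormality the L2 error
    equals the norm of what was dropped.  Returns the reconstruction and
    the number of degrees of freedom actually kept."""
    coeffs = haar_forward(x)
    kept = 0
    for d in coeffs[:-1]:
        small = np.abs(d) < tol
        d[small] = 0.0
        kept += int(np.sum(~small))
    return haar_inverse(coeffs), kept + len(coeffs[-1])
```

    A piecewise-constant "model" compresses to a handful of coefficients while remaining exactly representable, which is the quasi-minimal-degrees-of-freedom property the paper relies on.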

  15. Modeling of chromosome intermingling by partially overlapping uniform random polygons.

    PubMed

    Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J

    2011-03-01

    During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that one uniform random polygon of length n partially overlapping a fixed polygon forms an unsplittable link with it is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε such that ε = 0 indicates no overlapping and ε = 1 indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.

  16. Towards a Credibility Assessment of Models and Simulations

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Green, Lawrence L.; Luckring, James M.; Morrison, Joseph H.; Tripathi, Ram K.; Zang, Thomas A.

    2008-01-01

    A scale is presented to evaluate the rigor of modeling and simulation (M&S) practices for the purpose of supporting a credibility assessment of the M&S results. The scale distinguishes required and achieved levels of rigor for a set of M&S elements that contribute to credibility including both technical and process measures. The work has its origins in an interest within NASA to include a Credibility Assessment Scale in development of a NASA standard for models and simulations.

  17. Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale

    PubMed Central

    Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv

    2015-01-01

    X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner. PMID:26169570
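    The abstract does not publish the simulator itself, but the standard wave-optics building block for this kind of tool — free-space propagation of a complex field by the angular-spectrum method — looks roughly like this (parameters illustrative):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2-D complex field a distance z in free space using the
    angular-spectrum method: FFT to spatial frequencies, multiply by the
    exact free-space transfer function, and transform back.  Evanescent
    components (outside the propagation circle) are discarded."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

    In an XPCI simulation the input field would be the wave exiting the phantom (amplitude and phase modulated by the projected attenuation and refractive-index decrement), and the detected intensity |u|² after propagation exhibits the characteristic edge enhancement.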

  18. Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale

    NASA Astrophysics Data System (ADS)

    Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv

    2015-07-01

    X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner.

  19. Improved Finite Element Modeling of the Turbofan Engine Inlet Radiation Problem

    NASA Technical Reports Server (NTRS)

    Roy, Indranil Danda; Eversman, Walter; Meyer, H. D.

    1993-01-01

    Improvements have been made in the finite element model of the acoustic radiated field from a turbofan engine inlet in the presence of a mean flow. The problem of acoustic radiation from a turbofan engine inlet is difficult to model numerically because of the large domain and high frequencies involved. A numerical model with conventional finite elements in the near field and wave envelope elements in the far field has been constructed. By employing an irrotational mean flow assumption, both the mean flow and the acoustic perturbation problem have been posed in an axisymmetric formulation in terms of the velocity potential; thereby minimizing computer storage and time requirements. The finite element mesh has been altered in search of an improved solution. The mean flow problem has been reformulated with new boundary conditions to make it theoretically rigorous. The sound source at the fan face has been modeled as a combination of positive and negative propagating duct eigenfunctions. Therefore, a finite element duct eigenvalue problem has been solved on the fan face and the resulting modal matrix has been used to implement a source boundary condition on the fan face in the acoustic radiation problem. In the post processing of the solution, the acoustic pressure has been evaluated at Gauss points inside the elements and the nodal pressure values have been interpolated from them. This has significantly improved the results. The effect of the geometric position of the transition circle between conventional finite elements and wave envelope elements has been studied and it has been found that the transition can be made nearer to the inlet than previously assumed.

  20. Nonconstant Positive Steady States and Pattern Formation of 1D Prey-Taxis Systems

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Song, Yang; Shao, Lingjie

    2017-02-01

    Prey-taxis is the process by which predators move preferentially toward patches with the highest density of prey. It is well known to play an important role in biological control and the maintenance of biodiversity. To model the coexistence and spatial distributions of predator and prey species, this paper concerns nonconstant positive steady states of a wide class of prey-taxis systems with general functional responses over a 1D domain. Linearized stability of the positive equilibrium is analyzed to show that prey-taxis destabilizes prey-predator homogeneity when prey repulsion (e.g., due to a volume-filling effect in the predator species or group defense in the prey species) is present, and stabilizes the homogeneity otherwise. Then, we investigate the existence and stability of nonconstant positive steady states of the system through rigorous bifurcation analysis. Moreover, we provide detailed and thorough calculations to determine properties such as the pitchfork and turning direction of the local branches. Our stability results also provide a stable wave mode selection mechanism for the reaction-advection-diffusion systems, including the prey-taxis models, considered in this paper. Finally, we provide numerical studies of prey-taxis systems with Holling-Tanner kinetics to illustrate and support our theoretical findings. Our numerical simulations demonstrate that the 2 × 2 prey-taxis system is able to model the formation and evolution of various striking patterns, such as spikes, periodic oscillations, and coarsening, even when the domain is one-dimensional. These dynamics can model the coexistence and spatial distributions of interacting prey and predator species. We also give some insights into how system parameters influence pattern formation in these models.
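    The linearized-stability argument can be sketched numerically for a hypothetical 2x2 prey-taxis system (all kinetics and parameter values below are made up for illustration, not taken from the paper): with stable kinetics and no taxis every spatial mode decays, while sufficiently strong prey repulsion destabilizes a band of wavenumbers, consistent with the destabilization criterion stated above.

```python
import numpy as np

def growth_rate(k, d1, d2, chi, J):
    """Largest real part of the linearization of a hypothetical 2x2
    prey-taxis system at wavenumber k.  Row 0 is the predator, row 1
    the prey; J is the kinetics Jacobian at the homogeneous steady
    state, d1 and d2 are diffusivities, and chi >= 0 is the strength
    of repulsive taxis (prey repulsion, e.g. volume filling)."""
    M = np.array([[J[0][0] - d1 * k**2, J[0][1] - chi * k**2],
                  [J[1][0],             J[1][1] - d2 * k**2]])
    return float(np.max(np.linalg.eigvals(M).real))

# Stable kinetics: predator benefits from prey (+0.5), prey is consumed
# (-0.5), both self-limit (-1).  Trace < 0 and det > 0, so no diffusion-
# driven instability without taxis.
J = [[-1.0, 0.5], [-0.5, -1.0]]
```

    Scanning `growth_rate` over k for increasing chi locates the unstable band whose most unstable mode sets the emerging pattern wavelength, which is the "wave mode selection" the abstract refers to.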

  1. Finite element models of the human shoulder complex: a review of their clinical implications and modelling techniques.

    PubMed

    Zheng, Manxu; Zou, Zhenmin; Bartolo, Paulo Jorge Da Silva; Peach, Chris; Ren, Lei

    2017-02-01

    The human shoulder is a complicated musculoskeletal structure and is a perfect compromise between mobility and stability. The objective of this paper is to provide a thorough review of previous finite element (FE) studies in biomechanics of the human shoulder complex. Those FE studies to investigate shoulder biomechanics have been reviewed according to the physiological and clinical problems addressed: glenohumeral joint stability, rotator cuff tears, joint capsular and labral defects and shoulder arthroplasty. The major findings, limitations, potential clinical applications and modelling techniques of those FE studies are critically discussed. The main challenges faced in order to accurately represent the realistic physiological functions of the shoulder mechanism in FE simulations involve (1) subject-specific representation of the anisotropic nonhomogeneous material properties of the shoulder tissues in both healthy and pathological conditions; (2) definition of boundary and loading conditions based on individualised physiological data; (3) more comprehensive modelling describing the whole shoulder complex including appropriate three-dimensional (3D) representation of all major shoulder hard tissues and soft tissues and their delicate interactions; (4) rigorous in vivo experimental validation of FE simulation results. Fully validated shoulder FE models would greatly enhance our understanding of the aetiology of shoulder disorders, and hence facilitate the development of more efficient clinical diagnoses, non-surgical and surgical treatments, as well as shoulder orthotics and prosthetics. © 2016 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons Ltd.

  2. Peer Review of EPA's Draft BMDS Document: Exponential ...

    EPA Pesticide Factsheets

    BMDS is one of the Agency's premier tools for quantitative risk assessment; the validity and reliability of its statistical models are therefore of paramount importance. This page provides links to peer reviews of the BMDS applications and their models as they were developed and eventually released, documenting the rigorous review process taken to provide the best science tools available for statistical modeling.

  3. A Regional Seismic Travel Time Model for North America

    DTIC Science & Technology

    2010-09-01

    velocity at the Moho, the mantle velocity gradient, and the average crustal velocity. After tomography across Eurasia, rigorous tests find that Pn travel time residuals are reduced... and S-wave velocity in the crustal layers and in the upper mantle. A good prior model is essential because the RSTT tomography inversion is invariably...

  4. Numerical Simulations of Turbulent Trapping in the Weak Beam-Plasma Instability

    DTIC Science & Technology

    1986-06-05

    [Equation snippet garbled in extraction] ... H(·, u) = cos θ (Eq. 33), where a small parameter symbolizes the importance of the time derivative: Eq. (33) is in fact rigorous only... Spectra, Dover, 1985. Table 1 (Simulation Parameters): two runs with grids of 2048 × 200 and 1024 × 250.

  5. Impact of insects on multiple-use values of north-central forests: an experimental rating scheme.

    Treesearch

    Norton D. Addy; Harold O. Batzer; William J. Mattson; William E. Miller

    1971-01-01

    Ranking or assigning priorities to problems is an essential step in research problem selection. Up to now, no rigorous basis for ranking forest insects has been available. We evaluate and rank forest insects with a systematic numerical scheme that considers insect impact on the multiple-use values of timber, wildlife, recreation, and water. The result is a better...

  6. Multiscale model within-host and between-host for viral infectious diseases.

    PubMed

    Almocera, Alexis Erich S; Nguyen, Van Kinh; Hernandez-Vargas, Esteban A

    2018-05-08

    Multiscale models possess the potential to uncover new insights into infectious diseases. Here, a rigorous stability analysis of a coupled within-host and between-host model is presented. The within-host model describes viral replication and the respective immune response, while disease transmission is represented by a susceptible-infected model. The bridging of scales from within- to between-host treats transmission as a function of the viral load. Consequently, stability and bifurcation analyses were developed coupling the two basic reproduction numbers of the within-host and between-host subsystems. Local stability results for each subsystem, including a unique stable equilibrium point, recapitulate classical approaches to infection and epidemic control. Using a Lyapunov function, global stability of the between-host system was obtained. Our main result was the derivation of the between-host reproduction number as an increasing function of the within-host reproduction number. Numerical analyses reveal that a Michaelis-Menten form based on the virus is more likely to recapitulate the behavior between the scales than a form directly proportional to the virus. Our work contributes basic understanding of the two models and casts light on the potential effects of the coupling function on linking the two scales.
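
    The Michaelis-Menten bridging of scales described above can be illustrated with a minimal sketch: a basic target-cell-limited within-host model whose viral load V drives the transmission rate β(V) = β_max·V/(V + K) of a susceptible-infected model. All parameter values here are hypothetical, and the equations are a generic stand-in, not the authors' exact system.

```python
def simulate(t_end=60.0, dt=0.01):
    # Within-host (target-cell-limited model, hypothetical rates):
    # target cells T, infected cells E, free virus V
    T, E, V = 1e6, 0.0, 10.0
    beta_w, delta, p, c = 2e-7, 0.5, 10.0, 3.0

    # Between-host (susceptible-infected model, hypothetical rates)
    S, I, N = 990.0, 10.0, 1000.0
    beta_max, K, gamma = 0.5, 1e4, 0.1

    for _ in range(int(t_end / dt)):
        # Michaelis-Menten bridging: transmission saturates in the viral load
        beta = beta_max * V / (V + K)
        dT = -beta_w * T * V
        dE = beta_w * T * V - delta * E
        dV = p * E - c * V
        dS = -beta * S * I / N
        dI = beta * S * I / N - gamma * I
        T += dT * dt
        E += dE * dt
        V += dV * dt
        S += dS * dt
        I += dI * dt
    return V, S, I
```

    With these illustrative parameters the within-host infection takes off (its basic reproduction number is about 1.3), the viral load rises, and the population-scale transmission rate saturates as V grows past K.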

  7. SMD-based numerical stochastic perturbation theory

    NASA Astrophysics Data System (ADS)

    Dalla Brida, Mattia; Lüscher, Martin

    2017-05-01

    The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schrödinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit.

  8. On the Buckling of Imperfect Anisotropic Shells with Elastic Edge Supports Under Combined Loading. Part 1: Theory and Numerical Analysis

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Hol, J. M. A. M.; deVries, J.

    1998-01-01

    A rigorous solution is presented for the case of stiffened anisotropic cylindrical shells with general imperfections under combined loading, where the edge supports are provided by symmetrical or unsymmetrical elastic rings. The circumferential dependence is eliminated by a truncated Fourier series. The resulting nonlinear 2-point boundary value problem is solved numerically via the "Parallel Shooting Method". The changing deformation patterns resulting from the different degrees of interaction between the given initial imperfections and the specified end rings are displayed. Recommendations are made as to the minimum ring stiffnesses required for optimal load carrying configurations.
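
    The "Parallel Shooting Method" mentioned above subdivides the interval and matches segment solutions at interior nodes; its core idea can be shown with plain single shooting on a toy two-point boundary value problem (y'' = -sin y, y(0) = 0, y(1) = 1, a pendulum-like equation chosen purely for illustration, not the shell equations of the paper):

```python
import math

def endpoint(s, n=1000):
    """Integrate y'' = -sin(y), y(0) = 0, y'(0) = s to t = 1 with RK4; return y(1)."""
    h = 1.0 / n
    y, v = 0.0, s
    f = lambda y, v: (v, -math.sin(y))  # first-order system (y' = v, v' = -sin y)
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        v += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return y

def shoot(target=1.0, lo=0.0, hi=2.0, tol=1e-10):
    """Bisect on the unknown initial slope until the far boundary condition is met."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if endpoint(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The parallel variant applies the same idea on several subintervals simultaneously and enforces continuity at the interior nodes, which tames the sensitivity of single shooting on long or stiff intervals.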

  9. No Future in the Past? The role of initial topography on landform evolution model predictions

    NASA Astrophysics Data System (ADS)

    Hancock, G. R.; Coulthard, T. J.; Lowry, J.

    2014-12-01

    Our understanding of earth surface processes is based on long-term empirical understandings, short-term field measurements as well as numerical models. In particular, numerical landscape evolution models (LEMs) have been developed which have the capability to capture a range of both surface (erosion and deposition), tectonics, as well as near surface or critical zone processes (i.e. pedogenesis). These models have a range of applications for understanding both surface and whole of landscape dynamics through to more applied situations such as degraded site rehabilitation. LEMs are now at the stage of development where if calibrated, can provide some level of reliability. However, these models are largely calibrated based on parameters determined from present surface conditions which are the product of much longer-term geology-soil-climate-vegetation interactions. Here, we assess the effect of the initial landscape dimensions and associated error as well as parameterisation for a potential post-mining landform design. The results demonstrate that subtle surface changes in the initial DEM as well as parameterisation can have a large impact on landscape behaviour, erosion depth and sediment discharge. For example, the predicted sediment output from LEM's is shown to be highly variable even with very subtle changes in initial surface conditions. This has two important implications in that decadal time scale field data is needed to (a) better parameterise models and (b) evaluate their predictions. We question how a LEM using parameters derived from field plots can firstly be employed to examine long-term landscape evolution. Secondly, the potential range of outcomes is examined based on estimated temporal parameter change and thirdly, the need for more detailed and rigorous field data for calibration and validation of these models is discussed.

  10. Modeling and Numerical Challenges in Eulerian-Lagrangian Computations of Shock-driven Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Diggs, Angela; Balachandar, Sivaramakrishnan

    2015-06-01

    The present work addresses the numerical methods required for particle-gas and particle-particle interactions in Eulerian-Lagrangian simulations of multiphase flow. Local volume fraction as seen by each particle is the quantity of foremost importance in modeling and evaluating such interactions. We consider a general multiphase flow with a distribution of particles inside a fluid flow discretized on an Eulerian grid. Particle volume fraction is needed both as a Lagrangian quantity associated with each particle and also as an Eulerian quantity associated with the flow. In Eulerian Projection (EP) methods, the volume fraction is first obtained within each cell as an Eulerian quantity and then interpolated to each particle. In Lagrangian Projection (LP) methods, the particle volume fraction is obtained at each particle and then projected onto the Eulerian grid. Traditionally, EP methods are used in multiphase flow, but sub-grid resolution can be obtained through use of LP methods. By evaluating the total error and its components we compare the performance of EP and LP methods. The standard von Neumann error analysis technique has been adapted for rigorous evaluation of rate of convergence. The methods presented can be extended to obtain accurate field representations of other Lagrangian quantities. Most importantly, we will show that such careful attention to numerical methodologies is needed in order to capture complex shock interaction with a bed of particles. Supported by U.S. Department of Defense SMART Program and the U.S. Department of Energy PSAAP-II program under Contract No. DE-NA0002378.
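
    The EP/LP distinction above can be illustrated in one dimension with a zeroth-order (nearest-cell) kernel; real Eulerian-Lagrangian solvers use higher-order projection and interpolation kernels, so this is only a sketch with made-up particle positions:

```python
def lp_volume_fraction(particles, n_cells, length, vp):
    """Lagrangian Projection: deposit each particle's volume onto the grid."""
    dx = length / n_cells
    frac = [0.0] * n_cells
    for x in particles:
        i = min(int(x / dx), n_cells - 1)  # nearest-cell (zeroth-order) kernel
        frac[i] += vp / dx                 # 1D analogue of a volume fraction
    return frac

def field_at_particles(frac, particles, length):
    """Second step of the round trip: read the cell field back at each particle."""
    dx = length / len(frac)
    return [frac[min(int(x / dx), len(frac) - 1)] for x in particles]

particles = [0.05, 0.12, 0.13, 0.48, 0.49, 0.51, 0.95]
frac = lp_volume_fraction(particles, n_cells=10, length=1.0, vp=0.001)
per_particle = field_at_particles(frac, particles, 1.0)
# the projection conserves the total deposited volume:
total = sum(f * (1.0 / 10) for f in frac)  # = 7 * 0.001
```

    An EP method would reverse the order (bin particles into cells first, then interpolate the cell field to each particle); the choice of kernel and ordering controls the sub-grid resolution and the projection error discussed in the abstract.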

  11. Machine learning in the string landscape

    NASA Astrophysics Data System (ADS)

    Carifio, Jonathan; Halverson, James; Krioukov, Dmitri; Nelson, Brent D.

    2017-09-01

    We utilize machine learning to study the string landscape. Deep data dives and conjecture generation are proposed as useful frameworks for utilizing machine learning in the landscape, and examples of each are presented. A decision tree accurately predicts the number of weak Fano toric threefolds arising from reflexive polytopes, each of which determines a smooth F-theory compactification, and linear regression generates a previously proven conjecture for the gauge group rank in an ensemble of 4/3 × 2.96 × 10^755 F-theory compactifications. Logistic regression generates a new conjecture for when E6 arises in the large ensemble of F-theory compactifications, which is then rigorously proven. This result may be relevant for the appearance of visible sectors in the ensemble. Through conjecture generation, machine learning is useful not only for numerics, but also for rigorous results.

  12. Parameter-free driven Liouville-von Neumann approach for time-dependent electronic transport simulations in open quantum systems

    DOE PAGES

    Zelovich, Tamar; Hansen, Thorsten; Liu, Zhen-Fei; ...

    2017-03-02

    A parameter-free version of the recently developed driven Liouville-von Neumann equation [T. Zelovich et al., J. Chem. Theory Comput. 10(8), 2927-2941 (2014)] for electronic transport calculations in molecular junctions is presented. The single driving rate, appearing as a fitting parameter in the original methodology, is replaced by a set of state-dependent broadening factors applied to the different single-particle lead levels. These broadening factors are extracted explicitly from the self-energy of the corresponding electronic reservoir and are fully transferable to any junction incorporating the same lead model. Furthermore, the performance of the method is demonstrated via tight-binding and extended Hückel calculations of simple junction models. Our analytic considerations and numerical results indicate that the developed methodology constitutes a rigorous framework for the design of "black-box" algorithms to simulate electron dynamics in open quantum systems out of equilibrium.

  13. Symmetry Breaking and Restoration in the Ginzburg-Landau Model of Nematic Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Clerc, Marcel G.; Kowalczyk, Michał; Smyrnelis, Panayotis

    2018-06-01

    In this paper we study qualitative properties of global minimizers of the Ginzburg-Landau energy which describes light-matter interaction in the theory of nematic liquid crystals near the Fréedericksz transition. This model depends on two parameters: ε > 0, which is small and represents the coherence scale of the system, and a ≥ 0, which represents the intensity of the applied laser light. In particular, we are interested in the phenomenon of symmetry breaking as a and ε vary. We show that when a = 0 the global minimizer is radially symmetric and unique, and that its symmetry is instantly broken as a > 0 and then restored for sufficiently large values of a. Symmetry breaking is associated with the presence of a new type of topological defect which we named the shadow vortex. The symmetry breaking scenario is a rigorous confirmation of experimental and numerical results obtained earlier in Barboza et al. (Phys Rev E 93(5):050201, 2016).

  14. Experiment for validation of fluid-structure interaction models and algorithms.

    PubMed

    Hessenthaler, A; Gaddum, N R; Holub, O; Sinkus, R; Röhrle, O; Nordsletten, D

    2017-09-01

    In this paper a fluid-structure interaction (FSI) experiment is presented. The aim of this experiment is to provide a challenging yet easy-to-set-up FSI test case that addresses the need for rigorous testing of FSI algorithms and modeling frameworks. Steady-state and periodic steady-state test cases with constant and periodic inflow were established. The focus of the experiment is on biomedical engineering applications, with flow being in the laminar regime at Reynolds numbers 1283 and 651. Flow and solid domains were defined using computer-aided design (CAD) tools. The experimental design aimed at providing a straightforward boundary condition definition. Material parameters and mechanical response of a moderately viscous Newtonian fluid and a nonlinear incompressible solid were experimentally determined. A comprehensive data set was acquired by using magnetic resonance imaging to record the interaction between the fluid and the solid, quantifying flow and solid motion. Copyright © 2016 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons Ltd.

  15. Parameter-free driven Liouville-von Neumann approach for time-dependent electronic transport simulations in open quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelovich, Tamar; Hansen, Thorsten; Liu, Zhen-Fei

    A parameter-free version of the recently developed driven Liouville-von Neumann equation [T. Zelovich et al., J. Chem. Theory Comput. 10(8), 2927-2941 (2014)] for electronic transport calculations in molecular junctions is presented. The single driving rate, appearing as a fitting parameter in the original methodology, is replaced by a set of state-dependent broadening factors applied to the different single-particle lead levels. These broadening factors are extracted explicitly from the self-energy of the corresponding electronic reservoir and are fully transferable to any junction incorporating the same lead model. Furthermore, the performance of the method is demonstrated via tight-binding and extended Hückel calculations of simple junction models. Our analytic considerations and numerical results indicate that the developed methodology constitutes a rigorous framework for the design of "black-box" algorithms to simulate electron dynamics in open quantum systems out of equilibrium.

  16. Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.

  17. A selection criterion for patterns in reaction–diffusion systems

    PubMed Central

    2014-01-01

    Background Alan Turing’s work in Morphogenesis has received wide attention during the past 60 years. The central idea behind his theory is that two chemically interacting diffusible substances are able to generate stable spatial patterns, provided certain conditions are met. Ever since, extensive work on several kinds of pattern-generating reaction diffusion systems has been done. Nevertheless, prediction of specific patterns is far from being straightforward, and a great deal of interest in deciphering how to generate specific patterns under controlled conditions prevails. Results Techniques allowing one to predict what kind of spatial structure will emerge from reaction–diffusion systems remain unknown. In response to this need, we consider a generalized reaction diffusion system on a planar domain and provide an analytic criterion to determine whether spots or stripes will be formed. Our criterion is motivated by the existence of an associated energy function that allows bringing in the intuition provided by phase transitions phenomena. Conclusions Our criterion is proved rigorously in some situations, generalizing well-known results for the scalar equation where the pattern selection process can be understood in terms of a potential. In more complex settings it is investigated numerically. Our work constitutes a first step towards rigorous pattern prediction in arbitrary geometries/conditions. Advances in this direction are highly applicable to the efficient design of Biotechnology and Developmental Biology experiments, as well as in simplifying the analysis of morphogenetic models. PMID:24476200
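
    As background to the selection criterion, the classical linear Turing condition (a homogeneous steady state that is stable without diffusion but destabilized by unequal diffusivities) can be checked directly from the reaction Jacobian. The illustrative numbers below are not from the paper:

```python
def turing_unstable(fu, fv, gu, gv, d1, d2):
    """Linear test: is a stable homogeneous state destabilized by diffusion?"""
    trace = fu + gv
    det = fu * gv - fv * gu
    stable_without_diffusion = trace < 0.0 and det > 0.0
    h = d2 * fu + d1 * gv  # cross-diffusion combination driving the instability
    unstable_with_diffusion = h > 0.0 and h * h > 4.0 * d1 * d2 * det
    return stable_without_diffusion and unstable_with_diffusion

# activator-inhibitor Jacobian (self-activation fu > 0, inhibition gv < 0):
# patterns require the inhibitor to diffuse much faster than the activator
print(turing_unstable(0.5, -1.0, 1.0, -1.0, d1=1.0, d2=30.0))  # True
print(turing_unstable(0.5, -1.0, 1.0, -1.0, d1=1.0, d2=1.0))   # False
```

    This linear test only says that some pattern forms; the paper's energy-based criterion goes further by predicting which kind (spots versus stripes) is selected.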

  18. Analytical calculation on the determination of steep side wall angles from far field measurements

    NASA Astrophysics Data System (ADS)

    Cisotto, Luca; Pereira, Silvania F.; Urbach, H. Paul

    2018-06-01

    In the semiconductor industry, the performance and capabilities of the lithographic process are evaluated by measuring specific structures. These structures are often gratings of which the shape is described by a few parameters such as period, middle critical dimension, height, and side wall angle (SWA). Upon direct measurement or retrieval of these parameters, the determination of the SWA suffers from considerable inaccuracies. Although the scattering effects that steep SWAs have on the illumination can be obtained with rigorous numerical simulations, analytical models constitute a very useful tool to get insights into the problem we are treating. In this paper, we develop an approach based on analytical calculations to describe the scattering of a cliff and a ridge with steep SWAs. We also propose a detection system to determine the SWAs of the structures.

  19. The sympathy of two pendulum clocks: beyond Huygens’ observations

    PubMed Central

    Peña Ramirez, Jonatan; Olvera, Luis Alberto; Nijmeijer, Henk; Alvarez, Joaquin

    2016-01-01

    This paper introduces a modern version of the classical Huygens’ experiment on synchronization of pendulum clocks. The version presented here consists of two monumental pendulum clocks—ad hoc designed and fabricated—which are coupled through a wooden structure. It is demonstrated that the coupled clocks exhibit ‘sympathetic’ motion, i.e. the pendula of the clocks oscillate in consonance and in the same direction. Interestingly, when the clocks are synchronized, the common oscillation frequency decreases, i.e. the clocks become slow and inaccurate. In order to rigorously explain these findings, a mathematical model for the coupled clocks is obtained by using well-established physical and mechanical laws and likewise, a theoretical analysis is conducted. Ultimately, the sympathy of two monumental pendulum clocks, interacting via a flexible coupling structure, is experimentally, numerically, and analytically demonstrated. PMID:27020903

  20. Improve processes on healthcare: current issues and future trends.

    PubMed

    Chen, Jason C H; Dolan, Matt; Lin, Binshan

    2004-01-01

    Information Technology (IT) is a critical resource for improving today's business competitiveness. However, many healthcare providers do not proactively manage or improve the efficiency and effectiveness of their services with IT. Survival in a competitive business environment demands continuous improvements in quality and service, while rigorously maintaining core values. Electronic commerce continues its development, gaining ground as the preferred means of business transactions. Embracing e-healthcare and treating IT as a strategic tool to improve patient safety and the quality of care enables healthcare professionals to benefit from technology formerly used only for management purposes. Numerous improvement initiatives, introduced by both the federal government and the private sector, seek to better the status quo in IT. This paper examines the current IT climate using an enhanced "Built to Last" model, and comments on future IT strategies within the healthcare industry.

  1. Robust adaptive cruise control of high speed trains.

    PubMed

    Faieghi, Mohammadreza; Jalali, Aliakbar; Mashhadi, Seyed Kamal-e-ddin Mousavi

    2014-03-01

    The cruise control problem of high speed trains in the presence of unknown parameters and external disturbances is considered. In particular a Lyapunov-based robust adaptive controller is presented to achieve asymptotic tracking and disturbance rejection. The system under consideration is nonlinear, MIMO and non-minimum phase. To deal with the limitations arising from the unstable zero-dynamics we do an output redefinition such that the zero-dynamics with respect to new outputs becomes stable. Rigorous stability analyses are presented which establish the boundedness of all the internal states and simultaneously asymptotic stability of the tracking error dynamics. The results are presented for two common configurations of high speed trains, i.e. the DD and PPD designs, based on the multi-body model and are verified by several numerical simulations. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Generation of parabolic similaritons in tapered silicon photonic wires: comparison of pulse dynamics at telecom and mid-infrared wavelengths.

    PubMed

    Lavdas, Spyros; Driscoll, Jeffrey B; Jiang, Hongyi; Grote, Richard R; Osgood, Richard M; Panoiu, Nicolae C

    2013-10-01

    We study the generation of parabolic self-similar optical pulses in tapered Si photonic nanowires (Si-PhNWs) at both telecom (λ=1.55 μm) and mid-infrared (λ=2.2 μm) wavelengths. Our computational study is based on a rigorous theoretical model, which fully describes the influence of linear and nonlinear optical effects on pulse propagation in Si-PhNWs with arbitrarily varying width. Numerical simulations demonstrate that, in the normal dispersion regime, optical pulses evolve naturally into parabolic pulses upon propagation in millimeter-long tapered Si-PhNWs, with the efficiency of this pulse-reshaping process being strongly dependent on the spectral and pulse parameter regime in which the device operates, as well as the particular shape of the Si-PhNWs.

  3. An analysis of the coexistence of two host species with a shared pathogen.

    PubMed

    Chen, Zhi-Min; Price, W G

    2008-06-01

    Population dynamics of two host species under direct transmission of an infectious disease or pathogen is studied based on the Holt-Pickering mathematical model, which accounts for the influence of the pathogen on the populations of the two host species. Through rigorous analysis and a numerical study, circumstances are specified under which the shared pathogen leads to the coexistence of the two host species in either a persistent or periodic form. This study shows the importance of intrinsic growth rates, or the differences between the birth and death rates of the susceptibles of the two host species, in controlling these circumstances. It is also demonstrated that periodicity may arise when the positive intrinsic growth rates are very small, but the periodicity is very weak and may not be observed in an empirical investigation.

  4. A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs

    NASA Astrophysics Data System (ADS)

    Bouneb, I.; Kerrour, F.

    2016-03-01

    Semiconductor components have become the privileged medium of information and communication, particularly thanks to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, reducing transistor gate length is not enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunications systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed, the most effective components in this area being high electron mobility transistors (HEMTs) on III-V substrates. This work contributes to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We have developed a calculation using projective methods that allows integration of the Hamiltonian using Green functions in the Schrödinger equation, solved rigorously and self-consistently with the Poisson equation. A simple analytical approach for charge control in the quantum-well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation accounting for a variable average distance of the 2-DEG from the interface is introduced. Our approach, which aims to obtain ns-Vg characteristics, is mainly based on a new linear expression for the variation of the Fermi level with the two-dimensional electron gas density, on the notion of effective doping, and on a new expression of ΔEc.
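
    For context, the textbook linear charge-control relation that such analyses refine reads ns = ε(Vg - Voff)/(q(d + Δd)). The sketch below uses illustrative AlGaAs/GaAs parameter values (not those of the paper):

```python
Q = 1.602e-19            # elementary charge [C]
EPS = 12.2 * 8.854e-12   # AlGaAs permittivity [F/m] (illustrative)
D = 30e-9                # barrier thickness d [m] (illustrative)
DELTA_D = 8e-9           # average 2-DEG distance from the interface [m]
V_OFF = -0.8             # pinch-off voltage Voff [V] (illustrative)

def ns(vg):
    """Sheet carrier density [m^-2] of the 2-DEG for gate voltage vg [V]."""
    return max(0.0, EPS * (vg - V_OFF) / (Q * (D + DELTA_D)))

# near vg = 0 V this gives ns on the order of 1e16 m^-2 (~1e12 cm^-2)
```

    The paper's refinement replaces the constant-Fermi-level assumption behind this linear form with an explicit linear dependence of the Fermi level on ns, and lets the effective 2-DEG distance Δd vary.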

  5. Characterization of the mechanical properties of a new grade of ultra high molecular weight polyethylene and modeling with the viscoplasticity based on overstress.

    PubMed

    Khan, Fazeel; Yeakle, Colin; Gomaa, Said

    2012-02-01

    Enhancements to the service life and performance of orthopedic implants used in total knee and hip replacement procedures can be achieved through optimization of design and the development of superior biocompatible polymeric materials. The introduction of a new or modified polymer must, naturally, be preceded by a rigorous testing program. This paper presents the assessment of the mechanical properties of a new filled grade of ultra high molecular weight polyethylene (UHMWPE) designated AOX(TM) and developed by DePuy Orthopaedics Inc. The deformation behavior was investigated through a series of tensile and compressive tests including strain rate sensitivity, creep, relaxation, and recovery. The polymer was found to exhibit rate-reversal behavior for certain loading histories: strain rate during creep with a compressive stress can be negative, positive, or change between the two during a test. Analogous behavior occurs during relaxation as well. This behavior lies beyond the realm of most numerical models used to computationally investigate and improve part geometry through finite element analysis of components. To address this shortcoming, the viscoplasticity theory based on overstress (VBO) has been suitably modified to capture these trends. VBO is a state variable based model in a differential formulation. Numerical simulation and prediction of all of the aforementioned tests, including good reproduction of the rate reversal behavior, is presented in this study. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Robustness of movement models: can models bridge the gap between temporal scales of data sets and behavioural processes?

    PubMed

    Schlägel, Ulrike E; Lewis, Mark A

    2016-12-01

    Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, the resolution of the temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when the resolution of the data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with a spatially explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models.
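
    The robustness notion above can be illustrated with the simplest case: for an uncorrelated Gaussian random walk, the estimated diffusion rate (step variance per unit time) is invariant under temporal subsampling, whereas parameters of correlated walks generally are not. This toy example is our illustration, not one of the paper's models:

```python
import random

random.seed(0)  # reproducible toy data
dt = 1.0
pos = [0.0]
for _ in range(200_000):
    pos.append(pos[-1] + random.gauss(0.0, 1.0))  # unit-variance steps

def diffusion_estimate(path, k, dt):
    """Estimate step variance per unit time from a path sampled every k points."""
    sub = path[::k]
    inc = [b - a for a, b in zip(sub, sub[1:])]
    mean = sum(inc) / len(inc)
    var = sum((x - mean) ** 2 for x in inc) / len(inc)
    return var / (k * dt)

d_fine = diffusion_estimate(pos, 1, dt)    # native resolution
d_coarse = diffusion_estimate(pos, 4, dt)  # 4x coarser resolution
# both estimates recover the true rate (1.0) up to sampling error
```

    Because the increment variance scales linearly with the sampling interval here, the model is robust in the paper's sense; a correlated random walk would not pass the same check without re-deriving its parameters at the coarser resolution.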

  7. Experiments and Modeling of G-Jitter Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Leslie, F. W.; Ramachandran, N.; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    While there is a general understanding of the acceleration environment onboard an orbiting spacecraft, past research efforts in the modeling and analysis area have still not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments, nor have they produced scaling laws that a prospective experimenter can use to assess how an experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling for better understanding the effect of residual gravity and g-jitter on experiments. The approach is to use magnetic fluids that respond to an imposed magnetic field gradient in much the same way as fluid density responds to a gravitational field. By utilizing a programmable power source in conjunction with an electromagnet, both static and dynamic body forces can be simulated in lab experiments. The paper provides an overview of the technique and includes recent results from the experiments.

  8. Experimental characterization of the constitutive materials of MgB2 multi-filamentary wires for the development of 3D numerical models

    NASA Astrophysics Data System (ADS)

    Escamez, Guillaume; Sirois, Frédéric; Tousignant, Maxime; Badel, Arnaud; Granger, Capucine; Tixador, Pascal; Bruzek, Christian-Éric

    2017-03-01

    Today MgB2 superconducting wires can be manufactured in long lengths at low cost, which makes this material a good candidate for large scale applications. However, because of its relatively low critical temperature (less than 40 K), it is necessary to operate MgB2 devices in a liquid or gaseous helium environment. In this context, losses in the cryogenic environment must be rigorously minimized, otherwise the use of a superconductor is not worthwhile. An accurate estimation of the losses at the design stage is therefore mandatory in order to determine the device architecture that minimizes the losses. In this paper, we present a complete 3D finite element model of a 36-filament MgB2 wire based on the architecture of the Italian manufacturer Columbus. In order for the model to be as accurate as possible, we made a substantial effort to characterize all constitutive materials of the wire, namely the E-J characteristics of the MgB2 filaments and the electric and magnetic properties (B-H curves) of nickel and Monel, which are the two major non-superconducting components of the wire. All properties were characterized as a function of temperature and magnetic field. Limitations of the characterization and of the model are discussed, in particular the difficulty of extracting the maximum relative permeability of nickel and Monel from the experimental data, as well as the lack of a thin-conductive-layer model in the 3D finite element method, which prevents us from taking into account the resistive barriers around the MgB2 filaments in the matrix. Two examples of numerical simulations are provided to illustrate the capabilities of the model in its current state.

  9. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    PubMed

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power (a directly measured quantity, in contrast to the spectral radiance in the Beer-Lambert law) are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
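    The paper's small-angle transmission functions are not reproduced here; as context, the uncorrected Beer-Lambert law itself reduces to a single exponential in the optical depth, which multiple-scattering corrections then modify. A minimal sketch (the fog extinction value is illustrative, not taken from the paper):

```python
import numpy as np

def beer_lambert_transmission(extinction_coeff, path_length):
    """Single-scattering (Beer-Lambert) transmittance T = exp(-tau).

    extinction_coeff : volume extinction coefficient [1/m]
    path_length      : propagation distance [m]
    """
    tau = extinction_coeff * path_length   # optical depth
    return np.exp(-tau)

# e.g. an extinction coefficient of 0.03 1/m over a 100 m path (tau = 3)
T = beer_lambert_transmission(0.03, 100.0)
print(T)   # exp(-3) ≈ 0.0498
```

    An open detector additionally collects forward-scattered light, so the measured power exceeds this single-scattering prediction; quantifying that excess is what the paper's transmission functions do.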

  10. FROM THE HISTORY OF PHYSICS: The physics of a thermonuclear explosion of a normal-density liquefied deuterium sphere (On the impossibility of a spherically symmetric thermonuclear explosion in liquid deuterium at normal density)

    NASA Astrophysics Data System (ADS)

    Marchuk, Gurii I.; Imshennik, Vladimir S.; Basko, Mikhail M.

    2009-03-01

    The hydrodynamic problem of a thermonuclear explosion in a sphere of normal-density liquid deuterium was solved (Institute for Physics and Power Engineering, Obninsk) in 1952-1954 in the framework of the Soviet Atomic Project. The principal result was that the explosion shockwave in deuterium strongly decayed because of radiation energy loss and nonlocal energy release by fast neutrons. At that time, this negative result implied in essence that the straightforward approach to creating a thermonuclear weapon was in fact a blind alley. This paper describes a numerical solution to the stated problem, obtained with the modern DEIRA code developed for numerical modeling of inertially confined fusion. Detailed numerical calculations have confirmed the above 'historic' result and shed additional light on the physical causes of the detonation wave decay. The most pernicious factor is the radiation energy loss due to the combined effect of bremsstrahlung and the inverse Compton scattering of the emitted photons on the hot electrons. The impact of energy transfer by fast neutrons — which was already quite adequately accounted for in the above-cited historical work — is less significant. We present a more rigorous (compared to that of the 1950s) study of the role of inverse Compton scattering for which, in particular, an independent analytic estimate is obtained.

  11. Solutions to a reduced Poisson–Nernst–Planck system and determination of reaction rates

    PubMed Central

    Li, Bo; Lu, Benzhuo; Wang, Zhongming; McCammon, J. Andrew

    2010-01-01

    We study a reduced Poisson–Nernst–Planck (PNP) system for a charged spherical solute immersed in a solvent with multiple ionic or molecular species that are electrostatically neutralized in the far field. Some of these species are assumed to be in equilibrium. The concentrations of such species are described by Boltzmann distributions that are further linearized. Others are assumed to be reactive, meaning that their concentrations vanish when in contact with the charged solute. We present both semi-analytical solutions and numerical iterative solutions to the underlying reduced PNP system, and calculate the reaction rate for the reactive species. We give a rigorous analysis of the convergence of our simple iteration algorithm. Our numerical results show the strong dependence of the reaction rates of the reactive species on the magnitude of their far-field concentrations as well as on the ionic strength of all the chemical species. We also find non-monotonicity of the electrostatic potential in certain parameter regimes. The results for the reactive system and those for the non-reactive system are compared to show the significant differences between the two cases. Our approach provides a means of solving a PNP system which in general does not have a closed-form solution even with a special geometrical symmetry. Our findings can also be used to test other numerical methods in large-scale computational modeling of electro-diffusion in biological systems. PMID:20228879

  12. Turing patterns in parabolic systems of conservation laws and numerically observed stability of periodic waves

    NASA Astrophysics Data System (ADS)

    Barker, Blake; Jung, Soyeun; Zumbrun, Kevin

    2018-03-01

    Turing patterns on unbounded domains have been widely studied in systems of reaction-diffusion equations. However, up to now, they have not been studied for systems of conservation laws. Here, we (i) derive conditions for Turing instability in conservation laws and (ii) use these conditions to find families of periodic solutions bifurcating from uniform states, numerically continuing these families into the large-amplitude regime. For the examples studied, numerical stability analysis suggests that stable periodic waves can emerge either from supercritical Turing bifurcations or, via secondary bifurcation as amplitude is increased, from subcritical Turing bifurcations. This answers in the affirmative a question of Oh and Zumbrun as to whether stable periodic solutions of conservation laws can occur. Determination of a full small-amplitude stability diagram - specifically, determination of rigorous Eckhaus-type stability conditions - remains an interesting open problem.

  13. Numerical simulation of a shear-thinning fluid through packed spheres

    NASA Astrophysics Data System (ADS)

    Liu, Hai Long; Moon, Jong Sin; Hwang, Wook Ryol

    2012-12-01

    Flow behaviors of a non-Newtonian fluid in spherical microstructures have been studied by direct numerical simulation. A shear-thinning (power-law) fluid through both regular and randomly packed spheres has been numerically investigated in a representative unit cell with tri-periodic boundary conditions, employing a rigorous three-dimensional finite-element scheme combined with fictitious-domain mortar-element methods. The present scheme has been validated against results from the literature for classical spherical packing problems. The flow mobility of regular packing structures, including simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC), as well as randomly packed spheres, has been investigated quantitatively by considering the amount of shear thinning, the pressure gradient and the porosity as parameters. Furthermore, the mechanism leading to the main flow path in a highly shear-thinning fluid through randomly packed spheres is discussed.
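    The shear-thinning (power-law) rheology referred to above is the standard Ostwald-de Waele model, in which the apparent viscosity decays with shear rate for n < 1. A minimal sketch (the consistency index K and power-law index n are illustrative, not values from the paper):

```python
def power_law_viscosity(shear_rate, K=1.0, n=0.5):
    """Apparent viscosity of an Ostwald-de Waele (power-law) fluid:
    eta = K * gamma_dot**(n - 1); n < 1 gives shear thinning."""
    return K * shear_rate ** (n - 1.0)

# viscosity drops as the shear rate grows (shear thinning)
etas = [power_law_viscosity(g) for g in (0.1, 1.0, 10.0, 100.0)]
print(etas)
```

    In a packed bed this shear-rate dependence is what concentrates the flow into preferential paths: high-shear throats become less viscous and carry disproportionately more of the flow.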

  14. DG-IMEX Stochastic Galerkin Schemes for Linear Transport Equation with Random Inputs and Diffusive Scalings

    DOE PAGES

    Chen, Zheng; Liu, Liu; Mu, Lin

    2017-05-03

    In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects are addressed: we establish numerical stability that is uniform with respect to the Knudsen number ϵ, together with a uniform-in-ϵ error estimate. For the temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for the deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and the sAP property of the method are conducted.

  15. Experimental and Numerical Analysis of the Effects of Curing Time on Tensile Mechanical Properties of Thin Spray-on Liners

    NASA Astrophysics Data System (ADS)

    Guner, D.; Ozturk, H.

    2016-08-01

    The effects of curing time on tensile elastic material properties of thin spray-on liners (TSLs) were investigated in this study. Two different TSL products supplied by two manufacturers were tested comparatively. The "dogbone" tensile test samples that were prepared in laboratory conditions with different curing times (1, 7, 14, 21, and 28 days) were tested based on ASTM standards. It was concluded that longer curing times improve the tensile strength and Young's modulus of the TSLs but decrease their elongation at break. Moreover, as an additional observation from the testing procedure, the common malpractice of measuring sample displacement at the grips of the loading machine with a linear variable displacement transducer, rather than over the sample's gauge length, had a major impact on the modulus and deformation values determined for TSLs. To our knowledge, true stress-strain curves were generated for the first time in the TSL literature within this study. Numerical analyses of the laboratory tests were also conducted using Particle Flow Code in 2 Dimensions (PFC2D) in an attempt to guide TSL researchers through the rigorous PFC simulation process used to model the support behaviour of TSLs. A scaling coefficient between the macro- and micro-properties of PFC was calculated, which will help future TSL PFC modellers mimic their TSL behaviours for various tensile loading support scenarios.

  16. Economic Assessment of Correlated Energy-Water Impacts using Computable General Equilibrium Modeling

    NASA Astrophysics Data System (ADS)

    Qiu, F.; Andrew, S.; Wang, J.; Yan, E.; Zhou, Z.; Veselka, T.

    2016-12-01

    Many studies on energy and water are rightfully interested in the interaction of water and energy, and their projected dependence into the future. Water is indeed an essential input to the power sector, and energy is required to pump water for end use in either household consumption or industrial uses. However, each such study either qualitatively discusses the issues, particularly how a better understanding of the interconnectedness of the system is paramount to obtaining better policy recommendations, or considers a partial equilibrium framework where water use and energy use changes are considered explicitly without thought to other repercussions throughout the regional/national/international economic landscape. While many studies are beginning to ask the right questions, the lack of numerical rigor raises concerns about the conclusions drawn. Most use life cycle analysis as a method for providing numerical results, though this lacks the flexibility that economics can provide. In this study, we perform economic analysis using computable general equilibrium models with energy-water interdependencies captured as an important factor. We attempt to answer important and interesting questions in these studies: how can we characterize the economic choice of energy technology adoptions and their implications for water use in the domestic economy? Moreover, given predictions of reductions in rainfall in the near future, how does this impact the water supply in the midst of this energy-water trade-off?

  17. Geometrically Nonlinear Static Analysis of 3D Trusses Using the Arc-Length Method

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.

    2006-01-01

    Rigorous analysis of geometrically nonlinear structures demands creating mathematical models that accurately include loading and support conditions and, more importantly, model the stiffness and response of the structure. Nonlinear geometric structures often contain critical points with snap-through behavior during the response to large loads. Studying the post-buckling behavior during a portion of a structure's unstable load history may be necessary. Primary structures made from ductile materials will stretch enough prior to failure for loads to redistribute, producing sudden and often catastrophic collapses that are difficult to predict. The responses and redistribution of the internal loads during collapses and the possible sharp snap-back of structures have frequently caused numerical difficulties in analysis procedures. The presence of critical stability points and unstable equilibrium paths are major difficulties that numerical solutions must overcome to fully capture the nonlinear response. Some hurdles still exist in finding nonlinear responses of structures under large geometric changes. Predicting snap-through and snap-back of certain structures has been difficult and time consuming. Also difficult is finding how much load a structure may still carry safely. Highly geometrically nonlinear responses of structures exhibiting complex snap-back behavior are presented and analyzed with a finite element approach. The arc-length method is reviewed and shown to predict the proper response and follow the nonlinear equilibrium path through limit points.
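    The finite-element arc-length formulation for 3D trusses is not reproduced here; the core idea can be illustrated on a scalar model response with a limit point, where pure load control fails but pseudo-arclength continuation traces the full equilibrium path, including the descending (snap-through) branch. A sketch under those simplifying assumptions (the internal-force curve u - u³ is a toy stand-in for a truss response):

```python
import numpy as np

def f_int(u):            # model internal-force curve with a limit point
    return u - u**3      # snap-through limit point at u = 1/sqrt(3)

def df_int(u):
    return 1.0 - 3.0 * u**2

def arc_length_path(ds=0.02, steps=60):
    """Pseudo-arclength continuation of f_int(u) - lam = 0.
    Traces (u, lam) through limit points that defeat pure load control."""
    u, lam = 0.0, 0.0
    t_prev = np.array([1.0, df_int(u)])
    t_prev /= np.linalg.norm(t_prev)
    path = [(u, lam)]
    for _ in range(steps):
        # tangent of the equilibrium curve: df_int*du - dlam = 0
        t = np.array([1.0, df_int(u)])
        t /= np.linalg.norm(t)
        if t @ t_prev < 0.0:                        # keep orientation consistent
            t = -t
        u0, lam0 = u, lam
        u, lam = u0 + ds * t[0], lam0 + ds * t[1]   # predictor step
        for _ in range(20):                         # Newton corrector
            R = np.array([f_int(u) - lam,
                          t[0] * (u - u0) + t[1] * (lam - lam0) - ds])
            if np.linalg.norm(R) < 1e-12:
                break
            J = np.array([[df_int(u), -1.0],
                          [t[0],       t[1]]])
            du, dlam = np.linalg.solve(J, -R)
            u, lam = u + du, lam + dlam
        t_prev = t
        path.append((u, lam))
    return np.array(path)

path = arc_length_path()
print(path[:, 1].max())   # limit load ≈ 2/(3*sqrt(3)) ≈ 0.385
```

    Load control would stall at the limit load; the arc-length constraint lets the solver follow the equilibrium path as the load parameter decreases past it.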

  18. Lessons from Climate Modeling on the Design and Use of Ensembles for Crop Modeling

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Mearns, Linda O.; Ruane, Alexander C.; Roetter, Reimund P.; Asseng, Senthold

    2016-01-01

    Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include defining criteria for acceptance of models into a crop multi-model ensemble (MME), exploring criteria for evaluating the degree of relatedness of models in an MME, studying the effect of the number of models in the ensemble, development of a statistical model of model sampling, creation of a repository for MME results, studies of possible differential weighting of models in an ensemble, creation of single-model ensembles based on sampling from the uncertainty distribution of parameter values or inputs specifically oriented toward uncertainty estimation, the creation of super-ensembles that sample more than one source of uncertainty, the analysis of super-ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty, and, finally, further investigation of the use of the multi-model mean or median as a predictor.

  19. Stabilized linear semi-implicit schemes for the nonlocal Cahn-Hilliard equation

    NASA Astrophysics Data System (ADS)

    Du, Qiang; Ju, Lili; Li, Xiao; Qiao, Zhonghua

    2018-06-01

    Compared with the well-known classic Cahn-Hilliard equation, the nonlocal Cahn-Hilliard equation is equipped with a nonlocal diffusion operator and can describe more practical phenomena for modeling phase transitions of microstructures in materials. On the other hand, it evidently brings more computational cost in numerical simulations, so efficient and accurate time integration schemes are highly desired. In this paper, we propose two energy-stable linear semi-implicit methods, with first- and second-order temporal accuracy respectively, for solving the nonlocal Cahn-Hilliard equation. The temporal discretization uses the stabilization technique with the nonlocal diffusion term treated implicitly, while the spatial discretization is carried out by the Fourier collocation method with FFT-based fast implementations. The energy stabilities are rigorously established for both methods in the fully discrete sense. Numerical experiments are conducted for a typical case involving Gaussian kernels. We test the temporal convergence rates of the proposed schemes and compare the nonlocal phase transition process with the corresponding local one. In addition, long-time simulations of the coarsening dynamics are performed to predict the power law of the energy decay.
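    The nonlocal operator and the paper's precise schemes are not reproduced here; a sketch of the same stabilized linear semi-implicit idea, applied for simplicity to the classic (local) 1D Cahn-Hilliard equation u_t = Δ(u³ - u) - ε²Δ²u with Fourier collocation, shows the structure: stiff linear terms implicit and solved exactly per Fourier mode, the nonlinearity explicit, and S a stabilization constant (all parameter values illustrative):

```python
import numpy as np

# 1D periodic grid
N, L = 256, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # wavenumbers
k2, k4 = k**2, k**4

eps, S, tau = 0.1, 2.0, 1e-3                   # interface width, stabilization, time step

rng = np.random.default_rng(1)
u = 0.05 * rng.standard_normal(N)              # small random initial data

mass0 = u.mean()
for _ in range(2000):
    fhat = np.fft.fft(u**3 - u)                # explicit double-well term f(u)
    uhat = np.fft.fft(u)
    # first-order stabilized linear semi-implicit update, per Fourier mode:
    # (1 + tau*S*k2 + tau*eps^2*k4) u^{n+1} = (1 + tau*S*k2) u^n - tau*k2*f(u^n)
    uhat = ((1 + tau * S * k2) * uhat - tau * k2 * fhat) \
           / (1 + tau * S * k2 + tau * eps**2 * k4)
    u = np.real(np.fft.ifft(uhat))

print(u.mean() - mass0)    # mass is conserved (≈ 0)
print(np.abs(u).max())     # phases saturate near ±1 after spinodal decomposition
```

    Each step costs only a pair of FFTs, which is the efficiency the abstract refers to; the nonlocal case replaces the -ε²k⁴ symbol with the Fourier symbol of the nonlocal diffusion operator.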

  20. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.
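    The rate constants and mode groupings of the two models are not given in the abstract; the general form of such two-mode vibrational kinetics is Landau-Teller relaxation of each grouped mode toward its local equilibrium energy, de_i/dt = (e_i_eq - e_i)/τ_i. A sketch with purely illustrative constants (not values from the paper):

```python
def relax_two_modes(e1, e2, e1_eq, e2_eq, tau1, tau2, dt, steps):
    """Landau-Teller relaxation of two grouped vibrational energy modes:
    de_i/dt = (e_i_eq - e_i) / tau_i, integrated by forward Euler.
    All quantities are illustrative (arbitrary units)."""
    for _ in range(steps):
        e1 += dt * (e1_eq - e1) / tau1
        e2 += dt * (e2_eq - e2) / tau2
    return e1, e2

# vibrationally frozen energies relaxing toward local equilibrium values;
# the fast mode (tau1) equilibrates first, the slow mode (tau2) lags behind
e1, e2 = relax_two_modes(e1=1.0, e2=0.3, e1_eq=0.2, e2_eq=0.6,
                         tau1=1e-5, tau2=5e-5, dt=1e-7, steps=500)
print(e1, e2)   # e1 ≈ 0.205 (nearly equilibrated), e2 ≈ 0.490 (still relaxing)
```

    The lag between modes with different relaxation times is what sustains the population inversion in a gasdynamic laser nozzle expansion.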

  1. Review of rigorous coupled-wave analysis and of homogeneous effective medium approximations for high spatial-frequency surface-relief gratings

    NASA Technical Reports Server (NTRS)

    Glytsis, Elias N.; Brundrett, David L.; Gaylord, Thomas K.

    1993-01-01

    A review of the rigorous coupled-wave analysis as applied to the diffraction of electromagnetic waves by gratings is presented. The analysis is valid for any polarization, angle of incidence, and conical diffraction. Cascaded and/or multiplexed gratings as well as material anisotropy can be incorporated under the same formalism. Small-period rectangular groove gratings can also be modeled using approximately equivalent uniaxial homogeneous layers (effective media). The ordinary and extraordinary refractive indices of these layers depend on the grating's filling factor, the refractive indices of the substrate and superstrate, and the ratio of the free-space wavelength to the grating period. Comparisons of the homogeneous effective medium approximations with the rigorous coupled-wave analysis are presented. Antireflection designs (single-layer or multilayer) using the effective medium models are presented and compared. These ultra-short period antireflection gratings can also be used to produce soft x-rays. Comparisons of the rigorous coupled-wave analysis with experimental results on soft x-ray generation by gratings are also included.
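    For the small-period limit discussed above, the zeroth-order effective-medium (form-birefringence) expressions for a lamellar grating are standard and give exactly the ordinary and extraordinary indices such reviews compare against rigorous coupled-wave results. A sketch (the fused-silica/air values are illustrative):

```python
import numpy as np

def effective_indices(n_grating, n_cover, f):
    """Zeroth-order effective-medium (form-birefringence) indices of a
    lamellar grating with filling factor f (fraction of grating material):
      n_o^2   = f*n_g^2 + (1-f)*n_c^2        (E parallel to grooves)
      n_e^-2  = f/n_g^2 + (1-f)/n_c^2        (E perpendicular to grooves)
    """
    n_o = np.sqrt(f * n_grating**2 + (1 - f) * n_cover**2)
    n_e = 1.0 / np.sqrt(f / n_grating**2 + (1 - f) / n_cover**2)
    return n_o, n_e

# 50% fused-silica / air grating (illustrative values)
n_o, n_e = effective_indices(1.45, 1.0, 0.5)
print(n_o, n_e)   # n_o ≈ 1.246, n_e ≈ 1.164
```

    The birefringence n_o > n_e of the equivalent layer is what makes such subwavelength gratings usable as artificial antireflection coatings and wave plates.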

  2. Verification of Compartmental Epidemiological Models using Metamorphic Testing, Model Checking and Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Arvind; Steed, Chad A; Pullum, Laura L

    Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and to understand how one can best control a disease should an outbreak of a widespread epidemic occur. However, a significant challenge within the community is the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios, including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.

  3. Stem cell stratagems in alternative medicine.

    PubMed

    Sipp, Douglas

    2011-05-01

    Stem cell research has attracted an extraordinary amount of attention and expectation due to its potential for applications in the treatment of numerous medical conditions. These exciting clinical prospects have generated widespread support from both the public and private sectors, and numerous preclinical studies and rigorous clinical trials have already been initiated. Recent years, however, have also seen alarming growth in the number and variety of claims of clinical uses of notional 'stem cells' that have not been adequately tested for safety and/or efficacy. In this article, I will survey the contours of the stem cell industry as practiced by alternative medicine providers, and highlight points of commonality in their strategies for marketing.

  4. Interpretation of high-dimensional numerical results for the Anderson transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suslov, I. M., E-mail: suslov@kapitza.ras.ru

    The existence of the upper critical dimension d_c2 = 4 for the Anderson transition is a rigorous consequence of the Bogoliubov theorem on the renormalizability of φ⁴ theory. For d ≥ 4 dimensions, one-parameter scaling does not hold and all existing numerical data should be reinterpreted. These data are exhausted by the results for d = 4, 5 from scaling in quasi-one-dimensional systems and the results for d = 4, 5, 6 from level statistics. All these data are compatible with the theoretical scaling dependences obtained from Vollhardt and Wolfle's self-consistent theory of localization. The widespread viewpoint that d_c2 = ∞ is critically discussed.

  5. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing-time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, also allows large-size problems to be solved approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N³) to O(N log N), enabling the simulation of structures such as certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.

  6. Quantum key distribution with an unknown and untrusted source

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2008-05-01

    The security of a standard bidirectional “plug-and-play” quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is effectively controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we address this question directly by presenting a quantitative security analysis of a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard Bennett-Brassard 1984 (BB84) protocol, the weak+vacuum decoy-state protocol, and the one-decoy-state protocol, each with an unknown and untrusted source, is rigorously proved. We derive rigorous lower bounds on the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source.
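    The untrusted-source bounds derived in the paper are not reproduced here; as context for what a "key generation rate" is, the classic Shor-Preskill asymptotic bound for ideal single-photon BB84 with a trusted source can be sketched: R ≥ 1 - 2·H₂(e), where H₂ is the binary entropy and e the quantum bit error rate (QBER).

```python
import math

def binary_entropy(p):
    """Binary Shannon entropy H2(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bb84_rate_lower_bound(qber):
    """Shor-Preskill asymptotic key-rate bound for ideal single-photon BB84
    with a trusted source: R >= 1 - 2*H2(e). This is the textbook baseline,
    not the untrusted-source bounds derived in the paper."""
    return 1.0 - 2.0 * binary_entropy(qber)

print(bb84_rate_lower_bound(0.05))   # ≈ 0.427 (positive secure key rate)
print(bb84_rate_lower_bound(0.12))   # < 0 (no secure key above ~11% QBER)
```

    The paper's contribution is showing how close one can stay to such trusted-source rates even when the source itself may be under the eavesdropper's control.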

  7. Statistical shear lag model - unraveling the size effect in hierarchical composites.

    PubMed

    Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D

    2015-05-01

    Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in seashells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents in their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogeneous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
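    The statistical shear lag derivation itself is not reproduced here; the size effect the abstract contrasts against can be illustrated with a generic Weibull weakest-link Monte Carlo, the baseline behavior of homogeneous materials: longer chains of strength-limited elements are, on average, weaker, with mean strength scaling as n^(-1/m). A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def chain_strengths(n_links, m=5.0, sigma0=1.0, n_samples=20_000):
    """Weakest-link strength of chains of n_links elements whose individual
    strengths are iid Weibull (shape m, scale sigma0); a chain fails at its
    weakest link, so its strength is the minimum over its links."""
    link_strength = sigma0 * rng.weibull(m, size=(n_samples, n_links))
    return link_strength.min(axis=1)

short = chain_strengths(10)
long_ = chain_strengths(1000)
print(short.mean(), long_.mean())       # mean strength drops with size
# weakest-link theory predicts mean ~ n**(-1/m): here 100**(-1/5) ≈ 0.398
print(long_.mean() / short.mean())
```

    The paper's point is that the staggered (brick-and-mortar) arrangement breaks this weakest-link scaling, which is why yarn strength is predicted to be nearly independent of gauge length.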

  8. An immersed boundary-lattice Boltzmann model for biofilm growth and its impact on the NAPL dissolution in porous media

    NASA Astrophysics Data System (ADS)

    Benioug, M.; Yang, X.

    2017-12-01

    The evolution of microbial phase within porous medium is a complex process that involves growth, mortality, and detachment of the biofilm or attachment of moving cells. A better understanding of the interactions among biofilm growth, flow and solute transport and a rigorous modeling of such processes are essential for a more accurate prediction of the fate of pollutants (e.g. NAPLs) in soils. However, very few works are focused on the study of such processes in multiphase conditions (oil/water/biofilm systems). Our proposed numerical model takes into account the mechanisms that control bacterial growth and its impact on the dissolution of NAPL. An Immersed Boundary - Lattice Boltzmann Model (IB-LBM) is developed for flow simulations along with non-boundary conforming finite volume methods (volume of fluid and reconstruction methods) used for reactive solute transport. A sophisticated cellular automaton model is also developed to describe the spatial distribution of bacteria. A series of numerical simulations have been performed on complex porous media. A quantitative diagram representing the transitions between the different biofilm growth patterns is proposed. The bioenhanced dissolution of NAPL in the presence of biofilms is simulated at the pore scale. A uniform dissolution approach has been adopted to describe the temporal evolution of trapped blobs. Our simulations focus on the dissolution of NAPL in abiotic and biotic conditions. In abiotic conditions, we analyze the effect of the spatial distribution of NAPL blobs on the dissolution rate under different assumptions (blobs size, Péclet number). In biotic conditions, different conditions are also considered (spatial distribution, reaction kinetics, toxicity) and analyzed. The simulated results are consistent with those obtained from the literature.

  9. Circuit-based versus full-wave modelling of active microwave circuits

    NASA Astrophysics Data System (ADS)

    Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.

    2018-03-01

    Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between the simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of measured parameters de-embedding, appropriate modelling of discrete components and giving specific recipes for good modelling practices.

  10. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  11. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.; Anitescu, Mihai

    2009-03-01

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  12. Current Challenges in the First Principle Quantitative Modelling of the Lower Hybrid Current Drive in Tokamaks

    NASA Astrophysics Data System (ADS)

    Peysson, Y.; Bonoli, P. T.; Chen, J.; Garofalo, A.; Hillairet, J.; Li, M.; Qian, J.; Shiraiwa, S.; Decker, J.; Ding, B. J.; Ekedahl, A.; Goniche, M.; Zhai, X.

    2017-10-01

    The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, and it is therefore being considered for this purpose in the ITER tokamak. Nevertheless, while the basic physics of the LH wave in tokamak plasmas is well known, quantitative modeling of experimental observations based on first principles remains a highly challenging exercise, despite the considerable numerical efforts achieved so far. In this context, a rigorous methodology must be applied in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce experimental shot-to-shot observations and also scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast electron bremsstrahlung, internal inductance li and LH driven current at zero loop voltage to jointly constrain LH simulations is discussed, as well as the need for further improvements (diagnostics, codes, LH model) for robust interpretative and predictive simulations.

  13. A general computation model based on inverse analysis principle used for rheological analysis of W/O rapeseed and soybean oil emulsions

    NASA Astrophysics Data System (ADS)

    Vintila, Iuliana; Gavrus, Adinel

    2017-10-01

    The present research paper proposes the validation of a rigorous computation model used as a numerical tool to identify the rheological behavior of complex W/O emulsions. Starting from a three-dimensional description of a general viscoplastic flow, the thermo-mechanical equations used to identify the rheological laws of fluids or soft materials from global experimental measurements are detailed. Analyses are conducted for complex W/O emulsions, which generally exhibit Bingham behavior, using a shear stress - strain rate dependency based on a power law and an improved analytical model. Experimental results are investigated for the rheological behavior of crude and refined rapeseed/soybean oils and four types of corresponding W/O emulsions with different physical-chemical compositions. The rheological behavior model was correlated with the thermo-mechanical analysis of a plane-plane rheometer, oil content, chemical composition, particle size and emulsifier concentration. The parameters of the rheological laws describing the industrial oils and the concentrated W/O emulsions were computed from estimated shear stresses using a non-linear regression technique and from experimental torques using the inverse analysis tool designed by A. Gavrus (1992-2000).
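The non-linear regression step described above can be illustrated with a minimal sketch: fitting the parameters of a power-law shear stress/strain rate dependency τ = K·γ̇ⁿ by linear regression in log space. The data values and function names here are hypothetical, not those of the paper.

```python
import math

# Synthetic shear-rate/shear-stress pairs from a power law tau = K * gamma^n
# (K_true, n_true are illustrative values, not measured oil parameters).
K_true, n_true = 2.5, 0.6
rates = [0.1 * k for k in range(1, 41)]
stress = [K_true * g ** n_true for g in rates]

def fit_power_law(x, y):
    """Least-squares fit of y = K * x**n via linear regression in log space."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    m = len(lx)
    sx, sy = sum(lx), sum(ly)
    sxx = sum(v * v for v in lx)
    sxy = sum(a * b for a, b in zip(lx, ly))
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)  # slope -> exponent
    K = math.exp((sy - n * sx) / m)                # intercept -> prefactor
    return K, n

K_hat, n_hat = fit_power_law(rates, stress)
print(K_hat, n_hat)  # ≈ 2.5, 0.6
```

Real rheometer data would of course carry noise and a possible yield stress (Bingham) term, which calls for the genuinely non-linear regression the abstract mentions.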

  14. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity.

    PubMed

    Pecevski, Dejan; Maass, Wolfgang

    2016-01-01

    Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates, in ensembles of pyramidal cells with lateral inhibition, a fundamental building block for that: probabilistic associations between neurons that represent, through their firing, current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner, the resulting network is enabled to extract statistical information from complex input streams and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.
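Stripped of all spiking machinery, the idea of "probabilistic associations between neurons that represent current values of random variables" can be caricatured as estimating conditional probabilities from examples by counting co-occurrences and then reusing them for inference. The generative distribution and all names below are hypothetical; this only illustrates the learn-then-infer loop, not the STDP model itself.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical generative distribution p*: y copies x 90% of the time.
def sample_example():
    x = random.choice([0, 1])
    y = x if random.random() < 0.9 else 1 - x
    return x, y

# "Learning": accumulate joint counts from examples, the counting analogue
# of strengthening associations between co-active units.
counts = Counter(sample_example() for _ in range(20000))

def p_y_given_x(y, x):
    """Conditional probability estimate from the learnt counts."""
    num = counts[(x, y)]
    den = counts[(x, 0)] + counts[(x, 1)]
    return num / den

# Inference with the learnt internal model: predict y after observing x = 1.
posterior = {y: p_y_given_x(y, 1) for y in (0, 1)}
print(posterior)  # p(y=1 | x=1) should be close to 0.9
```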

  15. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity

    PubMed Central

    Pecevski, Dejan

    2016-01-01

    Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates, in ensembles of pyramidal cells with lateral inhibition, a fundamental building block for that: probabilistic associations between neurons that represent, through their firing, current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner, the resulting network is enabled to extract statistical information from complex input streams and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference. PMID:27419214

  16. Reinventing the High School Government Course: Rigor, Simulations, and Learning from Text

    ERIC Educational Resources Information Center

    Parker, Walter C.; Lo, Jane C.

    2016-01-01

    The high school government course is arguably the main site of formal civic education in the country today. This article presents the curriculum that resulted from a multiyear study aimed at improving the course. The pedagogic model, called "Knowledge in Action," centers on a rigorous form of project-based learning where the projects are…

  17. All Rigor and No Play Is No Way to Improve Learning

    ERIC Educational Resources Information Center

    Wohlwend, Karen; Peppler, Kylie

    2015-01-01

    The authors propose and discuss their Playshop curricular model, which they developed with teachers. Their studies suggest a playful approach supports even more rigor than the Common Core State Standards require for preschool and early grade children. Children keep their attention longer when learning comes in the form of something they can play…

  18. Scientific rigor through videogames.

    PubMed

    Treuille, Adrien; Das, Rhiju

    2014-11-01

    Hypothesis-driven experimentation - the scientific method - can be subverted by fraud, irreproducibility, and lack of rigorous predictive tests. A robust solution to these problems may be the 'massive open laboratory' model, recently embodied in the internet-scale videogame EteRNA. Deploying similar platforms throughout biology could enforce the scientific method more broadly.

  19. Numerical sensitivity analysis of a variational data assimilation procedure for cardiac conductivities

    NASA Astrophysics Data System (ADS)

    Barone, Alessandro; Fenton, Flavio; Veneziani, Alessandro

    2017-09-01

    An accurate estimation of cardiac conductivities is critical in computational electro-cardiology, yet experimental results in the literature significantly disagree on the values and ratios between longitudinal and tangential coefficients. These are known to have a strong impact on the propagation of the potential, particularly during defibrillation shocks. Data assimilation is a procedure for merging experimental data and numerical simulations in a rigorous way. In particular, variational data assimilation relies on the least-square minimization of the misfit between simulations and experiments, constrained by the underlying mathematical model, which in this study is represented by the classical Bidomain system, or its common simplification given by the Monodomain problem. Operating on the conductivity tensors as control variables of the minimization, we obtain a parameter estimation procedure. As the theory of this approach currently provides only an existence proof and is not informative for practical experiments, we present here an extensive numerical simulation campaign to assess practical critical issues such as the size and the location of the measurement sites needed for in silico test cases of potential experimental and realistic settings. This will subsequently be complemented by a validation of the variational data assimilation procedure on real data. Results indicate the presence of lower and upper bounds for the number of sites which guarantee an accurate and minimally redundant parameter estimation, the location of sites being generally non-critical for properly designed experiments. An effective combination of parameter estimation based on the Monodomain and Bidomain models is tested for the sake of computational efficiency. Parameter estimation based on the Monodomain equation potentially leads to the accurate computation of the transmembrane potential in real settings.
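A minimal caricature of the variational approach, assuming a toy scalar model in place of the Bidomain/Monodomain systems: a parameter is recovered by minimising the least-square misfit between model output and synthetic measurements. All names and values are illustrative.

```python
import math

# Synthetic "measurements" generated with a known conductivity-like parameter.
TRUE_SIGMA = 1.3
times = [0.1 * k for k in range(1, 11)]
data = [math.exp(-TRUE_SIGMA * t) for t in times]

def misfit(sigma):
    """Least-squares mismatch between toy model output and measurements."""
    return sum((math.exp(-sigma * t) - d) ** 2 for t, d in zip(times, data))

def minimize(f, lo, hi, iters=200):
    """Ternary search for the minimizer of a unimodal scalar function."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

sigma_hat = minimize(misfit, 0.0, 5.0)
print(sigma_hat)  # ≈ 1.3
```

In the real problem the control variable is a conductivity tensor and each misfit evaluation requires solving a PDE, which is why the size and placement of measurement sites matter so much.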

  20. Dynamics and Energetics of Deformable Evaporating Droplets at Intermediate Reynolds Numbers.

    NASA Astrophysics Data System (ADS)

    Haywood, Ross Jeffrey

    The behaviour of vaporizing droplets, representative of droplets present in hydrocarbon fuel sprays, has been investigated. A finite volume numerical model using a non-orthogonal, adaptive grid has been developed to examine both steady deformed and transient deforming droplet behaviour. Computations are made of the shapes of, and the velocity, pressure, temperature and concentration fields around and within, n-heptane droplets evaporating in high temperature air environments at intermediate Reynolds and Weber numbers (10 ≤ Re ≤ 100, We ≤ 10). The numerical model has been rigorously tested by comparison with existing theoretical and numerical solutions and experimental data for problems of intermediate Reynolds number flows over spheroids, inviscid deforming droplets, viscous oscillating droplets, and transient deforming liquid droplets subjected to electrostatic fields. Computations show steady deformed droplets assuming oblate shapes with major axes perpendicular to the mean flow direction. When based on volume equivalent diameters, existing quasi-steady correlations of Nusselt and Sherwood numbers (Renksizbulut and Yuen (1983), Haywood et al. (1989), and Renksizbulut et al. (1991)) for spherical droplets are in good agreement with the numerical results. Providing they are based on actual frontal area, the computed drag coefficients are also reasonably well predicted by the existing quasi-steady drag correlation (Haywood et al. (1989), Renksizbulut and Yuen (1983)). A new correlation is developed for the total drag coefficient of quasi-steady deformed vaporizing droplets. The computed transient histories of droplets injected with an initial Reynolds number of 100 into 1000 K air at 1 and 10 atmospheres ambient pressure show strongly damped initial oscillations at frequencies within 25 percent of the theoretical natural frequency of Lamb (1932). Gas phase shear induced circulation within the droplets is responsible for the observed strong damping and promotes the formation of prolate shapes. The computed rates of heat and mass transfer of transient deforming drops are well predicted by the quasi-steady correlations indicated above.

  1. A Variational Reduction and the Existence of a Fully Localised Solitary Wave for the Three-Dimensional Water-Wave Problem with Weak Surface Tension

    NASA Astrophysics Data System (ADS)

    Buffoni, Boris; Groves, Mark D.; Wahlén, Erik

    2017-12-01

    Fully localised solitary waves are travelling-wave solutions of the three-dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number β > 1/3) has recently been given. In this article we present an existence theory for the physically more realistic case 0 < β < 1/3. A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.

  2. Spectral partitioning in equitable graphs.

    PubMed

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
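The Kesten-McKay law invoked above has an explicit density for random d-regular graphs, supported on |λ| ≤ 2√(d−1). A quick numerical sanity check (function names are mine) confirms it integrates to one:

```python
import math

def kesten_mckay(lmbda, d):
    """Kesten-McKay spectral density of a random d-regular graph."""
    edge = 2.0 * math.sqrt(d - 1)
    if abs(lmbda) >= edge:
        return 0.0
    return d * math.sqrt(4 * (d - 1) - lmbda ** 2) / (2 * math.pi * (d * d - lmbda ** 2))

def integrate(f, a, b, n=200000):
    """Composite midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

d = 3
edge = 2.0 * math.sqrt(d - 1)
mass = integrate(lambda x: kesten_mckay(x, d), -edge, edge)
print(mass)  # ≈ 1.0: the density integrates to one over its support
```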

  3. A combinatorial framework to quantify peak/pit asymmetries in complex dynamics.

    PubMed

    Hasson, Uri; Iacovacci, Jacopo; Davis, Ben; Flanagan, Ryan; Tagliazucchi, Enzo; Laufs, Helmut; Lacasa, Lucas

    2018-02-23

    We explore a combinatorial framework which efficiently quantifies the asymmetries between minima and maxima in local fluctuations of time series. We first showcase its performance by applying it to a battery of synthetic cases. We find rigorous results on some canonical dynamical models (stochastic processes with and without correlations, chaotic processes), complemented by extensive numerical simulations for a range of processes, which indicate that the methodology correctly distinguishes different complex dynamics and outperforms state-of-the-art metrics in several cases. Subsequently, we apply this methodology to real-world problems emerging across several disciplines, including cases in neurobiology, finance and climate science. We conclude that differences between the statistics of local maxima and local minima in time series are highly informative of the complex underlying dynamics, and that a graph-theoretic extraction procedure allows these features to be used for statistical learning purposes.

  4. A Variational Reduction and the Existence of a Fully Localised Solitary Wave for the Three-Dimensional Water-Wave Problem with Weak Surface Tension

    NASA Astrophysics Data System (ADS)

    Buffoni, Boris; Groves, Mark D.; Wahlén, Erik

    2018-06-01

    Fully localised solitary waves are travelling-wave solutions of the three-dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number β > 1/3) has recently been given. In this article we present an existence theory for the physically more realistic case 0 < β < 1/3. A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.

  5. Predictability in cellular automata.

    PubMed

    Agapie, Alexandru; Andreica, Anca; Chira, Camelia; Giuclea, Marius

    2014-01-01

    Modelled as finite homogeneous Markov chains, probabilistic cellular automata with local transition probabilities in (0, 1) always possess a stationary distribution. This result alone is not very helpful when it comes to predicting the final configuration; one needs also a formula connecting the probabilities in the stationary distribution to some intrinsic feature of the lattice configuration. Previous results on asynchronous cellular automata have shown that such a feature really exists: it is the number of zero-one borders within the automaton's binary configuration. An exponential formula in the number of zero-one borders has been proved for the 1-D, 2-D and 3-D asynchronous automata with neighborhood three, five and seven, respectively. We perform computer experiments on a synchronous cellular automaton to check whether the empirical distribution also obeys that theoretical formula. The numerical results indicate a perfect fit for neighborhood three and five, which opens the way for a rigorous proof of the formula in this new, synchronous case.
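The "exponential formula in the number of zero-one borders" can be sketched on a small cyclic lattice: count the borders of each binary configuration and weight it exponentially in that count. The decay parameter below is a hypothetical stand-in for the constant established in the cited proofs.

```python
from itertools import product
import math

def borders(config):
    """Number of zero-one borders in a cyclic binary configuration."""
    n = len(config)
    return sum(config[i] != config[(i + 1) % n] for i in range(n))

# Hypothetical stationary weight, exponential in the border count;
# theta is illustrative, not the constant proved in the paper.
theta = 0.7

def stationary(n):
    weights = {c: math.exp(-theta * borders(c)) for c in product((0, 1), repeat=n)}
    z = sum(weights.values())
    return {c: w / z for c, w in weights.items()}

pi = stationary(6)
# All-zero and all-one configurations have no borders, hence maximal weight.
print(pi[(0,) * 6], pi[(1,) * 6])
```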

  6. Spectral partitioning in equitable graphs

    NASA Astrophysics Data System (ADS)

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.

  7. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition on the stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former off-line.

  8. BDNF and its pro-peptide are stored in presynaptic dense core vesicles in brain neurons

    PubMed Central

    Dieni, Sandra; Matsumoto, Tomoya; Dekkers, Martijn; Rauskolb, Stefanie; Ionescu, Mihai S.; Deogracias, Ruben; Gundelfinger, Eckart D.; Kojima, Masami; Nestel, Sigrun; Frotscher, Michael

    2012-01-01

    Although brain-derived neurotrophic factor (BDNF) regulates numerous and complex biological processes including memory retention, its extremely low levels in the mature central nervous system have greatly complicated attempts to reliably localize it. Using rigorous specificity controls, we found that antibodies reacting either with BDNF or its pro-peptide both stained large dense core vesicles in excitatory presynaptic terminals of the adult mouse hippocampus. Both moieties were ∼10-fold more abundant than pro-BDNF. The lack of postsynaptic localization was confirmed in Bassoon mutants, a seizure-prone mouse line exhibiting markedly elevated levels of BDNF. These findings challenge previous conclusions based on work with cultured neurons, which suggested activity-dependent dendritic synthesis and release of BDNF. They instead provide an ultrastructural basis for an anterograde mode of action of BDNF, contrasting with the long-established retrograde model derived from experiments with nerve growth factor in the peripheral nervous system. PMID:22412021

  9. Surface-plasmon mediated total absorption of light into silicon.

    PubMed

    Yoon, Jae Woong; Park, Woo Jae; Lee, Kyu Jin; Song, Seok Ho; Magnusson, Robert

    2011-10-10

    We report surface-plasmon mediated total absorption of light into a silicon substrate. For an Au grating on Si, we experimentally show that a surface-plasmon polariton (SPP) excited on the air/Au interface leads to total absorption with a rate nearly 10 times larger than the ohmic damping rate of collectively oscillating free electrons in the Au film. Rigorous numerical simulations show that the SPP resonantly enhances forward diffraction of light to multiple orders of lossy waves in the Si substrate with reflection and ohmic absorption in the Au film being negligible. The measured reflection and phase spectra reveal a quantitative relation between the peak absorbance and the associated reflection phase change, implying a resonant interference contribution to this effect. An analytic model of a dissipative quasi-bound resonator provides a general formula for the resonant absorbance-phase relation in excellent agreement with the experimental results.
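The "dissipative quasi-bound resonator" picture above is consistent with standard temporal coupled-mode theory for a single-mode, one-port resonator, where total absorption occurs at critical coupling (radiative decay rate equal to the absorption rate). The following is a generic sketch of that textbook formula, not the paper's exact analytic model:

```python
def absorbance(omega, omega0, gamma_rad, gamma_abs):
    """Absorbance of a single-mode, one-port dissipative resonator
    (temporal coupled-mode theory):
    A = 4*g_rad*g_abs / ((omega - omega0)**2 + (g_rad + g_abs)**2)."""
    detune = omega - omega0
    return (4 * gamma_rad * gamma_abs) / (detune ** 2 + (gamma_rad + gamma_abs) ** 2)

# Critical coupling: equal radiative and absorptive rates -> total absorption.
print(absorbance(1.0, 1.0, 0.01, 0.01))   # ≈ 1.0 on resonance
print(absorbance(1.02, 1.0, 0.01, 0.01))  # reduced off resonance
print(absorbance(1.0, 1.0, 0.03, 0.01))   # reduced when rates are mismatched
```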

  10. Efficient steady-state solver for hierarchical quantum master equations

    NASA Astrophysics Data System (ADS)

    Zhang, Hou-Dao; Qiao, Qin; Xu, Rui-Xue; Zheng, Xiao; Yan, YiJing

    2017-07-01

    Steady states play pivotal roles in many equilibrium and non-equilibrium open system studies. Their accurate evaluations call for exact theories with rigorous treatment of system-bath interactions. Therein, the hierarchical equations-of-motion (HEOM) formalism is a nonperturbative and non-Markovian quantum dissipation theory, which can faithfully describe the dissipative dynamics and nonlinear response of open systems. Nevertheless, solving the steady states of open quantum systems via HEOM is often a challenging task, due to the vast number of dynamical quantities involved. In this work, we propose a self-consistent iteration approach that quickly solves the HEOM steady states. We demonstrate its high efficiency with accurate and fast evaluations of low-temperature thermal equilibrium of a model Fenna-Matthews-Olson pigment-protein complex. Numerically exact evaluation of thermal equilibrium Rényi entropies and stationary emission line shapes is presented with detailed discussion.
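The fixed-point flavour of the proposed solver can be illustrated on a far simpler object than the HEOM hierarchy: iterating a small dissipative (column-stochastic) map to its stationary vector instead of propagating long-time dynamics. This is an analogy only, not the self-consistent HEOM scheme itself.

```python
def steady_state(P, tol=1e-12, max_iter=10000):
    """Fixed-point (power) iteration for the stationary vector of a
    column-stochastic matrix P: solve p = P p by repeated application."""
    n = len(P)
    p = [1.0 / n] * n
    for _ in range(max_iter):
        q = [sum(P[i][j] * p[j] for j in range(n)) for i in range(n)]
        if max(abs(a - b) for a, b in zip(p, q)) < tol:
            return q
        p = q
    return p

# Toy two-state dissipative system (each column sums to one).
P = [[0.9, 0.3],
     [0.1, 0.7]]
pi = steady_state(P)
print(pi)  # ≈ [0.75, 0.25]
```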

  11. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1(CE-1) and Chang'E-2(CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of CE-1 and CE-2 CCD cameras based on push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinate of a ground point in lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) are automatically generated.
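Space intersection of two viewing rays, as used above to obtain a 3D ground coordinate from conjugate image points, reduces geometrically to finding the closest point between two lines. A generic sketch (pure geometry, not the CE-1/CE-2 sensor model):

```python
def space_intersection(p1, d1, p2, d2):
    """Closest-point 'space intersection' of two rays p + t*d:
    returns the midpoint of the shortest segment joining them."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]

    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t1))
    q2 = add(p2, scale(d2, t2))
    return scale(add(q1, q2), 0.5)

# Two rays constructed to meet at the ground point (1, 1, 0).
print(space_intersection([0, 0, 1], [1, 1, -1], [2, 0, 1], [-1, 1, -1]))  # ≈ [1.0, 1.0, 0.0]
```

With real orbit and camera uncertainties the two rays are skew, which is exactly why the midpoint formulation (and the model refinements described above) is needed.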

  12. Local and global approaches to the problem of Poincaré recurrences. Applications in nonlinear dynamics

    NASA Astrophysics Data System (ADS)

    Anishchenko, V. S.; Boev, Ya. I.; Semenova, N. I.; Strelkova, G. I.

    2015-07-01

    We review rigorous and numerical results on the statistics of Poincaré recurrences which are related to the modern development of the Poincaré recurrence problem. We analyze and describe the rigorous results which are achieved both in the classical (local) approach and in the recently developed global approach. These results are illustrated by numerical simulation data for simple chaotic and ergodic systems. It is shown that the basic theoretical laws can be applied to noisy systems if the probability measure is ergodic and stationary. Poincaré recurrences are studied numerically in nonautonomous systems. Statistical characteristics of recurrences are analyzed in the framework of the global approach for the cases of positive and zero topological entropy. We show that for positive entropy, there is a relationship between the Afraimovich-Pesin dimension, Lyapunov exponents and the Kolmogorov-Sinai entropy, both without and in the presence of external noise. The case of zero topological entropy is exemplified by numerical results for the Poincaré recurrence statistics in the circle map. We show and prove that the dependence of minimal recurrence times on the return region size demonstrates universal properties for the golden and the silver ratio. The behavior of Poincaré recurrences is analyzed at the critical point of the Feigenbaum attractor birth. We explore Poincaré recurrences for an ergodic set which is generated in the stroboscopic section of a nonautonomous oscillator and is similar to a circle shift. Based on the obtained results, we show how Poincaré recurrence statistics can be applied to a number of nonlinear dynamics issues. We propose and illustrate alternative methods for diagnosing effects of external and mutual synchronization of chaotic systems in the context of the local and global approaches. The properties of the recurrence time probability density can be used to detect the stochastic resonance phenomenon. We also discuss how the fractal dimension of chaotic attractors can be estimated using the Poincaré recurrence statistics.

  13. Arnold diffusion in the planar elliptic restricted three-body problem: mechanism and numerical verification

    NASA Astrophysics Data System (ADS)

    Capiński, Maciej J.; Gidea, Marian; de la Llave, Rafael

    2017-01-01

    We present a diffusion mechanism for time-dependent perturbations of autonomous Hamiltonian systems introduced in Gidea (2014 arXiv:1405.0866). This mechanism is based on shadowing of pseudo-orbits generated by two dynamics: an ‘outer dynamics’, given by homoclinic trajectories to a normally hyperbolic invariant manifold, and an ‘inner dynamics’, given by the restriction to that manifold. On the inner dynamics the only assumption is that it preserves area. Unlike other approaches, Gidea (2014 arXiv:1405.0866) does not rely on the KAM theory and/or Aubry-Mather theory to establish the existence of diffusion. Moreover, it does not require checking twist conditions or non-degeneracy conditions near resonances. The conditions are explicit and can be checked by finite precision calculations in concrete systems (roughly, they amount to checking that Melnikov-type integrals do not vanish and that some manifolds are transversal). As an application, we study the planar elliptic restricted three-body problem. We present a rigorous theorem that shows that if some concrete calculations yield a nonzero value, then for any sufficiently small, positive value of the eccentricity of the orbits of the main bodies, there are orbits of the infinitesimal body that exhibit a change of energy that is bigger than some fixed number, which is independent of the eccentricity. We verify numerically these calculations for values of the masses close to that of the Jupiter/Sun system. The numerical calculations are not completely rigorous, because we ignore issues of round-off error and do not estimate the truncations, but they are not delicate at all by the standard of numerical analysis. (Standard tests indicate that we get 7 or 8 figures of accuracy where 1 would be enough.) The code of these verifications is available. We hope that some full computer-assisted proofs will be obtained in the near future since there are packages (CAPD) designed for problems of this type.
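The "Melnikov-type integrals" whose non-vanishing the method requires can be illustrated on the textbook forced pendulum, where the integral along the homoclinic orbit has a closed form to check against. This example is not the restricted three-body computation itself.

```python
import math

def melnikov(t0, omega, T=40.0, n=400000):
    """Melnikov-type integral for the forced pendulum
    x'' = -sin x + eps*cos(omega t), evaluated along the homoclinic
    orbit with velocity x'(t) = 2 sech t (textbook illustration):
    M(t0) = integral of 2 sech(t) * cos(omega*(t + t0)) dt."""
    h = 2 * T / n
    total = 0.0
    for k in range(n):
        t = -T + (k + 0.5) * h  # midpoint rule; integrand decays like e^{-|t|}
        total += 2.0 / math.cosh(t) * math.cos(omega * (t + t0)) * h
    return total

omega = 1.0
value = melnikov(0.0, omega)
exact = 2 * math.pi / math.cosh(math.pi * omega / 2)  # closed-form value
print(value, exact)  # a nonzero value signals transversal homoclinic intersections
```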

  14. Experimental and theoretical study of light scattering by individual mature red blood cells by use of scanning flow cytometry and a discrete dipole approximation.

    PubMed

    Yurkin, Maxim A; Semyanov, Konstantin A; Tarasov, Peter A; Chernyshev, Andrei V; Hoekstra, Alfons G; Maltsev, Valeri P

    2005-09-01

Elastic light scattering by mature red blood cells (RBCs) was theoretically and experimentally analyzed by use of the discrete dipole approximation (DDA) and scanning flow cytometry (SFC), respectively. SFC permits measurement of the angular dependence of the light-scattering intensity (indicatrix) of single particles. A mature RBC is modeled as a biconcave disk in DDA simulations of light scattering. We have studied the effect of RBC orientation relative to the direction of the incident light on the indicatrix. Numerical calculations of indicatrices for several axis ratios and volumes of RBC have been carried out. Comparison of the simulated indicatrices with indicatrices measured by SFC showed good agreement, validating the biconcave disk model for a mature RBC. We simulated the light-scattering output signals from the SFC with the DDA for RBCs modeled as a disk-sphere and as an oblate spheroid. The biconcave disk, the disk-sphere, and the oblate spheroid models have been compared for two orientations, i.e., face-on and rim-on incidence, relative to the direction of the incident beam. Only the oblate spheroid model for rim-on incidence gives results similar to those of the rigorous biconcave disk model.

  15. Simulations and model of the nonlinear Richtmyer–Meshkov instability

    DOE PAGES

    Dimonte, Guy; Ramaprabhu, P.

    2010-01-21

The nonlinear evolution of the Richtmyer-Meshkov (RM) instability is investigated using numerical simulations with the FLASH code in two dimensions (2D). The purpose of the simulations is to develop an empirical nonlinear model of the RM instability that is applicable to inertial confinement fusion (ICF) and ejecta formation, namely, at large Atwood number A and scaled initial amplitude kh₀ (k ≡ wavenumber, h₀ ≡ initial amplitude) of the perturbation. The FLASH code is first validated with a variety of RM experiments that evolve well into the nonlinear regime. They reveal that bubbles stagnate when they grow by an increment of 2/k and that spikes accelerate for A > 0.5 due to higher harmonics that focus them. These results are then compared with a variety of nonlinear models that are based on potential flow. We find that the models agree with simulations for moderate values of A < 0.9 and kh₀ < 1, but not for the larger values that characterize ICF and ejecta formation. We thus develop a new nonlinear empirical model that captures the simulation results consistent with potential flow for a broader range of A and kh₀. Our hope is that such empirical models concisely capture the RM simulations and inspire more rigorous solutions.
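    The bubble behaviour described above (impulsive initial growth followed by stagnation after an increment of order 2/k) can be illustrated with a toy model. This is only a sketch: it combines Richtmyer's impulsive growth rate u₀ = k h₀ A Δv with a generic potential-flow-style saturation u(t) = u₀/(1 + C k u₀ t); the constant C and all parameter values are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def bubble_growth(t, k, h0, A, dv, C=3.0):
        """Toy RM bubble model: Richtmyer impulsive growth rate
        u0 = k*h0*A*dv, decaying as u(t) = u0/(1 + C*k*u0*t), which
        integrates to logarithmic amplitude growth h(t) - h0."""
        u0 = k * h0 * A * dv
        return np.log1p(C * k * u0 * t) / (C * k)

    k = 2 * np.pi    # wavenumber for a 1 m wavelength perturbation
    growth = bubble_growth(np.linspace(0.0, 50.0, 6), k, h0=0.05, A=0.7, dv=1.0)
    # growth expressed in units of the 2/k stagnation increment
    print(growth * k / 2.0)
    ```

    The logarithmic late-time growth makes the increment approach the 2/k scale ever more slowly, mimicking the stagnation seen in the simulations.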

  16. Digital morphogenesis via Schelling segregation

    NASA Astrophysics Data System (ADS)

    Barmpalias, George; Elwes, Richard; Lewis-Pye, Andrew

    2018-04-01

Schelling’s model of segregation looks to explain the way in which particles or agents of two types may come to arrange themselves spatially into configurations consisting of large homogeneous clusters, i.e. connected regions consisting of only one type. As one of the earliest agent-based models studied by economists and perhaps the most famous model of self-organising behaviour, it also has direct links to areas at the interface between computer science and statistical mechanics, such as the Ising model and the study of contagion and cascading phenomena in networks. While the model has been extensively studied, it has largely resisted rigorous analysis, with prior results from the literature generally pertaining to variants of the model which are tweaked so as to be amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory. Brandt et al (2012 Proc. 44th Annual ACM Symp. on Theory of Computing) provided the first rigorous analysis of the unperturbed model, for a specific set of input parameters. Here we provide a rigorous analysis of the model’s behaviour much more generally and establish some surprising forms of threshold behaviour, notably the existence of situations where an increased level of intolerance for neighbouring agents of opposite type leads almost certainly to decreased segregation.
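    The basic dynamics can be sketched as a minimal agent-based simulation. This is a generic unperturbed Schelling variant on a torus with random serial updates; the grid size, empty-site fraction, and intolerance threshold are illustrative choices, not the exact model analysed in the paper.

    ```python
    import random

    def schelling(n=20, frac_empty=0.1, intolerance=0.5, steps=10_000, seed=0):
        """Minimal unperturbed Schelling dynamics on an n x n torus.
        An unhappy agent (like-neighbour fraction below `intolerance`)
        jumps to a uniformly random empty site."""
        rng = random.Random(seed)
        k = int(n * n * (1 - frac_empty) / 2)              # agents per type
        cells = [1] * k + [-1] * k + [0] * (n * n - 2 * k)  # 0 = empty site
        rng.shuffle(cells)
        grid = [cells[i * n:(i + 1) * n] for i in range(n)]

        def unhappy(i, j):
            t = grid[i][j]
            if t == 0:
                return False
            like = total = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == dj == 0:
                        continue
                    nb = grid[(i + di) % n][(j + dj) % n]   # torus neighbourhood
                    if nb != 0:
                        total += 1
                        like += (nb == t)
            return total > 0 and like / total < intolerance

        for _ in range(steps):
            i, j = rng.randrange(n), rng.randrange(n)
            if unhappy(i, j):
                empties = [(a, b) for a in range(n) for b in range(n)
                           if grid[a][b] == 0]
                ei, ej = rng.choice(empties)                # move to empty site
                grid[ei][ej], grid[i][j] = grid[i][j], 0
        return grid
    ```

    Tracking the mean like-neighbour fraction of the final grid while sweeping `intolerance` is one way to probe the threshold behaviour discussed above, including the counterintuitive regime where higher intolerance yields less segregation.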

  17. ULF Waves in the Ionospheric Alfven Resonator: Modeling of MICA Observations

    NASA Astrophysics Data System (ADS)

    Streltsov, A. V.; Tulegenov, B.

    2017-12-01

We present results from a numerical study of the physical processes responsible for the generation of small-scale, intense electromagnetic structures in the ultra-low-frequency range frequently observed in the close vicinity of bright discrete auroral arcs. In particular, our research is focused on the role of the ionosphere in generating these structures. A significant body of observations demonstrates that small-scale electromagnetic waves with frequencies below 1 Hz are detected at high latitudes where the large-scale, downward magnetic field-aligned current (FAC) interacts with the ionosphere. Some theoretical studies suggest that these waves can be generated by the ionospheric feedback instability (IFI) inside the ionospheric Alfven resonator (IAR). The IAR is the region in the low-altitude magnetosphere bounded by the strong gradient in the Alfven speed at high altitude and the conducting bottom of the ionosphere (the ionospheric E-region) at low altitude. To study ULF waves in this region we use a numerical model developed from reduced two-fluid MHD equations describing shear Alfven waves in the ionosphere and magnetosphere of the Earth. The active ionospheric feedback on the structure and amplitude of magnetic FACs that interact with the ionosphere is implemented through the ionospheric boundary conditions, which link the parallel current density with the plasma density and the perpendicular electric field in the ionosphere. Our numerical results are compared with in situ measurements performed by the Magnetosphere-Ionosphere Coupling in the Alfven Resonator (MICA) sounding rocket, launched on February 19, 2012 from the Poker Flat Research Range in Alaska to measure fields and particles during a passage through a discrete auroral arc. Parameters of the simulations are chosen to match the actual MICA parameters, allowing for the most precise and rigorous comparison. Waves generated in the numerical model have frequencies between 0.30 and 0.45 Hz, while MICA measured similar waves in the range from 0.18 to 0.50 Hz. These results support the conclusion that the IFI driven inside the IAR by a system of large-scale upward-downward currents is the main mechanism responsible for the generation of small-scale intense ULF waves in the vicinity of discrete auroral arcs.

  18. Mathematical assessment of the role of temperature and rainfall on mosquito population dynamics.

    PubMed

    Abdelrazec, Ahmed; Gumel, Abba B

    2017-05-01

A new stage-structured model for the population dynamics of the mosquito (a major vector for numerous vector-borne diseases), which takes the form of a deterministic system of non-autonomous nonlinear differential equations, is designed and used to study the effect of variability in temperature and rainfall on mosquito abundance in a community. Two functional forms of the egg oviposition rate, namely the Verhulst-Pearl logistic and Maynard-Smith-Slatkin functions, are used. Rigorous analysis of the autonomous version of the model shows that, for either of the oviposition functions considered, the trivial equilibrium of the model is locally- and globally-asymptotically stable if a certain vectorial threshold quantity is less than unity. Conditions for the existence and global asymptotic stability of the non-trivial equilibrium solutions of the model are also derived. The model is shown to undergo a Hopf bifurcation under certain conditions (and increased density-dependent competition in larval mortality reduces the likelihood of such a bifurcation). The analyses reveal that the Maynard-Smith-Slatkin oviposition function sustains more oscillations than the Verhulst-Pearl logistic function (hence, it is better suited, from an ecological viewpoint, for modeling the egg oviposition process). The non-autonomous model is shown to have a globally-asymptotically stable trivial periodic solution, for each of the oviposition functions, when the associated reproduction threshold is less than unity. Furthermore, this model, in the absence of a density-dependent mortality rate for larvae, has a unique and globally-asymptotically stable periodic solution under certain conditions. Numerical simulations of the non-autonomous model, using mosquito surveillance and weather data from the Peel region of Ontario, Canada, show a peak mosquito abundance for temperature and rainfall values in the range [Formula: see text]C and [15-35] mm, respectively. These ranges are recorded in the Peel region between July and August (hence, this study suggests that anti-mosquito control efforts should be intensified during this period).
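    The two oviposition functions compared above can be written down directly. The functional forms below are the standard Verhulst-Pearl logistic and Maynard-Smith-Slatkin expressions; the parameter values (and the exponent n) are illustrative, not taken from the paper.

    ```python
    def verhulst_pearl(E, r, K):
        """Verhulst-Pearl logistic oviposition rate: r*E*(1 - E/K),
        which turns negative once the egg density E exceeds K."""
        return r * E * (1.0 - E / K)

    def maynard_smith_slatkin(E, r, K, n=2.0):
        """Maynard-Smith-Slatkin oviposition rate: r*E/(1 + (E/K)**n),
        whose density dependence saturates instead of turning negative."""
        return r * E / (1.0 + (E / K) ** n)

    r, K = 10.0, 1000.0
    for E in (100.0, 1000.0, 2000.0):
        print(E, verhulst_pearl(E, r, K), maynard_smith_slatkin(E, r, K))
    ```

    Near E = 0 the two rates agree (both ≈ rE), so the distinction only matters at high egg densities, consistent with the abstract's observation that the Maynard-Smith-Slatkin form sustains more oscillations.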

  19. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

Transverse dispersion represents an important mixing process for the transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory set-up of a quasi-two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivity of the tank-experiment measurements to the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d(-1) and 10.5 m d(-1). Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10(-4) m and 1.48×10(-5) m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications. Copyright © 2012 Elsevier B.V. All rights reserved.
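    The fitting step can be sketched with a synthetic example. Assuming a sharp tracer/clean-water interface at the inlet and negligible longitudinal dispersion, the steady-state 2D advection-dispersion equation has the classical erfc transverse profile used below; the observation distance, noise level, and starting guess are illustrative, only the dispersivity value is taken from the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def transverse_profile(y, alpha_t, x=1.0, c0=1.0):
        """Steady-state transverse mixing profile at travel distance x:
        C(x, y) = c0/2 * erfc(y / (2*sqrt(alpha_t * x)))."""
        return 0.5 * c0 * erfc(y / (2.0 * np.sqrt(alpha_t * x)))

    # synthetic "measurements" with the study's fitted alpha_T plus noise
    rng = np.random.default_rng(0)
    alpha_true = 1.48e-5                                  # m
    y = np.linspace(-0.02, 0.02, 41)
    c_obs = transverse_profile(y, alpha_true) + rng.normal(0.0, 0.005, y.size)

    (alpha_fit,), _ = curve_fit(transverse_profile, y, c_obs,
                                p0=[1e-4], bounds=(1e-8, 1e-2))
    print(alpha_fit)   # recovers a value close to alpha_true
    ```

    Because the transverse dispersivity enters only through the profile width, the least-squares fit is well-conditioned as long as the sampling window resolves the mixing zone, which is exactly the design question the sensitivity analysis above addresses.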

  20. Wakes and differential charging of large bodies in low Earth orbit

    NASA Technical Reports Server (NTRS)

    Parker, L. W.

    1985-01-01

Highlights of earlier results using the Inside-Out WAKE code on wake structures of LEO spacecraft are reviewed. For conducting bodies of radius large compared with the Debye length, a high-Mach-number wake develops a negative potential well. Quasineutrality is violated in the very near wake region, and the wake is relatively empty for a distance downstream of about one half of a Mach number of radii. There is also a suggestion of a core of high density along the axis. A comparison of rigorous numerical solutions with in situ wake data from the AE-C satellite suggests that the so-called neutral approximation for ions (straight-line trajectories, independent of fields) may be a reasonable approximation except near the center of the near wake. This approximation is adopted for very large bodies. Work concerned with the wake point potential of very large nonconducting bodies such as the shuttle orbiter is described. Using a cylindrical model for bodies of this size or larger in LEO (body radius up to 10^5 Debye lengths), approximate solutions are presented based on the neutral approximation (but with rigorous trajectory calculations for surface current balance). There is a negative potential well if the body is conducting, and no well if the body is nonconducting. In the latter case the wake surface itself becomes highly negative. The wake point potential is governed by the ion drift energy.

  1. EarthLabs Modules: Engaging Students In Extended, Rigorous Investigations Of The Ocean, Climate and Weather

    NASA Astrophysics Data System (ADS)

    Manley, J.; Chegwidden, D.; Mote, A. S.; Ledley, T. S.; Lynds, S. E.; Haddad, N.; Ellins, K.

    2016-02-01

EarthLabs, envisioned as a national model for high school Earth or Environmental Science lab courses, is adaptable for both undergraduate and middle school students. The collection includes ten online modules that combine to feature a global view of our planet as a dynamic, interconnected system, engaging learners in extended investigations. The EarthLabs modules support state and national guidelines, including the NGSS, for science content. Four modules directly guide students to discover vital aspects of the oceans, while five other modules incorporate ocean sciences in order to complete an understanding of Earth's climate system. Students gain a broad perspective on the key role oceans play in the fishing industry, droughts, coral reefs, hurricanes, the carbon cycle, and life on land and in the seas, as well as in driving our changing climate, by interacting with scientific research data, manipulating satellite imagery and numerical data, and working with computer visualizations, experiments, and video tutorials. Students explore Earth system processes and build quantitative skills that enable them to objectively evaluate scientific findings for themselves as they move through ordered sequences that guide the learning. As a robust collection, the EarthLabs modules engage students in extended, rigorous investigations allowing a deeper understanding of the ocean, climate and weather. This presentation provides an overview of the ten curriculum modules that comprise the EarthLabs collection developed by TERC and found at http://serc.carleton.edu/earthlabs/index.html. Evaluation data on the effectiveness and use in secondary education classrooms will be summarized.

  2. Assessing the Rigor of HS Curriculum in Admissions Decisions: A Functional Method, Plus Practical Advising for Prospective Students and High School Counselors

    ERIC Educational Resources Information Center

    Micceri, Theodore; Brigman, Leellen; Spatig, Robert

    2009-01-01

    An extensive, internally cross-validated analytical study using nested (within academic disciplines) Multilevel Modeling (MLM) on 4,560 students identified functional criteria for defining high school curriculum rigor and further determined which measures could best be used to help guide decision making for marginal applicants. The key outcome…

  3. A rigorous test of the accuracy of USGS digital elevation models in forested areas of Oregon and Washington.

    Treesearch

    Ward W. Carson; Stephen E. Reutebuch

    1997-01-01

    A procedure for performing a rigorous test of elevational accuracy of DEMs using independent ground coordinate data digitized photogrammetrically from aerial photography is presented. The accuracy of a sample set of 23 DEMs covering National Forests in Oregon and Washington was evaluated. Accuracy varied considerably between eastern and western parts of Oregon and...

  4. Accelerating Biomedical Discoveries through Rigor and Transparency.

    PubMed

    Hewitt, Judith A; Brown, Liliana L; Murphy, Stephanie J; Grieder, Franziska; Silberberg, Shai D

    2017-07-01

    Difficulties in reproducing published research findings have garnered a lot of press in recent years. As a funder of biomedical research, the National Institutes of Health (NIH) has taken measures to address underlying causes of low reproducibility. Extensive deliberations resulted in a policy, released in 2015, to enhance reproducibility through rigor and transparency. We briefly explain what led to the policy, describe its elements, provide examples and resources for the biomedical research community, and discuss the potential impact of the policy on translatability with a focus on research using animal models. Importantly, while increased attention to rigor and transparency may lead to an increase in the number of laboratory animals used in the near term, it will lead to more efficient and productive use of such resources in the long run. The translational value of animal studies will be improved through more rigorous assessment of experimental variables and data, leading to better assessments of the translational potential of animal models, for the benefit of the research community and society. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  5. Development of the T+M coupled flow–geomechanical simulator to describe fracture propagation and coupled flow–thermal–geomechanical processes in tight/shale gas systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jihoon; Moridis, George J.

    2013-10-01

We developed a hydraulic fracturing simulator, the T+M simulator, by coupling a flow simulator to a geomechanics code. Modeling of the vertical fracture development involves continuous updating of the boundary conditions and of the data connectivity, based on the finite element method for geomechanics. The T+M simulator can model the initial fracture development during the hydraulic fracturing operations, after which the domain description changes from a single continuum to double or multiple continua in order to rigorously model both flow and geomechanics for fracture-rock matrix systems. The T+M simulator provides two-way coupling between fluid-heat flow and geomechanics, accounting for thermoporomechanics, treats nonlinear permeability and geomechanical moduli explicitly, and dynamically tracks changes in the fracture(s) and in the pore volume. It also fully accounts for leak-off in all directions during hydraulic fracturing. We first validate the T+M simulator, matching numerical solutions with the analytical solutions for poromechanical effects, static fractures, and fracture propagation. Then, numerical simulations of various planar fracture propagation cases show that shear failure can limit the vertical propagation of tensile fractures, because of leak-off into the reservoirs. Slow injection causes more leak-off than fast injection when the same amount of fluid is injected. Changes in initial total stress and contributions of shear effective stress to tensile failure can also affect the formation of the fractured areas, and the geomechanical responses remain well-posed.

  6. A 2-D numerical simulation study on longitudinal solute transport and longitudinal dispersion coefficient

    NASA Astrophysics Data System (ADS)

    Zhang, Wei

    2011-07-01

The longitudinal dispersion coefficient, DL, is a fundamental parameter of longitudinal solute transport models: the advection-dispersion (AD) model and various dead-zone models. Since DL cannot be measured directly, and since its calibration using tracer test data is quite expensive and not always available, researchers have developed various methods, theoretical or empirical, for estimating DL from more easily available cross-sectional hydraulic measurements (i.e., the transverse velocity profile, etc.). However, for known and unknown reasons, DL cannot be satisfactorily predicted using these theoretical/empirical formulae: either there is very large prediction error for the theoretical methods, or there is a lack of generality in the empirical formulae. Here, numerical experiments using Mike21, a software package that implements one of the most rigorous two-dimensional hydrodynamic and solute transport formulations, for longitudinal solute transport in hypothetical streams, are presented. An analysis of the evolution of simulated solute clouds indicates that the two fundamental assumptions in Fischer's longitudinal transport analysis may not be reasonable. The transverse solute concentration distribution, and hence the longitudinal transport, appears to be controlled by a dimensionless number ε combining Q, the average volumetric flowrate, Dt, a cross-sectional average transverse dispersion coefficient, and W, the channel flow width. A simple empirical ε relationship may be established. Analysis and a revision of Fischer's theoretical formula suggest that ε influences the efficiency of transverse mixing and hence has a restraining effect on longitudinal spreading. The findings presented here would improve and expand our understanding of longitudinal solute transport in open channel flow.

  7. On the Far-Zone Electromagnetic Field of a Horizontal Electric Dipole Over an Imperfectly Conducting Half-Space With Extensions to Plasmonics

    NASA Astrophysics Data System (ADS)

    Michalski, Krzysztof A.; Lin, Hung-I.

    2018-01-01

    Second-order asymptotic formulas for the electromagnetic fields of a horizontal electric dipole over an imperfectly conducting half-space are derived using the modified saddle-point method. Application examples are presented for ordinary and plasmonic media, and the accuracy of the new formulation is assessed by comparisons with two alternative state-of-the-art theories and with the rigorous results of numerical integration.

  8. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  9. Bifurcation Analysis Using Rigorous Branch and Bound Methods

    NASA Technical Reports Server (NTRS)

    Smith, Andrew P.; Crespo, Luis G.; Munoz, Cesar A.; Lowenberg, Mark H.

    2014-01-01

    For the study of nonlinear dynamic systems, it is important to locate the equilibria and bifurcations occurring within a specified computational domain. This paper proposes a new approach for solving these problems and compares it to the numerical continuation method. The new approach is based upon branch and bound and utilizes rigorous enclosure techniques to yield outer bounding sets of both the equilibrium and local bifurcation manifolds. These sets, which comprise the union of hyper-rectangles, can be made to be as tight as desired. Sufficient conditions for the existence of equilibrium and bifurcation points taking the form of algebraic inequality constraints in the state-parameter space are used to calculate their enclosures directly. The enclosures for the bifurcation sets can be computed independently of the equilibrium manifold, and are guaranteed to contain all solutions within the computational domain. A further advantage of this method is the ability to compute a near-maximally sized hyper-rectangle of high dimension centered at a fixed parameter-state point whose elements are guaranteed to exclude all bifurcation points. This hyper-rectangle, which requires a global description of the bifurcation manifold within the computational domain, cannot be obtained otherwise. A test case, based on the dynamics of a UAV subject to uncertain center of gravity location, is used to illustrate the efficacy of the method by comparing it with numerical continuation and to evaluate its computational complexity.
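    The pruning idea behind such rigorous enclosures can be sketched in one dimension with naive interval arithmetic (a real implementation, unlike this sketch, would use outward rounding). Boxes on which an interval extension of f excludes zero are discarded; the survivors form an outer enclosure of all equilibria. The example system f(x) = x³ - x, with equilibria at -1, 0 and 1, is purely illustrative.

    ```python
    def f_interval(lo, hi):
        """Naive interval extension of f(x) = x**3 - x: x**3 is monotone,
        so its range is [lo**3, hi**3]; then subtract the interval [lo, hi]."""
        return (lo**3 - hi, hi**3 - lo)

    def enclose_zeros(lo, hi, tol=1e-6):
        """Branch and bound: discard boxes on which f provably has no zero;
        bisect the rest until they are smaller than tol. The returned boxes
        are an outer enclosure of every equilibrium in [lo, hi]."""
        flo, fhi = f_interval(lo, hi)
        if flo > 0 or fhi < 0:               # 0 excluded from f([lo, hi]) -> prune
            return []
        if hi - lo < tol:
            return [(lo, hi)]
        mid = 0.5 * (lo + hi)
        return enclose_zeros(lo, mid, tol) + enclose_zeros(mid, hi, tol)

    boxes = enclose_zeros(-2.0, 2.0, tol=1e-4)
    # every surviving box lies near one of the true equilibria -1, 0, 1
    ```

    Because the interval extension always over-encloses the true range, no equilibrium can ever be pruned away; the same one-sided guarantee is what makes the bifurcation-free hyper-rectangles described above rigorous.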

  10. Image synthesis for SAR system, calibration and processor design

    NASA Technical Reports Server (NTRS)

    Holtzman, J. C.; Abbott, J. L.; Kaupp, V. H.; Frost, V. S.

    1978-01-01

    The Point Scattering Method of simulating radar imagery rigorously models all aspects of the imaging radar phenomena. Its computational algorithms operate on a symbolic representation of the terrain test site to calculate such parameters as range, angle of incidence, resolution cell size, etc. Empirical backscatter data and elevation data are utilized to model the terrain. Additionally, the important geometrical/propagation effects such as shadow, foreshortening, layover, and local angle of incidence are rigorously treated. Applications of radar image simulation to a proposed calibrated SAR system are highlighted: soil moisture detection and vegetation discrimination.

  11. Analytical formulation of lunar cratering asymmetries

    NASA Astrophysics Data System (ADS)

    Wang, Nan; Zhou, Ji-Lin

    2016-10-01

Context. The cratering asymmetry of a bombarded satellite is related to both its orbit and its impactors. The inner solar system impactor populations, that is, the main-belt asteroids (MBAs) and the near-Earth objects (NEOs), have dominated during the late heavy bombardment (LHB) and ever since, respectively. Aims: We formulate the lunar cratering distribution and verify the cratering asymmetries generated by the MBAs as well as the NEOs. Methods: Based on a planar model that excludes the terrestrial and lunar gravitations on the impactors, and assuming the impactor encounter speed with Earth, v_enc, is higher than the lunar orbital speed, v_M, we rigorously integrated the lunar cratering distribution and derived its approximation to first order in v_M/v_enc. Numerical simulations of lunar bombardment by the MBAs during the LHB were performed with an Earth-Moon distance a_M = 20-60 Earth radii in five cases. Results: The analytical model directly proves the existence of a leading/trailing asymmetry and the absence of a near/far asymmetry. The approximate form of the leading/trailing asymmetry is (1 + A_1 cos β), which decreases as the apex distance β increases. The numerical simulations show evidence of a pole/equator asymmetry as well as the leading/trailing asymmetry, and the former is empirically described as (1 + A_2 cos 2ϕ), which decreases as the latitude modulus |ϕ| increases. The amplitudes A_1 and A_2 are reliable measurements of the asymmetries. Our analysis explicitly indicates the quantitative relations between the cratering distribution and the bombardment conditions (impactor properties and the lunar orbital status), such as A_1 ∝ v_M/v_enc, resulting in a method for reproducing the bombardment conditions by measuring the asymmetry. Mutual confirmation between the analytical model and the numerical simulations is found in terms of the cratering distribution and its variation with a_M. Estimates of A_1 for the crater density distributions generated by the MBAs and the NEOs are 0.101-0.159 and 0.117, respectively.
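    The fitted asymmetry amplitudes can be turned into a quick relative-density estimate. Combining the leading/trailing factor (1 + A1 cos β) and the pole/equator factor (1 + A2 cos 2ϕ) multiplicatively is an assumption of this sketch, and the A2 value is illustrative; only A1 = 0.117 (the NEO estimate) comes from the abstract.

    ```python
    import math

    A1 = 0.117   # NEO leading/trailing amplitude quoted in the abstract
    A2 = 0.05    # pole/equator amplitude: illustrative value only

    def relative_crater_density(beta_deg, phi_deg):
        """Relative crater density at apex distance beta and latitude phi,
        multiplying the two asymmetry factors (an assumption, for illustration)."""
        beta = math.radians(beta_deg)
        phi = math.radians(phi_deg)
        return (1 + A1 * math.cos(beta)) * (1 + A2 * math.cos(2 * phi))

    apex = relative_crater_density(0.0, 0.0)      # apex of motion, on the equator
    antapex = relative_crater_density(180.0, 0.0)
    print(apex / antapex)                          # leading/trailing contrast
    ```

    The apex/antapex ratio reduces to (1 + A1)/(1 - A1), which is the kind of measurable contrast the abstract proposes for reproducing the bombardment conditions.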

  12. 3D Staggered-Grid Finite-Difference Simulation of Acoustic Waves in Turbulent Moving Media

    NASA Astrophysics Data System (ADS)

    Symons, N. P.; Aldridge, D. F.; Marlin, D.; Wilson, D. K.; Sullivan, P.; Ostashev, V.

    2003-12-01

Acoustic wave propagation in a three-dimensional heterogeneous moving atmosphere is accurately simulated with a numerical algorithm recently developed under the DOD Common High Performance Computing Software Support Initiative (CHSSI). Sound waves within such a dynamic environment are mathematically described by a set of four coupled, first-order partial differential equations governing small-amplitude fluctuations in pressure and particle velocity. The system is rigorously derived from fundamental principles of continuum mechanics, ideal-fluid constitutive relations, and reasonable assumptions that the ambient atmospheric motion is adiabatic and divergence-free. An explicit, time-domain, finite-difference (FD) numerical scheme is used to solve the system for both pressure and particle velocity wavefields. The atmosphere is characterized by 3D gridded models of sound speed, mass density, and the three components of the wind velocity vector. Dependent variables are stored on staggered spatial and temporal grids, and the centered FD operators possess 2nd-order and 4th-order space/time accuracy. Accurate sound wave simulation is achieved provided grid intervals are chosen appropriately: the gridding must be fine enough to reduce numerical dispersion artifacts to an acceptable level and maintain stability. The algorithm is designed to execute on parallel computational platforms by utilizing a spatial domain-decomposition strategy. Currently, the algorithm has been validated on four different computational platforms, and parallel scalability of approximately 85% has been demonstrated. Comparisons with analytic solutions for uniform and vertically stratified wind models indicate that the FD algorithm generates accurate results with either a vanishing-pressure or vanishing-vertical-particle-velocity boundary condition. Simulations are performed using a kinematic turbulence wind profile developed with the quasi-wavelet method. In addition, preliminary results are presented using high-resolution 3D dynamic turbulent flowfields generated by a large-eddy simulation model of a stably stratified planetary boundary layer. Sandia National Laboratories is operated by Sandia Corporation, a Lockheed Martin Company, for the USDOE under contract 94-AL85000.
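    The staggered-grid idea is easiest to see in one dimension. The sketch below leapfrogs pressure (integer nodes) and particle velocity (half nodes) for the linearized acoustic system ∂p/∂t = -ρc² ∂v/∂x, ∂v/∂t = -(1/ρ) ∂p/∂x with 2nd-order centered differences; the quiescent homogeneous medium and rigid ends are simplifications of the full 3D moving-atmosphere system described above.

    ```python
    import numpy as np

    nx, dx = 400, 1.0                  # grid points and spacing (m)
    c, rho = 343.0, 1.2                # sound speed (m/s), density (kg/m^3)
    dt = 0.5 * dx / c                  # satisfies the CFL stability limit c*dt/dx <= 1

    p = np.exp(-0.01 * (np.arange(nx) - nx // 2) ** 2)   # initial pressure pulse
    v = np.zeros(nx + 1)               # velocity on staggered half nodes; rigid ends

    for _ in range(200):
        # velocity update from the pressure gradient (interior half nodes)
        v[1:-1] -= dt / (rho * dx) * np.diff(p)
        # pressure update from the velocity divergence
        p -= dt * rho * c**2 / dx * np.diff(v)

    # the initial pulse splits into two half-amplitude pulses travelling
    # in opposite directions, as expected from d'Alembert's solution
    ```

    Refining dx (and dt with it) reduces the numerical dispersion trailing the pulses, which is the grid-interval trade-off the abstract describes.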

  13. A domain-specific design architecture for composite material design and aircraft part redesign

    NASA Technical Reports Server (NTRS)

    Punch, W. F., III; Keller, K. J.; Bond, W.; Sticklen, J.

    1992-01-01

Advanced composites have been targeted as a 'leapfrog' technology that would provide a unique global competitive position for U.S. industry. Composites are unique in their requirement for an integrated approach to the designing, manufacturing, and marketing of products developed utilizing the new materials of construction. Numerous studies extending across the entire economic spectrum of the United States, from aerospace to military to durable goods, have identified composites as a 'key' technology. In general there have been two approaches to composite construction: build models of a given composite material, then determine characteristics of the material via numerical simulation and empirical testing; and experience-directed construction of fabrication plans for building composites with given properties. The first route sets a goal to capture basic understanding of a device (the composite) by use of a rigorous mathematical model; the second attempts to capture the expertise about the process of fabricating a composite, to date typically expressed at a surface level in a rule-based system. From an AI perspective, these two research lines are attacking distinctly different problems, and both tracks have current limitations. The mathematical modeling approach has yielded a wealth of data, but a large number of simplifying assumptions are needed to make numerical simulation tractable. Likewise, although surface-level expertise about how to build a particular composite may yield important results, recent trends in the KBS area are towards augmenting surface-level problem solving with deeper-level knowledge. Many of the relative advantages of composites, e.g., the strength:weight ratio, are most prominent when the entire component is designed as a unitary piece. The bottleneck in undertaking such unitary design lies in the difficulty of the redesign task: designing the fabrication protocols for a complex-shaped, thick-section composite is currently very difficult. It is in fact this difficulty that our research addresses.

  14. Fracture Propagation, Fluid Flow, and Geomechanics of Water-Based Hydraulic Fracturing in Shale Gas Systems and Electromagnetic Geophysical Monitoring of Fluid Migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jihoon; Um, Evan; Moridis, George

    2014-12-01

    We investigate fracture propagation induced by hydraulic fracturing with water injection, using numerical simulation. For rigorous, full 3D modeling, we employ a numerical method that can model failure resulting from tensile and shear stresses, dynamic nonlinear permeability, leak-off in all directions, and thermo-poro-mechanical effects with the double-porosity approach. Our numerical results indicate that fracture propagation is not the same as propagation of the water front, because fracturing is governed by geomechanics, whereas water saturation is determined by fluid flow. At early times, the water saturation front is almost identical to the fracture tip, suggesting that the fracture is mostly filled with injected water. However, at late times, advance of the water front is retarded compared with fracture propagation, yielding a significant gap between the water front and the fracture tip, which is filled with reservoir gas. We also find considerable leak-off of water to the reservoir. Because of this inconsistency between the fracture volume and the volume of injected water, the fracture length cannot be properly calculated when it is estimated under the simple assumption that the fracture is fully saturated with injected water. As an example of coupled flow-geomechanical responses, we identify pressure fluctuation under constant water injection: hydraulic fracturing is itself a set of many failure processes, in which pressure drops whenever failure occurs, but the fluctuation decreases as the fracture length grows. We also study the application of electromagnetic (EM) geophysical methods, because these methods are highly sensitive to changes in porosity and pore-fluid properties caused by water injection into gas reservoirs. Employing a 3D finite-element EM geophysical simulator, we evaluate the sensitivity of the crosswell EM method for monitoring fluid movements in shaly reservoirs.
    For this sensitivity evaluation, reservoir models are generated with the coupled flow-geomechanical simulator and are transformed via a rock-physics model into electrical conductivity models. It is shown that the anomalous conductivity distribution in the resulting models is closely related to injected water saturation, but not to newly created unsaturated fractures. Our numerical modeling experiments demonstrate that the crosswell EM method can be highly sensitive to conductivity changes that directly indicate the migration pathways of the injected fluid. Accordingly, the EM method can serve as an effective monitoring tool for the distribution of injected fluids (i.e., migration pathways) during hydraulic fracturing operations.
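    The saturation-to-conductivity step of such a rock-physics transformation is often done with Archie's law. The abstract does not say which rock-physics model the authors used, so the functional form and all parameter values below are illustrative assumptions, not values from the study.

    ```python
    # Hypothetical sketch of a rock-physics mapping from flow-simulation output
    # (porosity, water saturation) to bulk electrical conductivity via Archie's
    # law. All parameter values are illustrative, not from the study.
    def archie_conductivity(porosity, s_w, sigma_w=3.0, a=1.0, m=2.0, n=2.0):
        """Bulk conductivity (S/m) from porosity and water saturation s_w,
        with brine conductivity sigma_w and Archie exponents a, m, n."""
        return (sigma_w / a) * porosity**m * s_w**n

    # Injected water raises saturation near the fracture, so the flooded zone
    # becomes markedly more conductive than the background shale:
    sigma_background = archie_conductivity(0.08, 0.2)
    sigma_flooded = archie_conductivity(0.08, 0.9)
    ```

    The large conductivity contrast between flooded and background cells is exactly what makes the crosswell EM method sensitive to the injected-water pathways rather than to dry fracture surfaces.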

  15. Metrology of deep trench etched memory structures using 3D scatterometry

    NASA Astrophysics Data System (ADS)

    Reinig, Peter; Dost, Rene; Moert, Manfred; Hingst, Thomas; Mantz, Ulrich; Moffitt, Jasen; Shakya, Sushil; Raymond, Christopher J.; Littau, Mike

    2005-05-01

    Scatterometry is receiving considerable attention as an emerging optical metrology in the silicon industry. One area of progress in deploying these powerful measurements in process control is performing measurements on real device structures, as opposed to limiting scatterometry measurements to periodic structures, such as line-space gratings, placed in the wafer scribe. In this work we discuss applications of 3D scatterometry to the measurement of advanced trench memory devices. This is a challenging and complex scatterometry application that requires exceptionally high-performance computational abilities. In order to represent the physical device, the relatively tall structures require a large number of slices in the rigorous coupled wave analysis (RCWA) theoretical model. This is complicated further by the presence of an amorphous silicon hard mask on the surface, to which the scattered reflectance is highly sensitive and which therefore needs to be modeled in detail. The overall structure comprises several layers, with the trenches presenting a complex bow-shaped sidewall that must be measured. Finally, the double periodicity of the structures demands significantly greater computational capabilities. Our results demonstrate that angular scatterometry is sensitive to the key parameters of interest. The influence of further model parameters and parameter cross-correlations has to be carefully taken into account. Profile results obtained by non-library optimization methods compare favorably with cross-section SEM images. Generating a model library suitable for process control, which is preferred for precision, presents numerical throughput challenges. Details are discussed regarding library generation approaches and strategies for reducing the numerical overhead. Scatterometry and SEM results are compared, leading to conclusions about the feasibility of this advanced application.

  16. Computations of Lifshitz-van der Waals interaction energies between irregular particles and surfaces at all separations for resuspension modelling

    NASA Astrophysics Data System (ADS)

    Priye, Aashish; Marlow, William H.

    2013-10-01

    The phenomenon of particle resuspension plays a vital role in numerous fields. Among the many aspects of particle resuspension dynamics, a dominant concern is the accurate description and formulation of the van der Waals (vdW) interactions between the particle and the substrate. Current models treat adhesion by incorporating a material-dependent Hamaker constant, which relies on Hamaker's heuristic two-body interactions. However, this assumption of pairwise summation of interaction energies can lead to significant errors in condensed matter because it does not take into account many-body interaction and retardation effects. To address these issues, an approach based on Lifshitz's continuum theory of vdW interactions has been developed to calculate the principal many-body interactions between arbitrary geometries at all separation distances to a high degree of accuracy. We have applied this numerical implementation to calculate the many-body vdW interactions between spherical particles and surfaces with sinusoidally varying roughness profiles, and also for non-spherical particles (cubes, cylinders, tetrahedra, etc.) oriented differently with respect to the surface. Our calculations reveal that increasing the surface roughness amplitude decreases the adhesion force, and that non-spherical particles adhere to surfaces more strongly when their flatter sides are oriented towards the surface. Such practical shapes and structures of particle-surface systems have not previously been considered in resuspension models, and this rigorous treatment of vdW interactions provides more realistic adhesion forces between the particle and the surface, which can then be coupled with computational fluid dynamics models to improve the predictive capabilities of particle resuspension dynamics.
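    For contrast with the Lifshitz-based treatment, the heuristic Hamaker approach that the abstract criticizes reduces, for a smooth sphere above a flat plate at close (non-retarded) separation, to a simple closed form. The Hamaker constant, particle radius, and separation below are illustrative assumptions, not values from the study.

    ```python
    # Sketch of the pairwise-additive (Hamaker) adhesion force that the
    # Lifshitz many-body calculation above replaces. Values are illustrative.
    def hamaker_sphere_plate_force(A, R, D):
        """Non-retarded vdW attraction (N) between a sphere of radius R (m)
        and a flat plate at separation D (m), Hamaker constant A (J):
        F = A * R / (6 * D**2)."""
        return A * R / (6.0 * D**2)

    # Typical textbook-scale inputs: A ~ 1e-19 J, a 1-micron sphere,
    # contact separation ~0.4 nm.
    F = hamaker_sphere_plate_force(A=1e-19, R=1e-6, D=4e-10)
    ```

    The rigorous treatment in the paper differs precisely where this formula is weakest: at larger separations (retardation), for rough surfaces, and for non-spherical shapes, where pairwise summation over- or under-predicts the adhesion force.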

  17. Crack propagation and arrest in CFRP materials with strain softening regions

    NASA Astrophysics Data System (ADS)

    Dilligan, Matthew Anthony

    Understanding the growth and arrest of cracks in composite materials is critical for their effective utilization in fatigue-sensitive and damage-susceptible applications such as primary aircraft structures. Local tailoring of the laminate stack to provide crack arrest capacity intermediate to major structural components has been investigated and demonstrated since some of the earliest efforts in composite aerostructural design, but to date no rigorous model of the crack arrest mechanism has been developed to allow effective sizing of these features. To address this shortcoming, the previous work in the field is reviewed, with particular attention to the analysis methodologies proposed for similar arrest features. The damage and arrest processes active in such features are investigated, and various models of these processes are discussed and evaluated. Governing equations are derived based on a proposed mechanistic model of the crack arrest process. The derived governing equations are implemented in a numerical model, and a series of simulations is performed to ascertain the general characteristics of the proposed model and allow qualitative comparison to existing experimental results. The sensitivity of the model and the arrest process to various parameters is investigated, and preliminary conclusions regarding the optimal feature configuration are developed. To address deficiencies in the available material and experimental data, a series of coupon tests covering a range of arrest zone configurations is developed and conducted. Test results are discussed and analyzed, with a particular focus on identification of the proposed failure and arrest mechanisms. Utilizing the experimentally derived material properties, the tests are reproduced with both the developed numerical tool and an FEA-based implementation of the arrest model. Correlation between the simulated and experimental results is analyzed, and future avenues of investigation are identified.
Utilizing the developed model, a sensitivity study is conducted to assess the current proposed arrest configuration. Optimum distribution and sizing of the arrest zones is investigated, and general design guidelines are developed.

  18. Mathematical Rigor vs. Conceptual Change: Some Early Results

    NASA Astrophysics Data System (ADS)

    Alexander, W. R.

    2003-05-01

    Results from two different pedagogical approaches to teaching introductory astronomy at the college level will be presented. The first of these approaches is a descriptive, conceptually based approach that emphasizes conceptual change. This descriptive class is typically an elective for non-science majors. The other approach is a mathematically rigorous treatment that emphasizes problem solving and is designed to prepare students for further study in astronomy. The mathematically rigorous class is typically taken by science majors; it also fulfills an elective science requirement for these majors. The Astronomy Diagnostic Test version 2 (ADT 2.0) was used as an assessment instrument, since its validity and reliability have been investigated by previous researchers. The ADT 2.0 was administered as both a pre-test and a post-test to both groups. Initial results show no significant difference between the two groups on the post-test. However, there is a slightly greater improvement between pre- and post-testing for the descriptive class than for the mathematically rigorous course. Great care was taken to account for variables, including selection of text, class format, and instructor differences. Results indicate that the mathematically rigorous model does not improve conceptual understanding any better than the conceptual change model. Additional results indicate a gender bias in favor of males, similar to that measured by previous investigators. This research has been funded by the College of Science and Mathematics at James Madison University.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidler, Rolf, E-mail: rsidler@gmail.com; Carcione, José M.; Holliger, Klaus

    We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, with a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain, and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.

  20. From virtual clustering analysis to self-consistent clustering analysis: a mathematical study

    NASA Astrophysics Data System (ADS)

    Tang, Shaoqiang; Zhang, Lei; Liu, Wing Kam

    2018-03-01

    In this paper, we propose a new homogenization algorithm, virtual clustering analysis (VCA), and provide a mathematical framework for the recently proposed self-consistent clustering analysis (SCA) (Liu et al. in Comput Methods Appl Mech Eng 306:319-341, 2016). In the mathematical theory, we clarify the key assumptions and ideas of VCA and SCA, and derive the continuous and discrete Lippmann-Schwinger equations. Based on the key postulate that material points which once respond similarly will always respond similarly, clustering is performed in an offline stage by machine learning techniques (k-means and SOM), which facilitates a substantial reduction of computational complexity in the online predictive stage. The clear mathematical setup allows, for the first time, a convergence study of clustering refinement in one space dimension. Convergence is proved rigorously and, from numerical investigations, is found to be of second order. Furthermore, we propose to suitably enlarge the domain in VCA so that, by virtue of Saint-Venant's principle, the boundary terms in the Lippmann-Schwinger equation may be neglected. These terms were not accounted for in the original SCA paper, and we find that they may well be responsible for the numerical dependency on the choice of reference material property. Since VCA enhances accuracy by overcoming this modeling error, and reduces numerical cost by avoiding the outer-loop iteration that SCA needs to attain material-property consistency, its efficiency is expected to be even higher than that of the recently proposed SCA algorithm.
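    The offline clustering stage can be sketched with a plain k-means pass over a scalar response field. The 1D setting, the data, and the cluster count below are illustrative assumptions for exposition, not the SCA/VCA implementation (which clusters strain-concentration tensors over a 3D microstructure).

    ```python
    # Minimal sketch of the offline stage: group material points by a scalar
    # response value with k-means, so the online stage can work per cluster
    # instead of per point. Data and cluster count are illustrative.
    import random

    def kmeans_1d(values, k, iters=50, seed=0):
        rng = random.Random(seed)
        centers = rng.sample(values, k)
        for _ in range(iters):
            # Assign each point to its nearest center.
            labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                      for v in values]
            # Recompute each center as the mean of its members.
            for j in range(k):
                members = [v for v, l in zip(values, labels) if l == j]
                if members:
                    centers[j] = sum(members) / len(members)
        return centers, labels

    # Two well-separated response groups collapse into two clusters:
    data = [0.1, 0.12, 0.11, 0.9, 0.95, 0.92]
    centers, labels = kmeans_1d(data, k=2)
    ```

    With, say, a million material points reduced to a few dozen clusters, the online Lippmann-Schwinger solve operates on the cluster averages, which is the source of the computational savings the abstract describes.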

  1. Approximate analytic method for high-apogee twelve-hour orbits of artificial Earth's satellites

    NASA Astrophysics Data System (ADS)

    Vashkovyaka, M. A.; Zaslavskii, G. S.

    2016-09-01

    We propose an approach to studying the evolution of high-apogee twelve-hour orbits of artificial Earth satellites. We describe the parameters of the satellite motion model, in which the principal gravitational perturbations of the Moon and Sun, the nonsphericity of the Earth, and perturbations from the light pressure force are approximately taken into account. To solve the system of averaged equations describing the evolution of the orbit parameters of an artificial satellite, we use both numerical and analytic methods. To select initial parameters of the twelve-hour orbit, we assume that the ground track of the satellite is stable. Results obtained by the analytic method and by numerical integration of the evolution system are compared. For intervals of several years, we obtain estimates of oscillation periods and amplitudes for the orbital elements. To verify the results and estimate the precision of the method, we use numerical integration of the rigorous (non-averaged) equations of motion of the artificial satellite, which take the forces acting on the satellite into account substantially more completely and precisely. The described method can be applied not only to investigating the orbit evolution of artificial Earth satellites; it can also be applied to satellites of other planets of the Solar System, should the corresponding research problem arise and this special class of resonant satellite orbits be used.
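    The verification step, integrating the rigorous (non-averaged) equations of motion, can be illustrated in miniature with an unperturbed two-body problem and an RK4 integrator, checking that orbital energy is conserved. Units, initial conditions, and step size below are illustrative (mu = 1, circular orbit); the actual model adds lunisolar, oblateness, and radiation-pressure perturbations.

    ```python
    # Toy illustration of verifying an orbit integrator: propagate the
    # non-averaged two-body equations with RK4 and check energy drift.
    MU = 1.0  # gravitational parameter, arbitrary units

    def deriv(state):
        """Time derivative of [x, y, vx, vy] for planar Keplerian motion."""
        x, y, vx, vy = state
        r3 = (x * x + y * y) ** 1.5
        return [vx, vy, -MU * x / r3, -MU * y / r3]

    def rk4_step(state, dt):
        k1 = deriv(state)
        k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
        k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
        k4 = deriv([s + dt * k for s, k in zip(state, k3)])
        return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

    def energy(state):
        x, y, vx, vy = state
        return 0.5 * (vx * vx + vy * vy) - MU / (x * x + y * y) ** 0.5

    state = [1.0, 0.0, 0.0, 1.0]  # circular orbit of radius 1
    e0 = energy(state)
    for _ in range(10000):
        state = rk4_step(state, 0.001)
    e_drift = abs(energy(state) - e0)  # should stay tiny over ~1.6 orbits
    ```

    Comparing such a direct integration against an averaged model over long intervals is the same kind of cross-check the authors perform, just with the full perturbation forces included.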

  2. An advanced model of heat and mass transfer in the protective clothing - verification

    NASA Astrophysics Data System (ADS)

    Łapka, P.; Furmański, P.

    2016-09-01

    The paper presents advanced mathematical and numerical models of heat and mass transfer in multi-layer protective clothing and in elements of the experimental stand subjected to either a high surroundings temperature or a high radiative heat flux emitted by hot objects. The model includes conductive-radiative heat transfer in the hygroscopic porous fabrics and air gaps, as well as conductive heat transfer in components of the stand. Additionally, water vapour diffusion in the pores and air spaces, as well as phase transition of the bound water in the fabric fibres (sorption and desorption), were accounted for. Thermal radiation was treated in a rigorous way: the semi-transparent absorbing, emitting, and scattering fabrics were assumed to be non-grey, and all optical phenomena at internal and external walls were modelled. The air was assumed transparent. Complex energy and mass balance as well as optical conditions at internal and external interfaces were formulated in order to find exact values of temperatures, vapour densities, and radiation intensities at these interfaces. The resulting highly non-linear coupled system of discrete equations was solved by an in-house iterative algorithm based on the Finite Volume Method. The model was then partially verified against results obtained from commercial software for simplified cases.
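    A minimal analogue of the iterative Finite Volume solution strategy is steady 1D conduction across a slab with fixed face temperatures, relaxed by Jacobi iteration. This toy omits radiation, moisture transport, and multi-layer coupling; grid size and temperatures are illustrative assumptions.

    ```python
    # Toy finite-volume/Jacobi sketch: steady 1D conduction with uniform
    # conductivity between two fixed face temperatures. The real model couples
    # radiation, vapour diffusion, and sorption across many layers.
    def solve_slab(n=20, t_left=300.0, t_right=600.0, iters=20000):
        t = [t_left] * n
        t[-1] = t_right
        for _ in range(iters):
            new = t[:]
            for i in range(1, n - 1):
                # Each interior volume balances conductive fluxes from its
                # two neighbours; with uniform conductivity this is the mean.
                new[i] = 0.5 * (t[i - 1] + t[i + 1])
            t = new
        return t

    temps = solve_slab()
    # The converged steady profile is linear between the two face temperatures.
    ```

    The in-house algorithm in the paper iterates in the same spirit, but on a coupled, non-linear system where temperature, vapour density, and radiation intensity must be updated together at every interface.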

  3. Averaging Theory for Description of Environmental Problems: What Have We Learned?

    PubMed Central

    Miller, Cass T.; Schrefler, Bernhard A.

    2012-01-01

    Advances in Water Resources has been a prime archival source for implementation of averaging theories in changing the scale at which processes of importance in environmental modeling are described. Thus in celebration of the 35th year of this journal, it seems appropriate to assess what has been learned about these theories and about their utility in describing systems of interest. We review advances in understanding and use of averaging theories to describe porous medium flow and transport at the macroscale, an averaged scale that models spatial variability, and at the megascale, an integral scale that only considers time variation of system properties. We detail physical insights gained from the development and application of averaging theory for flow through porous medium systems and for the behavior of solids at the macroscale. We show the relationship between standard models that are typically applied and more rigorous models that are derived using modern averaging theory. We discuss how the results derived from averaging theory that are available can be built upon and applied broadly within the community. We highlight opportunities and needs that exist for collaborations among theorists, numerical analysts, and experimentalists to advance the new classes of models that have been derived. Lastly, we comment on averaging developments for rivers, estuaries, and watersheds. PMID:23393409

  4. A topological proof of chaos for two nonlinear heterogeneous triopoly game models

    NASA Astrophysics Data System (ADS)

    Pireddu, Marina

    2016-08-01

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As concerns the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called "Stretching Along the Paths" technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows us to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among which is a positive topological entropy.

  5. Toward an Integrative Understanding of Social Behavior: New Models and New Opportunities

    PubMed Central

    Blumstein, Daniel T.; Ebensperger, Luis A.; Hayes, Loren D.; Vásquez, Rodrigo A.; Ahern, Todd H.; Burger, Joseph Robert; Dolezal, Adam G.; Dosmann, Andy; González-Mariscal, Gabriela; Harris, Breanna N.; Herrera, Emilio A.; Lacey, Eileen A.; Mateo, Jill; McGraw, Lisa A.; Olazábal, Daniel; Ramenofsky, Marilyn; Rubenstein, Dustin R.; Sakhai, Samuel A.; Saltzman, Wendy; Sainz-Borgo, Cristina; Soto-Gamboa, Mauricio; Stewart, Monica L.; Wey, Tina W.; Wingfield, John C.; Young, Larry J.

    2010-01-01

    Social interactions among conspecifics are a fundamental and adaptively significant component of the biology of numerous species. Such interactions give rise to group living as well as many of the complex forms of cooperation and conflict that occur within animal groups. Although previous conceptual models have focused on the ecological causes and fitness consequences of variation in social interactions, recent developments in endocrinology, neuroscience, and molecular genetics offer exciting opportunities to develop more integrated research programs that will facilitate new insights into the physiological causes and consequences of social variation. Here, we propose an integrative framework of social behavior that emphasizes relationships between ultimate-level function and proximate-level mechanism, thereby providing a foundation for exploring the full diversity of factors that underlie variation in social interactions, and ultimately sociality. In addition to identifying new model systems for the study of human psychopathologies, this framework provides a mechanistic basis for predicting how social behavior will change in response to environmental variation. We argue that the study of non-model organisms is essential for implementing this integrative model of social behavior because such species can be studied simultaneously in the lab and field, thereby allowing integration of rigorously controlled experimental manipulations with detailed observations of the ecological contexts in which interactions among conspecifics occur. PMID:20661457

  6. Modeling time-coincident ultrafast electron transfer and solvation processes at molecule-semiconductor interfaces

    NASA Astrophysics Data System (ADS)

    Li, Lesheng; Giokas, Paul G.; Kanai, Yosuke; Moran, Andrew M.

    2014-06-01

    Kinetic models based on Fermi's Golden Rule are commonly employed to understand photoinduced electron transfer dynamics at molecule-semiconductor interfaces. Implicit in such second-order perturbative descriptions is the assumption that nuclear relaxation of the photoexcited electron donor is fast compared to electron injection into the semiconductor. This approximation breaks down in systems where electron transfer transitions occur on the 100-fs time scale. Here, we present a fourth-order perturbative model that captures the interplay between time-coincident electron transfer and nuclear relaxation processes initiated by light absorption. The model consists of a fairly small number of parameters, which can be derived from standard spectroscopic measurements (e.g., linear absorbance, fluorescence) and/or first-principles electronic structure calculations. Insights provided by the model are illustrated for a two-level donor molecule coupled to both (i) a single acceptor level and (ii) a density of states (DOS) calculated for TiO2 using a first-principles electronic structure theory. These numerical calculations show that second-order kinetic theories fail to capture basic physical effects when the DOS exhibits narrow maxima near the energy of the molecular excited state. Overall, we conclude that the present fourth-order rate formula constitutes a rigorous and intuitive framework for understanding photoinduced electron transfer dynamics that occur on the 100-fs time scale.
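    The second-order (golden-rule) picture that the abstract says breaks down can be sketched in a few lines: the injection rate is proportional to the DOS evaluated at the donor energy, so a narrow DOS maximum makes the rate very sensitive to small shifts of the donor level. All numbers below are hypothetical and in arbitrary units, not parameters from the study.

    ```python
    # Illustrative golden-rule injection rate for a donor level coupled to a
    # semiconductor DOS rho(E). Arbitrary units; hbar set to 1 for simplicity.
    import math

    HBAR = 1.0

    def golden_rule_rate(V, rho_at_donor_energy):
        """k = (2*pi/hbar) * |V|^2 * rho(E_donor)."""
        return 2.0 * math.pi / HBAR * V**2 * rho_at_donor_energy

    def gaussian_dos(E, center=0.0, width=0.05):
        """A narrow Gaussian DOS maximum, normalized to unit area."""
        return (math.exp(-((E - center) / width) ** 2 / 2.0)
                / (width * math.sqrt(2.0 * math.pi)))

    # With a narrow DOS peak, the rate changes by orders of magnitude as the
    # donor level shifts off the maximum -- the regime where the abstract
    # argues a static second-order rate is inadequate, because nuclear
    # relaxation moves the donor energy on the same time scale as injection.
    k_on_peak = golden_rule_rate(0.01, gaussian_dos(0.0))
    k_off_peak = golden_rule_rate(0.01, gaussian_dos(0.15))
    ```

    The fourth-order model in the paper goes beyond this by letting the donor energy relax in time while injection proceeds, rather than evaluating a single static rate.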

  7. Effect of liquid droplets on turbulence in a round gaseous jet

    NASA Technical Reports Server (NTRS)

    Mostafa, A. A.; Elghobashi, S. E.

    1986-01-01

    The main objective of this investigation is to develop a two-equation turbulence model for dilute vaporizing sprays or, in general, for dispersed two-phase flows including the effects of phase changes. The model, which accounts for the interaction between the two phases, is based on rigorously derived equations for the turbulence kinetic energy (K) and its dissipation rate epsilon of the carrier phase, using the momentum equation of that phase. Closure is achieved by modeling the turbulent correlations, up to third order, in the equations of the mean motion, the concentration of the vapor in the carrier phase, and the kinetic energy of turbulence and its dissipation rate for the carrier phase. The governing equations are presented in both the exact and the modeled forms. The governing equations are solved numerically using a finite-difference procedure to test the presented model for the flow of a turbulent axisymmetric gaseous jet laden with either evaporating liquid droplets or solid particles. The predictions include the distribution of the mean velocity, volume fractions of the different phases, concentration of the evaporated material in the carrier phase, turbulence intensity and shear stress of the carrier phase, droplet diameter distribution, and the jet spreading rate. The predictions are in good agreement with the experimental data.

  8. A topological proof of chaos for two nonlinear heterogeneous triopoly game models.

    PubMed

    Pireddu, Marina

    2016-08-01

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As concerns the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called "Stretching Along the Paths" technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows us to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among which is a positive topological entropy.

  9. Analysis of a model of gambiense sleeping sickness in humans and cattle.

    PubMed

    Ndondo, A M; Munganga, J M W; Mwambakana, J N; Saad-Roy, C M; van den Driessche, P; Walo, R O

    2016-01-01

    Human African Trypanosomiasis (HAT) and Nagana in cattle, commonly called sleeping sickness, is caused by trypanosome protozoa transmitted by bites of infected tsetse flies. We present a deterministic model for the transmission of HAT caused by Trypanosoma brucei gambiense between human hosts, cattle hosts and tsetse flies. The model takes into account the growth of the tsetse fly, from its larval stage to the adult stage. Disease in the tsetse fly population is modeled by three compartments, and both the human and cattle populations are modeled by four compartments incorporating the two stages of HAT. We provide a rigorous derivation of the basic reproduction number R0. For R0 < 1, the disease free equilibrium is globally asymptotically stable, thus HAT dies out; whereas (assuming no return to susceptibility) for R0 > 1, HAT persists. Elasticity indices for R0 with respect to different parameters are calculated with baseline parameter values appropriate for HAT in West Africa, indicating parameters that are important for control strategies to bring R0 below 1. Numerical simulations with R0 > 1 show values for the infected populations at the endemic equilibrium, and indicate that with certain parameter values, HAT could not persist in the human population in the absence of cattle.
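    An elasticity index of the kind the abstract computes, e_p = (p / R0) * dR0/dp, can be approximated by central differences when R0 is only available numerically. The R0 formula below is a generic Ross-Macdonald-style vector-host expression with made-up parameter values, NOT the model or baseline values of the paper; it is only there to exercise the elasticity computation.

    ```python
    # Sketch of computing elasticity indices for R0 by central differences.
    # The r0() formula and parameter values are illustrative assumptions.
    import math

    def r0(params):
        """Generic vector-host R0: sqrt(m * a^2 * b * c / (mu_v * gamma_h))."""
        a, b, c, m, mu_v, gamma_h = (params[k] for k in
                                     ("a", "b", "c", "m", "mu_v", "gamma_h"))
        return math.sqrt(m * a**2 * b * c / (mu_v * gamma_h))

    def elasticity(params, name, h=1e-6):
        """e_p = (p / R0) * dR0/dp via a central difference in the
        relative perturbation h."""
        p_hi = dict(params); p_hi[name] = params[name] * (1.0 + h)
        p_lo = dict(params); p_lo[name] = params[name] * (1.0 - h)
        return (r0(p_hi) - r0(p_lo)) / (2.0 * h * r0(params))

    params = {"a": 0.3, "b": 0.5, "c": 0.3,
              "m": 5.0, "mu_v": 0.03, "gamma_h": 0.01}
    # The biting rate a enters squared under the square root, so its
    # elasticity is ~1; the vector ratio m enters linearly, giving ~0.5.
    e_a = elasticity(params, "a")
    e_m = elasticity(params, "m")
    ```

    Ranking parameters by |e_p| in this way is what identifies the most effective targets for control, since a 1% change in a high-elasticity parameter moves R0 the most.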

  10. Analytical modeling of conformal mantle cloaks for cylindrical objects using sub-wavelength printed and slotted arrays

    NASA Astrophysics Data System (ADS)

    Padooru, Yashwanth R.; Yakovlev, Alexander B.; Chen, Pai-Yen; Alù, Andrea

    2012-08-01

    Following the idea of "cloaking by a surface" [A. Alù, Phys. Rev. B 80, 245115 (2009); P. Y. Chen and A. Alù, Phys. Rev. B 84, 205110 (2011)], we present a rigorous analytical model applicable to mantle cloaking of cylindrical objects using 1D and 2D sub-wavelength conformal frequency selective surface (FSS) elements. The model is based on Lorenz-Mie scattering theory which utilizes the two-sided impedance boundary conditions at the interface of the sub-wavelength elements. The FSS arrays considered in this work are composed of 1D horizontal and vertical metallic strips and 2D printed (patches, Jerusalem crosses, and cross dipoles) and slotted structures (meshes, slot-Jerusalem crosses, and slot-cross dipoles). It is shown that the analytical grid-impedance expressions derived for the planar arrays of sub-wavelength elements may be successfully used to model and tailor the surface reactance of cylindrical conformal mantle cloaks. By properly tailoring the surface reactance of the cloak, the total scattering from the cylinder can be significantly reduced, thus rendering the object invisible over the range of frequencies of interest (i.e., at microwaves and far-infrared). The results obtained using our analytical model for mantle cloaks are validated against full-wave numerical simulations.

  11. Numerical study of insect free hovering flight

    NASA Astrophysics Data System (ADS)

    Wu, Di; Yeo, Khoon Seng; Lim, Tee Tai; Fluid lab, Mechanical Engineering, National University of Singapore Team

    2012-11-01

    In this paper we present, for the first time, a computational fluid dynamics study of the three-dimensional flow field around a freely hovering fruit fly, integrated with unsteady FSI analysis and an adaptive flight control system. The FSI model, specific to fruit-fly hovering, couples a structural problem based on Newton's second law with a rigorous CFD solver based on a generalized finite difference method. In contrast to previous hovering flight research, the wing motion employed here is not taken from experimental data but is governed by our proposed control systems. Two types of hovering control strategies, i.e., a stroke-plane adjustment mode and a paddling mode, are explored, each capable of generating the fixed body position and orientation characteristic of hovering flight. Hovering flight associated with multiple wing kinematics and body orientations is shown as well, indicating that the means by which the fruit fly actually maintains hovering may have considerable freedom and therefore might be influenced by many other factors beyond the physical and aerodynamic requirements. Additionally, both the near- and far-field flow and vortex structures agree well with the results of other researchers, demonstrating the reliability of our current model.

  12. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models.

    PubMed

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central in a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson-Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on previous work on the problem of minimizing surface area and on our observations on the coupling between solute molecular mechanical interactions and the continuum solvent. Our studies rigorously justify the self-consistency of the proposed diffuse-interface variational models of implicit solvation.
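
    The sharp-interface free-energy functional described above can be written schematically as follows (the notation is ours, following the variational implicit-solvent literature, and is a sketch rather than the paper's exact formulation; Γ denotes the dielectric boundary, Ω_w the solvent region):

```latex
G[\mathbf{x},\Gamma] \;=\; E_{\mathrm{mm}}(\mathbf{x})
\;+\; \gamma_0\,\mathrm{Area}(\Gamma)
\;+\; \rho_{\mathrm{w}} \int_{\Omega_{\mathrm{w}}(\Gamma)} U_{\mathrm{vdW}}(\mathbf{x};\mathbf{r})\,\mathrm{d}\mathbf{r}
\;+\; E_{\mathrm{elec}}[\mathbf{x},\Gamma]
```

    The four terms correspond, in order, to the four coupled energies named in the abstract: solute molecular mechanics, solute-solvent interfacial energy (with an assumed surface-tension coefficient γ₀), solute-solvent van der Waals interaction (with assumed bulk water density ρ_w), and dielectric-boundary electrostatics.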

  13. Developmental engineering: a new paradigm for the design and manufacturing of cell-based products. Part II: from genes to networks: tissue engineering from the viewpoint of systems biology and network science.

    PubMed

    Lenas, Petros; Moos, Malcolm; Luyten, Frank P

    2009-12-01

    The field of tissue engineering is moving toward a new concept of "in vitro biomimetics of in vivo tissue development." In Part I of this series, we proposed a theoretical framework integrating the concepts of developmental biology with those of process design to provide the rules for the design of biomimetic processes. We named this methodology "developmental engineering" to emphasize that it is not the tissue but the process of in vitro tissue development that has to be engineered. To formulate the process design rules in a rigorous way that will allow a computational design, we should refer to mathematical methods to model the biological processes taking place in vitro. Tissue functions cannot be attributed to individual molecules but rather to complex interactions between the numerous components of a cell and interactions between cells in a tissue that form a network. For tissue engineering to advance to the level of a technologically driven discipline amenable to well-established principles of process engineering, a scientifically rigorous formulation is needed of the general design rules, so that the behavior of the networks of genes, proteins, or cells that govern the unfolding of developmental processes can be related to the design parameters. Now that sufficient experimental data exist to construct plausible mathematical models of many biological control circuits, explicit hypotheses can be evaluated using computational approaches to facilitate process design. Recent progress in systems biology has shown that the empirical concepts of developmental biology that we used in Part I to extract the rules of biomimetic process design can be expressed in rigorous mathematical terms. This allows the accurate characterization of manufacturing processes in tissue engineering as well as the properties of the artificial tissues themselves.
In addition, network science has recently shown that the behavior of biological networks strongly depends on their topology, and it has developed the necessary concepts and methods to describe this topology, thereby allowing a deeper understanding of the behavior of networks during biomimetic processes. These advances thus open the door to a transition for tissue engineering from a substantially empirical endeavor to a technology-based discipline comparable to other branches of engineering.

  14. Robust numerical electromagnetic eigenfunction expansion algorithms

    NASA Astrophysics Data System (ADS)

    Sainath, Kamalesh

    This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. 
The main thesis contributions, leading to a sensor- and geophysical-environment-robust numerical modeling algorithm, are as follows: (1) a simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2); (2) sensor- and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition, and sub-region-dependent integration order (Chapter 3); (3) integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails; (4) robust in-situ (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5); and (5) analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) avoiding computationally intensive critical-point location and tracking (computation time savings); (2) sensor- and material-robust curbing of the integrand's oscillatory and slow-decay behavior, as well as prevention of undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits); (3) sensor- and material-robust reduction (or, for GLQ, elimination) of integral truncation error; (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10- to 1000-fold compute-speed acceleration also realized for distributed-source computations); and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, we ask: are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries as well?
This question is explored in Chapter 7 using a Transformation Optics-based approach, which allows one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. The technique, however, leads to spurious wave scattering, and the resulting degradation of computational accuracy requires analysis. The mathematical exposition, and the exhaustive simulation-based study and analysis of the limitations, of this novel tilted-layer modeling formulation are Chapter 7's main contributions.

  15. Experimental And Numerical Evaluation Of Gaseous Agents For Suppressing Cup-Burner Flames In Low Gravity

    NASA Technical Reports Server (NTRS)

    Takahashi, Fumiaki; Linteris, Gregory T.; Katta, Viswanath R.

    2003-01-01

    Longer duration missions to the moon, to Mars, and on the International Space Station (ISS) increase the likelihood of accidental fires. NASA's fire safety program for human-crewed space flight is based largely on removing ignition sources and controlling the flammability of the material on-board. There is ongoing research to improve the flammability characterization of materials in low gravity; however, very little research has been conducted on fire suppression in the low-gravity environment. Although the existing suppression systems aboard the Space Shuttle (halon 1301, CF3Br) and the ISS (CO2 or water-based foam) may continue to be used, alternative effective agents or techniques are desirable for long-duration missions. The goals of the present investigation are to: (1) understand the physical and chemical processes of fire suppression in various gravity and O2 levels simulating spacecraft, Mars, and moon missions; (2) provide rigorous testing of analytical models, which include detailed combustion-suppression chemistry and radiation sub-models, so that the models can be used to interpret (and predict) the suppression behavior in low gravity; and (3) provide basic research results useful for advances in space fire safety technology, including new fire-extinguishing agents and approaches.

  16. Crustal fingering: solidification on a viscously unstable interface

    NASA Astrophysics Data System (ADS)

    Fu, Xiaojing; Jimenez-Martinez, Joaquin; Cueto-Felgueroso, Luis; Porter, Mark; Juanes, Ruben

    2017-11-01

    Motivated by the formation of gas hydrates in seafloor sediments, here we study the volumetric expansion of a less viscous gas pocket into a more viscous liquid when the gas-liquid interfaces readily solidify due to hydrate formation. We first present a high-pressure microfluidic experiment to study the depressurization-controlled expansion of a xenon gas pocket in a water-filled Hele-Shaw cell. The evolution of the pocket is controlled by three processes: (1) volumetric expansion of the gas; (2) rupturing of existing hydrate films on the gas-liquid interface; and (3) formation of new hydrate films. These result in gas fingering leading to a complex labyrinth pattern. To reproduce these observations, we propose a phase-field model that describes the formation of a hydrate shell on viscously unstable interfaces. We design the free energy of the three-phase system to rigorously account for interfacial effects, gas compressibility, and phase transitions. We model the hydrate shell as a highly viscous fluid with shear-thinning rheology to reproduce the shell-rupturing behavior. We present high-resolution numerical simulations of the model, which illustrate the emergence of complex crustal fingering patterns as a result of gas expansion dynamics modulated by hydrate growth at the interface.

  17. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review

    DOE PAGES

    Zuo, Chao; Huang, Lei; Zhang, Minliang; ...

    2016-05-06

    In fringe projection profilometry (FPP), temporal phase unwrapping is an essential procedure to recover an unambiguous absolute phase even in the presence of large discontinuities or spatially isolated surfaces. So far, three groups of temporal phase unwrapping algorithms have typically been proposed in the literature: the multi-frequency (hierarchical) approach, the multi-wavelength (heterodyne) approach, and the number-theoretical approach. In this paper, the three methods are investigated and compared in detail by analytical, numerical, and experimental means. The basic principles and recent developments of the three kinds of algorithms are first reviewed. Then, the reliability of the different phase unwrapping algorithms is compared based on a rigorous stochastic noise model. Moreover, this noise model is used to predict the optimum fringe period for each unwrapping approach, which is a key factor governing the phase measurement accuracy in FPP. Simulations and experimental results verify the correctness and validity of the proposed noise model as well as the prediction scheme. The results show that multi-frequency temporal phase unwrapping provides the best unwrapping reliability, while the multi-wavelength approach is the most susceptible to noise-induced unwrapping errors.
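
    As a concrete sketch of the multi-frequency (hierarchical) idea, a minimal two-frequency temporal unwrapping step is shown below; the variable names and the reduction to two frequencies are ours, not the paper's notation.

```python
import numpy as np

def unwrap_two_freq(phi_high, phi_low, f_high, f_low):
    """Unwrap a wrapped high-frequency phase phi_high (radians, in [0, 2*pi))
    using an unambiguous low-frequency phase phi_low.

    The low-frequency phase, scaled by f_high/f_low, predicts the absolute
    high-frequency phase; rounding the difference to a multiple of 2*pi
    yields the fringe order k."""
    k = np.round((f_high / f_low * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Example: true absolute phase of 20 rad seen with 16 fringes,
# plus a unit-frequency (single-fringe, hence unambiguous) reference.
true_phase = 20.0
phi_low = true_phase / 16.0          # unambiguous low-frequency phase
phi_high = true_phase % (2 * np.pi)  # wrapped high-frequency measurement
recovered = unwrap_two_freq(phi_high, phi_low, 16.0, 1.0)
```

    The hierarchical approach chains such steps over several frequencies so that noise in the low-frequency phase, amplified by the frequency ratio, never exceeds half a fringe.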

  18. Numerical Modeling of Sliding Stability of RCC dam

    NASA Astrophysics Data System (ADS)

    Mughieda, O.; Hazirbaba, K.; Bani-Hani, K.; Daoud, W.

    2017-06-01

    Stability and stress analyses are the most important elements requiring rigorous consideration in the design of a dam structure. Stability of dams against sliding is crucial because the substantial horizontal load demands that sufficient and safe resistance be developed through mobilization of adequate shearing forces along the base of the dam foundation. In the current research, the static sliding stability of a roller-compacted-concrete (RCC) dam was modelled using the finite element method to investigate stability against sliding. Commercially available finite element software (SAP 2000) was used to analyze stresses in the body of the dam and the foundation. A linear finite element static analysis was performed in which linear plane-strain isoparametric four-node elements were used for modelling the dam-foundation system. The analysis was carried out assuming that no slip occurs at the interface between the dam and the foundation. The usual static loading condition was applied for the static analysis. The greatest tension was found to develop in the rock adjacent to the toe of the upstream slope. The factor of safety against sliding along the entire base of the dam was found to be greater than 1 (FS>1) for static loading conditions.
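
    The sliding factor of safety referred to above is commonly computed with a shear-friction formula; the sketch below uses hypothetical loads and strength parameters, not the values of the studied RCC dam.

```python
import math

def sliding_fs(cohesion_kpa, base_area_m2, normal_kn, uplift_kn, phi_deg, horizontal_kn):
    """Factor of safety against sliding along the dam base (shear-friction form):
    FS = (c*A + (N - U)*tan(phi)) / H."""
    resisting = cohesion_kpa * base_area_m2 \
        + (normal_kn - uplift_kn) * math.tan(math.radians(phi_deg))
    return resisting / horizontal_kn

# Hypothetical dam section, per metre width: c = 100 kPa over a 50 m base,
# 40 MN weight, 10 MN uplift, 35 deg friction angle, 15 MN horizontal thrust.
fs = sliding_fs(100.0, 50.0, 40000.0, 10000.0, 35.0, 15000.0)
```

    A value of FS > 1 indicates that mobilized shear resistance exceeds the driving horizontal load, the condition the abstract reports for static loading.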

  19. Line-source excitation of realistic conformal metasurface cloaks

    NASA Astrophysics Data System (ADS)

    Padooru, Yashwanth R.; Yakovlev, Alexander B.; Chen, Pai-Yen; Alù, Andrea

    2012-11-01

    Following our recently introduced analytical tools to model and design conformal mantle cloaks based on metasurfaces [Padooru et al., J. Appl. Phys. 112, 034907 (2012)], we investigate their performance and physical properties when excited by an electric line source placed in their close proximity. We consider metasurfaces formed by 2-D arrays of slotted (meshes and Jerusalem cross slots) and printed (patches and Jerusalem crosses) sub-wavelength elements. The electromagnetic scattering analysis is carried out using a rigorous analytical model, which utilizes the two-sided impedance boundary conditions at the interface of the sub-wavelength elements. It is shown that the homogenized grid-impedance expressions, originally derived for planar arrays of sub-wavelength elements and plane-wave excitation, may be successfully used to model and tailor the surface reactance of cylindrical conformal mantle cloaks illuminated by near-field sources. Our closed-form analytical results are in good agreement with full-wave numerical simulations, up to sub-wavelength distances from the metasurface, confirming that mantle cloaks may be very effective to suppress the scattering of moderately sized objects, independent of the type of excitation and point of observation. We also discuss the dual functionality of these metasurfaces to boost radiation efficiency and directivity from confined near-field sources.

  20. Validation of the Jarzynski relation for a system with strong thermal coupling: an isothermal ideal gas model.

    PubMed

    Baule, A; Evans, R M L; Olmsted, P D

    2006-12-01

    We revisit the paradigm of an ideal gas under isothermal conditions. A moving piston performs work on an ideal gas in a container that is strongly coupled to a heat reservoir. The thermal coupling is modeled by stochastic scattering at the boundaries. In contrast to recent studies of an adiabatic ideal gas with a piston [R.C. Lua and A.Y. Grosberg, J. Phys. Chem. B 109, 6805 (2005); I. Bena, Europhys. Lett. 71, 879 (2005)], the container and piston stay in contact with the heat bath during the work process. Under this condition the heat reservoir as well as the system depend on the work parameter lambda and microscopic reversibility is broken for a moving piston. Our model is thus not included in the class of systems for which the nonequilibrium work theorem has been derived rigorously either by Hamiltonian [C. Jarzynski, J. Stat. Mech. (2004) P09005] or stochastic methods [G.E. Crooks, J. Stat. Phys. 90, 1481 (1998)]. Nevertheless the validity of the nonequilibrium work theorem is confirmed both numerically for a wide range of parameter values and analytically in the limit of a very fast moving piston, i.e., in the far nonequilibrium regime.
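
    The nonequilibrium work theorem being tested, ⟨exp(−βW)⟩ = exp(−βΔF), can be checked numerically on a toy Gaussian work distribution, for which ΔF = μ − βσ²/2 holds in closed form. This is a generic illustration of the identity, not the paper's piston model.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, mu, sigma = 1.0, 2.0, 1.0

# For Gaussian work W ~ N(mu, sigma^2), the Jarzynski average gives
# dF = mu - beta*sigma^2/2 exactly (here 1.5).
W = rng.normal(mu, sigma, 500_000)
dF_estimate = -np.log(np.mean(np.exp(-beta * W))) / beta
dF_exact = mu - beta * sigma**2 / 2
```

    Note that the exponential average is dominated by rare low-work trajectories, which is why large samples are needed even in this benign Gaussian case.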

  1. Redefinition of the self-bias voltage in a dielectrically shielded thin sheath RF discharge

    NASA Astrophysics Data System (ADS)

    Ho, Teck Seng; Charles, Christine; Boswell, Rod

    2018-05-01

    In a geometrically asymmetric capacitively coupled discharge where the powered electrode is shielded from the plasma by a layer of dielectric material, the self-bias manifests as a nonuniform negative charging in the dielectric rather than on the blocking capacitor. In the thin sheath regime, where the ion transit time across the powered sheath is on the order of or less than the radiofrequency (RF) period, the plasma potential is observed to respond asymmetrically to extraneous impedances in the RF circuit. Consequently, the RF waveform on the plasma-facing surface of the dielectric is unknown, and the behaviour of the powered sheath is not easily predictable. Sheath circuit models become inadequate for describing this class of discharges, and a comprehensive fluid, electrical, and plasma numerical model is employed to accurately quantify this behaviour. The traditional definition of the self-bias voltage as the mean of the RF waveform is shown to be erroneous in this regime. Instead, using the maxima of the RF waveform provides a more rigorous definition, given its correlation with the ion dynamics in the powered sheath. This is supported by an RF circuit model derived from the computational fluid dynamics and plasma simulations.
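
    A toy numerical illustration of the two definitions contrasted above: for a hypothetical distorted RF waveform with a −30 V offset (values made up for illustration, not taken from the simulations), the mean-based and maxima-based self-bias values differ substantially.

```python
import numpy as np

# Hypothetical distorted RF waveform (volts) over one period, with harmonics
# and a -30 V DC offset standing in for dielectric charging.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
v = 50.0 * np.sin(2 * np.pi * t) + 10.0 * np.sin(4 * np.pi * t) - 30.0

v_bias_mean = v.mean()  # traditional definition: mean of the waveform
v_bias_max = v.max()    # maxima-based definition discussed in the abstract
```

    The mean recovers only the DC offset, whereas the maxima track the most positive surface excursion, which is what correlates with the ion dynamics in the powered sheath.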

  2. The Transfer of Resonance Line Polarization with Partial Frequency Redistribution in the General Hanle–Zeeman Regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballester, E. Alsina; Bueno, J. Trujillo; Belluzzi, L., E-mail: ealsina@iac.es

    2017-02-10

    The spectral line polarization encodes a wealth of information about the thermal and magnetic properties of the solar atmosphere. Modeling the Stokes profiles of strong resonance lines is, however, a complex problem both from a theoretical and computational point of view, especially when partial frequency redistribution (PRD) effects need to be taken into account. In this work, we consider a two-level atom in the presence of magnetic fields of arbitrary intensity (Hanle–Zeeman regime) and orientation, both deterministic and micro-structured. Working within the framework of a rigorous PRD theoretical approach, we have developed a numerical code that solves the full non-LTE radiative transfer problem for polarized radiation, in one-dimensional models of the solar atmosphere, accounting for the combined action of the Hanle and Zeeman effects, as well as for PRD phenomena. After briefly discussing the relevant equations, we describe the iterative method of solution of the problem and the numerical tools that we have developed and implemented. We finally present some illustrative applications to two resonance lines that form at different heights in the solar atmosphere, and provide a detailed physical interpretation of the calculated Stokes profiles. We find that magneto-optical effects have a strong impact on the linear polarization signals that PRD effects produce in the wings of strong resonance lines. We also show that the weak-field approximation has to be used with caution when PRD effects are considered.
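
    The weak-field approximation mentioned in the closing sentence relates Stokes V to the wavelength derivative of the intensity profile, V(λ) ≈ −4.67×10⁻¹³ λ₀² g_eff B_∥ dI/dλ (λ in Å, B in G). A sketch with a made-up Gaussian absorption line (line and field parameters are illustrative only):

```python
import numpy as np

lam0, g_eff, B_los = 5250.0, 1.5, 100.0  # angstroms, effective Lande factor, gauss
lam = np.linspace(lam0 - 0.5, lam0 + 0.5, 2001)

# Hypothetical Gaussian absorption profile and its wavelength derivative.
I = 1.0 - 0.6 * np.exp(-((lam - lam0) / 0.08) ** 2)
dI = np.gradient(I, lam)

# Weak-field Stokes V profile (dimensionless, same units as I).
V = -4.67e-13 * lam0**2 * g_eff * B_los * dI
```

    The resulting V profile is antisymmetric about line center, the classic weak-field signature; the abstract's point is that PRD and magneto-optical effects can break the conditions under which this simple relation is trustworthy.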

  3. Final Technical Report - SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, Dalton D.

    Final technical report for research performed by Dr. Thomas G. Jenkins in collaboration with Professor Dalton D. Schnack on SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics, DE-FC02-06ER54899, for the period 8/15/06 - 8/14/11. This report centers on the Slow MHD physics campaign work performed by Dr. Jenkins while at UW-Madison and then at Tech-X Corporation. To make progress on the problem of how RF-induced currents affect magnetic island evolution in toroidal plasmas, a set of research approaches is outlined. Three approaches can be pursued in parallel: (1) analytically prescribe an additional term in Ohm's law to model the effect of localized ECCD current drive; (2) introduce an additional evolution equation for the Ohm's law source term, establishing an RF source 'box' where information from the RF code couples to the fluid evolution; and (3) carry out a more rigorous analytic calculation treating the additional RF terms in a closure problem. These approaches rely on reinvigorating the computational modeling efforts on resistive and neoclassical tearing modes with present-day versions of the numerical tools. For the RF community, the relevant action item is that RF ray tracing codes need to be modified so that general three-dimensional spatial information can be obtained. Further, interface efforts between the two codes require work, as does an assessment of the numerical stability properties of the procedures to be used.

  4. A Coupled Multiphysics Approach for Simulating Induced Seismicity, Ground Acceleration and Structural Damage

    NASA Astrophysics Data System (ADS)

    Podgorney, Robert; Coleman, Justin; Wilkins, Andrew; Huang, Hai; Veeraraghavan, Swetha; Xia, Yidong; Permann, Cody

    2017-04-01

    Numerical modeling has played an important role in understanding the behavior of coupled subsurface thermal-hydro-mechanical (THM) processes associated with a number of energy and environmental applications since as early as the 1970s. While the ability to rigorously describe all key tightly coupled controlling physics still remains a challenge, there have been significant advances in recent decades. These advances are related primarily to the exponential growth of computational power, the development of more accurate equations of state, improvements in the ability to represent heterogeneity and reservoir geometry, and more robust nonlinear solution schemes. The work described in this paper documents the development and linkage of several fully-coupled and fully-implicit modeling tools. These tools simulate: (1) the dynamics of fluid flow, heat transport, and quasi-static rock mechanics; (2) seismic wave propagation from the sources of energy release through heterogeneous material; and (3) the soil-structural damage resulting from ground acceleration. These tools are developed in Idaho National Laboratory's parallel Multiphysics Object Oriented Simulation Environment, and are integrated together using a global implicit approach. The governing equations are presented, the numerical approach for simultaneously solving and coupling the three physics tools is discussed, and the data input and output methodology is outlined. An example is presented to demonstrate the capabilities of the coupled multiphysics approach. The example involves simulating a system conceptually similar to the geothermal development in Basel, Switzerland, and the resultant induced seismicity, ground motion, and structural damage are predicted.

  5. Conflict: Operational Realism versus Analytical Rigor in Defense Modeling and Simulation

    DTIC Science & Technology

    2012-06-14

    Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference, Boston: Houghton Mifflin Company, 2002. [7] R. T. Johnson, G... experimentation? In order for an experiment to be considered rigorous, and the results valid, the experiment should be designed using established... In addition to the interview, the pilots were administered a written survey, designed to capture their reactions regarding the level of realism present

  6. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    2017-11-01

    Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
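
    A canonical dynamics model one might subject to such a test is the replicator dynamics for the RPS game; the explicit-Euler sketch below generates the cyclic trajectories whose angular momentum and speed patterns the approach measures (step size and initial condition are illustrative, not from the paper).

```python
import numpy as np

# Zero-sum Rock-Paper-Scissors payoff matrix.
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])

def replicator_step(x, dt=0.01):
    """Explicit-Euler step of dx_i/dt = x_i * ((A x)_i - x . A x)."""
    f = A @ x
    return x + dt * x * (f - x @ f)

x = np.array([0.5, 0.3, 0.2])
traj = [x]
for _ in range(2000):
    x = replicator_step(x)
    traj.append(x)
traj = np.array(traj)
```

    For this zero-sum game the interior orbits cycle around the barycenter (1/3, 1/3, 1/3), so the trajectory's angular momentum about the center is persistently nonzero, exactly the kind of geometric pattern the testing approach compares against experimental data.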

  7. Towards a rigorous mesoscale modeling of reactive flow and transport in an evolving porous medium and its applications to soil science

    NASA Astrophysics Data System (ADS)

    Ray, Nadja; Rupp, Andreas; Knabner, Peter

    2016-04-01

    Soil is arguably the most prominent example of a natural porous medium that is composed of a porous matrix and a pore space. Within this framework and in terms of soil's heterogeneity, we first consider transport and fluid flow at the pore scale. From there, we develop a mechanistic model and upscale it mathematically to transfer our model from the small scale to that of the mesoscale (laboratory scale). The mathematical framework of (periodic) homogenization rigorously facilitates, in principle, such scale transitions by exactly computing the effective coefficients/parameters from the pore geometry and processes. In our model, various small-scale soil processes may be taken into account: molecular diffusion, convection, drift emerging from electric forces, and homogeneous reactions of chemical species in a solvent. Additionally, our model may consider heterogeneous reactions at the porous matrix, thus altering both the porosity and the matrix. Moreover, our model may additionally address biophysical processes, such as the growth of biofilms and how this affects the shape of the pore space. Both of the latter processes result in an intrinsically variable soil structure in space and time. Upscaling such models under the assumption of a locally periodic setting must be performed meticulously to preserve information regarding the complex coupling of processes in the evolving heterogeneous medium. Generally, a micro-macro model emerges that is comprised of several levels of coupling: macroscopic equations that describe the transport and fluid flow at the scale of the porous medium (mesoscale) include averaged time- and space-dependent coefficient functions. These functions may be explicitly computed by means of auxiliary cell problems (microscale). Finally, the pore space in which the cell problems are defined is time- and space-dependent, and its geometry inherits information from the transport equation's solutions.
Numerical computations using mixed finite elements, with potentially random initial data (e.g., for the porosity), complement our theoretical results. Our investigations contribute to the theoretical understanding of the link between soil formation and soil functions. This general framework may be applied to various problems in soil science across a range of scales, such as the formation and turnover of microaggregates or soil remediation.
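
    In one spatial dimension the cell problem has a closed-form solution, and the homogenized diffusivity reduces to the harmonic average of the microscale diffusivity; a minimal sketch of this classical result (layer values hypothetical):

```python
import numpy as np

def effective_diffusivity_1d(D, volume_fractions):
    """Effective (homogenized) diffusivity of a periodic 1D laminate:
    the harmonic average, obtained from the closed-form 1D cell problem."""
    D = np.asarray(D, dtype=float)
    f = np.asarray(volume_fractions, dtype=float)
    return 1.0 / np.sum(f / D)

# Two alternating layers with D = 1 and D = 4, equal volume fractions:
# the effective value is 1.6, below the arithmetic mean of 2.5.
D_eff = effective_diffusivity_1d([1.0, 4.0], [0.5, 0.5])
```

    In two or three dimensions no such closed form exists in general, which is why the auxiliary cell problems described above must be solved numerically on the evolving pore geometry.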

  8. Effect of Pore Pressure on Slip Failure of an Impermeable Fault: A Coupled Micro Hydro-Geomechanical Model

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Juanes, R.

    2015-12-01

    The geomechanical processes associated with subsurface fluid injection/extraction are of central importance for many industrial operations related to energy and water resources. However, the mechanisms controlling the stability and slip motion of a preexisting geologic fault remain poorly understood and are critical for the assessment of seismic risk. In this work, we develop a coupled hydro-geomechanical model to investigate the effect of injection-induced pressure perturbation on the slip behavior of a sealing fault. The model couples single-phase flow in the pores with mechanics of the solid phase. Granular packs (see example in Fig. 1a) are numerically generated in which the grains may be either bonded or not, depending on the degree of cementation. A pore network is extracted for each granular pack, with pore body volumes and pore throat conductivities calculated rigorously from the geometry of the local pore space. The pore fluid pressure is solved via an explicit scheme, taking into account the deformation of the solid matrix. The mechanics part of the model is solved using the discrete element method (DEM). We first test the validity of the model against the classical one-dimensional consolidation problem, for which an analytical solution exists. We then demonstrate the ability of the coupled model to reproduce rock deformation behavior measured in triaxial laboratory tests under the influence of pore pressure. We proceed to study fault stability in the presence of a pressure discontinuity across the impermeable fault, which is implemented as a plane whose intersected pore throats are deactivated, thus obstructing fluid flow (Fig. 1b, c). We focus on the onset of shear failure along preexisting faults. We discuss the fault stability criterion in light of the numerical results obtained from the DEM simulations coupled with pore fluid flow.
The implication for how faults should be treated in a large-scale continuum model is also presented.
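
    The fault stability criterion discussed above is typically a Coulomb condition on effective stress; the sketch below, with hypothetical stresses in MPa, shows how an injection-induced pressure rise can move a fault from stable to slipping.

```python
def coulomb_failure_stress(tau, sigma_n, pore_pressure, mu=0.6, cohesion=0.0):
    """Coulomb failure function on a fault plane (MPa):
    positive values indicate slip; pore pressure reduces the effective
    normal stress (sigma_n - p) that frictional strength acts on."""
    return tau - cohesion - mu * (sigma_n - pore_pressure)

# Hypothetical fault loaded at 40 MPa shear under 100 MPa normal stress:
# stable at 20 MPa pore pressure, destabilized when injection raises it to 40 MPa.
cff_before = coulomb_failure_stress(tau=40.0, sigma_n=100.0, pore_pressure=20.0)
cff_after = coulomb_failure_stress(tau=40.0, sigma_n=100.0, pore_pressure=40.0)
```

    A sealing fault adds the complication studied above: pressure differs on the two sides of the plane, so the destabilizing perturbation is not uniform along the fault.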

  9. New tools for Content Innovation and data sharing: Enhancing reproducibility and rigor in biomechanics research.

    PubMed

    Guilak, Farshid

    2017-03-21

    We are currently in one of the most exciting times for science and engineering as we witness unprecedented growth in our computational and experimental capabilities to generate new data and models. To facilitate data and model sharing, and to enhance reproducibility and rigor in biomechanics research, the Journal of Biomechanics has introduced a number of tools for Content Innovation to allow presentation, sharing, and archiving of methods, models, and data in our articles. The tools include an Interactive Plot Viewer, 3D Geometric Shape and Model Viewer, Virtual Microscope, Interactive MATLAB Figure Viewer, and Audioslides. Authors are highly encouraged to make use of these in upcoming journal submissions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Kinetics versus thermodynamics in materials modeling: The case of the di-vacancy in iron

    NASA Astrophysics Data System (ADS)

    Djurabekova, F.; Malerba, L.; Pasianot, R. C.; Olsson, P.; Nordlund, K.

    2010-07-01

    Monte Carlo models are widely used for the study of microstructural and microchemical evolution of materials under irradiation. However, they often link the relevant activation energies explicitly to the energy difference between local equilibrium states. We provide a simple example (di-vacancy migration in iron) in which a rigorous activation energy calculation, by means of both empirical interatomic potentials and density functional theory methods, clearly shows that such a link is not guaranteed, revealing a migration mechanism that a thermodynamics-linked activation energy model cannot predict. Such a mechanism is, however, fully consistent with thermodynamics. This example emphasizes the importance of basing Monte Carlo methods on models in which the activation energies are rigorously calculated, rather than deduced from widespread heuristic equations.
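The practical consequence is in the Arrhenius rates that a kinetic Monte Carlo model uses. A sketch with purely hypothetical numbers (not the paper's DFT values) shows how a ΔE-linked heuristic barrier and a rigorously computed saddle-point barrier can give rates differing by orders of magnitude:

```python
import math

KB = 8.617333262e-5  # Boltzmann constant, eV/K

def arrhenius_rate(prefactor_hz, barrier_ev, temperature_k):
    """Harmonic transition-state-theory jump rate: nu * exp(-Ea / kB*T)."""
    return prefactor_hz * math.exp(-barrier_ev / (KB * temperature_k))

# Illustrative (made-up) energies for one defect jump:
dE = 0.05                      # final-minus-initial state energy, eV
Ea_true = 0.90                 # hypothetical rigorously computed saddle barrier, eV
Ea_heuristic = 0.62 + dE / 2   # hypothetical dE-linked heuristic barrier, eV

T = 600.0  # K
r_true = arrhenius_rate(1e13, Ea_true, T)
r_heur = arrhenius_rate(1e13, Ea_heuristic, T)
print(f"true-barrier rate: {r_true:.3e} Hz, heuristic rate: {r_heur:.3e} Hz")
print(f"ratio: {r_heur / r_true:.0f}x")
```

Because the barrier enters exponentially, even a few tenths of an eV of heuristic error changes event frequencies by two or more orders of magnitude at typical temperatures.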

  11. A Mathematical Evaluation of the Core Conductor Model

    PubMed Central

    Clark, John; Plonsey, Robert

    1966-01-01

    This paper is a mathematical evaluation of the core conductor model in which its three-dimensionality is taken into account. The problem considered is that of a single, active, unmyelinated nerve fiber situated in an extensive, homogeneous, conducting medium. Expressions for the various core conductor parameters have been derived in a mathematically rigorous manner according to the principles of electromagnetic theory. The purpose of employing mathematical rigor in this study is to bring to light the inherent assumptions of the one-dimensional core conductor model, providing a method of evaluating the accuracy of this linear model. Based on the use of synthetic squid axon data, the conclusion of this study is that the linear core conductor model is a good approximation for internal but not external parameters. PMID:5903155
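For orientation, the one-dimensional core conductor (cable) model that the paper evaluates predicts a steady-state exponential voltage decay with a space constant set by membrane and axoplasm resistances. A minimal sketch with order-of-magnitude, squid-axon-like parameters (illustrative only, not the paper's synthetic data):

```python
import math

def length_constant_cm(Rm_ohm_cm2, Ri_ohm_cm, diameter_cm):
    """Steady-state space constant of the 1-D core conductor model:
    lambda = sqrt( Rm * d / (4 * Ri) )."""
    return math.sqrt(Rm_ohm_cm2 * diameter_cm / (4.0 * Ri_ohm_cm))

# Illustrative values: Rm ~ 1000 ohm*cm^2, Ri ~ 30 ohm*cm, d ~ 0.5 mm.
lam = length_constant_cm(1000.0, 30.0, 0.05)
print(f"lambda = {lam:.3f} cm")

# Steady-state decay along the fiber, V(x) = V0 * exp(-x / lambda):
V0 = 10.0  # mV
for x in (0.0, lam, 2 * lam):
    print(f"V({x:.2f} cm) = {V0 * math.exp(-x / lam):.2f} mV")
```

The paper's point is that this 1-D description captures internal quantities well, while external-field quantities require the full three-dimensional treatment.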

  12. Performance evaluation of a bigrating as a beam splitter.

    PubMed

    Hwang, R B; Peng, S T

    1997-04-01

    The design of a bigrating for use as a beam splitter is presented. It is based on a rigorous formulation of plane-wave scattering by a bigrating that is composed of two individual gratings oriented in different directions. Numerical computations are carried out to optimize the design of a bigrating to perform 1 x 4 beam splitting in two dimensions and to examine its fabrication and operation tolerances. It is found that a bigrating can be designed to perform two functions: beam splitting and polarization purification.

  13. Solving the multi-frequency electromagnetic inverse source problem by the Fourier method

    NASA Astrophysics Data System (ADS)

    Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi

    2018-07-01

    This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.

  14. Spacetime dynamics of a Higgs vacuum instability during inflation

    DOE PAGES

    East, William E.; Kearney, John; Shakya, Bibhushan; ...

    2017-01-31

    A remarkable prediction of the Standard Model is that, in the absence of corrections lifting the energy density, the Higgs potential becomes negative at large field values. If the Higgs field samples this part of the potential during inflation, the negative energy density may locally destabilize the spacetime. Here, we use numerical simulations of the Einstein equations to study the evolution of inflation-induced Higgs fluctuations as they grow towards the true (negative-energy) minimum. Our simulations show that forming a single patch of true vacuum in our past light cone during inflation is incompatible with the existence of our Universe; the boundary of the true vacuum region grows outward in a causally disconnected manner from the crunching interior, which forms a black hole. We also find that these black hole horizons may be arbitrarily elongated, even forming black strings, in violation of the hoop conjecture. Furthermore, by extending the numerical solution of the Fokker-Planck equation to the exponentially suppressed tails of the field distribution at large field values, we derive a rigorous correlation between a future measurement of the tensor-to-scalar ratio and the scale at which the Higgs potential must receive stabilizing corrections in order for the Universe to have survived inflation until today.

  15. ZY3-02 Laser Altimeter Footprint Geolocation Prediction

    PubMed Central

    Xie, Junfeng; Tang, Xinming; Mo, Fan; Li, Guoyuan; Zhu, Guangbin; Wang, Zhenming; Fu, Xingke; Gao, Xiaoming; Dou, Xianhui

    2017-01-01

    Successfully launched on 30 May 2016, ZY3-02 is the first Chinese surveying and mapping satellite equipped with a lightweight laser altimeter. Calibration is necessary before the laser altimeter becomes operational. Laser footprint location prediction is the first step in calibration based on ground infrared detectors, and it is difficult because the sample frequency of the ZY3-02 laser altimeter is 2 Hz and the distance between two adjacent laser footprints is about 3.5 km. In this paper, we build an on-orbit rigorous geometric prediction model that references the rigorous geometric model of optical remote sensing satellites. The model includes three kinds of data that must be predicted: pointing angle, orbit parameters, and attitude angles. The proposed method is verified by a ZY3-02 laser altimeter on-orbit geometric calibration test. Five laser footprint prediction experiments are conducted based on the model, and the laser footprint prediction accuracy is better than 150 m on the ground. The effectiveness and accuracy of the on-orbit rigorous geometric prediction model are confirmed by the test results. The geolocation is predicted precisely by the proposed method, and this will provide a reference for the geolocation prediction of future land laser detectors in other laser altimeter calibration tests. PMID:28934160

  16. ZY3-02 Laser Altimeter Footprint Geolocation Prediction.

    PubMed

    Xie, Junfeng; Tang, Xinming; Mo, Fan; Li, Guoyuan; Zhu, Guangbin; Wang, Zhenming; Fu, Xingke; Gao, Xiaoming; Dou, Xianhui

    2017-09-21

    Successfully launched on 30 May 2016, ZY3-02 is the first Chinese surveying and mapping satellite equipped with a lightweight laser altimeter. Calibration is necessary before the laser altimeter becomes operational. Laser footprint location prediction is the first step in calibration based on ground infrared detectors, and it is difficult because the sample frequency of the ZY3-02 laser altimeter is 2 Hz and the distance between two adjacent laser footprints is about 3.5 km. In this paper, we build an on-orbit rigorous geometric prediction model that references the rigorous geometric model of optical remote sensing satellites. The model includes three kinds of data that must be predicted: pointing angle, orbit parameters, and attitude angles. The proposed method is verified by a ZY3-02 laser altimeter on-orbit geometric calibration test. Five laser footprint prediction experiments are conducted based on the model, and the laser footprint prediction accuracy is better than 150 m on the ground. The effectiveness and accuracy of the on-orbit rigorous geometric prediction model are confirmed by the test results. The geolocation is predicted precisely by the proposed method, and this will provide a reference for the geolocation prediction of future land laser detectors in other laser altimeter calibration tests.

  17. Multispecies diffusion models: A study of uranyl species diffusion

    NASA Astrophysics Data System (ADS)

    Liu, Chongxuan; Shang, Jianying; Zachara, John M.

    2011-12-01

    Rigorous numerical description of multispecies diffusion requires coupling of species, charge, and aqueous and surface complexation reactions that collectively affect diffusive fluxes. The applicability of a fully coupled diffusion model is, however, often constrained by the availability of species self-diffusion coefficients, as well as by computational complications in imposing charge conservation. In this study, several diffusion models with variable complexity in charge and species coupling were formulated and compared to describe reactive multispecies diffusion in groundwater. Diffusion of uranyl [U(VI)] species was used as an example in demonstrating the effectiveness of the models in describing multispecies diffusion. Numerical simulations found that a diffusion model with a single, common diffusion coefficient for all species was sufficient to describe multispecies U(VI) diffusion under a steady-state condition of major chemical composition, but not under transient chemical conditions. Simulations revealed that for multispecies U(VI) diffusion under transient chemical conditions, a fully coupled diffusion model could be well approximated by a component-based diffusion model when the diffusion coefficient for each chemical component was properly selected. The component-based diffusion model considers the difference in diffusion coefficients between chemical components, but not between the species within each chemical component. This treatment significantly enhanced computational efficiency at the expense of a minor violation of charge conservation. The charge balance in the component-based diffusion model can be enforced, if necessary, by adding a secondary migration term resulting from the model simplification. The effect of ion activity coefficient gradients on multispecies diffusion is also discussed. The diffusion models were applied to describe U(VI) diffusive mass transfer in intragranular domains in two sediments collected from the U.S. Department of Energy's Hanford 300A, where intragranular diffusion is a rate-limiting process controlling U(VI) adsorption and desorption. The grain-scale reactive diffusion model was able to describe U(VI) adsorption/desorption kinetics that had been previously described using a semiempirical, multirate model. Compared with the multirate model, the diffusion models have the advantage of providing spatiotemporal speciation evolution within the diffusion domains.
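The key trade-off, species-specific diffusion coefficients versus one common coefficient, can be sketched with a toy 1-D explicit scheme and two hypothetical species (coefficients and grid are illustrative, not the paper's U(VI) system):

```python
def diffuse(c, D, dx, dt, steps):
    """Explicit finite-difference diffusion of one species on a 1-D grid
    with zero-gradient ends; stable for D*dt/dx**2 <= 0.5."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"
    c = list(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
        new[0], new[-1] = new[1], new[-2]
        c = new
    return c

n, dx, dt, steps = 41, 1e-3, 100.0, 50
pulse = [0.0] * n
pulse[n // 2] = 1.0
slow = diffuse(pulse, 5e-10, dx, dt, steps)   # hypothetical slow species
fast = diffuse(pulse, 2e-9, dx, dt, steps)    # hypothetical fast species
# Single-coefficient approximation: one averaged D for the summed pool.
single = diffuse([2 * p for p in pulse], 1.25e-9, dx, dt, steps)
err = max(abs(s + f - a) for s, f, a in zip(slow, fast, single))
print(f"max deviation of single-D model: {err:.3f}")
```

Under transient conditions the summed multispecies profile and the single-coefficient profile visibly diverge, which is the regime where the abstract reports a common coefficient is no longer sufficient.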

  18. A proposed study of multiple scattering through clouds up to 1 THz

    NASA Technical Reports Server (NTRS)

    Gerace, G. C.; Smith, E. K.

    1992-01-01

    A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.

  19. Quantum key distribution with an unknown and untrusted source

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2009-03-01

    The security of a standard bi-directional ``plug & play'' quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is effectively controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we present the first quantitative security analysis on a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard BB84 protocol, the weak+vacuum decoy-state protocol, and the one-decoy decoy-state protocol, each with an unknown and untrusted source, is rigorously proved. We derive rigorous lower bounds on the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source. Our work is published in [1]. [4pt] [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A, 77:052327 (2008).
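For context on what a "key generation rate lower bound" looks like, the classic trusted-source BB84 bound of Shor and Preskill is R >= 1 - 2*H2(e), with H2 the binary entropy and e the quantum bit error rate. This is the baseline the untrusted-source bounds are compared against, not the paper's own bound:

```python
import math

def h2(p):
    """Binary Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bb84_key_fraction(qber):
    """Shor-Preskill lower bound on the BB84 secure-key fraction,
    R >= 1 - 2*H2(e), assuming a trusted source."""
    return max(0.0, 1.0 - 2.0 * h2(qber))

for e in (0.01, 0.05, 0.11, 0.12):
    print(f"QBER {e:.2f}: key fraction >= {bb84_key_fraction(e):.4f}")
```

The bound vanishes near an 11% error rate, the well-known BB84 tolerance threshold under this proof technique.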

  20. Circular instead of hierarchical: methodological principles for the evaluation of complex interventions

    PubMed Central

    Walach, Harald; Falkenberg, Torkel; Fønnebø, Vinjar; Lewith, George; Jonas, Wayne B

    2006-01-01

    Background The reasoning behind evaluating medical interventions is that a hierarchy of methods exists which successively produce improved, and therefore more rigorous, evidence-based medicine upon which to make clinical decisions. At the foundation of this hierarchy are case studies, retrospective and prospective case series, followed by cohort studies with historical and concomitant non-randomized controls. Open-label randomized controlled studies (RCTs) and, finally, blinded, placebo-controlled RCTs, which offer the most internal validity, are considered the most reliable evidence. Rigorous RCTs remove bias. Evidence from RCTs forms the basis of meta-analyses and systematic reviews. This hierarchy, founded on a pharmacological model of therapy, is generalized to other interventions which may be complex and non-pharmacological (healing, acupuncture and surgery). Discussion The hierarchical model is valid for limited questions of efficacy, for instance for regulatory purposes and newly devised products and pharmacological preparations. It is inadequate for the evaluation of complex interventions such as physiotherapy, surgery and complementary and alternative medicine (CAM). This has to do with the essential tension between internal validity (rigor and the removal of bias) and external validity (generalizability). Summary Instead of an Evidence Hierarchy, we propose a Circular Model. This would imply a multiplicity of methods, using different designs, counterbalancing their individual strengths and weaknesses to arrive at pragmatic but equally rigorous evidence which would provide significant assistance in clinical and health systems innovation. Such evidence would better inform national health care technology assessment agencies and promote evidence-based health reform. PMID:16796762

  1. The KP Approximation Under a Weak Coriolis Forcing

    NASA Astrophysics Data System (ADS)

    Melinand, Benjamin

    2018-02-01

    In this paper, we study the asymptotic behavior of weakly transverse water-waves under a weak Coriolis forcing in the long wave regime. We derive the Boussinesq-Coriolis equations in this setting and we provide a rigorous justification of this model. Then, from these equations, we derive two other asymptotic models. When the Coriolis forcing is weak, we fully justify the rotation-modified Kadomtsev-Petviashvili equation (also called Grimshaw-Melville equation). When the Coriolis forcing is very weak, we rigorously justify the Kadomtsev-Petviashvili equation. This work provides the first mathematical justification of the KP approximation under a Coriolis forcing.
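For reference, the Kadomtsev-Petviashvili equation being justified here can be written, in one standard normalization (coefficients vary with the chosen scaling, and the rotation-modified variant adds a Coriolis term):

```latex
\partial_x\left(\partial_t u + u\,\partial_x u + \partial_x^3 u\right) + \partial_y^2 u = 0
```

The weakly transverse structure is visible in the single $\partial_y^2$ term appearing under an outer $\partial_x$.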

  2. How to Find a Bug in Ten Thousand Lines Transport Solver? Outline of Experiences from AN Advection-Diffusion Code Verification

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F.

    2011-12-01

    Almost all natural phenomena on Earth are highly nonlinear. Even simplifications to the equations describing nature usually end up being nonlinear partial differential equations. The transport (advection-diffusion-reaction, ADR) equation is a pivotal equation in atmospheric sciences and water quality. This nonlinear equation needs to be solved numerically for practical purposes, so academics and engineers rely heavily on the assistance of numerical codes. Numerical codes therefore require verification before they are utilized for multiple applications in science and engineering. Model verification is a mathematical procedure whereby a numerical code is checked to assure that the governing equation is properly solved, as described in the design document. CFD verification is not a straightforward and well-defined course: only a complete test suite can uncover all the limitations and bugs, and results need to be assessed to distinguish between bug-induced defects and innate limitations of a numerical scheme. As Roache (2009) said, numerical verification is a state-of-the-art procedure, and sometimes novel tricks work out. This study conveys a synopsis of the experiences we gained during a comprehensive verification process carried out for a transport solver. A test suite was designed, including unit tests and algorithmic tests, layered in complexity in several dimensions from simple to complex. Acceptance criteria were defined for the desirable capabilities of the transport code, such as order of accuracy, mass conservation, handling of stiff source terms, spurious oscillation, and initial shape preservation. At the beginning, a mesh convergence study, which is the main craft of verification, was performed. To that end, analytical solutions of the ADR equation were gathered, and a new solution was derived. In more general cases, the lack of an analytical solution can be overcome through Richardson extrapolation and the method of manufactured solutions.
Then, two bugs that had remained concealed during the mesh convergence study were uncovered with the method of false injection and by visualization of the results. Symmetry had a dual function: one bug was hidden by the symmetric nature of a test (it was detected afterward using artificial false injection); on the other hand, self-symmetry was used to design a new test for a case in which the analytical solution of the ADR equation was unknown. Assisting subroutines were designed to check and post-process conservation of mass and oscillatory behavior. Finally, the capability of the solver was also checked for stiff reaction source terms. The above test suite not only was a decent error-detection tool but also provided thorough feedback on the ADR solver's limitations. Such information is the crux of any rigorous numerical modeling for a modeler who deals with surface/subsurface pollution transport.
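The mesh convergence study at the heart of such a verification boils down to computing an observed order of accuracy and, where no analytical solution exists, a Richardson-extrapolated reference. A minimal sketch with hypothetical error norms (not the solver's actual numbers):

```python
import math

def observed_order(e_coarse, e_medium, e_fine, r=2.0):
    """Observed order of accuracy from errors on three grids refined by
    ratio r: p = log(e_coarse / e_medium) / log(r), and likewise for
    the finer pair. Agreement with the formal order is the pass criterion."""
    return (math.log(e_coarse / e_medium) / math.log(r),
            math.log(e_medium / e_fine) / math.log(r))

def richardson_extrapolate(f_fine, f_coarse, p, r=2.0):
    """Richardson-extrapolated estimate of the exact value from two grid
    solutions and the (observed or formal) order p."""
    return f_fine + (f_fine - f_coarse) / (r ** p - 1.0)

# Hypothetical L2 errors from a second-order solver on grids h, h/2, h/4:
p1, p2 = observed_order(4.0e-3, 1.0e-3, 2.5e-4)
print(p1, p2)  # both 2.0 for a cleanly second-order scheme
```

A bug that degrades the scheme typically shows up as an observed order well below the formal one, which is why the convergence study catches most, though as the abstract notes not all, defects.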

  3. A sense of life: computational and experimental investigations with models of biochemical and evolutionary processes.

    PubMed

    Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael

    2003-01-01

    We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.

  4. Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.

    PubMed

    Rogers, Emily; Murrugarra, David; Heitsch, Christine

    2017-07-25

    Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, and also a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use its tight relationship to define quantitative thresholds for well versus ill conditioning. These resulting thresholds demonstrate that the majority of the sequences are at least sample robust, which verifies the assumption of sampling's improved conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need for both thermodynamic model refinements and alternate RNA structure prediction methods beyond the physics-based ones. Copyright © 2017. Published by Elsevier Inc.
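The conditioning question can be illustrated on a toy ensemble: a small energy perturbation can flip which structure is the MFE while the Boltzmann distribution as a whole shifts only modestly. The energies below are made-up placeholders, not nearest-neighbor parameters:

```python
import math

def boltzmann_probs(energies, kT=0.6):
    """Boltzmann probabilities over a toy finite structure ensemble
    (kT ~ 0.6 kcal/mol near 37 C)."""
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

def total_variation(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Toy free energies (kcal/mol) for four hypothetical structures:
base = [-3.0, -2.8, -1.0, 0.0]
p0 = boltzmann_probs(base)
# A small parameter perturbation shifts each energy slightly:
perturbed = [e + d for e, d in zip(base, [0.1, -0.1, 0.05, 0.0])]
p1 = boltzmann_probs(perturbed)
print(f"MFE index before: {base.index(min(base))}")
print(f"total variation after perturbation: {total_variation(p0, p1):.3f}")
```

Here a 0.1 kcal/mol shift is enough to tie the two lowest structures (the MFE prediction is ill conditioned), yet the sampled distribution moves by only a few percent, which is the sense in which sampling is better conditioned.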

  5. Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
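One of the comparison cases above, the variably spaced mesh, is easy to sketch: a single grid whose spacing changes at an interface, discretized with the standard nonuniform three-point stencil. This is a simplified illustration, not the shared-node feedback scheme itself:

```python
def solve_poisson_nonuniform(x, f):
    """Solve -u'' = f on a (possibly nonuniform) grid x with u = 0 at
    both ends, using the 3-point nonuniform stencil and Gauss-Seidel."""
    n = len(x)
    u = [0.0] * n
    for _ in range(20000):
        for i in range(1, n - 1):
            hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
            a, b = 1.0 / hl, 1.0 / hr
            # From (u_i - u_{i-1})/hl + (u_i - u_{i+1})/hr = f_i*(hl+hr)/2:
            u[i] = (f(x[i]) * (hl + hr) / 2.0 + a * u[i - 1] + b * u[i + 1]) / (a + b)
    return u

# Locally refined spacing: h = 1/8 on [0, 0.5], h = 1/4 on [0.5, 1].
xs = [i / 8 for i in range(5)] + [0.75, 1.0]
u = solve_poisson_nonuniform(xs, lambda x: 1.0)
exact = lambda x: x * (1 - x) / 2  # analytical solution of -u'' = 1
err = max(abs(ui - exact(xi)) for ui, xi in zip(u, xs))
print(f"max nodal error: {err:.2e}")
```

For this quadratic exact solution the nonuniform stencil is nodally exact, so the residual error is just iteration round-off; for general solutions the spacing jump at the interface is exactly where accuracy questions, and the paper's shared-node feedback, come into play.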

  6. Shall we upgrade one-dimensional secondary settler models used in WWTP simulators? - An assessment of model structure uncertainty and its propagation.

    PubMed

    Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A

    2011-01-01

    In WWTP models, the accurate assessment of solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács-model) and one based on parabolic (the more recently presented Plósz-model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion for both SST models are calculated. Second, simulation results of effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as a sub-model in a plant-wide model on the general model performance is evaluated.
A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer/winter sequence. The model prediction in terms of nitrogen removal, solids inventory in the bioreactors and solids retention time as a function of the solids settling behaviour is investigated. It is found that the settler behaviour, simulated by the hyperbolic model, can introduce significant errors into the approximation of the solids retention time and thus the solids inventory of the system. We demonstrate that these impacts can potentially cause deterioration of the predictive power of the biokinetic model, evidenced by an evaluation of the system's nitrogen removal efficiency. The convection-dispersion model exhibits superior behaviour, and the use of this type of model is thus highly recommended, especially bearing in mind future challenges, e.g., the explicit representation of uncertainty in WWTP models.
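The convection-dispersion form of a 1-D settler can be sketched with an explicit upwind scheme and a Vesilind-type settling velocity. Parameters and boundary handling below are illustrative placeholders, not the Plósz-model formulation:

```python
import math

def settle_step(X, dz, dt, v0=0.0019, rh=0.43, D=1e-5):
    """One explicit step of dX/dt = -d(vs(X)*X)/dz + D*d2X/dz2 for a
    closed batch column, with Vesilind settling vs = v0*exp(-rh*X)
    (X in kg/m^3, SI units; downward upwind convection)."""
    n = len(X)
    flux = [v0 * math.exp(-rh * x) * x for x in X]  # downward solids flux
    flux_in = [0.0] + flux[:-1]                     # nothing enters at the top
    new = X[:]
    for i in range(n):
        conv = (flux_in[i] - (flux[i] if i < n - 1 else 0.0)) / dz
        lo = X[i - 1] if i > 0 else X[i]            # mirrored (zero-flux) ends
        hi = X[i + 1] if i < n - 1 else X[i]
        new[i] = X[i] + dt * (conv + D * (lo - 2 * X[i] + hi) / dz ** 2)
    return new

# Uniform initial sludge blanket over a 2 m column (dz = 0.1 m):
X = [3.0] * 20
for _ in range(600):
    X = settle_step(X, dz=0.1, dt=1.0)
print(f"top: {X[0]:.2f}, bottom: {X[-1]:.2f} kg/m^3")
```

Because the convection is written in flux form with closed ends, total solids are conserved exactly while the blanket compacts toward the bottom, the kind of behaviour (SBH, underflow concentration) the scenario analysis compares between models.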

  7. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    PubMed

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  8. Potential flow about arbitrary biplane wing sections

    NASA Technical Reports Server (NTRS)

    Garrick, I E

    1937-01-01

    A rigorous treatment is given of the problem of determining the two-dimensional potential flow around arbitrary biplane cellules. The analysis involves the use of elliptic functions and is sufficiently general to include the effects of such elements as the section shapes, the chord ratio, gap, stagger, and decalage, which elements may be specified arbitrarily. The flow problem is resolved by making use of the methods of conformal representation. Thus the solution of the problem of transforming conformally two arbitrary contours into two circles is expressed by a pair of simultaneous integral equations, for which a method of numerical solution is outlined. As an example of the numerical process, the pressure distribution over certain arrangements of the NACA 4412 airfoil in biplane combinations is presented and compared with the monoplane pressure distribution.
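Garrick's biplane analysis requires elliptic functions, but the underlying idea of conformal representation is easy to demonstrate with the classic single-element Joukowski map, a circle through ζ = +1 sent to an airfoil by z = ζ + 1/ζ (the circle center below is illustrative, not from the paper):

```python
import cmath

def joukowski_airfoil(n_points=64, center=complex(-0.08, 0.08)):
    """Map a circle passing through zeta = +1 to an airfoil contour with
    the Joukowski transform z = zeta + 1/zeta (single-element classic,
    not Garrick's elliptic-function biplane mapping)."""
    radius = abs(1 - center)           # circle passes through +1
    theta0 = cmath.phase(1 - center)   # start at the trailing-edge point
    pts = []
    for k in range(n_points):
        theta = theta0 + 2 * cmath.pi * k / n_points
        zeta = center + radius * cmath.exp(1j * theta)
        pts.append(zeta + 1 / zeta)
    return pts

airfoil = joukowski_airfoil()
print(max(p.real for p in airfoil))  # trailing edge maps to z = 2
```

Once the geometry is a circle, the potential flow is known in closed form; the hard part of the biplane problem, solved in the paper by simultaneous integral equations, is finding the map that turns two arbitrary contours into two circles.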

  9. Optical simulations of organic light-emitting diodes through a combination of rigorous electromagnetic solvers and Monte Carlo ray-tracing methods

    NASA Astrophysics Data System (ADS)

    Bahl, Mayank; Zhou, Gui-Rong; Heller, Evan; Cassarly, William; Jiang, Mingming; Scarmozzino, Rob; Gregory, G. Groot

    2014-09-01

    Over the last two decades, extensive research has been done to improve the design of Organic Light Emitting Diodes (OLEDs) so as to enhance light extraction efficiency, improve beam shaping, and allow color tuning through techniques such as the use of patterned substrates, photonic crystal (PCs) gratings, back reflectors, surface texture, and phosphor down-conversion. Computational simulation has been an important tool for examining these increasingly complex designs. It has provided insights for improving OLED performance as a result of its ability to explore limitations, predict solutions, and demonstrate theoretical results. Depending upon the focus of the design and scale of the problem, simulations are carried out using rigorous electromagnetic (EM) wave optics based techniques, such as finite-difference time-domain (FDTD) and rigorous coupled wave analysis (RCWA), or through ray optics based techniques such as Monte Carlo ray-tracing. The former are typically used for modeling nanostructures on the OLED die, and the latter for modeling encapsulating structures, die placement, back-reflection, and phosphor down-conversion. This paper presents the use of a mixed-level simulation approach which unifies the use of EM wave-level and ray-level tools. This approach uses rigorous EM wave based tools to characterize the nanostructured die and generate both a Bidirectional Scattering Distribution Function (BSDF) and a far-field angular intensity distribution. These characteristics are then incorporated into the ray-tracing simulator to obtain the overall performance. Such a mixed-level approach allows for comprehensive modeling of the optical characteristics of OLEDs and can potentially lead to more accurate performance predictions than individual modeling tools alone.

  10. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

    The well-established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period-15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented.
    Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound that is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.
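    The flavor of verified fixed-point enclosure can be conveyed by a one-dimensional interval Newton step for g(x) = cos(x) - x. Two caveats: this uses plain double precision rather than truly outward-rounded interval arithmetic, so unlike COSY INFINITY it is illustrative rather than rigorous, and the 1D map here is a stand-in for the higher-dimensional Hénon-map setting of the work.

```python
import math

def interval_newton_step(lo, hi):
    """One interval-Newton step for g(x) = cos(x) - x on X = [lo, hi],
    with [lo, hi] inside (0, pi/2) so that sin is increasing there.
    N(X) = m - g(m) / g'(X); if N(X) lies strictly inside X, a unique
    zero of g (a fixed point of cos) is verified to exist in X."""
    m = 0.5 * (lo + hi)
    gm = math.cos(m) - m
    # g'(X) = -sin(X) - 1 is an entirely negative interval:
    dlo, dhi = -math.sin(hi) - 1.0, -math.sin(lo) - 1.0
    lo_new, hi_new = sorted((m - gm / dlo, m - gm / dhi))
    return lo_new, hi_new

X = (0.7, 0.8)
N = interval_newton_step(*X)
print(f"N(X) = [{N[0]:.6f}, {N[1]:.6f}], strictly inside X: "
      f"{X[0] < N[0] and N[1] < X[1]}")
```

    Strict containment N(X) ⊂ X is the certificate: it proves existence and uniqueness of the fixed point in X, and iterating the step shrinks the enclosure quadratically.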

  11. Numerical Simulations of Blood Flows in the Left Atrium

    NASA Astrophysics Data System (ADS)

    Zhang, Lucy

    2008-11-01

    A novel numerical technique for solving complex fluid-structure interactions for biomedical applications is introduced. The method is validated through rigorous convergence and accuracy tests. In this study, the technique is specifically used to study blood flows in the left atrium, one of the four chambers of the heart. Stable solutions are obtained at physiologic Reynolds numbers by applying pulmonary venous inflow, mitral valve outflow and appropriate constitutive equations to closely mimic the behaviors of biomaterials. Atrial contraction is also implemented as a time-dependent boundary condition to realistically describe the atrial wall muscle movements, thus producing accurate interactions with the surrounding blood. From our study, the transmitral velocity, filling/emptying velocity ratio, and durations and strengths of vortices are captured numerically for sinus rhythm (healthy heart beat), and they compare quite well with reported clinical studies. The solution technique can be further used to study heart diseases such as atrial fibrillation and thrombus formation in the chamber, and their corresponding effects on blood flow.

  12. Comparison of the Effectiveness of a Traditional Intermediate Algebra Course With That of a Less Rigorous Intermediate Algebra Course in Preparing Students for Success in a Subsequent Mathematics Course

    ERIC Educational Resources Information Center

    Sworder, Steven C.

    2007-01-01

    An experimental two-track intermediate algebra course was offered at Saddleback College, Mission Viejo, CA, between the Fall, 2002 and Fall, 2005 semesters. One track was modeled after the existing traditional California community college intermediate algebra course and the other track was a less rigorous intermediate algebra course in which the…

  13. Treatment of charge singularities in implicit solvent models.

    PubMed

    Geng, Weihua; Yu, Sining; Wei, Guowei

    2007-09-21

    This paper presents a novel method for solving the Poisson-Boltzmann (PB) equation based on a rigorous treatment of geometric singularities of the dielectric interface and a Green's function formulation of charge singularities. Geometric singularities, such as cusps and self-intersecting surfaces, in the dielectric interfaces are a bottleneck in developing highly accurate PB solvers. Based on an advanced mathematical technique, the matched interface and boundary (MIB) method, we have recently developed a PB solver by rigorously enforcing the flux continuity conditions at the solvent-molecule interface where geometric singularities may occur. The resulting PB solver, denoted as MIBPB-II, is able to deliver second-order accuracy for the molecular surfaces of proteins. However, when the mesh size approaches half of the van der Waals radius, the MIBPB-II cannot maintain its accuracy because the grid points that carry the interface information overlap with those that carry distributed singular charges. In the present Green's function formalism, the charge singularities are transformed into interface flux jump conditions, which are treated on an equal footing with the geometric singularities in our MIB framework. The resulting method, denoted as MIBPB-III, is able to provide highly accurate electrostatic potentials at a mesh as coarse as 1.2 Å for proteins. Consequently, at a given level of accuracy, the MIBPB-III is about three times faster than the APBS, a recent multigrid PB solver. The MIBPB-III has been extensively validated by using analytically solvable problems, molecular surfaces of polyatomic systems, and 24 proteins. It provides reliable benchmark numerical solutions for the PB equation.
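    A sense of what a PB solver computes can be had from a much simpler relative of the problem: the linearized (Debye-Hückel) equation in one dimension, solved by finite differences. This is a toy sketch under stated assumptions; the 3D geometry, singularity treatment, and MIB machinery of MIBPB are all absent, and the parameter values are illustrative.

```python
import math

def solve_debye_huckel(kappa=1.0, L=10.0, n=1000):
    """Finite-difference solution of phi'' = kappa^2 * phi on [0, L]
    with phi(0) = 1 and phi(L) = 0: a 1D analogue of the linearized PB
    equation. Returns the grid values and the mesh size h."""
    h = L / n
    m = n - 1                              # number of interior unknowns
    a = [1.0] * m                          # sub-diagonal
    b = [-(2.0 + (kappa * h) ** 2)] * m    # main diagonal
    c = [1.0] * m                          # super-diagonal
    d = [0.0] * m
    d[0] = -1.0                            # phi(0) = 1 boundary term
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    phi = [0.0] * m
    phi[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return [1.0] + phi + [0.0], h

phi, h = solve_debye_huckel()
# exact solution is sinh(kappa*(L - x)) / sinh(kappa*L)
print(f"phi at x = 1: numeric {phi[100]:.5f}, "
      f"exact {math.sinh(9.0) / math.sinh(10.0):.5f}")
```

    The screened exponential decay seen here is the Debye-Hückel behavior that the full nonlinear 3D PB solvers generalize to molecular geometries.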

  14. Treatment of charge singularities in implicit solvent models

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Yu, Sining; Wei, Guowei

    2007-09-01

    This paper presents a novel method for solving the Poisson-Boltzmann (PB) equation based on a rigorous treatment of geometric singularities of the dielectric interface and a Green's function formulation of charge singularities. Geometric singularities, such as cusps and self-intersecting surfaces, in the dielectric interfaces are a bottleneck in developing highly accurate PB solvers. Based on an advanced mathematical technique, the matched interface and boundary (MIB) method, we have recently developed a PB solver by rigorously enforcing the flux continuity conditions at the solvent-molecule interface where geometric singularities may occur. The resulting PB solver, denoted as MIBPB-II, is able to deliver second-order accuracy for the molecular surfaces of proteins. However, when the mesh size approaches half of the van der Waals radius, the MIBPB-II cannot maintain its accuracy because the grid points that carry the interface information overlap with those that carry distributed singular charges. In the present Green's function formalism, the charge singularities are transformed into interface flux jump conditions, which are treated on an equal footing with the geometric singularities in our MIB framework. The resulting method, denoted as MIBPB-III, is able to provide highly accurate electrostatic potentials at a mesh as coarse as 1.2 Å for proteins. Consequently, at a given level of accuracy, the MIBPB-III is about three times faster than the APBS, a recent multigrid PB solver. The MIBPB-III has been extensively validated by using analytically solvable problems, molecular surfaces of polyatomic systems, and 24 proteins. It provides reliable benchmark numerical solutions for the PB equation.

  15. Sandhoff Disease

    MedlinePlus

    ... virus-delivered gene therapy seen in an animal model of Tay-Sachs and Sandhoff diseases for use ...

  16. A large column analog experiment of stable isotope variations during reactive transport: I. A comprehensive model of sulfur cycling and δ34S fractionation

    NASA Astrophysics Data System (ADS)

    Druhan, Jennifer L.; Steefel, Carl I.; Conrad, Mark E.; DePaolo, Donald J.

    2014-01-01

    This study demonstrates a mechanistic incorporation of the stable isotopes of sulfur within the CrunchFlow reactive transport code to model the range of microbially-mediated redox processes affecting kinetic isotope fractionation. Previous numerical models of microbially mediated sulfate reduction using Monod-type rate expressions have lacked rigorous coupling of individual sulfur isotopologue rates, with the result that they cannot accurately simulate sulfur isotope fractionation over a wide range of substrate concentrations using a constant fractionation factor. Here, we derive a modified version of the dual-Monod or Michaelis-Menten formulation (Maggi and Riley, 2009, 2010) that successfully captures the behavior of the 32S and 34S isotopes over a broad range from high sulfate and organic carbon availability to substrate limitation using a constant fractionation factor. The new model developments are used to simulate a large-scale column study designed to replicate field scale conditions of an organic carbon (acetate) amended biostimulation experiment at the Old Rifle site in western Colorado. Results demonstrate an initial period of iron reduction that transitions to sulfate reduction, in agreement with field-scale behavior observed at the Old Rifle site. At the height of sulfate reduction, effluent sulfate concentrations decreased to 0.5 mM from an influent value of 8.8 mM over the 100 cm flow path, and thus were enriched in sulfate δ34S from 6.3‰ to 39.5‰. The reactive transport model accurately reproduced the measured enrichment in δ34S of both the reactant (sulfate) and product (sulfide) species of the reduction reaction using a single fractionation factor of 0.987 obtained independently from field-scale measurements. The model also accurately simulated the accumulation and δ34S signature of solid phase elemental sulfur over the duration of the experiment, providing a new tool to predict the isotopic signatures associated with reduced mineral pools. 
    To our knowledge, this is the first rigorous treatment of sulfur isotope fractionation subject to Monod kinetics in a mechanistic reactive transport model that considers the spatial isotopic distribution of both dissolved and solid phase sulfur species during microbially-mediated sulfate reduction. Specifically, we (1) describe the design and results of the large-scale column experiment; (2) demonstrate incorporation of the stable isotopes of sulfur in a dual-Monod kinetic expression such that fractionation is accurately modeled at both high and low substrate availability; (3) verify accurate simulation of the chemical and isotopic gradients in reactant and product sulfur species using a kinetic fractionation factor obtained from field-scale analysis (Druhan et al., 2012); and (4) utilize the model to predict the final δ34S values of secondary sulfur minerals accumulated in the sediment over the course of the experiment. The rigorous isotope-specific Monod-type rate expressions are presented here in application to sulfur cycling during amended biostimulation, but are readily applicable to a variety of stable isotope systems associated with both steady-state and transient biogenic redox environments. In other words, the association of this model with a uranium remediation experiment does not limit its applicability to more general redox systems. Furthermore, the ability of this model treatment to predict the isotopic composition of secondary minerals accumulated as a result of fractionating processes (item 4) offers an important means of interpreting solid phase isotopic compositions and tracking the long-term stability of precipitates.
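    The core idea of isotopologue-specific Monod rates with a constant fractionation factor can be sketched in a batch (no-transport) setting. Everything here is illustrative: the shared saturation term, the rate constants, and the omission of the electron-donor Monod term are simplifying assumptions, not the CrunchFlow formulation; only the inflow concentration (8.8 mM), inlet δ34S (+6.3‰), and α = 0.987 echo values quoted above.

```python
R_VCDT = 0.0441626                # 34S/32S ratio of the V-CDT standard

def delta34s(c34, c32):
    """delta-34S (permil) of a pool with isotopologue amounts c34, c32."""
    return ((c34 / c32) / R_VCDT - 1.0) * 1000.0

def simulate_batch_reduction(alpha=0.987, vmax=0.025, km=0.25,
                             hours=400.0, dt=0.1):
    """Batch sulfate reduction with isotopologue-specific Monod-style
    rates: both isotopologues share one saturation term in total sulfate,
    and the constant fractionation factor alpha scales the 34S rate.
    vmax (mM/h), km (mM) and the batch setting are illustrative."""
    r0 = R_VCDT * (1.0 + 6.3 / 1000.0)    # inflow 34/32 ratio (+6.3 permil)
    c32 = 8.8 / (1.0 + r0)                # mM of 32S-sulfate
    c34 = 8.8 - c32                       # mM of 34S-sulfate
    for _ in range(int(hours / dt)):
        ctot = c32 + c34
        monod = ctot / (km + ctot)        # shared Monod saturation term
        c32 -= vmax * monod * (c32 / ctot) * dt
        c34 -= alpha * vmax * monod * (c34 / ctot) * dt
    return c32 + c34, delta34s(c34, c32)

remaining, d34 = simulate_batch_reduction()
print(f"sulfate remaining: {remaining:.2f} mM, delta34S: {d34:+.1f} permil")
```

    Because alpha < 1 slows only the 34S rate, the residual sulfate pool becomes progressively heavier as it is consumed, the Rayleigh-type enrichment that the column data show.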

  17. Advanced computational techniques for incompressible/compressible fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod

    2005-07-01

    Fluid-Structure Interaction (FSI) problems are of great importance to many fields of engineering and pose tremendous challenges to numerical analysts. This thesis addresses some of the hurdles faced for both 2D and 3D real-life time-dependent FSI problems, with particular emphasis on parachute systems. The techniques developed here would help improve the design of parachutes and are of direct relevance to several other FSI problems. The fluid system is solved using the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) finite element formulation for the Navier-Stokes equations of incompressible and compressible flows. The structural dynamics solver is based on a total Lagrangian finite element formulation. The Newton-Raphson method is employed to linearize the otherwise nonlinear system resulting from the fluid and structure formulations. The fluid and structural systems are solved in a decoupled fashion at each nonlinear iteration. While rigorous coupling methods are desirable for FSI simulations, the decoupled solution techniques provide sufficient convergence in the time-dependent problems considered here. In this thesis, common problems in the FSI simulations of parachutes are discussed and possible remedies for a few of them are presented. Further, the effects of the porosity model on the aerodynamic forces of round parachutes are analyzed. Techniques for solving compressible FSI problems are also discussed. Subsequently, a better stabilization technique is proposed to efficiently capture and accurately predict the shocks in supersonic flows. The numerical examples simulated here require high performance computing. Therefore, numerical tools using distributed memory supercomputers with message passing interface (MPI) libraries were developed.
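    The decoupled solution strategy can be illustrated on a deliberately tiny linear model. The "fluid" and "structure" below are scalar stand-ins with made-up constants, not the DSD/SST or Lagrangian formulations; the point is only the staggered solve-fluid, solve-structure loop and its convergence to the coupled solution.

```python
def staggered_fsi(p0=1.0, kf=0.3, ks=1.0, tol=1e-10, max_iter=100):
    """Decoupled (staggered) fixed-point iteration on a toy linear FSI
    problem: a 'fluid' whose pressure p = p0 - kf*u depends on the
    interface displacement u, and a 'structure' with u = p/ks. The
    coupled (monolithic) solution is u = p0/(ks + kf); the staggered
    loop converges to it whenever kf/ks < 1. Constants are illustrative."""
    u = 0.0
    for it in range(1, max_iter + 1):
        p = p0 - kf * u             # fluid solve on the current geometry
        u_new = p / ks              # structure solve under the new load
        if abs(u_new - u) < tol:    # interface displacement has converged
            return u_new, it
        u = u_new
    return u, max_iter

u, iters = staggered_fsi()
print(f"converged to u = {u:.8f} in {iters} iterations "
      f"(monolithic solution: {1.0 / 1.3:.8f})")
```

    The geometric convergence factor is kf/ks, which mirrors the thesis's observation that decoupled iterations suffice when the interaction is not too stiff, while strongly coupled problems call for more rigorous coupling.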

  18. Wernicke-Korsakoff Syndrome

    MedlinePlus

    ... modulation of certain nerve cells in a rodent model of amnesia produced by thiamine deficiency. The ...

  19. Interval-parameter chance-constraint programming model for end-of-life vehicles management under rigorous environmental regulations.

    PubMed

    Simic, Vladimir

    2016-06-01

    As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (e.g., 40 million units were generated in 2010), there is strong motivation to effectively manage this fast-growing waste flow. Intensive work on the management of ELVs is necessary in order to more successfully tackle this important environmental challenge. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicle management under rigorous environmental regulations. The proposed model can incorporate various kinds of uncertainty information in the modeling process. The complex relationships between different ELV management sub-systems are successfully addressed. In particular, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potential and applicability of the proposed model. Various constraint-violation probability levels are examined in detail. Influences of parameter uncertainty on model solutions are thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating system constraints. The formulated model is able to tackle a hard ELV management problem under uncertainty. The presented model has advantages in providing bases for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
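    The two modeling ingredients named in the title, chance constraints and interval parameters, can be shown in miniature. All numbers and names below are hypothetical illustrations (a single capacity constraint with a normally distributed right-hand side, and an interval unit profit), not the paper's multi-factory formulation.

```python
# Standard-normal quantiles Phi^{-1}(p) for selected violation probabilities p
Z_QUANTILE = {0.01: -2.3263, 0.05: -1.6449, 0.10: -1.2816}

def max_throughput(mu_b, sigma_b, a, p):
    """Deterministic equivalent of the chance constraint
    P(a*x <= b) >= 1 - p with b ~ Normal(mu_b, sigma_b^2):
    a*x <= mu_b + sigma_b * Phi^{-1}(p). Returns the largest feasible x."""
    return (mu_b + sigma_b * Z_QUANTILE[p]) / a

# Illustrative numbers: dismantling capacity b with mean 1000 and std 100
# (capacity units/day), 2 capacity units per ELV, and an interval unit
# profit [40, 55] currency units per ELV.
c_lo, c_hi = 40.0, 55.0
for p in (0.01, 0.05, 0.10):
    x = max_throughput(1000.0, 100.0, 2.0, p)
    print(f"p = {p:.2f}: throughput {x:6.1f} ELV/day, "
          f"profit interval [{c_lo * x:8.0f}, {c_hi * x:8.0f}]")
```

    Raising the allowed violation probability p loosens the constraint and raises throughput; reporting the objective as an interval, rather than a point value, is the interval-parameter part of the method.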

  20. Fast synthesis of topographic mask effects based on rigorous solutions

    NASA Astrophysics Data System (ADS)

    Yan, Qiliang; Deng, Zhijie; Shiely, James

    2007-10-01

    Topographic mask effects can no longer be ignored at technology nodes of 45 nm, 32 nm and beyond. As feature sizes become comparable to the mask topographic dimensions and the exposure wavelength, the popular thin mask model breaks down, because the mask transmission no longer follows the layout. A reliable mask transmission function has to be derived from Maxwell's equations. Unfortunately, rigorous solutions of Maxwell's equations are only manageable for limited field sizes, and are impractical for full-chip optical proximity correction (OPC) due to the prohibitive runtime. Approximation algorithms are in demand to achieve a balance between acceptable computation time and tolerable errors. In this paper, a fast algorithm is proposed and demonstrated to model topographic mask effects for OPC applications. The ProGen Topographic Mask (POTOMAC) model synthesizes the mask transmission functions out of small-sized Maxwell solutions from a finite-difference time-domain (FDTD) engine, an industry-leading rigorous simulator of topographic mask effects from SOLID-E. The integrated framework presents a seamless solution to the end user. Preliminary results indicate that the overhead introduced by POTOMAC remains within the same order of magnitude as that of the thin mask approach.

  1. A METHODOLOGY FOR ESTIMATING UNCERTAINTY OF A DISTRIBUTED HYDROLOGIC MODEL: APPLICATION TO POCONO CREEK WATERSHED

    EPA Science Inventory

    Utility of distributed hydrologic and water quality models for watershed management and sustainability studies should be accompanied by rigorous model uncertainty analysis. However, the use of complex watershed models primarily follows the traditional {calibrate/validate/predict}...

  2. Mechanical properties of frog skeletal muscles in iodoacetic acid rigor.

    PubMed Central

    Mulvany, M J

    1975-01-01

    1. Methods have been developed for describing the length: tension characteristics of frog skeletal muscles which go into rigor at 4 degrees C following iodoacetic acid poisoning either in the presence of Ca2+ (Ca-rigor) or its absence (Ca-free-rigor). 2. Such rigor muscles showed less resistance to slow stretch (slow rigor resistance) than to fast stretch (fast rigor resistance). The slow and fast rigor resistances of Ca-free-rigor muscles were much lower than those of Ca-rigor muscles. 3. The slow rigor resistance of Ca-rigor muscles was proportional to the amount of overlap between the contractile filaments present when the muscles were put into rigor. 4. Withdrawing Ca2+ from Ca-rigor muscles (induced-Ca-free rigor) reduced their slow and fast rigor resistances. Readdition of Ca2+ (but not Mg2+, Mn2+ or Sr2+) reversed the effect. 5. The slow and fast rigor resistances of Ca-rigor muscles (but not of Ca-free-rigor muscles) decreased with time. 6. The sarcomere structure of Ca-rigor and induced-Ca-free rigor muscles stretched by 0.2 lo was destroyed in proportion to the amount of stretch, but the lengths of the remaining intact sarcomeres were essentially unchanged. This suggests that there had been a successive yielding of the weakest sarcomeres. 7. The difference between the slow and fast rigor resistance and the effect of calcium on these resistances are discussed in relation to possible variations in the strength of crossbridges between the thick and thin filaments. PMID:1082023

  3. Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominately as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

  4. Periodic waves in fiber Bragg gratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, K. W.; Merhasin, Ilya M.; Malomed, Boris A.

    2008-02-15

    We construct two families of exact periodic solutions to the standard model of fiber Bragg grating (FBG) with Kerr nonlinearity. The solutions are named "sn" and "cn" waves, according to the elliptic functions used in their analytical representation. The sn wave exists only inside the FBG's spectral bandgap, while waves of the cn type may only exist at negative frequencies (ω < 0), both inside and outside the bandgap. In the long-wave limit, the sn and cn families recover, respectively, the ordinary gap solitons, and (unstable) antidark and dark solitons. Stability of the periodic solutions is checked by direct numerical simulations and, in the case of the sn family, also through the calculation of instability growth rates for small perturbations. Although, rigorously speaking, all periodic solutions are unstable, a subfamily of practically stable sn waves, with a sufficiently large spatial period and ω > 0, is identified. However, the sn waves with ω < 0, as well as all cn solutions, are strongly unstable.

  5. Vector spherical quasi-Gaussian vortex beams

    NASA Astrophysics Data System (ADS)

    Mitri, F. G.

    2014-02-01

    Model equations for describing and efficiently computing the radiation profiles of tightly spherically focused higher-order electromagnetic beams of vortex nature are derived, stemming from a vectorial analysis with the complex-source-point method. This solution, termed a high-order quasi-Gaussian (qG) vortex beam, exactly satisfies the vector Helmholtz and Maxwell's equations. It is characterized by a nonzero integer degree and order (n,m), respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and an azimuthal phase dependency in the form of a complex exponential corresponding to a vortex beam. An attractive feature of the high-order solution is the rigorous description of strongly focused (or strongly divergent) vortex wave fields without the need for either higher-order corrections or numerically intensive methods. Closed-form expressions and computational results illustrate the analysis and some properties of the high-order qG vortex beams based on the axial and transverse polarization schemes of the vector potentials with emphasis on the beam waist.

  6. Capacity planning of a wide-sense nonblocking generalized survivable network

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2006-06-01

    Generalized survivable networks (GSNs) have two interesting properties that are essential attributes for future backbone networks: full survivability against link failures and support for dynamic traffic demands. GSNs incorporate the nonblocking network concept into the survivable network models. Given a set of nodes and a topology that is at least two-edge connected, a certain minimum capacity is required for each edge to form a GSN. The edge capacity is bounded because each node has an input-output capacity limit that serves as a constraint for any allowable traffic demand matrix. The GSN capacity planning problem is NP-hard. We first give a rigorous mathematical framework; then we offer two different solution approaches. The two-phase approach is fast, but the joint optimization approach yields a better bound. We carried out numerical computations for eight networks with different topologies and found that the cost of a GSN is only a fraction (from 52% to 89%) more than that of a static survivable network.

  7. The Need and Keys for a New Generation Network Adjustment Software

    NASA Astrophysics Data System (ADS)

    Colomina, I.; Blázquez, M.; Navarro, J. A.; Sastre, J.

    2012-07-01

    Orientation and calibration of photogrammetric and remote sensing instruments is a fundamental capacity of current mapping systems and a fundamental research topic. Neither digital remote sensing acquisition systems nor direct orientation gear, like INS and GNSS technologies, made block adjustment obsolete. On the contrary, the continuous flow of new primary data acquisition systems has challenged the capacity of the legacy block adjustment systems (in general, network adjustment systems) in many aspects: extensibility, genericity, portability, large data sets capacity, metadata support and many others. In this article, we concentrate on the extensibility and genericity challenges that current and future network systems shall face. For this purpose we propose a number of software design strategies with emphasis on rigorous abstract modeling that help in achieving simplicity, genericity and extensibility together with the protection of intellectual property rights in a flexible manner. We illustrate our suggestions with the general design approach of GENA, the generic extensible network adjustment system of GeoNumerics.

  8. Searching for Unresolved Binary Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Albretsen, Jacob; Stephens, Denise

    2007-10-01

    There are currently L and T brown dwarfs (BDs) with errors in their classification of +/- 1 to 2 spectral types. Metallicity and gravitational differences have accounted for some of these discrepancies, and recent studies have shown that unresolved binary BDs may offer some explanation as well. However, limitations in technology and resources often make it difficult to clearly resolve an object that may be binary in nature. Stephens and Noll (2006) identified statistically strong binary source candidates from Hubble Space Telescope (HST) images of Trans-Neptunian Objects (TNOs) that were apparently unresolved, using model point-spread functions for single and binary sources. The HST archive contains numerous observations of BDs using the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) that have never been rigorously analyzed for binary properties. Using methods developed by Stephens and Noll (2006), BD observations from the HST data archive are being analyzed for possible unresolved binaries. Preliminary results will be presented. This technique will identify potential candidates for future observations to determine orbital information.

  9. Predictability in Cellular Automata

    PubMed Central

    Agapie, Alexandru; Andreica, Anca; Chira, Camelia; Giuclea, Marius

    2014-01-01

    Modelled as finite homogeneous Markov chains, probabilistic cellular automata with local transition probabilities in (0, 1) always possess a stationary distribution. This result alone is not very helpful when it comes to predicting the final configuration; one also needs a formula connecting the probabilities in the stationary distribution to some intrinsic feature of the lattice configuration. Previous results on asynchronous cellular automata have shown that such a feature really exists. It is the number of zero-one borders within the automaton's binary configuration. An exponential formula in the number of zero-one borders has been proved for the 1-D, 2-D and 3-D asynchronous automata with neighborhood three, five and seven, respectively. We perform computer experiments on a synchronous cellular automaton to check whether the empirical distribution also obeys that theoretical formula. The numerical results indicate a perfect fit for neighborhood three and five, which opens the way for a rigorous proof of the formula in this new, synchronous case. PMID:25271778
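    The Markov-chain view can be made concrete by building the full transition matrix of a small probabilistic CA on a ring, computing its stationary distribution, and grouping the stationary probabilities by the number of zero-one borders. The local rule below is a generic illustration chosen only so that every transition probability lies in (0, 1); it is not one of the rules analysed in the paper, so no exponential fit is asserted.

```python
import itertools
from collections import defaultdict

def borders(cfg):
    """Number of zero-one borders on the cyclic lattice."""
    return sum(cfg[i] != cfg[(i + 1) % len(cfg)] for i in range(len(cfg)))

def transition_matrix(n):
    """Row-stochastic matrix of an asynchronous probabilistic CA on a
    ring of n cells: pick a cell uniformly at random, then set it to 1
    with a probability in (0, 1) depending on its two neighbours."""
    states = list(itertools.product((0, 1), repeat=n))
    index = {s: i for i, s in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]
    for s in states:
        for cell in range(n):
            p1 = 0.1 + 0.4 * (s[cell - 1] + s[(cell + 1) % n])  # in (0, 1)
            for val, pr in ((1, p1), (0, 1.0 - p1)):
                t = list(s)
                t[cell] = val
                P[index[s]][index[tuple(t)]] += pr / n
    return states, P

def stationary(P, iters=3000):
    """Stationary distribution by power iteration; the chain is ergodic
    because every local transition probability lies in (0, 1)."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

states, P = transition_matrix(5)
pi = stationary(P)
groups = defaultdict(list)
for s, prob in zip(states, pi):
    groups[borders(s)].append(prob)
for b in sorted(groups):              # inspect the dependence on borders
    print(f"{b} borders: mean stationary probability "
          f"{sum(groups[b]) / len(groups[b]):.5f}")
```

    For the rules studied in the paper, a plot of the logarithm of these grouped probabilities against the border count is what reveals the exponential formula.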

  10. Theory of chaotic orbital variations confirmed by Cretaceous geological evidence

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Meyers, Stephen R.; Sageman, Bradley B.

    2017-02-01

    Variations in the Earth’s orbit and spin vector are a primary control on insolation and climate; their recognition in the geological record has revolutionized our understanding of palaeoclimate dynamics, and has catalysed improvements in the accuracy and precision of the geological timescale. Yet the secular evolution of the planetary orbits beyond 50 million years ago remains highly uncertain, and the chaotic dynamical nature of the Solar System predicted by theoretical models has yet to be rigorously confirmed by well constrained (radioisotopically calibrated and anchored) geological data. Here we present geological evidence for a chaotic resonance transition associated with interactions between the orbits of Mars and the Earth, using an integrated radioisotopic and astronomical timescale from the Cretaceous Western Interior Basin of what is now North America. This analysis confirms the predicted chaotic dynamical behaviour of the Solar System, and provides a constraint for refining numerical solutions for insolation, which will enable a more precise and accurate geological timescale to be produced.

  11. Modeling of composite beams and plates for static and dynamic analysis

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Sutyrin, Vladislav G.; Lee, Bok Woo

    1993-01-01

    The main purpose of this research was to develop a rigorous theory and corresponding computational algorithms for through-the-thickness analysis of composite plates. This type of analysis is needed in order to find the elastic stiffness constants for a plate and to post-process the resulting plate solution in order to find approximate three-dimensional displacement, strain, and stress distributions throughout the plate. This also requires the development of finite deformation plate equations which are compatible with the through-the-thickness analyses. After about one year's work, we settled on the variational-asymptotical method (VAM) as a suitable framework in which to solve these types of problems. VAM was applied to laminated plates with constant thickness in the work of Atilgan and Hodges. The corresponding geometrically nonlinear global deformation analysis of plates was developed by Hodges, Atilgan, and Danielson. A different application of VAM, along with numerical results, was obtained by Hodges, Lee, and Atilgan. An expanded version of this last paper was submitted for publication in the AIAA Journal.

  12. Development of a model of space station solar array

    NASA Technical Reports Server (NTRS)

    Bosela, Paul A.

    1990-01-01

    Space structures, such as the space station solar arrays, must be extremely lightweight, flexible structures. Accurate prediction of the natural frequencies and mode shapes is essential for determining the structural adequacy of components and for designing a control system. The tension preload in the blanket of photovoltaic solar collectors and the free/free boundary conditions of a structure in space raise serious reservations about the use of standard finite element techniques of solution. In particular, a phenomenon known as grounding, or false stiffening, of the stiffness matrix occurs during rigid body rotation. The grounding phenomenon is examined in detail. Numerous stiffness matrices developed by others are examined for rigid body rotation capability and found lacking. Various techniques are used to develop new stiffness matrices from the rigorous solutions of the differential equations, including the solution of the directed force problem. A new directed force stiffness matrix developed by the author provides all the rigid body capabilities for the beam in space.
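    The grounding effect can be made concrete with a short numerical check. The sketch below (Python with NumPy; illustrative beam properties and the standard textbook elastic and geometric stiffness matrices for a 2-node Euler beam, not the author's corrected matrix) shows that the elastic matrix transmits no force under rigid-body motion, while the consistent geometric stiffness arising from a tension preload produces spurious forces under rigid rotation:

```python
import numpy as np

E, I, L, T = 70e9, 1e-8, 2.0, 1000.0  # illustrative beam properties

# Elastic bending stiffness of a 2-node Euler beam (DOFs: w1, th1, w2, th2)
Ke = (E * I / L**3) * np.array([
    [ 12,    6*L,  -12,    6*L ],
    [ 6*L, 4*L**2, -6*L, 2*L**2],
    [-12,   -6*L,   12,   -6*L ],
    [ 6*L, 2*L**2, -6*L, 4*L**2],
])

# Consistent geometric stiffness from the tension preload T
Kg = (T / (30*L)) * np.array([
    [ 36,    3*L,  -36,    3*L ],
    [ 3*L, 4*L**2, -3*L,  -L**2],
    [-36,   -3*L,   36,   -3*L ],
    [ 3*L,  -L**2, -3*L, 4*L**2],
])

translation = np.array([1.0, 0.0, 1.0, 0.0])  # rigid transverse translation
rotation    = np.array([0.0, 1.0,   L, 1.0])  # rigid rotation about node 1

print(np.allclose(Ke @ translation, 0))  # elastic matrix: no spurious force
print(np.allclose(Ke @ rotation, 0))     # elastic matrix: no spurious force
print(np.allclose(Kg @ translation, 0))  # translation passes
print(Kg @ rotation)                     # nonzero: "grounding" under rotation
```

    The nonzero product under rigid rotation is precisely the false stiffening that motivates deriving improved matrices from the rigorous beam equations.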

  13. Benford’s Distribution in Complex Networks

    PubMed Central

    Morzy, Mikołaj; Kajdanowicz, Tomasz; Szymański, Bolesław K.

    2016-01-01

    Many collections of numbers do not have a uniform distribution of the leading digit, but conform to a very particular pattern known as Benford’s distribution. This distribution has been found in numerous areas such as accounting data, voting registers, census data, and even in natural phenomena. Recently it has been reported that Benford’s law applies to online social networks. Here we introduce a set of rigorous tests for adherence to Benford’s law and apply it to verification of this claim, extending the scope of the experiment to various complex networks and to artificial networks created by several popular generative models. We find that neither real nor artificial networks show sufficient evidence of common conformity of network structural properties with Benford’s distribution. We find very weak evidence suggesting that three measures (degree centrality, betweenness centrality and the local clustering coefficient) could adhere to Benford’s law for scale-free networks, but only for a very narrow range of their parameters. PMID:27748398
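    A minimal version of such a leading-digit test can be sketched as follows (Python; a synthetic data set that is Benford-distributed by construction, and a plain Pearson chi-square statistic, not the authors' test battery):

```python
import math
import random

def leading_digit(x):
    """First significant digit of a positive number."""
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

# Benford's expected frequency for leading digit d is log10(1 + 1/d).
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Data whose log10 is uniformly distributed follows Benford's law by
# construction; this stands in for an empirical sample.
random.seed(0)
sample = [10 ** random.uniform(0, 5) for _ in range(20000)]

counts = {d: 0 for d in range(1, 10)}
for x in sample:
    counts[leading_digit(x)] += 1

# Pearson chi-square statistic against the Benford frequencies
# (8 degrees of freedom; values near 8 indicate good agreement).
n = len(sample)
chi2 = sum((counts[d] - n * benford[d]) ** 2 / (n * benford[d])
           for d in range(1, 10))
print(round(chi2, 1))
```

    For network measures, the same counting would be applied to, e.g., the degree sequence; a large chi-square value would argue against conformity with Benford's distribution.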

  14. Theory of chaotic orbital variations confirmed by Cretaceous geological evidence.

    PubMed

    Ma, Chao; Meyers, Stephen R; Sageman, Bradley B

    2017-02-22

    Variations in the Earth's orbit and spin vector are a primary control on insolation and climate; their recognition in the geological record has revolutionized our understanding of palaeoclimate dynamics, and has catalysed improvements in the accuracy and precision of the geological timescale. Yet the secular evolution of the planetary orbits beyond 50 million years ago remains highly uncertain, and the chaotic dynamical nature of the Solar System predicted by theoretical models has yet to be rigorously confirmed by well constrained (radioisotopically calibrated and anchored) geological data. Here we present geological evidence for a chaotic resonance transition associated with interactions between the orbits of Mars and the Earth, using an integrated radioisotopic and astronomical timescale from the Cretaceous Western Interior Basin of what is now North America. This analysis confirms the predicted chaotic dynamical behaviour of the Solar System, and provides a constraint for refining numerical solutions for insolation, which will enable a more precise and accurate geological timescale to be produced.

  15. What is the impact of different VLBI analysis setups of the tropospheric delay on precipitable water vapor trends?

    NASA Astrophysics Data System (ADS)

    Balidakis, Kyriakos; Nilsson, Tobias; Heinkelmann, Robert; Glaser, Susanne; Zus, Florian; Deng, Zhiguo; Schuh, Harald

    2017-04-01

    The quality of the parameters estimated by global navigation satellite systems (GNSS) and very long baseline interferometry (VLBI) is degraded by erroneous meteorological observations applied to model the propagation delay in the electrically neutral atmosphere. For early VLBI sessions with poor geometry, unsuitable constraints imposed on the a priori tropospheric gradients are an additional source of error in VLBI analysis. Therefore, climate change indicators deduced from the geodetic analysis, such as the long-term precipitable water vapor (PWV) trends, are strongly affected. In this contribution we investigate the impact of different modeling and parameterization of the propagation delay in the troposphere on the estimates of long-term PWV trends from geodetic VLBI analysis results. We address the influence of the meteorological data source, and of the a priori non-hydrostatic delays and gradients employed in the VLBI processing, on the estimated PWV trends. In particular, we assess the effect of employing temperature and pressure from (i) homogenized in situ observations, (ii) the model levels of the ERA Interim reanalysis numerical weather model and (iii) our own blind model in the style of GPT2w with enhanced parameterization, calculated using the latter data set. Furthermore, we utilize non-hydrostatic delays and gradients estimated from (i) a GNSS reprocessing at GeoForschungsZentrum Potsdam, rigorously considering tropospheric ties, and (ii) direct ray-tracing through ERA Interim, as additional observations. To evaluate the above, the least-squares module of the VieVS@GFZ VLBI software was appropriately modified. Additionally, we study the noise characteristics of the non-hydrostatic delays and gradients estimated from our VLBI and GNSS analyses as well as from ray-tracing. We have modified the Theil-Sen estimator appropriately to robustly deduce PWV trends from VLBI, GNSS, ray-tracing and direct numerical integration in ERA Interim.
We disseminate all our solutions in the latest Tropo-SINEX format.
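    The (unmodified) Theil-Sen estimator underlying the trend analysis is simple to state: the trend is the median of all pairwise slopes, which makes it robust to gross outliers in the series. A minimal sketch (Python; synthetic data, not the authors' modified estimator):

```python
import statistics

def theil_sen_slope(t, y):
    """Theil-Sen estimator: median of all pairwise slopes."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i in range(len(t))
              for j in range(i + 1, len(t))
              if t[j] != t[i]]
    return statistics.median(slopes)

# Synthetic PWV-like series [mm]: trend of 0.02 mm/yr plus one gross
# outlier, which would drag an ordinary least-squares fit.
t = list(range(30))                  # years
y = [5.0 + 0.02 * ti for ti in t]
y[7] = 25.0                          # spurious observation

print(round(theil_sen_slope(t, y), 3))   # prints 0.02
```

    A least-squares slope on the same data would be visibly biased by the single outlier; the median of pairwise slopes is unaffected.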

  16. A planning algorithm for quantifying decentralised water management opportunities in urban environments.

    PubMed

    Bach, Peter M; McCarthy, David T; Urich, Christian; Sitzenfrei, Robert; Kleidorfer, Manfred; Rauch, Wolfgang; Deletic, Ana

    2013-01-01

    With global change bringing about greater challenges for the resilient planning and management of urban water infrastructure, research has been invested in the development of a strategic planning tool, DAnCE4Water. The tool models how urban and societal changes impact the development of centralised and decentralised (distributed) water infrastructure. An algorithm for rigorous assessment of suitable decentralised stormwater management options in the model is presented and tested on a local Melbourne catchment. Following detailed spatial representation algorithms (defined by planning rules), the model assesses numerous stormwater options to meet water quality targets at a variety of spatial scales. A multi-criteria assessment algorithm is used to find top-ranking solutions (which meet a specific treatment performance for a user-defined percentage of catchment imperviousness). A toolbox of five stormwater technologies (infiltration systems, surface wetlands, bioretention systems, ponds and swales) is featured. Parameters that set the algorithm's flexibility to develop possible management options are assessed and evaluated. Results are expressed in terms of 'utilisation', which characterises the frequency of use of different technologies across the top-ranking options (bioretention being the most versatile). Initial results highlight the importance of selecting a suitable spatial resolution and providing the model with enough flexibility for coming up with different technology combinations. The generic nature of the model enables its application to other urban areas (e.g. different catchments, local municipal regions or entire cities).
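    The multi-criteria ranking step can be illustrated with a toy weighted-score aggregation (Python; hypothetical criteria, weights and normalized scores for illustration only, not the DAnCE4Water scoring scheme):

```python
# Toy multi-criteria ranking: hypothetical normalized scores per
# criterion and hypothetical weights (not the DAnCE4Water scheme).
options = {
    "bioretention": {"treatment": 0.9, "cost": 0.5, "footprint": 0.7},
    "wetland":      {"treatment": 0.8, "cost": 0.7, "footprint": 0.3},
    "infiltration": {"treatment": 0.6, "cost": 0.8, "footprint": 0.8},
    "pond":         {"treatment": 0.7, "cost": 0.9, "footprint": 0.4},
    "swale":        {"treatment": 0.5, "cost": 0.9, "footprint": 0.9},
}
weights = {"treatment": 0.5, "cost": 0.3, "footprint": 0.2}

scores = {name: sum(weights[c] * vals[c] for c in weights)
          for name, vals in options.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])   # bioretention scores highest with these weights
```

    Counting how often each technology appears among top-ranking combinations across many catchment configurations is what the abstract calls 'utilisation'.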

  17. A topological proof of chaos for two nonlinear heterogeneous triopoly game models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pireddu, Marina, E-mail: marina.pireddu@unimib.it

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As for the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called “Stretching Along the Paths” technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows us to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among them a positive topological entropy.
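    The chaos conclusion rests on a standard entropy argument: a continuous surjection that semi-conjugates the map to the Bernoulli shift on two symbols forces positive topological entropy (standard dynamical-systems facts, not specific to this paper):

```latex
\pi \circ T = \sigma \circ \pi
\quad\Longrightarrow\quad
h_{\mathrm{top}}(T) \;\ge\; h_{\mathrm{top}}(\sigma) \;=\; \log 2 \;>\; 0,
```

    since the topological entropy of a factor never exceeds that of the original system.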

  18. Equations of Interdoublet Separation during Flagella Motion Reveal Mechanisms of Wave Propagation and Instability

    PubMed Central

    Bayly, Philip V.; Wilson, Kate S.

    2014-01-01

    The motion of flagella and cilia arises from the coordinated activity of dynein motor protein molecules arrayed along microtubule doublets that span the length of the axoneme (the flagellar cytoskeleton). Dynein activity causes relative sliding between the doublets, which generates propulsive bending of the flagellum. The mechanism of dynein coordination remains incompletely understood, although it has been the focus of many studies, both theoretical and experimental. In one leading hypothesis, known as the geometric clutch (GC) model, local dynein activity is thought to be controlled by interdoublet separation. The GC model has been implemented as a numerical simulation in which the behavior of a discrete set of rigid links in viscous fluid, driven by active elements, was approximated using a simplified time-marching scheme. A continuum mechanical model and associated partial differential equations of the GC model have remained lacking. Such equations would provide insight into the underlying biophysics, enable mathematical analysis of the behavior, and facilitate rigorous comparison to other models. In this article, the equations of motion for the flagellum and its doublets are derived from mechanical equilibrium principles and simple constitutive models. These equations are analyzed to reveal mechanisms of wave propagation and instability in the GC model. With parameter values in the range expected for Chlamydomonas flagella, solutions to the fully nonlinear equations closely resemble observed waveforms. These results support the ability of the GC hypothesis to explain dynein coordination in flagella and provide a mathematical foundation for comparison to other leading models. PMID:25296329

  19. A Biome map for Modelling Global Mid-Pliocene Climate Change

    NASA Astrophysics Data System (ADS)

    Salzmann, U.; Haywood, A. M.

    2006-12-01

    The importance of vegetation-climate feedbacks was highlighted by several paleo-climate modelling exercises, but their role as a boundary condition in Tertiary modelling has not been fully recognised or explored. Several paleo-vegetation datasets and maps have been produced for specific time slabs or regions for the Tertiary, but the vegetation classifications that have been used differ, thus making meaningful comparisons difficult. In order to facilitate further investigations into Tertiary climate and environmental change we are presently implementing the comprehensive GIS database TEVIS (Tertiary Environment and Vegetation Information System). TEVIS integrates marine and terrestrial vegetation data, taken from fossil pollen, leaf or wood, into an internally consistent classification scheme to produce for different time slabs global Tertiary Biome and Mega-Biome maps (Harrison & Prentice, 2003). Within the framework of our ongoing 5-year programme, we present a first global vegetation map for the mid-Pliocene time slab, a period of sustained global warmth. Data were synthesised from the PRISM data set (Thompson and Fleming 1996) after translating them to the Biome classification scheme and from new literature. The outcomes of the Biome map are compared with modelling results using an advanced numerical general circulation model (HadAM3) and the BIOME 4 vegetation model. Our combined proxy data and modelling approach will provide new palaeoclimate datasets to test models that are used to predict future climate change, and provide a more rigorous picture of climate and environmental changes during the Neogene.

  20. Numerical study of wave propagation around an underground cavity: acoustic case

    NASA Astrophysics Data System (ADS)

    Esterhazy, Sofi; Perugia, Ilaria; Schöberl, Joachim; Bokelmann, Götz

    2015-04-01

    Motivated by the need to detect an underground cavity, such as might be caused by a nuclear explosion or weapon test, within the procedure of an On-Site-Inspection (OSI) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO), we aim to provide a basic numerical study of the wave propagation around and inside such an underground cavity. The aim of the CTBTO is to ban all nuclear explosions of any size anywhere, by anyone. Therefore, it is essential to build a powerful strategy to efficiently investigate and detect critical signatures such as gas filled cavities, rubble zones and fracture networks below the surface. One method to investigate the geophysical properties of an underground cavity allowed by the Comprehensive Nuclear-Test-Ban Treaty is referred to as 'resonance seismometry' - a resonance method that uses passive or active seismic techniques, relying on seismic cavity vibrations. This method is in fact not yet entirely specified by the Treaty, and there are also only a few experimental examples that have been suitably documented to build a proper scientific groundwork. This motivates us to investigate this problem on a purely numerical level and to simulate these events based on recent advances in the mathematical understanding of the underlying physical phenomena. Here, we focus our numerical study on the propagation of P-waves in two dimensions. An extension to three dimensions as well as an inclusion of the full elastic wave field is planned to follow. For the numerical simulations of wave propagation we use a high order finite element discretization which has the significant advantage that it can be extended easily from simple toy designs to complex and irregularly shaped geometries without excessive effort. Our computations are done with the parallel Finite Element Library NGSOLVE on top of the automatic 2D/3D tetrahedral mesh generator NETGEN (http://sourceforge.net/projects/ngsolve/).
Using the basic mathematical understanding of the physical equations and the numerical algorithms it is possible for us to investigate the wave field over a large bandwidth of wave numbers. This means we can apply our calculations for a wide range of parameters, while keeping the numerical error explicitly under control. The accurate numerical modeling can facilitate the development of proper analysis techniques to detect the remnants of an underground nuclear test, help to set a rigorous scientific base of OSI and contribute to bringing the Treaty into force.

  1. Generalized Cahn-Hilliard equation for solutions with drastically different diffusion coefficients. Application to exsolution in ternary feldspar

    NASA Astrophysics Data System (ADS)

    Petrishcheva, E.; Abart, R.

    2012-04-01

    We address mathematical modeling and computer simulations of phase decomposition in a multicomponent system. As opposed to binary alloys with one common diffusion parameter, our main concern is phase decomposition in real geological systems under the influence of strongly different interdiffusion coefficients, as is frequently encountered in mineral solid solutions with coupled diffusion on different sub-lattices. Our goal is to explain deviations from equilibrium element partitioning which are often observed in nature, e.g., in a cooled ternary feldspar. To this end we first adapt the standard Cahn-Hilliard model to the multicomponent diffusion problem and account for arbitrary diffusion coefficients. This is done using Onsager's approach, such that the flux of each component results from the combined action of the chemical potentials of all components. In a second step the generalized Cahn-Hilliard equation is solved numerically using a finite-element approach. We introduce and investigate several decomposition scenarios that may produce systematic deviations from the equilibrium element partitioning. Both ideal solutions and ternary feldspar are considered. Typically, the slowest component is initially "frozen" and the decomposition effectively takes place only for the two "fast" components. At this stage the deviations from the equilibrium element partitioning are indeed observed. These deviations may become "frozen" under conditions of cooling. The final equilibration of the system occurs on a considerably slower time scale. Therefore the system may indeed remain incompletely equilibrated at the time of observation. Our approach reveals the intrinsic reasons for the specific phase separation path and rigorously describes it by direct numerical solution of the generalized Cahn-Hilliard equation.
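    As a much-reduced illustration of the numerics involved, the sketch below integrates the classical binary Cahn-Hilliard equation in 1D with an explicit scheme (Python with NumPy; illustrative parameters, constant mobility, periodic boundaries, not the generalized multicomponent model of the paper):

```python
import numpy as np

# 1D Cahn-Hilliard: u_t = M * lap(u**3 - u - gamma * lap(u)),
# explicit Euler in time, periodic boundaries. Illustrative parameters.
N, L = 128, 1.0
dx = L / N
gamma, M, dt = 1e-4, 1.0, 5e-7
rng = np.random.default_rng(1)
u = 0.02 * rng.standard_normal(N)   # small noise about a 50:50 mixture
m0 = u.mean()                       # mean composition is conserved

def lap(v):
    """Second difference with periodic boundaries."""
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2

for _ in range(20000):
    mu = u**3 - u - gamma * lap(u)  # chemical potential
    u = u + dt * M * lap(mu)

# Spinodal decomposition: near-pure phases, conserved mean
print(u.min() < -0.3 < 0.3 < u.max(), abs(u.mean() - m0) < 1e-8)
```

    The conserved mean composition and the emergence of near-pure phases carry over to the multicomponent setting, where strongly unequal Onsager mobilities then control which components decompose first.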

  2. Modified Mixed Lagrangian-Eulerian Method Based on Numerical Framework of MT3DMS on Cauchy Boundary.

    PubMed

    Suk, Heejun

    2016-07-01

    MT3DMS, a modular three-dimensional multispecies transport model, has long been a popular model in the groundwater field for simulating solute transport in the saturated zone. However, the method of characteristics (MOC), modified MOC (MMOC), and hybrid MOC (HMOC) included in MT3DMS did not treat Cauchy boundary conditions in a straightforward or rigorous manner, from a mathematical point of view. The MOC, MMOC, and HMOC regard the Cauchy boundary as a source condition. For the source, MOC, MMOC, and HMOC calculate the Lagrangian concentration by setting it equal to the cell concentration at an old time level. However, the above calculation is an approximate method because it does not involve backward tracking in MMOC and HMOC or allow performing forward tracking at the source cell in MOC. To circumvent this problem, a new scheme is proposed that avoids direct calculation of the Lagrangian concentration on the Cauchy boundary. The proposed method combines the numerical formulations of two different schemes, the finite element method (FEM) and the Eulerian-Lagrangian method (ELM), into one global matrix equation. This study demonstrates the limitation of all MT3DMS schemes, including MOC, MMOC, HMOC, and a third-order total-variation-diminishing (TVD) scheme under Cauchy boundary conditions. By contrast, the proposed method always shows good agreement with the exact solution, regardless of the flow conditions. Finally, the successful application of the proposed method sheds light on the possible flexibility and capability of the MT3DMS to deal with the mass transport problems of all flow regimes. © 2016, National Ground Water Association.
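    The kind of verification used for such schemes can be sketched on the simplest member of the family: 1D advection-dispersion with a constant-concentration inlet, compared against the classical Ogata-Banks solution (Python; illustrative parameters and a plain explicit upwind/central scheme, not any of the MT3DMS schemes):

```python
import numpy as np
from math import erfc, exp, sqrt

# 1D advection-dispersion, constant-concentration inlet at x = 0:
# explicit upwind advection + central dispersion, checked against
# the classical Ogata-Banks analytical solution.
v, D, T_end = 1.0, 0.1, 0.5          # velocity, dispersion coeff., end time
N, L = 400, 2.0
dx = L / N
dt = 0.2 * min(dx / v, dx * dx / (2 * D))
steps = int(T_end / dt)

c = np.zeros(N)
c[0] = 1.0                           # Dirichlet inlet condition
for _ in range(steps):
    adv = -v * (c[1:-1] - c[:-2]) / dx
    disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (adv + disp)
    c[0] = 1.0

def ogata_banks(x, t):
    a = erfc((x - v * t) / (2 * sqrt(D * t)))
    b = exp(v * x / D) * erfc((x + v * t) / (2 * sqrt(D * t)))
    return 0.5 * (a + b)

x_mid = 0.5
i_mid = int(x_mid / dx)
err = abs(c[i_mid] - ogata_banks(x_mid, steps * dt))
print(err < 0.05)
```

    The small residual reflects the numerical dispersion of the first-order upwind step, the kind of error the higher-order TVD and Eulerian-Lagrangian schemes are designed to reduce.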

  3. On the structure of existence regions for sinks of the Hénon map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galias, Zbigniew, E-mail: galias@agh.edu.pl; Tucker, Warwick, E-mail: warwick@math.uu.se

    2014-03-15

    An extensive search for stable periodic orbits (sinks) for the Hénon map in a small neighborhood of the classical parameter values is carried out. Several parameter values which generate a sink are found and verified by rigorous numerical computations. Each found parameter value is extended to a larger region of existence using a simplex continuation method. The structure of these regions of existence is investigated. This study shows that for the Hénon map, there exist sinks close to the classical case.
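    A non-rigorous version of the sink search is easy to sketch: iterate past the transient, detect a short period, and check that the product of Jacobians along the cycle is contracting. The code below (Python with NumPy; plain floating point and illustrative parameter values away from the classical ones, not the paper's interval-arithmetic verification) finds the period-1 and period-2 sinks of the Hénon map at small a and finds no low-period sink at the classical parameters:

```python
import numpy as np

def henon(p, a, b):
    x, y = p
    return np.array([1 - a * x * x + y, b * x])

def jac(p, a, b):
    x, _ = p
    return np.array([[-2 * a * x, 1.0], [b, 0.0]])

def find_sink(a, b, max_period=32, n_transient=10000, tol=1e-10):
    """Iterate past the transient, look for a short period, and check
    that the cycle Jacobian is contracting (spectral radius < 1)."""
    p = np.array([0.1, 0.1])
    for _ in range(n_transient):
        p = henon(p, a, b)
    orbit = [p]
    for _ in range(max_period):
        p = henon(p, a, b)
        if np.linalg.norm(p - orbit[0]) < tol:
            J = np.eye(2)
            for q in orbit:
                J = jac(q, a, b) @ J
            rho = np.max(np.abs(np.linalg.eigvals(J)))
            return len(orbit), rho
        orbit.append(p)
    return None          # no low-period sink detected

print(find_sink(0.3, 0.3))   # stable fixed point: (1, rho < 1)
print(find_sink(0.5, 0.3))   # period-2 sink:      (2, rho < 1)
print(find_sink(1.4, 0.3))   # classical parameters: None
```

    The rigorous version replaces floating-point iteration with interval arithmetic, so that the contraction of the cycle Jacobian becomes a proof rather than a numerical observation.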

  4. Space Shuttle Abort Evolution

    NASA Technical Reports Server (NTRS)

    Henderson, Edward M.; Nguyen, Tri X.

    2011-01-01

    This paper documents some of the evolutionary steps in developing a rigorous Space Shuttle launch abort capability. The paper addresses the abort strategy during the design and development and how it evolved during Shuttle flight operations. The Space Shuttle Program made numerous adjustments in both the flight hardware and software as the knowledge of the actual flight environment grew. When failures occurred, corrections and improvements were made to avoid a reoccurrence and to provide added capability for crew survival. Finally some lessons learned are summarized for future human launch vehicle designers to consider.

  5. Inversion of very large matrices encountered in large scale problems of photogrammetry and photographic astrometry

    NASA Technical Reports Server (NTRS)

    Brown, D. C.

    1971-01-01

    The simultaneous adjustment of very large nets of overlapping plates covering the celestial sphere becomes computationally feasible by virtue of a twofold process that generates a system of normal equations having a bordered-banded coefficient matrix, and solves such a system in a highly efficient manner. Numerical results suggest that when a well constructed spherical net is subjected to a rigorous, simultaneous adjustment, the use of independently established control points is required neither for determinacy nor for the production of accurate results.
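    The bordered-banded structure can be exploited by block (Schur-complement) elimination: solve against the large banded block, then against a small dense border. A sketch (Python with NumPy; random synthetic matrices, with the banded solves written as dense np.linalg.solve calls for brevity where a banded Cholesky would be used in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 5, 3                    # banded size, border size, bandwidth

band = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - k), min(n, i + k + 1)):
        band[i, j] = rng.standard_normal()
B = band @ band.T + n * np.eye(n)      # SPD block, bandwidth 2k
C = rng.standard_normal((n, m))        # border coupling
D = rng.standard_normal((m, m))
D = D @ D.T + m * np.eye(m)            # small SPD border block
b1, b2 = rng.standard_normal(n), rng.standard_normal(m)

# Block elimination: only banded solves plus a small m-by-m system
BinvC = np.linalg.solve(B, C)          # in practice: banded Cholesky solves
Binvb1 = np.linalg.solve(B, b1)
S = D - C.T @ BinvC                    # Schur complement
x2 = np.linalg.solve(S, b2 - C.T @ Binvb1)
x1 = Binvb1 - BinvC @ x2

# Check against the assembled dense system
A = np.block([[B, C], [C.T, D]])
x = np.linalg.solve(A, np.concatenate([b1, b2]))
print(np.allclose(np.concatenate([x1, x2]), x))
```

    Because the large block is never factored densely, storage and work scale with the bandwidth rather than with the full matrix dimension, which is what makes very large plate nets tractable.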

  6. Modeling of profilometry with laser focus sensors

    NASA Astrophysics Data System (ADS)

    Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner

    2011-05-01

    Metrology is of paramount importance in submicron patterning. In particular, line width and overlay have to be measured very accurately. Appropriate metrology techniques are scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate and enables optical cross sections of layer stacks, but it requires periodic patterns. Scanning laser focus sensors are a viable alternative, enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit, which determines the edge location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge location accuracy.

  7. The Thin Oil Film Equation

    NASA Technical Reports Server (NTRS)

    Brown, James L.; Naughton, Jonathan W.

    1999-01-01

    A thin film of oil on a surface responds primarily to the wall shear stress generated on that surface by a three-dimensional flow. The oil film is also subject to wall pressure gradients, surface tension effects and gravity. The partial differential equation governing the oil film flow is shown to be related to Burgers' equation. Analytical and numerical methods for solving the thin oil film equation are presented. A direct numerical solver is developed where the wall shear stress variation on the surface is known and which solves for the spatial and time variation of the oil film thickness on the surface. An inverse numerical solver is also developed where the oil film thickness spatial variation over the surface at two discrete times is known and which solves for the wall shear stress variation over the test surface. A One-Time-Level inverse solver is also demonstrated. The inverse numerical solver provides a mathematically rigorous basis for an improved form of a wall shear stress instrument suitable for application to complex three-dimensional flows. To demonstrate the complexity of flows for which these oil film methods are now suitable, these analytical and numerical methods are examined extensively as applied to a thin oil film in the vicinity of a three-dimensional saddle of separation.
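    In one dimension with constant shear stress and with the pressure-gradient, gravity and surface-tension terms dropped, the thin-oil-film equation reduces to the Burgers-like conservation law h_t + (tau h^2 / (2 mu))_x = 0, which can be integrated with a conservative upwind scheme. A sketch (Python with NumPy; illustrative parameters, not the paper's direct or inverse solvers):

```python
import numpy as np

# h_t + d/dx( tau * h**2 / (2 * mu) ) = 0   (constant wall shear tau)
N, Lx = 200, 0.1                 # cells, domain length [m]
dx = Lx / N
mu, tau = 0.05, 2.0              # oil viscosity [Pa s], wall shear [Pa]
x = (np.arange(N) + 0.5) * dx
h = 1e-5 * np.exp(-((x - 0.03) / 0.01) ** 2)   # initial oil patch [m]
m0 = h.sum() * dx                # oil volume per unit span

c_max = tau * h.max() / mu       # maximum wave speed dq/dh
dt = 0.4 * dx / c_max            # CFL condition
for _ in range(2000):
    q = tau * h**2 / (2 * mu)    # flux, increasing in h: upwind from left
    h = h - (dt / dx) * (q - np.roll(q, 1))    # periodic, conservative

print(abs(h.sum() * dx - m0) < 1e-12, (h >= 0).all())
```

    The conservative form keeps the oil volume constant to round-off and the film thickness non-negative, the qualitative behavior a direct solver must reproduce before the inverse (shear-from-thickness) problem can be posed.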

  8. A 3D Numerical Survey of Seismic Waves Inside and Around an Underground Cavity

    NASA Astrophysics Data System (ADS)

    Esterhazy, S.; Schneider, F. M.; Perugia, I.; Bokelmann, G.

    2016-12-01

    Motivated by the need to detect an underground cavity, such as might be caused by a nuclear explosion or weapon test, within the procedure of an On-Site-Inspection (OSI) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO), we present our findings of a numerical study on elastic wave propagation inside and around such an underground cavity. The aim of the CTBTO is to ban all nuclear explosions of any size anywhere, by anyone. Therefore, it is essential to build a powerful strategy to efficiently investigate and detect critical signatures such as gas filled cavities, rubble zones and fracture networks below the surface. One method to investigate the geophysical properties of an underground cavity allowed by the Comprehensive Nuclear-Test-Ban Treaty is referred to as "resonance seismometry" - a resonance method that uses passive or active seismic techniques, relying on seismic cavity vibrations. This method is in fact not yet entirely specified by the Treaty, and there are also only a few experimental examples that have been suitably documented to build a proper scientific groundwork. This motivates us to investigate this problem on a purely numerical level and to simulate these events based on recent advances in the mathematical understanding of the underlying physical phenomena. Our numerical study includes the full elastic wave field in three dimensions. We consider the effects of an incoming plane wave as well as of a point source located near the cavity at the surface. While the former can be considered a passive source, like a tele-seismic earthquake, the latter represents a man-made explosion or a vibroseis source as used in active seismic techniques. For our simulations in 3D we use the discontinuous Galerkin Spectral Element Code SPEED developed by MOX (The Laboratory for Modeling and Scientific Computing, Department of Mathematics) and DICA (Department of Civil and Environmental Engineering) at the Politecnico di Milano. The computations are carried out on the Vienna Scientific Cluster (VSC). The accurate numerical modeling can facilitate the development of proper analysis techniques to detect the remnants of an underground nuclear test, help to set a rigorous scientific base of OSI and contribute to bringing the Treaty into force.

  9. A composite numerical model for assessing subsurface transport of oily wastes and chemical constituents

    NASA Astrophysics Data System (ADS)

    Panday, S.; Wu, Y. S.; Huyakorn, P. S.; Wade, S. C.; Saleem, Z. A.

    1997-02-01

    Subsurface fate and transport models are utilized to predict concentrations of chemicals leaching from wastes into downgradient receptor wells. The contaminant concentrations in groundwater provide a measure of the risk to human health and the environment. The level of potential risk is currently used by the U.S. Environmental Protection Agency to determine whether management of the wastes should conform to hazardous waste management standards. It is important that the transport and fate of contaminants are simulated realistically. Most models in common use are inappropriate for simulating the migration of wastes containing significant fractions of nonaqueous-phase liquids (NAPLs). The migration of NAPL and its dissolved constituents may not be reliably predicted using conventional aqueous-phase transport simulations. To overcome this deficiency, an efficient and robust regulatory assessment model incorporating multiphase flow and transport in the unsaturated and saturated zones of the subsurface environment has been developed. The proposed composite model takes into account all of the major transport processes including infiltration and ambient flow of NAPL, entrapment of residual NAPL, adsorption, volatilization, degradation, dissolution of chemical constituents, and transport by advection and hydrodynamic dispersion. Conceptually, the subsurface is treated as a composite unsaturated zone-saturated zone system. The composite simulator consists of three major interconnected computational modules representing the following components of the migration pathway: (1) vertical multiphase flow and transport in the unsaturated zone; (2) areal movement of the free-product lens in the saturated zone with vertical equilibrium; and (3) three-dimensional aqueous-phase transport of dissolved chemicals in ambient groundwater. Such a composite model configuration promotes computational efficiency and robustness (desirable for regulatory assessment applications).
    Two examples are presented to demonstrate the model verification and a site application. Simulation results obtained using the composite modeling approach are compared with a rigorous numerical solution and field observations of crude oil saturations and plume concentrations of total dissolved organic carbon at a spill site in Minnesota, U.S.A. These comparisons demonstrate the ability of the present model to provide a realistic depiction of field-scale situations.

  10. Numerical and Experimental Approaches Toward Understanding Lava Flow Heat Transfer

    NASA Astrophysics Data System (ADS)

    Rumpf, M.; Fagents, S. A.; Hamilton, C.; Crawford, I. A.

    2013-12-01

    We have performed numerical modeling and experimental studies to quantify the heat transfer from a lava flow into an underlying particulate substrate. This project was initially motivated by a desire to understand the transfer of heat from a lava flow into the lunar regolith. Ancient regolith deposits that have been protected by a lava flow may contain ancient solar wind, solar flare, and galactic cosmic ray products that can give insight into the history of our solar system, provided the records were not heated and destroyed by the overlying lava flow. In addition, lava-substrate interaction is an important aspect of lava fluid dynamics that requires consideration in lava emplacement models. Our numerical model determines the depth to which the heat pulse will penetrate beneath a lava flow into the underlying substrate. Rigorous treatment of the temperature dependence of lava and substrate thermal conductivity, specific heat capacity, density, and latent heat release is imperative to an accurate model. Experiments were conducted to verify the numerical model. Experimental containers with interior dimensions of 20 x 20 x 25 cm were constructed from 1 inch thick calcium silicate sheeting. For initial experiments, boxes were packed with lunar regolith simulant (GSC-1) to a depth of 15 cm with thermocouples embedded at regular intervals. Basalt collected at Kilauea Volcano, HI, was melted in a gas forge and poured directly onto the simulant. Initial lava temperatures ranged from ~1200 to 1300 °C. The system was allowed to cool while internal temperatures were monitored by a thermocouple array and external temperatures were monitored by a Forward Looking Infrared (FLIR) video camera. Numerical simulations of the experiments elucidate the details of lava latent heat release and constrain the temperature-dependence of the thermal conductivity of the particulate substrate.
The temperature dependence of the thermal conductivity of particulate material is not well known, especially at high temperatures. It is important to have this property well constrained, as substrate thermal conductivity exerts the greatest influence on the rate of lava-substrate heat transfer. At Kilauea and Mauna Loa Volcanoes, Hawaii, and other volcanoes that threaten communities, lava may erupt over a variety of substrate materials including cool lava flows, volcanic tephra, soils, sand, and concrete. The composition, moisture, organic content, porosity, and grain size of the substrate dictate its thermophysical properties, thus affecting the transfer of heat from the lava flow into the substrate and the flow's mobility. Particulate substrate materials act as insulators, subduing the rate of heat transfer from the flow core. Therefore, lava that flows over a particulate substrate will maintain higher core temperatures over a longer period, enhancing flow mobility and increasing the duration and areal coverage of the resulting flow. Lava flow prediction models should include substrate specification with temperature-dependent material property definitions for an accurate understanding of flow hazards.
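The heat-pulse calculation described in this record can be sketched as an explicit 1D finite-difference solver with a temperature-dependent conductivity. The conductivity law, material constants, and boundary treatment below are illustrative assumptions, not values from the study; in particular, latent heat release is omitted:

```python
import numpy as np

def heat_pulse_depth(T_lava=1500.0, T0=300.0, depth=0.15, nx=61,
                     t_end=3600.0, rho=1800.0, cp=800.0):
    """Explicit 1D conduction of a heat pulse from a hot upper boundary
    (the lava) into a particulate substrate with temperature-dependent
    thermal conductivity k(T). All values are illustrative."""
    # Hypothetical conductivity law: radiative enhancement at high T
    k = lambda T: 0.06 + 1.5e-10 * T**3
    x = np.linspace(0.0, depth, nx)
    dx = x[1] - x[0]
    T = np.full(nx, T0)
    T[0] = T_lava                              # lava/substrate contact
    t = 0.0
    while t < t_end:
        kT = k(T)
        kf = 0.5 * (kT[:-1] + kT[1:])          # conductivity at cell faces
        dt = 0.4 * rho * cp * dx**2 / kT.max() # explicit stability limit
        flux = -kf * np.diff(T) / dx           # Fourier's law at faces
        T[1:-1] += dt * (-np.diff(flux)) / (rho * cp * dx)
        T[0], T[-1] = T_lava, T0               # Dirichlet boundaries
        t += dt
    return x, T

x, T = heat_pulse_depth()
# depth at which the temperature rise stays below 10 K after one hour
penetration = x[np.argmax(T - 300.0 < 10.0)]
```

After one hour the modeled heat pulse penetrates only a few centimeters, consistent with the insulating behavior of particulate substrates described above.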

  11. The origin of spurious solutions in computational electromagnetics

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Wu, Jie; Povinelli, L. A.

    1995-01-01

The origin of spurious solutions in computational electromagnetics, which violate the divergence equations, is deeply rooted in a misconception about the first-order Maxwell's equations and in an incorrect derivation and use of the curl-curl equations. The divergence equations must always be included in the first-order Maxwell's equations to maintain the ellipticity of the system in the space domain and to guarantee the uniqueness of the solution and/or the accuracy of the numerical solutions. The div-curl method and the least-squares method provide a rigorous derivation of the equivalent second-order Maxwell's equations and their boundary conditions. The node-based least-squares finite element method (LSFEM) is recommended for solving the first-order full Maxwell equations directly. Examples of numerical solutions by LSFEM for time-harmonic problems are given to demonstrate that the LSFEM is free of spurious solutions.
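The idea that a least-squares treatment of a first-order system yields a unique, spurious-free discrete solution can be illustrated on a 1D analogue of the time-harmonic curl pair. This sketch minimizes the midpoint residuals of a model first-order system; it is not the paper's 3D node-based LSFEM, and all parameters are illustrative:

```python
import numpy as np

def ls_first_order(omega=2.0, n=201):
    """Least-squares solution of the first-order model system
        u' = omega*v,   v' = -omega*u,   u(0)=0, v(0)=1,
    a 1D analogue of the time-harmonic curl pair. The residuals of both
    equations are minimized together, so the discrete problem has a unique
    solution with no spurious modes (illustrative sketch only)."""
    h = 1.0 / (n - 1)
    A = np.zeros((2 * (n - 1) + 2, 2 * n))   # unknowns: [u_0..u_n-1, v_0..v_n-1]
    b = np.zeros(2 * (n - 1) + 2)
    for j in range(n - 1):
        # residual of u' - omega*v at the midpoint of cell j
        A[2*j, j+1] = 1.0 / h; A[2*j, j] = -1.0 / h
        A[2*j, n+j] = -omega / 2; A[2*j, n+j+1] = -omega / 2
        # residual of v' + omega*u at the midpoint of cell j
        A[2*j+1, n+j+1] = 1.0 / h; A[2*j+1, n+j] = -1.0 / h
        A[2*j+1, j] = omega / 2; A[2*j+1, j+1] = omega / 2
    w = 1.0 / h                  # weight so boundary data is enforced tightly
    A[-2, 0] = w                 # u(0) = 0
    A[-1, n] = w; b[-1] = w      # v(0) = 1
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    x = np.linspace(0.0, 1.0, n)
    return x, sol[:n], sol[n:]

x, u, v = ls_first_order()
err = np.max(np.abs(u - np.sin(2.0 * x)))   # exact solution u = sin(omega*x)
```

The computed solution tracks the exact sine/cosine pair to second-order accuracy, with no oscillatory spurious component.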

  12. A mass-energy preserving Galerkin FEM for the coupled nonlinear fractional Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Zhang, Guoyu; Huang, Chengming; Li, Meng

    2018-04-01

We consider the numerical simulation of the coupled nonlinear space fractional Schrödinger equations. Based on the Galerkin finite element method in space and the Crank-Nicolson (CN) difference method in time, a fully discrete scheme is constructed. Firstly, we focus on a rigorous analysis of conservation laws for the discrete system. The definitions of discrete mass and energy here coincide with the original ones in physics. Then, we prove that the fully discrete system is uniquely solvable. Moreover, we establish unconditional convergence (that is, we complete the error estimates without any mesh-ratio restriction). We derive L²-norm error estimates for the nonlinear equations and L∞-norm error estimates for the linear equations. Finally, some numerical experiments are included, with results in agreement with the theoretical predictions.
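The discrete mass conservation analyzed in this record can be demonstrated on the simplest related problem: Crank-Nicolson applied to the free Schrödinger equation with the classical (non-fractional) Laplacian. Because the CN update is a Cayley transform of a symmetric operator, the discrete mass is conserved to round-off; this is a sketch of the conservation mechanism, not the paper's fractional Galerkin scheme:

```python
import numpy as np

def crank_nicolson_mass(nx=128, nt=200, dt=1e-3):
    """Crank-Nicolson stepping of u_t = i*u_xx on [0,1] with homogeneous
    Dirichlet walls. The update (I - i*dt/2*L)^{-1}(I + i*dt/2*L) is unitary
    for symmetric L, so the discrete mass ||u||^2 is conserved exactly."""
    dx = 1.0 / (nx + 1)
    x = np.linspace(dx, 1.0 - dx, nx)
    # symmetric second-difference Laplacian
    L = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
         + np.diag(np.ones(nx - 1), -1)) / dx**2
    A = np.eye(nx) - 0.5j * dt * L       # left-hand CN matrix
    B = np.eye(nx) + 0.5j * dt * L       # right-hand CN matrix
    u = np.exp(-100.0 * (x - 0.5) ** 2).astype(complex)   # Gaussian pulse
    mass0 = dx * np.vdot(u, u).real      # discrete mass at t = 0
    for _ in range(nt):
        u = np.linalg.solve(A, B @ u)
    return mass0, dx * np.vdot(u, u).real

m0, m = crank_nicolson_mass()
```

The relative drift in the discrete mass after 200 steps is at the level of floating-point round-off, mirroring the exact conservation proved in the paper.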

  13. Expressions of the fundamental equation of gradient elution and a numerical solution of these equations under any gradient profile.

    PubMed

    Nikitas, P; Pappa-Louisi, A

    2005-09-01

The original work carried out by Freiling and Drake in gradient liquid chromatography is rewritten in the current language of reversed-phase liquid chromatography. This allows for the rigorous derivation of the fundamental equation of gradient elution and the development of two alternative expressions of this equation, one of which is free from the constraint that the holdup time must be constant. In addition, the above derivation results in a very simple numerical solution of the various equations of gradient elution under any gradient profile. The theory was tested using eight catechol-related solutes in mobile phases modified with methanol, acetonitrile, or 2-propanol. It was found to give satisfactory predictions of solute gradient retention behavior even when a simple linear description of the isocratic elution of these solutes was used.
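A simple numerical solution of the fundamental equation of gradient elution can be sketched by stepping the retention integral until it reaches the holdup time. The sketch below assumes a linear-solvent-strength retention model and a linear gradient, and it ignores the gradient delay and holdup-time variation treated in the paper; all parameter values are hypothetical:

```python
def gradient_retention_time(t0=1.0, logkw=2.5, S=5.0,
                            phi0=0.2, slope=0.01, dt=1e-4):
    """Solve the fundamental equation of gradient elution,
        integral_0^{tR - t0} dt / k(phi(t)) = t0,
    by a forward sum. Assumes log10 k = log10 kw - S*phi (linear solvent
    strength) and a linear gradient phi(t) = phi0 + slope*t; gradient
    delay is ignored. All parameters are hypothetical."""
    integral, t = 0.0, 0.0
    while integral < t0:
        phi = phi0 + slope * t          # composition reaching the solute
        k = 10.0 ** (logkw - S * phi)   # instantaneous retention factor
        integral += dt / k
        t += dt
    return t + t0                       # gradient retention time tR

tR = gradient_retention_time()
```

For these parameters the integral can also be evaluated in closed form, giving tR ≈ 14.34, which the forward sum reproduces; for an arbitrary (non-analytic) gradient profile only the numerical sum is available, which is the point of the paper's approach.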

  14. Agricultural model intercomparison and improvement project: Overview of model intercomparisons

    USDA-ARS?s Scientific Manuscript database

Improvement of crop simulation models to better estimate growth and yield is one of the objectives of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The overall goal of AgMIP is to provide an assessment of crop models through rigorous intercomparisons and evaluate future clim...

  15. Modeling elasto-viscoplasticity in a consistent phase field framework

    DOE PAGES

    Cheng, Tian -Le; Wen, You -Hai; Hawk, Jeffrey A.

    2017-05-19

Existing continuum-level phase field plasticity theories seek to solve for the plastic strain by minimizing the shear strain energy. Rigorously speaking, however, thermodynamic consistency requires minimizing the total strain energy unless it can be proven that the hydrostatic strain energy is independent of the plastic strain, and no such proof is available. In this work, we extend the phase-field microelasticity theory of Khachaturyan et al. by minimizing the total elastic energy under the constraint that the plastic strain is incompressible. We show that the flow rules derived from the Ginzburg-Landau type kinetic equation can be in line with Odqvist's law for viscoplasticity and Prandtl-Reuss theory. Free surfaces (external surfaces or internal cracks/voids) are treated in the model. Deformation caused by a misfitting spherical precipitate in an elasto-plastic matrix is studied by large-scale three-dimensional simulations in four different regimes in terms of the matrix: (a) elasto-perfectly-plastic, (b) elastoplastic with linear hardening, (c) elastoplastic with power-law hardening, and (d) elasto-perfectly-plastic with a free surface. The results are compared with analytical/numerical solutions of Lee et al. for (a)-(c) and with an analytical solution derived in this work for (d). Additionally, the J integral of a fixed crack is calculated in the phase-field model and discussed in the context of fracture mechanics.
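The Odqvist-type viscoplastic flow rule mentioned in this record can be illustrated in its simplest setting: uniaxial stress relaxation at fixed total strain. This is a minimal 1D analogue of the Odqvist/Prandtl-Reuss flow rules, not the paper's phase-field microelasticity model; units and constants are illustrative:

```python
import numpy as np

def relax_stress(E=200e3, eps_total=0.01, A=1e-13, n=3.0,
                 dt=1.0, nsteps=5000):
    """Uniaxial stress relaxation under the power-law viscoplastic flow
    rule  d(eps_p)/dt = A * |sigma|^n * sign(sigma)  at fixed total
    strain (stress in MPa, time units arbitrary; all values illustrative).
    The stress relaxes monotonically as plastic strain accumulates."""
    eps_p = 0.0
    history = []
    for _ in range(nsteps):
        sigma = E * (eps_total - eps_p)                 # elastic stress
        eps_p += dt * A * abs(sigma) ** n * np.sign(sigma)
        history.append(sigma)
    return np.array(history)

sigma = relax_stress()
```

For n = 3 the relaxation has the closed form sigma(t) = sigma0 / sqrt(1 + 2*E*A*sigma0^2*t), which the forward-Euler sweep tracks: the stress decays from 2000 MPa toward zero without oscillation.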

  16. Modeling elasto-viscoplasticity in a consistent phase field framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Tian -Le; Wen, You -Hai; Hawk, Jeffrey A.

Existing continuum-level phase field plasticity theories seek to solve for the plastic strain by minimizing the shear strain energy. Rigorously speaking, however, thermodynamic consistency requires minimizing the total strain energy unless it can be proven that the hydrostatic strain energy is independent of the plastic strain, and no such proof is available. In this work, we extend the phase-field microelasticity theory of Khachaturyan et al. by minimizing the total elastic energy under the constraint that the plastic strain is incompressible. We show that the flow rules derived from the Ginzburg-Landau type kinetic equation can be in line with Odqvist's law for viscoplasticity and Prandtl-Reuss theory. Free surfaces (external surfaces or internal cracks/voids) are treated in the model. Deformation caused by a misfitting spherical precipitate in an elasto-plastic matrix is studied by large-scale three-dimensional simulations in four different regimes in terms of the matrix: (a) elasto-perfectly-plastic, (b) elastoplastic with linear hardening, (c) elastoplastic with power-law hardening, and (d) elasto-perfectly-plastic with a free surface. The results are compared with analytical/numerical solutions of Lee et al. for (a)-(c) and with an analytical solution derived in this work for (d). Additionally, the J integral of a fixed crack is calculated in the phase-field model and discussed in the context of fracture mechanics.

  17. Effects of non-condensable gas on the dynamic oscillations of cavitation bubbles

    NASA Astrophysics Data System (ADS)

    Zhang, Yuning

    2016-11-01

Cavitation is an essential topic of multiphase flow with a broad range of applications. Generally, non-condensable gas exists in the liquid, so a complex vapor/gas mixture bubble will be formed. A rigorous prediction of the dynamic behavior of this mixture bubble is essential for the development of a complete cavitation model. In the present paper, the effects of non-condensable gas on the dynamic oscillations of the vapor/gas mixture bubble are numerically investigated in great detail. For completeness, a large parameter zone (e.g. bubble radius, frequency, and the ratio between gas and vapor) is investigated with many demonstrative examples. The mechanisms of mass diffusion are categorized into different groups, with their characteristics and the regions they dominate identified. Influences of non-condensable gas on wave propagation (e.g. wave speed and attenuation) in bubbly liquids are also briefly discussed. Specifically, the minimum wave speed is quantitatively predicted in order to close the pressure-density coupling relationship usually employed in cavitation modelling. Finally, the application of the present findings to the development of a cavitation model is demonstrated, with a brief discussion of their influence on the cavitation dynamics. This work was financially supported by the National Natural Science Foundation of China (Project No.: 51506051).
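The oscillation of a vapor/gas mixture bubble can be sketched with the classical Rayleigh-Plesset equation, treating the non-condensable gas as a polytropic component. This is a textbook sketch of the dynamics the record discusses, not the paper's full mass-diffusion model; all parameters are illustrative values for water at room conditions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def bubble_oscillation(R0=1e-5, perturb=1.1, t_end=2e-5,
                       p0=101325.0, pv=2339.0, rho=998.0,
                       sigma=0.0725, mu=1e-3, kappa=1.4):
    """Free oscillation of a vapor/gas bubble from the Rayleigh-Plesset
    equation, with the non-condensable gas following a polytropic law
    p_g = p_g0*(R0/R)^(3*kappa). Released from rest at a 10% radius
    perturbation; water properties, illustrative only."""
    pg0 = p0 - pv + 2.0 * sigma / R0   # gas pressure for equilibrium at R0
    def rhs(t, y):
        R, Rdot = y
        pB = pv + pg0 * (R0 / R) ** (3.0 * kappa)   # pressure inside bubble
        Rddot = ((pB - p0 - 2.0 * sigma / R - 4.0 * mu * Rdot / R) / rho
                 - 1.5 * Rdot ** 2) / R
        return [Rdot, Rddot]
    return solve_ivp(rhs, (0.0, t_end), [perturb * R0, 0.0],
                     rtol=1e-9, atol=1e-12)

sol = bubble_oscillation()
R = sol.y[0]
```

The bubble oscillates around its equilibrium radius near the Minnaert frequency (a few hundred kHz at this size), with viscous damping slowly shrinking the amplitude; the gas content sets both the stiffness and the equilibrium of the mixture bubble.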

  18. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI.

    PubMed

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-21

The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by the inclusion of medical implants within the body. The worst-case RF-heating scenario is reached when the local power deposition in the at-risk tissue region (generally in the vicinity of the implant electrodes) attains its maximum value while the MRI exposure remains compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on maximizing the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may underestimate the worst-case RF-heating scenario when there is a large number of RF transmit channels and multiple SAR or power constraints must be satisfied. Finally, this work derives a rigorous SAR-based formulation that estimates a preferable worst-case scenario by solving a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil, for a patient with a deep-brain stimulator under a head imaging exposure, are provided as illustrative examples.
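The underestimation mechanism can be shown in a toy two-channel example: the generalized-eigenvalue (ratio of Hermitian forms) candidate, rescaled to meet every constraint, falls short of the true constrained maximum. Here Q models local deposition near the implant and S1, S2 are two quadratic exposure constraints; all matrices are hypothetical illustrations, not MRI field data:

```python
import numpy as np
from scipy.linalg import eigh

def worst_case_demo():
    """Compare the common generalized-eigenvalue estimate of worst-case
    deposition against a brute-force search over excitation directions,
    under two simultaneous quadratic constraints (toy 2-channel case)."""
    Q = np.diag([3.0, 1.0])
    constraints = [np.diag([4.0, 1.0]), np.diag([1.0, 4.0])]
    # Common approach: one generalized eigenproblem per constraint,
    # each candidate excitation rescaled until every constraint is met.
    common = 0.0
    for S in constraints:
        vals, vecs = eigh(Q, S)        # max of x@Q@x / x@S@x
        x = vecs[:, -1]                # largest-eigenvalue direction
        x = x / max(x @ C @ x for C in constraints) ** 0.5
        common = max(common, x @ Q @ x)
    # True worst case: sweep directions, scale each to the tightest
    # constraint boundary, and take the maximum deposition.
    theta = np.linspace(0.0, np.pi, 20001)
    X = np.vstack([np.cos(theta), np.sin(theta)])
    g = [np.einsum('ik,ij,jk->k', X, C, X) for C in constraints]
    q = np.einsum('ik,ij,jk->k', X, Q, X)
    worst = np.max(q / np.maximum(*g))
    return common, worst

common, worst = worst_case_demo()
```

In this example the eigenvalue-based estimate gives 0.75 while the true constrained maximum is 0.80: the worst-case excitation sits where both constraints are active simultaneously, a point no single-constraint eigenproblem can find. That is precisely the gap the paper's semidefinite relaxation closes.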

  19. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI

    NASA Astrophysics Data System (ADS)

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-01

The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by the inclusion of medical implants within the body. The worst-case RF-heating scenario is reached when the local power deposition in the at-risk tissue region (generally in the vicinity of the implant electrodes) attains its maximum value while the MRI exposure remains compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on maximizing the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may underestimate the worst-case RF-heating scenario when there is a large number of RF transmit channels and multiple SAR or power constraints must be satisfied. Finally, this work derives a rigorous SAR-based formulation that estimates a preferable worst-case scenario by solving a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil, for a patient with a deep-brain stimulator under a head imaging exposure, are provided as illustrative examples.

  20. A new flux-conserving numerical scheme for the steady, incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    1994-01-01

    This paper is concerned with the continued development of a new numerical method, the space-time solution element (STS) method, for solving conservation laws. The present work focuses on the two-dimensional, steady, incompressible Navier-Stokes equations. Using first an integral approach, and then a differential approach, the discrete flux conservation equations presented in a recent paper are rederived. Here a simpler method for determining the flux expressions at cell interfaces is given; a systematic and rigorous derivation of the conditions used to simulate the differential form of the governing conservation law(s) is provided; necessary and sufficient conditions for a discrete approximation to satisfy a conservation law in E2 are derived; and an estimate of the local truncation error is given. A specific scheme is then constructed for the solution of the thin airfoil boundary layer problem. Numerical results are presented which demonstrate the ability of the scheme to accurately resolve the developing boundary layer and wake regions using grids which are much coarser than those employed by other numerical methods. It is shown that ten cells in the cross-stream direction are sufficient to accurately resolve the developing airfoil boundary layer.
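Discrete flux conservation, the property at the heart of this record, can be demonstrated on the simplest conservative discretization: a finite-volume update in which each interface flux enters its two neighboring cells with opposite signs, so the cell sum telescopes. The sketch below uses a standard local Lax-Friedrichs scheme for 1D Burgers flow, not the paper's space-time solution element method:

```python
import numpy as np

def conservative_burgers(nx=200, nt=300, cfl=0.4):
    """Flux-conserving finite-volume update for u_t + (u^2/2)_x = 0 with
    periodic boundaries, using the local Lax-Friedrichs (Rusanov) flux.
    Because every interior flux appears once with + and once with -, the
    total sum(u)*dx is conserved to round-off even as a shock forms."""
    dx = 1.0 / nx
    x = (np.arange(nx) + 0.5) * dx
    u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
    total0 = u.sum() * dx
    for _ in range(nt):
        dt = cfl * dx / np.max(np.abs(u))        # CFL-limited time step
        up = np.roll(u, -1)                      # right neighbour
        a = np.maximum(np.abs(u), np.abs(up))    # local wave speed
        # numerical flux at the right face of every cell
        F = 0.25 * (u**2 + up**2) - 0.5 * a * (up - u)
        u = u - dt / dx * (F - np.roll(F, 1))    # telescoping flux update
    return total0, u.sum() * dx, u

total0, total, u = conservative_burgers()
```

The conserved total survives shock formation exactly, which is the discrete analogue of the integral conservation conditions the STS method enforces cell by cell.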
