Sample records for "numerical model called"

  1. Pricing index-based catastrophe bonds: Part 1: Formulation and discretization issues using a numerical PDE approach

    NASA Astrophysics Data System (ADS)

    Unger, André J. A.

    2010-02-01

    This work is the first installment in a two-part series, and focuses on the development of a numerical PDE approach to price components of a Bermudan-style callable catastrophe (CAT) bond. The bond is based on two underlying stochastic variables; the PCS index which posts quarterly estimates of industry-wide hurricane losses as well as a single-factor CIR interest rate model for the three-month LIBOR. The aggregate PCS index is analogous to losses claimed under traditional reinsurance in that it is used to specify a reinsurance layer. The proposed CAT bond model contains a Bermudan-style call feature designed to allow the reinsurer to minimize their interest rate risk exposure on making substantial fixed coupon payments using capital from the reinsurance premium. Numerical PDE methods are the fundamental strategy for pricing early-exercise constraints, such as the Bermudan-style call feature, into contingent claim models. Therefore, the objective and unique contribution of this first installment in the two-part series is to develop a formulation and discretization strategy for the proposed CAT bond model utilizing a numerical PDE approach. Object-oriented code design is fundamental to the numerical methods used to aggregate the PCS index, and implement the call feature. Therefore, object-oriented design issues that relate specifically to the development of a numerical PDE approach for the component of the proposed CAT bond model that depends on the PCS index and LIBOR are described here. Formulation, numerical methods and code design issues that relate to aggregating the PCS index and introducing the call option are the subject of the companion paper.
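    As a purely illustrative companion to the formulation above (not the paper's discretization), the sketch below prices a zero-coupon bond under a single-factor CIR short-rate model with explicit finite differences; the CIR model is one of the two underlying stochastic factors of the CAT bond. All parameter values and grid sizes are hypothetical.

    ```python
    # Illustrative sketch only: explicit finite differences for a zero-coupon bond
    # P(r, t) under a single-factor CIR short-rate model,
    #   dr = kappa*(theta - r) dt + sigma*sqrt(r) dW,
    # i.e. the PDE  dP/dt + kappa*(theta - r) dP/dr + 0.5*sigma^2*r d2P/dr2 - r*P = 0.
    # Parameter values and grid sizes are made up for demonstration.
    import numpy as np

    kappa, theta, sigma = 0.5, 0.04, 0.1     # hypothetical CIR parameters
    T = 1.0                                   # maturity in years
    r_max, nr, nt = 0.5, 200, 20000           # short-rate grid and time steps

    r = np.linspace(0.0, r_max, nr + 1)
    dr = r[1] - r[0]
    dt = T / nt

    P = np.ones_like(r)                       # terminal condition P(r, T) = 1
    for _ in range(nt):                       # march backwards in time
        Pr = (P[2:] - P[:-2]) / (2 * dr)
        Prr = (P[2:] - 2 * P[1:-1] + P[:-2]) / dr**2
        rhs = (kappa * (theta - r[1:-1]) * Pr
               + 0.5 * sigma**2 * r[1:-1] * Prr
               - r[1:-1] * P[1:-1])
        P0 = P[0] + dt * kappa * theta * (P[1] - P[0]) / dr   # r = 0: pure drift
        P[1:-1] += dt * rhs
        P[0] = P0
        P[-1] = 2 * P[-2] - P[-3]             # linear extrapolation at r = r_max

    print("P(r=0.03, t=0) ~", np.interp(0.03, r, P))
    ```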

  2. Flexible Environmental Modeling with Python and Open-GIS

    NASA Astrophysics Data System (ADS)

    Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann

    2015-04-01

    Numerical modeling now represents a prominent task in environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code reviewing and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models, associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command-line programs. However, there is a need for a flexible graphical user interface allowing efficient processing of the geospatial data that come along with any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once the input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
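    A minimal sketch of the scripted "many model runs" pattern described above, assuming a hypothetical command-line model executable (./run_model), parameter file and output file; the actual models, file formats and QGIS pre-processing of the study are not reproduced.

    ```python
    # Minimal sketch: after input data have been prepared in the GIS, an external
    # command-line model is run many times in parallel for a parameter sweep.
    # The executable name, file names and parameter set are placeholders.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    def run_one(run_id: int, hydraulic_conductivity: float) -> float:
        """Write a parameter file, call the model, return one scalar output."""
        workdir = Path(f"run_{run_id:04d}")
        workdir.mkdir(exist_ok=True)
        (workdir / "params.txt").write_text(f"K {hydraulic_conductivity}\n")
        subprocess.run(["./run_model", "params.txt"], cwd=workdir, check=True)
        return float((workdir / "head_at_well.txt").read_text())

    if __name__ == "__main__":
        k_values = [1e-5 * 2**i for i in range(8)]          # hypothetical sweep
        with ProcessPoolExecutor(max_workers=4) as pool:
            heads = list(pool.map(run_one, range(len(k_values)), k_values))
        for k, h in zip(k_values, heads):
            print(f"K = {k:.2e} m/s  ->  simulated head = {h:.3f} m")
    ```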

  3. Features in simulation of crystal growth using the hyperbolic PFC equation and the dependence of the numerical solution on the parameters of the computational grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starodumov, Ilya; Kropotin, Nikolai

    2016-08-10

    We investigate the three-dimensional mathematical model of crystal growth called PFC (Phase Field Crystal) in a hyperbolic modification. This model is also called the modified PFC model (the original PFC model is formulated in parabolic form) and allows both slow and rapid crystallization processes to be described on atomic length scales and on diffusive time scales. The modified PFC model is described by a partial differential equation of sixth order in space and second order in time. The solution of this equation is possible only by numerical methods. Previously, the authors created a software package for the solution of the Phase Field Crystal problem, based on the method of isogeometric analysis (IGA) and the PetIGA program library. During further investigation it was found that the quality of the solution can depend strongly on the discretization parameters of the numerical method. In this report, we show the features that should be taken into account when constructing the computational grid for the numerical simulation.

  4. Documentation for the MODFLOW 6 framework

    USGS Publications Warehouse

    Hughes, Joseph D.; Langevin, Christian D.; Banta, Edward R.

    2017-08-10

    MODFLOW is a popular open-source groundwater flow model distributed by the U.S. Geological Survey. Growing interest in surface and groundwater interactions, local refinement with nested and unstructured grids, karst groundwater flow, solute transport, and saltwater intrusion has led to the development of numerous MODFLOW versions. Often, these different MODFLOW versions are incompatible with one another. This report describes a new MODFLOW framework called MODFLOW 6 that is designed to support multiple models and multiple types of models. The framework is written in Fortran using a modular object-oriented design. The primary framework components include the simulation (or main program), Timing Module, Solutions, Models, Exchanges, and Utilities. The first version of the framework focuses on numerical solutions, numerical models, and numerical exchanges. This focus on numerical models allows multiple numerical models to be tightly coupled at the matrix level.
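    Purely as an illustration of the framework layout named above (simulation, Timing Module, Solutions, Models, Exchanges), the following Python skeleton mirrors how such components might compose. MODFLOW 6 itself is written in Fortran; none of these class names are part of its code or API.

    ```python
    # Illustrative skeleton of the component layout described above; not MODFLOW code.
    class Model:
        def __init__(self, name): self.name = name
        def assemble(self): ...        # add this model's equations to the shared matrix
        def update(self): ...          # retrieve this model's part of the solution

    class Exchange:
        def __init__(self, model_a, model_b): self.pair = (model_a, model_b)
        def assemble(self): ...        # add coupling terms between the two models

    class Solution:
        """One matrix problem shared by tightly coupled models."""
        def __init__(self): self.models, self.exchanges = [], []
        def solve(self):
            for m in self.models: m.assemble()
            for e in self.exchanges: e.assemble()
            # ... solve the combined matrix, then distribute the result
            for m in self.models: m.update()

    class Simulation:
        """Main program: advances the timing loop and calls each solution."""
        def __init__(self, solutions, nper): self.solutions, self.nper = solutions, nper
        def run(self):
            for _ in range(self.nper):               # timing module: stress periods
                for s in self.solutions: s.solve()

    gwf_a, gwf_b = Model("gwf-a"), Model("gwf-b")
    sol = Solution()
    sol.models += [gwf_a, gwf_b]
    sol.exchanges.append(Exchange(gwf_a, gwf_b))
    Simulation([sol], nper=3).run()
    ```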

  5. ICT-Mediated Science Inquiry: The Remote Access Microscopy Project (RAMP)

    ERIC Educational Resources Information Center

    Hunt, John

    2007-01-01

    The calls for the transformation of how science is taught (and what is taught) are numerous and show no sign of abating. Common amongst these calls is the need to shift from the traditional teaching and learning towards a model that represents the social constructivist epistemology. These calls have coincided with the Internet revolution. Through…

  6. Numerical simulations for tumor and cellular immune system interactions in lung cancer treatment

    NASA Astrophysics Data System (ADS)

    Kolev, M.; Nawrocki, S.; Zubik-Kowal, B.

    2013-06-01

    We investigate a new mathematical model that describes lung cancer regression in patients treated by chemotherapy and radiotherapy. The model is composed of nonlinear integro-differential equations derived from the so-called kinetic theory for active particles and a new sink function is investigated according to clinical data from carcinoma planoepitheliale. The model equations are solved numerically and the data are utilized in order to find their unknown parameters. The results of the numerical experiments show a good correlation between the predicted and clinical data and illustrate that the mathematical model has potential to describe lung cancer regression.

  7. Numerical modeling of a nonmonotonic separation hydrocyclone curve

    NASA Astrophysics Data System (ADS)

    Min'kov, L. L.; Dueck, J. H.

    2012-11-01

    In the context of the mechanics of interpenetrating continua, numerical modeling of separation of a polydisperse suspension in a hydrocyclone is carried out. The so-called "mixture model" valid for a low volume fraction of particles and low Stokes numbers is used for description of the suspension and particle motion. It is shown that account taken of the interaction between large and small particles can explain the nonmonotonic behavior of the separation curve.

  8. A Case of Reform: The Undergraduate Research Collaboratives

    ERIC Educational Resources Information Center

    Horsch, Elizabeth; St. John, Mark; Christensen, Ronald L.

    2012-01-01

    Despite numerous calls for reform, the early chemistry experience for most college students has remained unchanged for decades. In 2004 the National Science Foundation (NSF) issued a call for proposals to create new models of chemical education that would infuse authentic research into the early stages of a student's college experience. Under this…

  9. Numerical simulation of self-sustained oscillation of a voice-producing element based on Navier-Stokes equations and the finite element method.

    PubMed

    de Vries, Martinus P; Hamburg, Marc C; Schutte, Harm K; Verkerke, Gijsbertus J; Veldman, Arthur E P

    2003-04-01

    Surgical removal of the larynx results in radically reduced production of voice and speech. To improve voice quality a voice-producing element (VPE) is developed, based on the lip principle, named after the lips of a musician playing a brass instrument. To optimize the VPE, a numerical model is developed. In this model, the finite element method is used to describe the mechanical behavior of the VPE. The flow is described by two-dimensional incompressible Navier-Stokes equations. The interaction between VPE and airflow is modeled by placing the grid of the VPE model in the grid of the aerodynamical model, and requiring continuity of forces and velocities. By applying and increasing pressure to the numerical model, pulses comparable to glottal volume velocity waveforms are obtained. By variation of geometric parameters their influence can be determined. To validate this numerical model, an in vitro test with a prototype of the VPE is performed. Experimental and numerical results show an acceptable agreement.

  10. Age-of-Air, Tape Recorder, and Vertical Transport Schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.-J.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    A numerical-analytic investigation of the impacts of vertical transport schemes on the model simulated age-of-air and the so-called 'tape recorder' will be presented using an idealized 1-D column transport model as well as a more realistic 3-D dynamical model. By comparing to the 'exact' solutions of 'age-of-air' and the 'tape recorder' obtainable in the 1-D setting, useful insight is gained on the impacts of numerical diffusion and dispersion of numerical schemes used in global models. Advantages and disadvantages of Eulerian, semi-Lagrangian, and Lagrangian transport schemes will be discussed. Vertical resolution requirement for numerical schemes as well as observing systems for capturing the fine details of the 'tape recorder' or any upward propagating wave-like structures can potentially be derived from the 1-D analytic model.
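    A minimal, single-column "clock tracer" sketch of the age-of-air diagnostic mentioned above, using a first-order upwind vertical advection scheme; the scheme choice, velocity and grid are arbitrary and only illustrate the kind of idealized 1-D test described, not the paper's models.

    ```python
    # Clock-tracer sketch: a tracer whose lower-boundary value equals elapsed time is
    # advected upward with first-order upwind differencing; mean age = time - tracer.
    # Parameters are made up for illustration.
    import numpy as np

    nz, H = 100, 30e3            # levels, column height [m]
    w = 1.0e-3                   # constant upward velocity [m/s] (~30 km per year)
    dz = H / nz
    dt = 0.5 * dz / w            # CFL-limited time step
    nsteps = 4 * int(H / (w * dt))   # integrate a few transit times

    chi = np.zeros(nz)           # clock tracer
    t = 0.0
    for _ in range(nsteps):
        t += dt
        chi[0] = t                                        # lower boundary: tracer = time
        chi[1:] -= dt * w * (chi[1:] - chi[:-1]) / dz     # first-order upwind advection

    age_years = (t - chi) / (365.25 * 86400.0)
    z_km = (np.arange(nz) + 0.5) * dz / 1e3
    for k in range(0, nz, 20):
        print(f"z = {z_km[k]:5.1f} km   mean age ~ {age_years[k]:.2f} yr")
    ```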

  11. Target & Propagation Models for the FINDER Radar

    NASA Technical Reports Server (NTRS)

    Cable, Vaughn; Lux, James; Haque, Salmon

    2013-01-01

    Finding persons still alive in piles of rubble following an earthquake, a severe storm, or other disaster is a difficult problem. JPL is currently developing a victim detection radar called FINDER (Finding Individuals in Emergency and Response). The subject of this paper is directed toward development of propagation & target models needed for simulation & testing of such a system. These models are both physical (real rubble piles) and numerical. Early results from the numerical modeling phase show spatial and temporal spreading characteristics when signals are passed through a randomly mixed rubble pile.

  12. Nonlinear Schrödinger approach to European option pricing

    NASA Astrophysics Data System (ADS)

    Wróblewski, Marcin

    2017-05-01

    This paper deals with numerical option pricing methods based on a Schrödinger model rather than the Black-Scholes model. Nonlinear Schrödinger boundary value problems seem to be alternatives to linear models which better reflect the complexity and behavior of real markets. Therefore, based on the nonlinear Schrödinger option pricing model proposed in the literature, in this paper a model augmented by external atomic potentials is proposed and numerically tested. In terms of statistical physics the developed model describes the option in analogy to a pair of two identical quantum particles occupying the same state. The proposed model is used to price European call options on a stock index. The model is calibrated using the Levenberg-Marquardt algorithm based on market data. A Runge-Kutta method is used to solve the discretized boundary value problem numerically. Numerical results are provided and discussed. It seems that our proposal more accurately models phenomena observed in the real market than do linear models.
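    As a much simpler stand-in for the calibration step described above, the sketch below fits a single Black-Scholes volatility (not the paper's nonlinear Schrödinger model) to synthetic call prices with the Levenberg-Marquardt algorithm via scipy.optimize.least_squares; all market data are invented.

    ```python
    # Illustrative Levenberg-Marquardt calibration of one parameter to call prices.
    # The forward model here is plain Black-Scholes, used only as a stand-in.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes price of a European call."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    S0, r, T = 100.0, 0.01, 0.5
    strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
    market = bs_call(S0, strikes, T, r, 0.22) + np.random.default_rng(0).normal(0, 0.05, 5)

    def residuals(params):
        (sigma,) = params
        return bs_call(S0, strikes, T, r, sigma) - market

    fit = least_squares(residuals, x0=[0.3], method="lm")   # Levenberg-Marquardt
    print("calibrated volatility:", fit.x[0])
    ```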

  13. The Stability and Structure of Lean Hydrogen-Air Flames: Effects of Gravity

    DTIC Science & Technology

    1990-05-17

    The report describes a multidimensional flame model that includes combustion, molecular diffusion between the reactants, intermediates, and products, thermal conduction, convection, and gravity. Such a detailed model allows study of the instability generally called the Rayleigh-Taylor instability. A numerical model of the premixed hydrogen flame includes all of these physical processes.

  14. Influence of Installation Effects on Pile Bearing Capacity in Cohesive Soils - Large Deformation Analysis Via Finite Element Method

    NASA Astrophysics Data System (ADS)

    Konkol, Jakub; Bałachowski, Lech

    2017-03-01

    In this paper, the whole process of pile construction and performance during loading is modelled via large deformation finite element methods such as Coupled Eulerian Lagrangian (CEL) and Updated Lagrangian (UL). The numerical study consists of the installation process, a consolidation phase and the subsequent pile static load test (SLT). The Poznań site is chosen as the reference location for the numerical analysis, where a series of pile SLTs has been performed in highly overconsolidated clay (OCR ≈ 12). The results of the numerical analysis are compared with corresponding field tests and with a so-called "wish-in-place" numerical model of the pile, where no installation effects are taken into account. The advantages of using large deformation numerical analysis are presented and its application to pile design is shown.

  15. Turbulent Bubbly Flow in a Vertical Pipe Computed By an Eddy-Resolving Reynolds Stress Model

    DTIC Science & Technology

    2014-09-19

    Turbulent bubbly flows are encountered in many industrially relevant applications, such as chemical... The simulations were performed using the OpenFOAM-2.2.2 computational code, utilizing a cell-center-based finite volume method on an unstructured numerical grid; the mean Courant number is always below 0.4. The utilized turbulence models were implemented into the so-called twoPhaseEulerFoam solver in OpenFOAM.

  16. Numerical solutions of the complete Navier-Stokes equations, no. 27

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1996-01-01

    This report describes the development of an enstrophy model capable of predicting turbulence separation and its application to two airfoils at various angles of attack and Mach numbers. In addition, a two equation kappa-xi model with a tensor eddy viscosity was developed. Plans call for this model to be used in calculating three dimensional turbulent flows.

  17. NAPL: SIMULATOR DOCUMENTATION

    EPA Science Inventory

    A mathematical and numerical model is developed to simulate the transport and fate of NAPLs (Non-Aqueous Phase Liquids) in near-surface granular soils. The resulting three-dimensional, three phase simulator is called NAPL. The simulator accommodates three mobile phases: water, NA...

  18. Tissue Paper Economics and Other Hidden Dimensions of the Studio Model of Art Instruction.

    ERIC Educational Resources Information Center

    Hamblen, Karen A.

    1983-01-01

    Despite calls for change and numerous proposed alternatives, art education remains committed to the studio model. The retention of the status quo may be related to the economics of art studio materials and especially to the extensive advertising of art supply companies in art teachers' journals. (Author/IS)

  19. Modeling of Water-Breathing Propulsion Systems Utilizing the Aluminum-Seawater Reaction and Solid-Oxide Fuel Cells

    DTIC Science & Technology

    2011-01-01

    The Hybrid Aluminum Combustor (HAC) is a novel underwater power system based on the exothermic reaction of aluminum with seawater. The system is modeled using a NASA-developed framework called Numerical Propulsion System Simulation (NPSS) by assembling thermodynamic models developed for each component.

  20. Numerical simulations of the charged-particle flow dynamics for sources with a curved emission surface

    NASA Astrophysics Data System (ADS)

    Altsybeyev, V. V.

    2016-12-01

    The implementation of numerical methods for studying the dynamics of particle flows produced by pulsed sources is discussed. A particle tracking method with so-called gun iteration for simulations of beam dynamics is used. For the space charge limited emission problem, we suggest a Gauss law emission model for precise current-density calculation in the case of a curvilinear emitter. The results of numerical simulations of particle-flow formation for cylindrical bipolar diode and for diode with elliptical emitter are presented.

  1. Probability and Statistics in Sensor Performance Modeling

    DTIC Science & Technology

    2010-12-01

    The software program is called Environmental Awareness for Sensor and Emitter Employment (EASEE). The report discusses some important numerical issues in its implementation and presents a statistical analysis for measuring sensor performance.

  2. Examination of various turbulence models for application in liquid rocket thrust chambers

    NASA Technical Reports Server (NTRS)

    Hung, R. J.

    1991-01-01

    There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, Reynolds stress/flux model, zero equation model, one equation model, two equation k-epsilon model, multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. The natures of turbulence are randomness, irregularity, diffusivity and dissipation. The capabilities of the turbulence models, including physical strength, weakness, limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. In a workshop, specifically called for the assessment of turbulence models for applications in liquid rocket thrust chambers, most of the experts present were also in favor of the recommendation of the Reynolds stress model.

  3. Modelling turbulent vertical mixing sensitivity using a 1-D version of NEMO

    NASA Astrophysics Data System (ADS)

    Reffray, G.; Bourdalle-Badie, R.; Calone, C.

    2014-08-01

    Through two numerical experiments, a 1-D vertical model called NEMO1D was used to investigate physical and numerical turbulent-mixing behaviour. The results show that all the turbulent closures tested (k + l from Blanke and Delecluse, 1993, and two-equation models: generic length scale closures from Umlauf and Burchard, 2003) are able to correctly reproduce the classical test of Kato and Phillips (1969) under favourable numerical conditions while some solutions may diverge depending on the degradation of the spatial and time discretization. The performances of turbulence models were then compared with data measured over a one-year period (mid-2010 to mid-2011) at the PAPA station, located in the North Pacific Ocean. The modelled temperature and salinity were in good agreement with the observations, with a maximum temperature error between -2 and 2 °C during the stratified period (June to October). However, the results also depend on the numerical conditions. The vertical RMSE varied, for different turbulent closures, from 0.1 to 0.3 °C during the stratified period and from 0.03 to 0.15 °C during the homogeneous period. This 1-D configuration at the PAPA station (called PAPA1D) is now available in NEMO as a reference configuration including the input files and atmospheric forcing set described in this paper. Thus, all the results described can be recovered by downloading and launching PAPA1D. The configuration is described on the NEMO site (http://www.nemo-ocean.eu/Using-NEMO/Configurations/C1D_PAPA). This package is a good starting point for further investigation of vertical processes.

  4. Modelling turbulent vertical mixing sensitivity using a 1-D version of NEMO

    NASA Astrophysics Data System (ADS)

    Reffray, G.; Bourdalle-Badie, R.; Calone, C.

    2015-01-01

    Through two numerical experiments, a 1-D vertical model called NEMO1D was used to investigate physical and numerical turbulent-mixing behaviour. The results show that all the turbulent closures tested (k+l from Blanke and Delecluse, 1993, and two equation models: generic length scale closures from Umlauf and Burchard, 2003) are able to correctly reproduce the classical test of Kato and Phillips (1969) under favourable numerical conditions while some solutions may diverge depending on the degradation of the spatial and time discretization. The performances of turbulence models were then compared with data measured over a 1-year period (mid-2010 to mid-2011) at the PAPA station, located in the North Pacific Ocean. The modelled temperature and salinity were in good agreement with the observations, with a maximum temperature error between -2 and 2 °C during the stratified period (June to October). However, the results also depend on the numerical conditions. The vertical RMSE varied, for different turbulent closures, from 0.1 to 0.3 °C during the stratified period and from 0.03 to 0.15 °C during the homogeneous period. This 1-D configuration at the PAPA station (called PAPA1D) is now available in NEMO as a reference configuration including the input files and atmospheric forcing set described in this paper. Thus, all the results described can be recovered by downloading and launching PAPA1D. The configuration is described on the NEMO site (http://www.nemo-ocean.eu/Using-NEMO/Configurations/C1D_PAPA). This package is a good starting point for further investigation of vertical processes.

  5. A Value-Added Approach to Selecting the Best Master of Business Administration (MBA) Program

    ERIC Educational Resources Information Center

    Fisher, Dorothy M.; Kiang, Melody; Fisher, Steven A.

    2007-01-01

    Although numerous studies rank master of business administration (MBA) programs, prospective students' selection of the best MBA program is a formidable task. In this study, the authors used a linear-programming-based model called data envelopment analysis (DEA) to evaluate MBA programs. The DEA model connects costs to benefits to evaluate the…
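    For readers unfamiliar with DEA, a minimal sketch of the input-oriented CCR formulation it is based on is given below, solved with scipy.optimize.linprog; the input/output data are invented and the study's actual data and model variant are not reproduced.

    ```python
    # Sketch of CCR data envelopment analysis: for each program (DMU), maximize the
    # weighted outputs subject to weighted inputs equal to one and no DMU exceeding
    # efficiency one. All data below are invented.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[5.0, 200.0], [8.0, 320.0], [6.0, 250.0]])   # inputs, e.g. cost, hours
    Y = np.array([[60.0, 3.0], [75.0, 3.5], [80.0, 4.2]])      # outputs, e.g. salary, rating

    def ccr_efficiency(o: int) -> float:
        """Max u'Y_o  s.t.  v'X_o = 1,  u'Y_j - v'X_j <= 0,  u, v >= 0."""
        n, m, s = X.shape[0], X.shape[1], Y.shape[1]
        c = np.concatenate([-Y[o], np.zeros(m)])               # variables z = [u, v]
        A_ub, b_ub = np.hstack([Y, -X]), np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
        return -res.fun

    for o in range(X.shape[0]):
        print(f"program {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
    ```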

  6. Pseudo-Boltzmann model for modeling the junctionless transistors

    NASA Astrophysics Data System (ADS)

    Avila-Herrera, F.; Cerdeira, A.; Roldan, J. B.; Sánchez-Moreno, P.; Tienda-Luna, I. M.; Iñiguez, B.

    2014-05-01

    Calculation of the carrier concentrations in semiconductors using the Fermi-Dirac integral requires complex numerical calculations; in this context, practically all analytical device models are based on Boltzmann statistics, even though it is known that this leads to an over-estimation of carrier densities for high doping concentrations. In this paper, a new approximation to the Fermi-Dirac integral, called the Pseudo-Boltzmann model, is presented for modeling junctionless transistors with high doping concentrations.
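    The over-estimation mentioned above is easy to reproduce numerically: the sketch below compares the Fermi-Dirac integral of order 1/2, evaluated by direct quadrature, with the Boltzmann approximation exp(η). The paper's Pseudo-Boltzmann approximation itself is analytical and is not reproduced here.

    ```python
    # Boltzmann statistics overestimate carrier density at large reduced Fermi level.
    import numpy as np
    from scipy.integrate import quad

    def fermi_dirac_half(eta: float) -> float:
        """F_1/2(eta) = (2/sqrt(pi)) * integral_0^inf sqrt(x)/(1+exp(x-eta)) dx."""
        val, _ = quad(lambda x: np.sqrt(x) / (1.0 + np.exp(min(x - eta, 700.0))),
                      0.0, np.inf)
        return 2.0 / np.sqrt(np.pi) * val

    for eta in [-4.0, 0.0, 2.0, 5.0, 10.0]:
        fd = fermi_dirac_half(eta)
        boltz = np.exp(eta)                     # Boltzmann statistics
        print(f"eta = {eta:5.1f}   F_1/2 = {fd:10.4f}   exp(eta) = {boltz:12.2f}   "
              f"ratio = {boltz / fd:.2f}")
    ```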

  7. NAPL: SIMULATOR DOCUMENTATION (EPA/600/SR-97/102)

    EPA Science Inventory

    A mathematical and numerical model is developed to simulate the transport and fate of NAPLs (Non-Aqueous Phase Liquids) in near-surface granular soils. The resulting three-dimensional, three phase simulator is called NAPL. The simulator accommodates three mobile phases: water, NA...

  8. Some Moral Dimensions of Administrative Theory and Practice.

    ERIC Educational Resources Information Center

    Raywid, Mary Anne

    1986-01-01

    Examines management approaches in ethical terms, arriving at numerous criteria applicable to educational administration. Discusses scientific management, morally neutral concepts, hyperrationalization, tightening of controls, and the business/industry model as having eclipsed or confused the moral dimensions of education. Calls for enlarged moral…

  9. Evaluation of a two-dimensional numerical model for air quality simulation in a street canyon

    NASA Astrophysics Data System (ADS)

    Okamoto, Shin'ichi; Lin, Fu Chi; Yamada, Hiroaki; Shiozawa, Kiyoshige

    For many urban areas, the most severe air pollution caused by automobile emissions appears along a road surrounded by tall buildings: the so-called street canyon. A practical two-dimensional numerical model has been developed to be applied to this kind of road structure. This model contains two submodels: a wind-field model and a diffusion model based on a Monte Carlo particle scheme. In order to evaluate the predictive performance of this model, an air quality simulation was carried out at three trunk roads in the Tokyo metropolitan area: Nishi-Shimbashi, Aoyama and Kanda-Nishikicho (using SF6 as a tracer and NOx measurements). Since this model has two-dimensional properties and cannot be used for the parallel wind condition, the perpendicular wind condition was selected for the simulation. The correlation coefficients for the SF6 and NOx data in Aoyama were 0.67 and 0.62, respectively. When the predictive performance of this model is compared with that of other models, it is comparable to the SRI model, and superior to the APPS three-dimensional numerical model.

  10. Towards a category theory approach to analogy: Analyzing re-representation and acquisition of numerical knowledge.

    PubMed

    Navarrete, Jairo A; Dartnell, Pablo

    2017-08-01

    Category Theory, a branch of mathematics, has shown promise as a modeling framework for higher-level cognition. We introduce an algebraic model for analogy that uses the language of category theory to explore analogy-related cognitive phenomena. To illustrate the potential of this approach, we use this model to explore three objects of study in cognitive literature. First, (a) we use commutative diagrams to analyze an effect of playing particular educational board games on the learning of numbers. Second, (b) we employ a notion called coequalizer as a formal model of re-representation that explains a property of computational models of analogy called "flexibility" whereby non-similar representational elements are considered matches and placed in structural correspondence. Finally, (c) we build a formal learning model which shows that re-representation, language processing and analogy making can explain the acquisition of knowledge of rational numbers. These objects of study provide a picture of acquisition of numerical knowledge that is compatible with empirical evidence and offers insights on possible connections between notions such as relational knowledge, analogy, learning, conceptual knowledge, re-representation and procedural knowledge. This suggests that the approach presented here facilitates mathematical modeling of cognition and provides novel ways to think about analogy-related cognitive phenomena.

  11. A comparison of the lattice discrete particle method to the finite-element method and the K&C material model for simulating the static and dynamic response of concrete.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jovanca J.; Bishop, Joseph E.

    2013-11-01

    This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in 2012 with the aid of mentor Joe Bishop. The project was a two-part endeavor that focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model the heterogeneous quasi-brittle material, concrete. In the first part of the project, LDPM was compared to the Karagozian and Case Concrete Model (K&C) used in Presto, an explicit dynamics finite-element code developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments were performed, namely unconfined uniaxial compression tests on four varied cube specimen sizes, three-point bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of this project focused on the application of LDPM to simulate projectile perforation on an ultra high performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.

  12. Towards a category theory approach to analogy: Analyzing re-representation and acquisition of numerical knowledge

    PubMed Central

    2017-01-01

    Category Theory, a branch of mathematics, has shown promise as a modeling framework for higher-level cognition. We introduce an algebraic model for analogy that uses the language of category theory to explore analogy-related cognitive phenomena. To illustrate the potential of this approach, we use this model to explore three objects of study in cognitive literature. First, (a) we use commutative diagrams to analyze an effect of playing particular educational board games on the learning of numbers. Second, (b) we employ a notion called coequalizer as a formal model of re-representation that explains a property of computational models of analogy called “flexibility” whereby non-similar representational elements are considered matches and placed in structural correspondence. Finally, (c) we build a formal learning model which shows that re-representation, language processing and analogy making can explain the acquisition of knowledge of rational numbers. These objects of study provide a picture of acquisition of numerical knowledge that is compatible with empirical evidence and offers insights on possible connections between notions such as relational knowledge, analogy, learning, conceptual knowledge, re-representation and procedural knowledge. This suggests that the approach presented here facilitates mathematical modeling of cognition and provides novel ways to think about analogy-related cognitive phenomena. PMID:28841643

  13. NOTE: Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool

    NASA Astrophysics Data System (ADS)

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  14. Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool.

    PubMed

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-07

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  15. Outer boundary as arrested history in general relativity

    NASA Astrophysics Data System (ADS)

    Lau, Stephen R.

    2002-06-01

    We present explicit outer boundary conditions for the canonical variables of general relativity. The conditions are associated with the causal evolution of a finite Cauchy domain, a so-called quasilocal boost, and they suggest a consistent scheme for modelling such an evolution numerically. The scheme involves a continuous boost in the spacetime orthogonal complement ⊥Tp(B) of the tangent space Tp(B) belonging to each point p on the system boundary B. We show how the boost rate may be computed numerically via equations similar to those appearing in canonical investigations of black-hole thermodynamics (although here holding at an outer two-surface rather than the bifurcate two-surface of a Killing horizon). We demonstrate the numerical scheme on a model example, the quasilocal boost of a spherical three-ball in Minkowski spacetime. Developing our general formalism with recent hyperbolic formulations of the Einstein equations in mind, we use Anderson and York's 'Einstein-Christoffel' hyperbolic system as the evolution equations for our numerical simulation of the model.

  16. Numerical detection of the Gardner transition in a mean-field glass former.

    PubMed

    Charbonneau, Patrick; Jin, Yuliang; Parisi, Giorgio; Rainone, Corrado; Seoane, Beatriz; Zamponi, Francesco

    2015-07-01

    Recent theoretical advances predict the existence, deep into the glass phase, of a novel phase transition, the so-called Gardner transition. This transition is associated with the emergence of a complex free energy landscape composed of many marginally stable sub-basins within a glass metabasin. In this study, we explore several methods to detect numerically the Gardner transition in a simple structural glass former, the infinite-range Mari-Kurchan model. The transition point is robustly located from three independent approaches: (i) the divergence of the characteristic relaxation time, (ii) the divergence of the caging susceptibility, and (iii) the abnormal tail in the probability distribution function of cage order parameters. We show that the numerical results are fully consistent with the theoretical expectation. The methods we propose may also be generalized to more realistic numerical models as well as to experimental systems.

  17. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.

  18. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.

    1986-01-01

    The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junctions depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the value associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the cpu time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current) and the solution from previous calculations is used to initiate the next solution.
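    A generic sketch of the two cost-saving ideas described above: evaluate only the efficiency, and warm-start each simulator call from the previous converged solution. The "simulator" is a cheap stand-in with an internal fixed-point solve; SCAPID itself is not used.

    ```python
    # Warm-start sketch: the simulator keeps its previous internal solution and
    # reuses it as the initial guess for the next call, so later optimizer
    # iterations need fewer inner iterations. The physics here is a toy stand-in.
    import numpy as np
    from scipy.optimize import minimize

    class Simulator:
        def __init__(self, n=50):
            self.state = np.zeros(n)          # previous solution, reused as warm start

        def efficiency(self, design: np.ndarray) -> float:
            x = self.state.copy()
            for _ in range(200):              # inner fixed-point solve (warm-started)
                x_new = 0.5 * (x + np.tanh(design.sum()) * np.ones_like(x))
                if np.max(np.abs(x_new - x)) < 1e-10:
                    break
                x = x_new
            self.state = x                    # remember it for the next call
            return float(-np.sum((design - 1.0) ** 2) + 0.1 * x.mean())

    sim = Simulator()
    result = minimize(lambda d: -sim.efficiency(d), x0=np.zeros(3), method="Nelder-Mead")
    print("optimal design variables:", result.x)
    ```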

  19. Regimes of stability and scaling relations for the removal time in the asteroid belt: a simple kinetic model and numerical tests

    NASA Astrophysics Data System (ADS)

    Cubrovic, Mihailo

    2005-02-01

    We report on our theoretical and numerical results concerning the transport mechanisms in the asteroid belt. We first derive a simple kinetic model of chaotic diffusion and show how it gives rise to some simple correlations (but not laws) between the removal time (the time for an asteroid to experience a qualitative change of dynamical behavior and enter a wide chaotic zone) and the Lyapunov time. The correlations are shown to arise in two different regimes, characterized by exponential and power-law scalings. We also show how the so-called "stable chaos" (exponential regime) is related to anomalous diffusion. Finally, we check our results numerically and discuss their possible applications in analyzing the motion of particular asteroids.

  20. Keys to success for data-driven decision making: Lessons from participatory monitoring and collaborative adaptive management

    USDA-ARS?s Scientific Manuscript database

    Recent years have witnessed a call for evidence-based decisions in conservation and natural resource management, including data-driven decision-making. Adaptive management (AM) is one prevalent model for integrating scientific data into decision-making, yet AM has faced numerous challenges and limit...

  1. Efficient numerical method for solving Cauchy problem for the Gamma equation

    NASA Astrophysics Data System (ADS)

    Koleva, Miglena N.

    2011-12-01

    In this work we consider the Cauchy problem for the so-called Gamma equation, derived by transforming the fully nonlinear Black-Scholes equation for the option price into a quasilinear parabolic equation for the second derivative (Greek) Γ = V_SS of the option price V. We develop an efficient numerical method for solving the model problem for different volatility terms. Using a suitable change of variables, the problem is transformed onto a finite interval, keeping the original behavior of the solution at infinity. We then construct a Picard-Newton algorithm with an adaptive mesh step in time, which can be applied also in the case of non-differentiable functions. Results of numerical simulations are given.

  2. Visualized analysis of mixed numeric and categorical data via extended self-organizing map.

    PubMed

    Hsu, Chung-Chian; Lin, Shu-Han

    2012-01-01

    Many real-world datasets are of mixed types, having numeric and categorical attributes. Even though difficult, analyzing mixed-type datasets is important. In this paper, we propose an extended self-organizing map (SOM), called MixSOM, which utilizes a data structure distance hierarchy to facilitate the handling of numeric and categorical values in a direct, unified manner. Moreover, the extended model regularizes the prototype distance between neighboring neurons in proportion to their map distance so that structures of the clusters can be portrayed better on the map. Extensive experiments on several synthetic and real-world datasets are conducted to demonstrate the capability of the model and to compare MixSOM with several existing models including Kohonen's SOM, the generalized SOM and visualization-induced SOM. The results show that MixSOM is superior to the other models in reflecting the structure of the mixed-type data and facilitates further analysis of the data such as exploration at various levels of granularity.
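    A minimal sketch of the key ingredient such a model needs, a single distance that handles numeric and categorical attributes together (numeric parts range-scaled, categorical parts by simple matching); MixSOM's distance-hierarchy construction is richer and is not reproduced here.

    ```python
    # Unified mixed-type distance sketch: scaled numeric differences plus simple
    # matching for categorical attributes. Data and attribute ranges are invented.
    import numpy as np

    def mixed_distance(a, b, numeric_ranges):
        """a, b: tuples of attribute values; numeric_ranges: {index: (min, max)}."""
        d2 = 0.0
        for i, (x, y) in enumerate(zip(a, b)):
            if i in numeric_ranges:
                lo, hi = numeric_ranges[i]
                d2 += ((x - y) / (hi - lo)) ** 2     # scaled numeric difference
            else:
                d2 += 0.0 if x == y else 1.0         # simple matching for categories
        return np.sqrt(d2)

    records = [(35, "engineer", 52000.0), (29, "teacher", 41000.0), (61, "engineer", 90000.0)]
    ranges = {0: (18, 70), 2: (20000.0, 120000.0)}   # attribute 1 is categorical
    print(mixed_distance(records[0], records[1], ranges))
    print(mixed_distance(records[0], records[2], ranges))
    ```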

  3. Experimental and Numerical Investigation of Combined Sensible/Latent Thermal Energy Storage for High-Temperature Applications.

    PubMed

    Geissbühler, Lukas; Zavattoni, Simone; Barbato, Maurizio; Zanganeh, Giw; Haselbacher, Andreas; Steinfeld, Aldo

    2015-01-01

    Combined sensible/latent heat storage allows the heat-transfer fluid outflow temperature during discharging to be stabilized. A lab-scale combined storage consisting of a packed bed of rocks and steel-encapsulated AlSi(12) was investigated experimentally and numerically. Due to the small tank-to-particle diameter ratio of the lab-scale storage, void-fraction variations were not negligible, leading to channeling effects that cannot be resolved in 1D heat-transfer models. The void-fraction variations and channeling effects can be resolved in 2D models of the flow and heat transfer in the storage. The resulting so-called bypass fraction extracted from the 2D model was used in the 1D model and led to good agreement with experimental measurements.

  4. Numerical modelling of the flow in the resin infusion process on the REV scale: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jabbari, M.; Spangenberg, J.; Hattel, J. H.

    2016-06-08

    The resin infusion process (RIP) has developed as a low cost method for manufacturing large fibre reinforced plastic parts. However, the process still presents some challenges to industry with regards to reliability and repeatability, resulting in expensive and inefficient trial and error development. In this paper, we show the implementation of 2D numerical models for the RIP using the open source simulator DuMuX. The idea of this study is to present a model which accounts for the interfacial forces coming from the capillary pressure on the so-called representative elementary volume (REV) scale. The model is described in detail and three different test cases — a constant and a tensorial permeability as well as a preform/Balsa domain — are investigated. The results show that the developed model is well suited to the RIP for manufacturing of composite parts. The idea behind this study is to test the developed model for later use in a real application, in which the preform medium has numerous layers with different material properties.

  5. A reasoned overview on Boussinesq-type models: the interplay between physics, mathematics and numerics.

    PubMed

    Brocchini, Maurizio

    2013-12-08

    This paper, which is largely the fruit of an invited talk on the topic at the latest International Conference on Coastal Engineering, describes the state of the art of modelling by means of Boussinesq-type models (BTMs). Motivations for using BTMs as well as their fundamentals are illustrated, with special attention to the interplay between the physics to be described, the chosen model equations and the numerics in use. The perspective of the analysis is that of a physicist/engineer rather than of an applied mathematician. The chronological progress of the currently available BTMs from the pioneering models of the late 1960s is given. The main applications of BTMs are illustrated, with reference to specific models and methods. The evolution in time of the numerical methods used to solve BTMs (e.g. finite differences, finite elements, finite volumes) is described, with specific focus on finite volumes. Finally, an overview of the most important BTMs currently available is presented, as well as some indications on improvements required and fields of applications that call for attention.

  6. A reasoned overview on Boussinesq-type models: the interplay between physics, mathematics and numerics

    PubMed Central

    Brocchini, Maurizio

    2013-01-01

    This paper, which is largely the fruit of an invited talk on the topic at the latest International Conference on Coastal Engineering, describes the state of the art of modelling by means of Boussinesq-type models (BTMs). Motivations for using BTMs as well as their fundamentals are illustrated, with special attention to the interplay between the physics to be described, the chosen model equations and the numerics in use. The perspective of the analysis is that of a physicist/engineer rather than of an applied mathematician. The chronological progress of the currently available BTMs from the pioneering models of the late 1960s is given. The main applications of BTMs are illustrated, with reference to specific models and methods. The evolution in time of the numerical methods used to solve BTMs (e.g. finite differences, finite elements, finite volumes) is described, with specific focus on finite volumes. Finally, an overview of the most important BTMs currently available is presented, as well as some indications on improvements required and fields of applications that call for attention. PMID:24353475

  7. Modelling groundwater fractal flow with fractional differentiation via Mittag-Leffler law

    NASA Astrophysics Data System (ADS)

    Ahokposi, D. P.; Atangana, Abdon; Vermeulen, D. P.

    2017-04-01

    Modelling the flow of groundwater within a network of fractures is perhaps one of the most difficult exercises within the field of geohydrology. This physical problem has attracted the attention of several scientists across the globe. Already two different types of differentiations have been used to attempt modelling this problem including the classical and the fractional differentiation. In this paper, we employed the most recent concept of differentiation based on the non-local and non-singular kernel called the generalized Mittag-Leffler function, to reshape the model of groundwater fractal flow. We presented the existence of positive solution of the new model. Using the fixed-point approach, we established the uniqueness of the positive solution. We solve the new model with three different numerical schemes including implicit, explicit and Crank-Nicholson numerical methods. Experimental data collected from four constant discharge tests conducted in a typical fractured crystalline rock aquifer of the Northern Limb (Bushveld Complex) in the Limpopo Province (South Africa) are compared with the numerical solutions. It is worth noting that the four boreholes (BPAC1, BPAC2, BPAC3, and BPAC4) are located on Faults.
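    The kernel referred to above is built from the Mittag-Leffler function E_α(z) = Σ_k z^k / Γ(αk + 1). A direct series evaluation, adequate for moderate |z|, is sketched below; the groundwater flow model and its implicit, explicit and Crank-Nicholson schemes are not reproduced.

    ```python
    # Direct series evaluation of the one-parameter Mittag-Leffler function.
    # Suitable only for moderate |z|; asymptotic methods are needed for large |z|.
    from math import exp, gamma

    def mittag_leffler(alpha: float, z: float, tol: float = 1e-14, kmax: int = 150) -> float:
        total = 0.0
        for k in range(kmax):
            term = z**k / gamma(alpha * k + 1.0)
            total += term
            if abs(term) < tol * max(1.0, abs(total)):
                break
        return total

    # sanity checks: E_1(z) = exp(z), and E_alpha(0) = 1
    print(mittag_leffler(1.0, 1.0), exp(1.0))
    print(mittag_leffler(0.8, -0.5), mittag_leffler(0.8, 0.0))
    ```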

  8. Optimal Micropatterns in 2D Transport Networks and Their Relation to Image Inpainting

    NASA Astrophysics Data System (ADS)

    Brancolini, Alessio; Rossmanith, Carolin; Wirth, Benedikt

    2018-04-01

    We consider two different variational models of transport networks: the so-called branched transport problem and the urban planning problem. Based on a novel relation to Mumford-Shah image inpainting and techniques developed in that field, we show for a two-dimensional situation that both highly non-convex network optimization tasks can be transformed into a convex variational problem, which may be very useful from analytical and numerical perspectives. As applications of the convex formulation, we use it to perform numerical simulations (to our knowledge this is the first numerical treatment of urban planning), and we prove a lower bound for the network cost that matches a known upper bound (in terms of how the cost scales in the model parameters) which helps better understand optimal networks and their minimal costs.

  9. ASHEE: a compressible, Equilibrium-Eulerian model for volcanic ash plumes

    NASA Astrophysics Data System (ADS)

    Cerminara, M.; Esposti Ongaro, T.; Berselli, L. C.

    2015-10-01

    A new fluid-dynamic model is developed to numerically simulate the non-equilibrium dynamics of polydisperse gas-particle mixtures forming volcanic plumes. Starting from the three-dimensional N-phase Eulerian transport equations (Neri et al., 2003) for a mixture of gases and solid dispersed particles, we adopt an asymptotic expansion strategy to derive a compressible version of the first-order non-equilibrium model (Ferry and Balachandar, 2001), valid for low concentration regimes (particle volume fraction less than 10⁻³) and particle Stokes numbers (St, i.e., the ratio between their relaxation time and the flow characteristic time) not exceeding about 0.2. The new model, which is called ASHEE (ASH Equilibrium Eulerian), is significantly faster than the N-phase Eulerian model while retaining the capability to describe gas-particle non-equilibrium effects. Direct numerical simulations accurately reproduce the dynamics of isotropic, compressible turbulence in the subsonic regime. For gas-particle mixtures, the model describes the main features of density fluctuations and the preferential concentration and clustering of particles by turbulence, thus verifying its reliability and suitability for the numerical simulation of high-Reynolds-number and high-temperature regimes in the presence of a dispersed phase. On the other hand, large-eddy numerical simulations of forced plumes are able to reproduce their observed averaged and instantaneous flow properties. In particular, the self-similar Gaussian radial profile and the development of large-scale coherent structures are reproduced, including the rate of turbulent mixing and entrainment of atmospheric air. Application to the large-eddy simulation of the injection of the eruptive mixture into a stratified atmosphere describes some of the important features of turbulent volcanic plumes, including air entrainment, buoyancy reversal, and maximum plume height. For very fine particles (St → 0, when non-equilibrium effects are negligible) the model reduces to the so-called dusty-gas model. However, coarse particles partially decouple from the gas phase within eddies (thus modifying the turbulent structure) and preferentially concentrate at the eddy periphery, eventually being lost from the plume margins due to the concurrent effect of gravity. By these mechanisms, gas-particle non-equilibrium processes are able to influence the large-scale behavior of volcanic plumes.

  10. Obtaining of Analytical Relations for Hydraulic Parameters of Channels With Two Phase Flow Using Open CFD Toolbox

    NASA Astrophysics Data System (ADS)

    Varseev, E.

    2017-11-01

    The present work is dedicated to the verification of a numerical model in a standard solver of the open-source CFD code OpenFOAM for two-phase flow simulation and to the determination of so-called "baseline" model parameters. An investigation of the heterogeneous coolant flow parameters that lead to an abnormal increase in channel friction in two-phase adiabatic "water-gas" flows with low void fractions is presented.

  11. Effect of climate change on morphology around a port

    NASA Astrophysics Data System (ADS)

    Bharathan Radhamma, R.; Deo, M. C.

    2017-12-01

    It is well known that with the construction of a port and harbour structure the natural shoreline gets interrupted, and this disturbs the surrounding coastal morphology. Added to this concern is another one of recent origin, namely the likely impact of climate change induced by global warming. The present work addresses this issue through a case study at New Mangalore Port, situated along the west coast of India. The harbour was formed by constructing two breakwaters along either side of the port, beginning in 1975. We first determined the rate of change of the shoreline surrounding the port using historic satellite imagery spanning 36 years. Thereafter the numerical shoreline change model LITPACK was used to do the same; it was forced by waves simulated over the past 36 years (1979 to 2016) and over a future 36-year period (2016 to 2052). The wave simulation was done with the numerical wave model Mike21-SW, which was driven by wind from a regional climate model (CORDEX) previously run for a moderate global warming pathway, RCP-4.5. The analysis of satellite imagery indicated that in the past the shoreline change varied from -1.69 m/year to 2.56 m/year with an uncertainty of ± 0.35 m/year, and approximately half of the coastal stretch faced extensive erosion. It was found that the wind and waves in this region would intensify in the future and also raise the probability of occurrence of high waves. As per the numerical shoreline modelling, this would give rise to a much enhanced rate of erosion, namely -2.87 m/year to -3.62 m/year. This would call for a modified shoreline management strategy around the port area. The study highlights the importance of considering potential changes in wind and wave forcing due to climate change when evaluating future rates of shoreline change around a port and harbour structure.
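    Shoreline-change rates like those quoted above are commonly obtained as linear-regression rates of cross-shore shoreline position against time at each transect; a minimal sketch with synthetic positions follows (the satellite-derived shorelines of the study are not available here).

    ```python
    # Linear-regression shoreline-change rate (m/year) per transect; synthetic data.
    import numpy as np

    years = np.array([1980, 1991, 2000, 2008, 2016], dtype=float)
    # cross-shore shoreline position (m) at three transects, seaward positive
    transects = np.array([
        [120.0, 117.5, 113.0, 110.2, 106.8],   # eroding
        [ 80.0,  81.2,  83.9,  85.1,  88.0],   # accreting
        [ 60.0,  59.1,  60.4,  58.8,  59.5],   # roughly stable
    ])

    for i, pos in enumerate(transects):
        rate, intercept = np.polyfit(years, pos, deg=1)   # slope in m/year
        print(f"transect {i}: {rate:+.2f} m/year")
    ```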

  12. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
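    For reference, the uncertainty models discussed above take the standard upper-LFT form F_u(M, Δ) = M22 + M21 Δ (I − M11 Δ)⁻¹ M12; the sketch below just evaluates that formula for arbitrary matrices and is not the paper's construction algorithm.

    ```python
    # Evaluate the standard upper LFT F_u(M, Delta); matrices are arbitrary examples.
    import numpy as np

    def upper_lft(M11, M12, M21, M22, Delta):
        n = M11.shape[0]
        return M22 + M21 @ Delta @ np.linalg.solve(np.eye(n) - M11 @ Delta, M12)

    rng = np.random.default_rng(1)
    M11, M12 = 0.1 * rng.standard_normal((2, 2)), rng.standard_normal((2, 3))
    M21, M22 = rng.standard_normal((2, 2)), rng.standard_normal((2, 3))
    delta = np.diag([0.3, -0.5])      # e.g. two real scalar parameter blocks
    print(upper_lft(M11, M12, M21, M22, delta))
    ```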

  13. A FEniCS-based programming framework for modeling turbulent flow by the Reynolds-averaged Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.

    2011-09-01

    Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code closely resembles the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.
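
    To give a flavor of the FEniCS style of expressing variational problems in code that mirrors the mathematics, the following minimal sketch solves a Poisson problem with the legacy FEniCS (dolfin) interface. It is a generic illustration only, not the RANS framework described in the paper, and the mesh size, source term and boundary data are arbitrary assumptions.

```python
# Minimal sketch of a FEniCS-style variational solve (legacy dolfin API),
# using a Poisson problem as a stand-in for the far more involved RANS models.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    Function, DirichletBC, Constant, inner, grad, dx, solve)

mesh = UnitSquareMesh(32, 32)                 # structured triangular mesh
V = FunctionSpace(mesh, "P", 1)               # piecewise-linear elements

u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)                             # source term (illustrative)
a = inner(grad(u), grad(v)) * dx              # bilinear form (stiffness)
L = f * v * dx                                # linear form (load)

bc = DirichletBC(V, Constant(0.0), "on_boundary")
u_h = Function(V)
solve(a == L, u_h, bc)                        # assemble and solve
```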

  14. A Moderated Mediation Model of the Relationship between Organizational Citizenship Behaviors and Job Performance

    ERIC Educational Resources Information Center

    Ozer, Muammer

    2011-01-01

    Addressing numerous calls for future research on understanding the theoretical mechanisms that explain the relationship between organizational citizenship behaviors (OCBs) and job performance, this study focused on how an employee's relationships with coworkers mediate the relationship between his or her OCBs and his or her job performance. It…

  15. An approach toward the numerical evaluation of multi-loop Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Passarino, Giampiero

    2001-12-01

    A scheme for systematically achieving accurate numerical evaluation of multi-loop Feynman diagrams is developed. This shows the feasibility of a project aimed to produce a complete calculation for two-loop predictions in the Standard Model. As a first step an algorithm, proposed by F.V. Tkachov and based on the so-called generalized Bernstein functional relation, is applied to one-loop multi-leg diagrams with particular emphasis on the presence of infrared singularities, on the problem of tensorial reduction and on the classification of all singularities of a given diagram. Subsequently, the extension of the algorithm to two-loop diagrams is examined. The proposed solution consists in applying the functional relation to the one-loop sub-diagram which has the largest number of internal lines. In this way the integrand can be made smooth, apart from a factor which is a polynomial in xS, the vector of Feynman parameters needed for the complementary sub-diagram with the smallest number of internal lines. Since the procedure does not introduce new singularities one can distort the xS-integration hyper-contour into the complex hyper-plane, thus achieving numerical stability. The algorithm is then modified to deal with numerical evaluation around normal thresholds. Concise and practical formulas are assembled and presented, and numerical results and comparisons with the available literature are shown and discussed for the so-called sunset topology.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of the ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Here, numerical examples show that the proposed method outperforms the existing ones.

  17. Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation

    DOE PAGES

    Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; ...

    1995-01-01

    In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results compared with those obtained by using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.

  18. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling by Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
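
    As an illustration of the on-the-fly idea described above, the sketch below runs a Gauss-Seidel iteration in which each matrix row is regenerated from a function whenever it is needed, so the matrix is never stored. The row generator (a simple 1-D Laplacian) and all names are purely illustrative assumptions, not the authors' solver.

```python
import numpy as np

def row_entries(i, n):
    """Hypothetical on-the-fly generator for row i of a sparse matrix
    (here a 1-D Laplacian for illustration). Returns (columns, values)
    without ever storing the full matrix."""
    cols, vals = [i], [2.0]
    if i > 0:
        cols.append(i - 1); vals.append(-1.0)
    if i < n - 1:
        cols.append(i + 1); vals.append(-1.0)
    return cols, vals

def gauss_seidel_on_the_fly(b, n, sweeps=200):
    """Gauss-Seidel iteration for A x = b where rows of A are generated
    on demand by row_entries(), mimicking the memory-saving idea above."""
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            cols, vals = row_entries(i, n)
            diag, s = 0.0, b[i]
            for j, a_ij in zip(cols, vals):
                if j == i:
                    diag = a_ij
                else:
                    s -= a_ij * x[j]
            x[i] = s / diag
    return x

x = gauss_seidel_on_the_fly(np.ones(50), 50)
```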

  19. Improved locality of the phase-field lattice-Boltzmann model for immiscible fluids at high density ratios

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Mitchell, Travis; Leonardi, Christopher; Bolster, Diogo

    2017-11-01

    Based on phase-field theory, we introduce a robust lattice-Boltzmann equation for modeling immiscible multiphase flows at large density and viscosity contrasts. Our approach is built by modifying the method proposed by Zu and He [Phys. Rev. E 87, 043301 (2013), 10.1103/PhysRevE.87.043301] in such a way as to improve efficiency and numerical stability. In particular, we employ a different interface-tracking equation based on the so-called conservative phase-field model, a simplified equilibrium distribution that decouples pressure and velocity calculations, and a local scheme based on the hydrodynamic distribution functions for calculation of the stress tensor. In addition to two distribution functions for interface tracking and recovery of hydrodynamic properties, the only nonlocal variable in the proposed model is the phase field. Moreover, within our framework there is no need to use biased or mixed difference stencils for numerical stability and accuracy at high density ratios. This not only simplifies the implementation and efficiency of the model, but also leads to a model that is better suited to parallel implementation on distributed-memory machines. Several benchmark cases are considered to assess the efficacy of the proposed model, including the layered Poiseuille flow in a rectangular channel, Rayleigh-Taylor instability, and the rise of a Taylor bubble in a duct. The numerical results are in good agreement with available numerical and experimental data.

  20. Fractions--Concepts before Symbols.

    ERIC Educational Resources Information Center

    Bennett, Albert B., Jr.

    The learning difficulties that students experience with fractions begin immediately when they are shown fraction symbols with one numeral written above the other and told that the "top number" is called the numerator and the "bottom number" is called the denominator. This introduction to fractions will usually include a few visual diagrams to help…

  1. School Location and Teacher Supply: Understanding the Distribution of Teacher Effects

    ERIC Educational Resources Information Center

    Gagnon, Douglas

    2015-01-01

    The U.S. Department of Education has recently called on all states to create plans to ensure equal access to excellent teachers. Although there are numerous limitations in using VAM [value-added modeling] in high-stakes contexts such as teacher evaluation, such techniques offer promise in helping states grapple with issues in equitable access.…

  2. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of the ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Here, numerical examples show that the proposed method outperforms the existing ones.
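
    As a toy illustration of the L2 calibration idea, the sketch below first forms a smoothed, nonparametric estimate of the physical response and then chooses the calibration parameter that minimizes the discrete L2 distance between that estimate and the (imperfect) computer model. The model, the smoother and the synthetic data are illustrative assumptions, not the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def computer_model(x, theta):
    """Imperfect computer model of the physical process (illustrative)."""
    return np.sin(theta * x)

# Noisy "physical" observations of a process the model cannot match exactly
x_obs = np.linspace(0.0, 1.0, 60)
y_obs = np.sin(3.0 * x_obs) + 0.1 * x_obs + 0.05 * rng.standard_normal(x_obs.size)

def smooth(y, k=5):
    """Moving-average smoother standing in for a nonparametric fit."""
    return np.convolve(y, np.ones(k) / k, mode="same")

y_hat = smooth(y_obs)   # step 1: estimate of the physical mean response

def l2_distance(theta):
    # step 2: discrete L2 distance between the estimate and the model
    return np.mean((y_hat - computer_model(x_obs, theta)) ** 2)

theta_l2 = minimize_scalar(l2_distance, bounds=(0.1, 10.0), method="bounded").x
print(theta_l2)         # calibration parameter chosen by L2 calibration
```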

  3. Experiments and numerical modeling of fast flowing liquid metal thin films under spatially varying magnetic field conditions

    NASA Astrophysics Data System (ADS)

    Narula, Manmeet Singh

    Innovative concepts using fast flowing thin films of liquid metals (like lithium) have been proposed for the protection of the divertor surface in magnetic fusion devices. However, concerns exist about the possibility of establishing the required flow of liquid metal thin films because of the presence of strong magnetic fields, which can cause flow-disrupting MHD effects. A plan is underway to design liquid lithium based divertor protection concepts for NSTX, a small spherical torus experiment at Princeton. Of these, a promising concept is the use of modularized fast flowing liquid lithium film zones as the divertor (called the NSTX liquid surface module concept, or NSTX LSM). The dynamic response of the liquid metal film flow in a spatially varying magnetic field configuration is still unknown, and it is suspected that some unpredicted effects might be lurking. The primary goal of the research work reported in this dissertation is to provide qualitative and quantitative information on the liquid metal film flow dynamics under spatially varying magnetic field conditions, typical of the divertor region of a magnetic fusion device. The liquid metal film flow dynamics have been studied through a synergistic experimental and numerical modeling effort. The Magneto Thermofluid Omnibus Research (MTOR) facility at UCLA has been used to design several experiments to study the MHD interaction of liquid gallium films under a scaled NSTX outboard divertor magnetic field environment. A 3D multi-material, free surface MHD modeling capability is under development in collaboration with HyPerComp Inc., an SBIR vendor. This numerical code, called HIMAG, provides a unique capability to model the equations of incompressible MHD with a free surface. Some parts of this modeling capability have been developed in this research work, in the form of subroutines for HIMAG. An extensive code debugging and benchmarking exercise has also been carried out. Finally, HIMAG has been used to study the MHD interaction of fast flowing liquid metal films under various divertor-relevant magnetic field configurations through numerical modeling exercises.

  4. Modeling Poroelastic Wave Propagation in a Real 2-D Complex Geological Structure Obtained via Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Itzá Balam, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2018-03-01

    Two main stages of seismic modeling are geological model building and numerical computation of seismic response for the model. The quality of the computed seismic response is partly related to the type of model that is built. Therefore, the model building approaches become as important as seismic forward numerical methods. For this purpose, three petrophysical facies (sands, shales and limestones) are extracted from reflection seismic data and some seismic attributes via the clustering method called Self-Organizing Maps (SOM), which, in this context, serves as a geological model building tool. This model with all its properties is the input to the Optimal Implicit Staggered Finite Difference (OISFD) algorithm to create synthetic seismograms for poroelastic, poroacoustic and elastic media. The results show a good agreement between observed and 2-D synthetic seismograms. This demonstrates that the SOM classification method enables us to extract facies from seismic data and allows us to integrate the lithology at the borehole scale with the 2-D seismic data.
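
    For readers unfamiliar with SOM clustering, the following minimal online training loop shows the winner-take-most update the method relies on: each data vector pulls its best-matching codebook unit and that unit's map-grid neighbours toward it. The grid size, learning schedule and synthetic attribute data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0):
    """Minimal online Self-Organizing Map: each sample pulls the best-matching
    unit (BMU) and its grid neighbours toward it, with shrinking radius/rate."""
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))     # codebook vectors
    gy, gx = np.mgrid[0:rows, 0:cols]
    n_iter, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1.0 - step / n_iter)
            sigma = sigma0 * (1.0 - step / n_iter) + 1e-3
            d2 = ((w - x) ** 2).sum(axis=2)          # distance to every unit
            by, bx = np.unravel_index(np.argmin(d2), d2.shape)  # BMU
            # Gaussian neighbourhood on the map grid around the BMU
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2.0 * sigma ** 2))
            w += lr * h[..., None] * (x - w)
            step += 1
    return w

# e.g. cluster synthetic 3-attribute vectors into facies-like groups
som = train_som(rng.random((300, 3)))
```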

  5. A kinetic flux vector splitting scheme for shallow water equations incorporating variable bottom topography and horizontal temperature gradients.

    PubMed

    Saleem, M Rehan; Ashraf, Waqas; Zia, Saqib; Ali, Ishtiaq; Qamar, Shamsul

    2018-01-01

    This paper is concerned with the derivation of a well-balanced kinetic scheme to approximate a shallow flow model incorporating non-flat bottom topography and horizontal temperature gradients. The considered model equations, also called the Ripa system, are the non-homogeneous shallow water equations considering temperature gradients and non-uniform bottom topography. Due to the presence of temperature gradient terms, the steady state at rest is of primary interest from the physical point of view. However, capturing this steady state is a challenging task for the applied numerical methods. The proposed well-balanced kinetic flux vector splitting (KFVS) scheme is non-oscillatory and second order accurate. The second order accuracy of the scheme is obtained by considering a MUSCL-type initial reconstruction and Runge-Kutta time stepping method. The scheme is applied to solve the model equations in one and two space dimensions. Several numerical case studies are carried out to validate the proposed numerical algorithm. The numerical results obtained are compared with those of the staggered central NT scheme. The results obtained are also in good agreement with the recently published results in the literature, verifying the potential, efficiency, accuracy and robustness of the suggested numerical scheme.
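
    To make the MUSCL-type reconstruction step mentioned above concrete, the sketch below computes minmod-limited left/right interface states from cell averages in one dimension. It is a generic second-order reconstruction building block, not the authors' full KFVS solver, and the sample data are arbitrary.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: picks the smaller slope when signs agree,
    zero otherwise, which suppresses spurious oscillations."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_reconstruct(u):
    """MUSCL-type reconstruction of interface states from cell averages
    u[1:-1] (one ghost cell assumed on each side)."""
    slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    u_plus = u[1:-1] + 0.5 * slope    # state at the right face of each cell
    u_minus = u[1:-1] - 0.5 * slope   # state at the left face of each cell
    return u_minus, u_plus

u = np.array([1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0])
uL, uR = muscl_reconstruct(u)
```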

  6. A kinetic flux vector splitting scheme for shallow water equations incorporating variable bottom topography and horizontal temperature gradients

    PubMed Central

    2018-01-01

    This paper is concerned with the derivation of a well-balanced kinetic scheme to approximate a shallow flow model incorporating non-flat bottom topography and horizontal temperature gradients. The considered model equations, also called the Ripa system, are the non-homogeneous shallow water equations considering temperature gradients and non-uniform bottom topography. Due to the presence of temperature gradient terms, the steady state at rest is of primary interest from the physical point of view. However, capturing this steady state is a challenging task for the applied numerical methods. The proposed well-balanced kinetic flux vector splitting (KFVS) scheme is non-oscillatory and second order accurate. The second order accuracy of the scheme is obtained by considering a MUSCL-type initial reconstruction and Runge-Kutta time stepping method. The scheme is applied to solve the model equations in one and two space dimensions. Several numerical case studies are carried out to validate the proposed numerical algorithm. The numerical results obtained are compared with those of the staggered central NT scheme. The results obtained are also in good agreement with the recently published results in the literature, verifying the potential, efficiency, accuracy and robustness of the suggested numerical scheme. PMID:29851978

  7. A concentrated parameter model for the human cardiovascular system including heart valve dynamics and atrioventricular interaction.

    PubMed

    Korakianitis, Theodosios; Shi, Yubing

    2006-09-01

    Numerical modeling of the human cardiovascular system has been an active research direction since the 19th century. In the past, various simulation models of different complexities were proposed for different research purposes. In this paper, an improved numerical model to study the dynamic function of the human circulation system is proposed. In the development of the mathematical model, the heart chambers are described with a variable elastance model. The systemic and pulmonary loops are described based on the resistance-compliance-inertia concept by considering local effects of flow friction, elasticity of blood vessels and inertia of blood in different segments of the blood vessels. As an advancement over previous models, heart valve dynamics and atrioventricular interaction, including atrial contraction and motion of the annulus fibrosus, are specifically modeled. With these improvements the developed model can predict several important features that were missing in previous numerical models, including regurgitant flow on heart valve closure, the E/A velocity ratio in mitral flow, the motion of the annulus fibrosus (called the KG diaphragm pumping action), etc. These features have important clinical meaning and their changes are often related to cardiovascular diseases. Successful simulation of these features enhances the accuracy of simulations of cardiovascular dynamics, and helps in clinical studies of cardiac function.
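
    As a drastically simplified illustration of the resistance-compliance idea underlying such lumped-parameter circulation models, the sketch below integrates a two-element windkessel driven by a pulsatile inflow. The parameter values and inflow shape are rough illustrative assumptions; the paper's model additionally includes inertia terms, variable elastance chambers, valve dynamics and atrioventricular interaction.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-element windkessel: C dP/dt = Q_in(t) - P/R
R = 1.0          # peripheral resistance (mmHg s / mL), illustrative value
C = 1.5          # arterial compliance (mL / mmHg), illustrative value
T = 0.8          # cardiac period (s)

def q_in(t):
    """Pulsatile inflow: half-sine ejection during the first 0.3 s of each beat."""
    phase = t % T
    return 300.0 * np.sin(np.pi * phase / 0.3) if phase < 0.3 else 0.0

def dPdt(t, P):
    return [(q_in(t) - P[0] / R) / C]

sol = solve_ivp(dPdt, (0.0, 10 * T), [80.0], max_step=1e-3)
print(sol.y[0].min(), sol.y[0].max())   # rough "diastolic"/"systolic" pressures
```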

  8. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensures that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on WSRT data. It is the only package that is specifically designed to handle what we propose to call third-generation calibration (3GC), which is needed for the new generation of giant radio telescopes, but can also improve the calibration of existing instruments.

  9. Oral Mucosa Model for Electrochemotherapy Treatment of Dog Mouth Cancer: Ex Vivo, In Silico, and In Vivo Experiments.

    PubMed

    Suzuki, Daniela O H; Berkenbrock, José A; Frederico, Marisa J S; Silva, Fátima R M B; Rangel, Marcelo M M

    2018-03-01

    Electrochemotherapy (EQT) is a local cancer treatment well established for cutaneous and subcutaneous tumors. Electric fields are applied to biological tissue in order to improve membrane permeability for cytotoxic drugs. This phenomenon is called electroporation or electropermeabilization. Studies have reported that tissue conductivity is electric field dependent. Electroporation numerical models of biological tissues are essential in treatment planning. Tumors of the mouth are very common in dogs. Inadequate EQT treatment of oral tumors may be caused by significant anatomic variation among dogs and by tumor position. Numerical models of oral mucosa and tumor allow treatment planning and optimization of electrodes for each patient. In this work, oral mucosa conductivity during electroporation was characterized by measuring applied voltage and current ex vivo in rats. This electroporation model was then used with a spontaneous canine oral melanoma, and the resulting model of oral tumor EQT was applied in different parts of the oral cavity, including regions near bones and the hard palate. The numerical modeling for treatment planning will help the development of new electrodes and increase the EQT effectiveness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  10. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  11. Numerical relativity waveform surrogate model for generically precessing binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla

    2017-07-01

    A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ˜50 ms . We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.

  12. The role of population inertia in predicting the outcome of stage-structured biological invasions.

    PubMed

    Guiver, Chris; Dreiwi, Hanan; Filannino, Donna-Maria; Hodgson, Dave; Lloyd, Stephanie; Townley, Stuart

    2015-07-01

    Deterministic dynamic models for coupled resident and invader populations are considered with the purpose of finding quantities that are effective at predicting when the invasive population will become established asymptotically. A key feature of the models considered is the stage structure, meaning that the populations are described by vectors of discrete developmental stage- or age-classes. The vector structure permits exotic transient behaviour, phenomena not encountered in scalar models. Analysis using a linear Lyapunov function demonstrates that for the class of population models considered, a large so-called population inertia is indicative of successful invasion. Population inertia is an indicator of transient growth or decline. Furthermore, for the class of models considered, we find that the so-called invasion exponent, an existing index used in models for invasion, is not always a reliable comparative indicator of successful invasion. We highlight these findings through numerical examples, and a biological interpretation of why this might be the case is discussed. Copyright © 2015. Published by Elsevier Inc.
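
    For concreteness, the sketch below computes the asymptotic growth rate and one standard population-inertia index for a stage-structured projection matrix, using the dominant right and left eigenvectors. The matrix, the juvenile-only release vector and the particular inertia formula follow the general transient-dynamics literature and are illustrative assumptions; they are not taken from the paper.

```python
import numpy as np

def inertia(A, n0):
    """Population inertia of projection matrix A for initial stage vector n0:
    long-run total population size relative to a same-sized population started
    at the stable stage distribution (one common definition)."""
    eigvals, V = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    lam = eigvals[k].real                          # asymptotic growth rate
    w = np.abs(V[:, k].real)                       # right eigenvector (stable structure)
    eigvals_T, U = np.linalg.eig(A.T)
    kT = int(np.argmax(eigvals_T.real))
    v = np.abs(U[:, kT].real)                      # left eigenvector (reproductive value)
    P = (v @ n0) * w.sum() / ((v @ w) * n0.sum())
    return P, lam

# Illustrative 3-stage invader projection matrix and a juvenile-only release
A = np.array([[0.0, 1.5, 4.0],
              [0.4, 0.0, 0.0],
              [0.0, 0.6, 0.8]])
n0 = np.array([100.0, 0.0, 0.0])
print(inertia(A, n0))   # inertia < 1: transient attenuation; > 1: amplification
```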

  13. Continuous state-space representation of a bucket-type rainfall-runoff model: a case study with the GR4 model using state-space GR4 (version 1.0)

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2018-04-01

    In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically with a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, could be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
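
    To illustrate the Nash cascade substitution and the appeal of an implicit integration step described above, the sketch below routes a rainfall pulse through a short chain of identical linear reservoirs advanced with implicit (backward) Euler, which remains stable for any time step. The number of reservoirs, the residence time and the forcing are arbitrary illustrative values, not the GR4J configuration.

```python
import numpy as np

def nash_cascade_implicit(inflow, n_res=3, k=2.0, dt=1.0):
    """Route an inflow series through n_res identical linear reservoirs
    (dS_i/dt = q_{i-1} - S_i/k) using an unconditionally stable
    implicit Euler step. Returns the outflow of the last reservoir."""
    S = np.zeros(n_res)
    outflow = np.zeros(len(inflow))
    for t, q_in in enumerate(inflow):
        q = q_in
        for i in range(n_res):
            # implicit Euler: S_new = (S_old + dt*q) / (1 + dt/k)
            S[i] = (S[i] + dt * q) / (1.0 + dt / k)
            q = S[i] / k            # outflow feeds the next reservoir
        outflow[t] = q
    return outflow

rain = np.zeros(50); rain[2] = 10.0          # a single rainfall pulse
print(nash_cascade_implicit(rain).round(3))  # smoothed, delayed hydrograph
```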

  14. Experimental and numerical study of drill bit drop tests on Kuru granite

    NASA Astrophysics Data System (ADS)

    Fourmeau, Marion; Kane, Alexandre; Hokka, Mikko

    2017-01-01

    This paper presents an experimental and numerical study of Kuru grey granite impacted with a seven-buttons drill bit mounted on an instrumented drop test machine. The force versus displacement curves during the impact, so-called bit-rock interaction (BRI) curves, were obtained using strain gauge measurements for two levels of impact energy. Moreover, the volume of removed rock after each drop test was evaluated by stereo-lithography (three-dimensional surface reconstruction). A modified version of the Holmquist-Johnson-Cook (MHJC) material model was calibrated using Kuru granite test results available from the literature. Numerical simulations of the single drop tests were carried out using the MHJC model available in the LS-DYNA explicit finite-element solver. The influence of the impact energy and additional confining pressure on the BRI curves and the volume of the removed rock is discussed. In addition, the influence of the rock surface shape before impact was evaluated using two different mesh geometries: a flat surface and a hyperbolic surface. The experimental and numerical results are compared and discussed in terms of drilling efficiency through the mechanical specific energy. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.

  15. Numerical schemes for anomalous diffusion of single-phase fluids in porous media

    NASA Astrophysics Data System (ADS)

    Awotunde, Abeeb A.; Ghanam, Ryad A.; Al-Homidan, Suliman S.; Tatar, Nasser-eddine

    2016-10-01

    Simulation of fluid flow in porous media is an indispensable part of oil and gas reservoir management. Accurate prediction of reservoir performance and profitability of investment rely on our ability to model the flow behavior of reservoir fluids. Over the years, numerical reservoir simulation models have been based mainly on solutions to the normal diffusion of fluids in the porous reservoir. Recently, however, it has been documented that fluid flow in porous media does not always follow strictly the normal diffusion process. Small deviations from normal diffusion, called anomalous diffusion, have been reported in some experimental studies. Such deviations can be caused by different factors such as the viscous state of the fluid, the fractal nature of the porous media and the pressure pulse in the system. In this work, we present explicit and implicit numerical solutions to the anomalous diffusion of single-phase fluids in heterogeneous reservoirs. An analytical solution is used to validate the numerical solution to the simple homogeneous case. The conventional wellbore flow model is modified to account for anomalous behavior. Example applications are used to show the behavior of wellbore and wellblock pressures during the single-phase anomalous flow of fluids in the reservoirs considered.
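
    One common numerical building block for anomalous (fractional) diffusion is the Grünwald-Letnikov approximation of a fractional derivative. The sketch below computes its weights with the standard recursion and checks the result against a known closed form; it is a generic illustration under those assumptions, not the explicit or implicit reservoir schemes developed in the paper.

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k C(alpha, k) via the
    recursion w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f_vals, alpha, h):
    """Approximate the fractional derivative of order alpha of samples
    f_vals on a uniform grid with spacing h (lower terminal at x = 0)."""
    n = len(f_vals)
    w = gl_weights(alpha, n)
    d = np.zeros(n)
    for i in range(n):
        d[i] = np.dot(w[:i + 1], f_vals[i::-1]) / h ** alpha
    return d

# Check against the exact result D^0.5 x = x^0.5 / Gamma(1.5)
alpha, h = 0.5, 1e-3
x = np.arange(0.0, 1.0, h)
num = gl_derivative(x, alpha, h)
exact = x ** 0.5 / gamma(1.5)
print(abs(num[-1] - exact[-1]))   # small discretization error
```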

  16. Application of Gauss's law space-charge limited emission model in iterative particle tracking method

    NASA Astrophysics Data System (ADS)

    Altsybeyev, V. V.; Ponomarev, V. A.

    2016-11-01

    The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in the following paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution within the considered method. Based on the presented emission model we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and of a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
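
    The classical analytical benchmark for space-charge-limited emission of this kind is the Child-Langmuir current density of an ideal planar vacuum diode, against which such simulations are commonly checked. The short sketch below evaluates that reference formula; the voltage and gap values are arbitrary, and this is a generic benchmark, not the authors' gun-iteration code.

```python
from scipy.constants import epsilon_0, elementary_charge, electron_mass

def child_langmuir_current_density(voltage, gap):
    """Space-charge-limited current density (A/m^2) of an ideal planar
    vacuum diode: J = (4/9) * eps0 * sqrt(2e/m) * V^(3/2) / d^2."""
    return (4.0 / 9.0) * epsilon_0 \
        * (2.0 * elementary_charge / electron_mass) ** 0.5 \
        * voltage ** 1.5 / gap ** 2

# e.g. 10 kV across a 1 cm gap
print(child_langmuir_current_density(1.0e4, 1.0e-2))  # ~ 2.3e4 A/m^2
```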

  17. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re approximately equals 10(exp 8) for the planetary boundary layer and Re approximately equals 10(exp 14) for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N approximately Re(exp 9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model which we call CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.

  18. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Astrophysics Data System (ADS)

    Canuto, V. M.

    1994-06-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re approximately equals 10^8 for the planetary boundary layer and Re approximately equals 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N approximately Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model which we call CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.

  19. Simplified and refined structural modeling for economical flutter analysis and design

    NASA Technical Reports Server (NTRS)

    Ricketts, R. H.; Sobieszczanski, J.

    1977-01-01

    A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a relatively much smaller number of elements and degrees-of-freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of RM. The structural data are automatically transferred between the two models. The bulk of analysis is performed on the SM with periodical verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.

  20. The influence of distance between vehicles in platoon on aerodynamic parameters

    NASA Astrophysics Data System (ADS)

    Gnatowska, Renata; Sosnowski, Marcin

    2018-06-01

    The paper presents the results of experimental and numerical research focused on the reduction of fuel consumption of vehicles driving one after another in a so-called platoon arrangement. The aerodynamic parameters and safety issues were analyzed in order to determine the optimal distance between the vehicles in traffic conditions. The experimental research delivered the results concerning the drag and was performed for simplified model of two vehicles positioned in wind tunnel equipped with aerodynamic balance. The additional numerical analysis allowed investigating the pressure and velocity fields as well as other aerodynamics parameters of the test case.

  1. Analysis of High Spatial, Temporal, and Directional Resolution Recordings of Biological Sounds in the Southern California Bight

    DTIC Science & Technology

    2013-09-30

    transiting whales in the Southern California Bight, b) the use of passive underwater acoustic techniques for improved habitat assessment in biologically...sensitive areas and improved ecosystem modeling, and c) the application of the physics of excitable media to numerical modeling of biological choruses...was on the potential impact of man-made sounds on the calling behavior of transiting humpback whales in the Southern California Bight. The main

  2. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    NASA Astrophysics Data System (ADS)

    Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel

    In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. Such a numerical approach establishes a formal equivalence among physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, such a numerical approach makes it possible to solve Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper.
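
    For orientation, a conventional way to reproduce this kind of study is to integrate the dimensionless RCSJ equation directly and inspect the spectrum of the junction voltage, where a broadband spectrum is a signature of chaos. The sketch below does exactly that with an ODE solver and an FFT; the drive parameters are illustrative, and this is not the paper's PSpice/Network Simulation Method implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless RCSJ junction driven by dc + ac bias:
#   phi'' + beta * phi' + sin(phi) = i_dc + i_ac * sin(Omega * t)
beta, i_dc, i_ac, Omega = 0.25, 0.3, 0.7, 0.6   # illustrative parameters

def rcsj(t, y):
    phi, v = y                       # v = dphi/dt plays the role of the voltage
    return [v, i_dc + i_ac * np.sin(Omega * t) - beta * v - np.sin(phi)]

t = np.linspace(0.0, 2000.0, 40000)
sol = solve_ivp(rcsj, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8)

v = sol.y[1][len(t) // 2:]           # discard the transient
spectrum = np.abs(np.fft.rfft(v - v.mean()))
freqs = np.fft.rfftfreq(v.size, d=t[1] - t[0])
# A broadband spectrum (rather than sharp lines) signals chaotic dynamics.
print(freqs[np.argmax(spectrum)])
```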

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altsybeyev, V.V., E-mail: v.altsybeev@spbu.ru; Ponomarev, V.A.

    The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in the following paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution within the considered method. Based on the presented emission model we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and of a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.

  4. Numerical computations of the dynamics of fluidic membranes and vesicles

    NASA Astrophysics Data System (ADS)

    Barrett, John W.; Garcke, Harald; Nürnberg, Robert

    2015-11-01

    Vesicles and many biological membranes are made of two monolayers of lipid molecules and form closed lipid bilayers. The dynamical behavior of vesicles is very complex and a variety of forms and shapes appear. Lipid bilayers can be considered as a surface fluid and hence the governing equations for the evolution include the surface (Navier-)Stokes equations, which in particular take the membrane viscosity into account. The evolution is driven by forces stemming from the curvature elasticity of the membrane. In addition, the surface fluid equations are coupled to bulk (Navier-)Stokes equations. We introduce a parametric finite-element method to solve this complex free boundary problem and present the first three-dimensional numerical computations based on the full (Navier-)Stokes system for several different scenarios. For example, the effects of the membrane viscosity, spontaneous curvature, and area difference elasticity (ADE) are studied. In particular, it turns out, that even in the case of no viscosity contrast between the bulk fluids, the tank treading to tumbling transition can be obtained by increasing the membrane viscosity. Besides the classical tank treading and tumbling motions, another mode (called the transition mode in this paper, but originally called the vacillating-breathing mode and subsequently also called trembling, transition, and swinging mode) separating these classical modes appears and is studied by us numerically. We also study how features of equilibrium shapes in the ADE and spontaneous curvature models, like budding behavior or starfish forms, behave in a shear flow.

  5. Modeling the stepping mechanism in negative lightning leaders

    NASA Astrophysics Data System (ADS)

    Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir

    2017-04-01

    It is well known that negative leaders develop in a stepwise manner via a mechanism involving so-called space leaders, in contrast to positive leaders, which propagate continuously. Although this fact has been known for about a hundred years, until now no one had developed a plausible model explaining this asymmetry. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is accounting for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible by accounting for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account the asymmetry of the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need an electric field roughly half as strong as that required by negative ones to appear and propagate. Extinction of the conducting channel is also taken into account as a possible path of its evolution, which allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.

  6. An endorsement-based approach to student modeling for planner-controlled intelligent tutoring systems

    NASA Technical Reports Server (NTRS)

    Murray, William R.

    1990-01-01

    An approach is described to student modeling for intelligent tutoring systems based on an explicit representation of the tutor's beliefs about the student and the arguments for and against those beliefs (called endorsements). A lexicographic comparison of arguments, sorted according to evidence reliability, provides a principled means of determining those beliefs that are considered true, false, or uncertain. Each of these beliefs is ultimately justified by underlying assessment data. The endorsement-based approach to student modeling is particularly appropriate for tutors controlled by instructional planners. These tutors place greater demands on a student model than opportunistic tutors. Numerical calculi approaches are less well-suited because it is difficult to correctly assign numbers for evidence reliability and rule plausibility. It may also be difficult to interpret final results and provide suitable combining functions. When numeric measures of uncertainty are used, arbitrary numeric thresholds are often required for planning decisions. Such an approach is inappropriate when robust context-sensitive planning decisions must be made. A TMS-based implementation of the endorsement-based approach to student modeling is presented, this approach is compared to alternatives, and a project history is provided describing the evolution of this approach.

  7. Development of a linearized unsteady Euler analysis for turbomachinery blade rows

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Montgomery, Matthew D.; Kousen, Kenneth A.

    1995-01-01

    A linearized unsteady aerodynamic analysis for axial-flow turbomachinery blading is described in this report. The linearization is based on the Euler equations of fluid motion and is motivated by the need for an efficient aerodynamic analysis that can be used in predicting the aeroelastic and aeroacoustic responses of blade rows. The field equations and surface conditions required for inviscid, nonlinear and linearized, unsteady aerodynamic analyses of three-dimensional flow through a single blade row operating within a cylindrical duct are derived. An existing numerical algorithm for determining time-accurate solutions of the nonlinear unsteady flow problem is described, and a numerical model, based upon this nonlinear flow solver, is formulated for the first-harmonic linear unsteady problem. The linearized aerodynamic and numerical models have been implemented into a first-harmonic unsteady flow code, called LINFLUX. At present this code applies only to two-dimensional flows, but an extension to three dimensions is planned as future work. The three-dimensional aerodynamic and numerical formulations are described in this report. Numerical results for two-dimensional unsteady cascade flows, excited by prescribed blade motions and prescribed aerodynamic disturbances at inlet and exit, are also provided to illustrate the present capabilities of the LINFLUX analysis.

  8. Principles of Air Defense and Air Vehicle Penetration

    DTIC Science & Technology

    2000-03-01

    Range: For reliable detection, the target signal must reach some minimum or threshold value, called S…. When internal noise is the only interfer… analyze air defense and air vehicle penetration. Unique expected value models are developed with frequent numerical examples. Radar… penetrator in the presence of spurious returns from internal and external noise will be discussed. Tracking: With sufficient sensor information to determine

  9. The Design of Preservice Primary Teacher Education Science Subjects: The Emergence of an Interactive Educational Design Model

    ERIC Educational Resources Information Center

    McKinnon, David H.; Danaia, Lena; Deehan, James

    2017-01-01

    Over the past 20 years there have been numerous calls in Australia and beyond for extensive educational reforms to preservice teacher education in the sciences. Recommendations for science teacher education programs to integrate curriculum, instruction and assessment are at the forefront of such reforms. In this paper, we describe our scholarly…

  10. Using adaptive grid in modeling rocket nozzle flow

    NASA Technical Reports Server (NTRS)

    Chow, Alan S.; Jin, Kang-Ren

    1992-01-01

    The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations, called the Navier-Stokes equations, can be solved numerically. The accuracy and the convergence of the solution of the system of equations will depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With the advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one of the schemes which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field, called the grid, neither has to be very fine nor has to be strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it into the computer algorithm itself.
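
    A one-dimensional caricature of the adaptive-grid idea is to redistribute nodes so that a gradient-based monitor function is equidistributed, which automatically clusters points where the solution varies sharply. The sketch below does this for a synthetic profile with a steep front; the monitor function and test profile are illustrative assumptions, not the rocket-nozzle code discussed in the abstract.

```python
import numpy as np

def adapt_grid(x, u, n_new=None):
    """Redistribute grid nodes by equidistributing the arc-length-type
    monitor function w = sqrt(1 + (du/dx)^2): more points where u varies fast."""
    n_new = n_new or len(x)
    w = np.sqrt(1.0 + np.gradient(u, x) ** 2)
    # cumulative "adapted length" along the domain
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, s[-1], n_new)
    return np.interp(targets, s, x)      # invert s(x) to place the new nodes

x = np.linspace(0.0, 1.0, 41)
u = np.tanh(30.0 * (x - 0.5))            # sharp gradient near x = 0.5
x_adapted = adapt_grid(x, u)             # nodes cluster around the front
```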

  11. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified into three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'untractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full-model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).

  12. Fast calculation of low altitude disturbing gravity for ballistics

    NASA Astrophysics Data System (ADS)

    Wang, Jianqiang; Wang, Fanghao; Tian, Shasha

    2018-03-01

    Fast calculation of disturbing gravity is a key technology in ballistics, and spherical cap harmonic (SCH) theory can be used to solve this problem. Using the adjusted spherical cap harmonic (ASCH) method, the spherical cap coordinates are projected into global coordinates, and the non-integer associated Legendre functions (ALF) of SCH are then replaced by the integer ALF of spherical harmonics (SH). This new method is called virtual spherical harmonics (VSH), and numerical experiments were performed to test its effectiveness. Results from an Earth gravity model were set as the theoretical observations, and a model of the regional gravity field was constructed with the new method. Simulation results show that the approximation errors are less than 5 mGal in the low-altitude range of the central region. In addition, numerical experiments were conducted to compare the calculation speeds of the SH, SCH, and VSH models; the results show that the VSH model is about one order of magnitude faster over a small region.

  13. Collective opinion formation model under Bayesian updating and confirmation bias

    NASA Astrophysics Data System (ADS)

    Nishi, Ryosuke; Masuda, Naoki

    2013-06-01

    We propose a collective opinion formation model with a so-called confirmation bias. The confirmation bias is a psychological effect whereby, in the context of opinion formation, an individual in favor of an opinion is prone to misperceive new incoming information as supporting the individual's current belief. Our model modifies a Bayesian decision-making model for single individuals [M. Rabin and J. L. Schrag, Q. J. Econ. 114, 37 (1999)] for the case of a well-mixed population of interacting individuals in the absence of external input. We numerically simulate the model to show that all the agents eventually agree on one of the two opinions only when the confirmation bias is weak. Otherwise, the stochastic population dynamics ends up creating a disagreement configuration (also called polarization), particularly for large system sizes. A strong confirmation bias allows various final disagreement configurations with different fractions of the individuals in favor of the opposite opinions.
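
    As a rough illustration of how such a bias can be simulated (not the authors' exact update rule; the agent interaction, uniform prior, and parameter names below are simplified assumptions), each agent can keep Bayesian-style counts of perceived evidence and misread contradicting signals with some probability q.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate(n_agents=100, q=0.3, steps=20000):
            """Toy well-mixed population with Bayesian count updating and
            confirmation bias: a signal contradicting an agent's current
            belief is misread as supporting it with probability q."""
            counts = np.ones((n_agents, 2))   # perceived evidence counts (uniform prior)
            for _ in range(steps):
                i = rng.integers(n_agents)
                j = rng.integers(n_agents)
                # the "signal" is the current majority belief of another agent
                signal = 0 if counts[j, 0] >= counts[j, 1] else 1
                belief = 0 if counts[i, 0] >= counts[i, 1] else 1
                if signal != belief and rng.random() < q:
                    signal = belief           # misperception (confirmation bias)
                counts[i, signal] += 1
            return np.where(counts[:, 0] >= counts[:, 1], +1, -1)

        ops = simulate(q=0.1)
        print("fraction holding +1:", np.mean(ops == +1))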

  14. Tempered fractional calculus

    NASA Astrophysics Data System (ADS)

    Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua

    2015-07-01

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
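
    The tempered fractional difference mentioned above can be illustrated with Grünwald-type weights multiplied by an exponential tempering factor. The sketch below shows one generic form of such weights; it is not the specific discretization analyzed in the paper, and some formulations add a correction term so that the operator annihilates constants.

        import numpy as np
        from scipy.special import binom

        def tempered_grunwald_weights(alpha, lam, h, K):
            """Grünwald-type weights for a tempered fractional difference,
            w_k = (-1)^k C(alpha, k) * exp(-lam * k * h)  (generic illustration)."""
            k = np.arange(K + 1)
            return ((-1.0) ** k) * binom(alpha, k) * np.exp(-lam * k * h)

        def tempered_frac_diff(f, alpha, lam, h):
            """Apply the tempered difference to samples f on a uniform grid."""
            w = tempered_grunwald_weights(alpha, lam, h, len(f) - 1)
            out = np.zeros_like(f)
            for i in range(len(f)):
                out[i] = np.dot(w[: i + 1], f[i::-1]) / h ** alpha
            return out

        # toy usage: tempered difference of f(x) = x on [0, 1]
        x = np.linspace(0.0, 1.0, 101)
        d = tempered_frac_diff(x, alpha=0.8, lam=1.0, h=x[1] - x[0])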

  15. TEMPERED FRACTIONAL CALCULUS.

    PubMed

    Meerschaert, Mark M; Sabzikar, Farzad; Chen, Jinghua

    2015-07-15

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.

  16. TEMPERED FRACTIONAL CALCULUS

    PubMed Central

    MEERSCHAERT, MARK M.; SABZIKAR, FARZAD; CHEN, JINGHUA

    2014-01-01

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series. PMID:26085690

  17. Tempered fractional calculus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabzikar, Farzad, E-mail: sabzika2@stt.msu.edu; Meerschaert, Mark M., E-mail: mcubed@stt.msu.edu; Chen, Jinghua, E-mail: cjhdzdz@163.com

    2015-07-15

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.

  18. Multi-Model Ensemble Approaches to Data Assimilation Using the 4D-Local Ensemble Transform Kalman Filter

    DTIC Science & Technology

    2013-09-30

    …accuracy of the analysis. Root mean square difference (RMSD) is much smaller for RIP than for either Simple Ocean Data Assimilation or Incremental Analysis Update, globally, for temperature as well as salinity. Regionally the same results were found, with only one exception in which the salinity RMSD … short-term forecast using a numerical model with the observations taken within the forecast time window. The resulting state is the so-called "analysis" …

  19. Flexible scheme to truncate the hierarchy of pure states.

    PubMed

    Zhang, P-P; Bentley, C D B; Eisfeld, A

    2018-04-07

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  20. Flexible scheme to truncate the hierarchy of pure states

    NASA Astrophysics Data System (ADS)

    Zhang, P.-P.; Bentley, C. D. B.; Eisfeld, A.

    2018-04-01

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  1. A novel approach to calibrate the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2016-03-15

    The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to provide diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using some fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated here to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to noise, and cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with a high noise level (as high as 30%). Published by Elsevier B.V.

  2. Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Baskaran, Subbiah; Noever, D.

    1999-01-01

    Tracking optima in real time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or by analogy to 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel model of a genetic algorithm for tracking optima in propulsion engineering and in real time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time or fuel conservation strategies.
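
    A minimal sketch of the general idea, tracking a drifting optimum with a binary-string genetic algorithm that re-evaluates fitness every generation, is shown below. It is a toy illustration, not the harvesting model described in the report; the objective, encoding, and parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def decode(bits, lo=-5.0, hi=5.0):
            """Map a binary string to a real value in [lo, hi]."""
            value = int("".join(map(str, bits)), 2)
            return lo + (hi - lo) * value / (2 ** len(bits) - 1)

        def moving_target(t):
            return 3.0 * np.sin(0.01 * t)       # the optimum drifts in time

        def fitness(bits, t):
            x = decode(bits)
            return -(x - moving_target(t)) ** 2

        def ga_track(pop_size=40, n_bits=16, generations=500, p_mut=0.02):
            pop = rng.integers(0, 2, size=(pop_size, n_bits))
            best = []
            for t in range(generations):
                scores = np.array([fitness(ind, t) for ind in pop])
                best.append(decode(pop[np.argmax(scores)]))
                # tournament selection, one-point crossover, bit-flip mutation
                new_pop = []
                for _ in range(pop_size):
                    a, b = rng.integers(pop_size, size=2)
                    parent1 = pop[a] if scores[a] > scores[b] else pop[b]
                    c, d = rng.integers(pop_size, size=2)
                    parent2 = pop[c] if scores[c] > scores[d] else pop[d]
                    cut = rng.integers(1, n_bits)
                    child = np.concatenate([parent1[:cut], parent2[cut:]])
                    flips = rng.random(n_bits) < p_mut
                    child[flips] = 1 - child[flips]
                    new_pop.append(child)
                pop = np.array(new_pop)
            return best   # trajectory of the best decoded value per generation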

  3. Equivalent Viscous Damping Methodologies Applied on VEGA Launch Vehicle Numerical Model

    NASA Astrophysics Data System (ADS)

    Bartoccini, D.; Di Trapani, C.; Fransen, S.

    2014-06-01

    Part of the mission analysis of a spacecraft is the so-called launcher-satellite coupled loads analysis, which aims at computing the dynamic environment of the satellite and of the launch vehicle for the most severe load cases in flight. Evidently the damping of the coupled system shall be defined with care so as not to overestimate or underestimate the loads derived for the spacecraft. In this paper, the application of several EqVD (Equivalent Viscous Damping) methodologies to Craig-Bampton (CB) systems is investigated. Based on the structural damping defined for the various materials in the parent FE-models of the CB-components, EqVD matrices can be computed according to different methodologies. The effect of these methodologies on the numerical reconstruction of the VEGA launch vehicle dynamic environment will be presented.

  4. Experimental and numerical modeling of heat transfer in directed thermoplates

    DOE PAGES

    Khalil, Imane; Hayes, Ryan; Pratt, Quinn; ...

    2018-03-20

    We present three-dimensional numerical simulations to quantify the design specifications of a directional thermoplate expanded channel heat exchanger, also called dimpleplate. Parametric thermofluidic simulations were performed independently varying the number of spot welds, the diameter of the spot welds, and the thickness of the fluid channel within the laminar flow regime. Results from computational fluid dynamics simulations show an improvement in heat transfer is achieved under a variety of conditions: when the thermoplate has a relatively large cross-sectional area normal to the flow, a ratio of spot weld spacing to channel length of 0.2, and a ratio of the spot weld diameter with respect to channel width of 0.3. Lastly, experimental results performed to validate the model are also presented.

  5. Experimental and numerical modeling of heat transfer in directed thermoplates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Imane; Hayes, Ryan; Pratt, Quinn

    We present three-dimensional numerical simulations to quantify the design specifications of a directional thermoplate expanded channel heat exchanger, also called dimpleplate. Parametric thermofluidic simulations were performed independently varying the number of spot welds, the diameter of the spot welds, and the thickness of the fluid channel within the laminar flow regime. Results from computational fluid dynamics simulations show an improvement in heat transfer is achieved under a variety of conditions: when the thermoplate has a relatively large cross-sectional area normal to the flow, a ratio of spot weld spacing to channel length of 0.2, and a ratio of the spot weld diameter with respect to channel width of 0.3. Lastly, experimental results performed to validate the model are also presented.

  6. Phenomenological approach to mechanical damage growth analysis.

    PubMed

    Pugno, Nicola; Bosia, Federico; Gliozzi, Antonio S; Delsanto, Pier Paolo; Carpinteri, Alberto

    2008-10-01

    The problem of characterizing damage evolution in a generic material is addressed with the aim of tracing it back to existing growth models in other fields of research. Based on energetic considerations, a system evolution equation is derived for a generic damage indicator describing a material system subjected to an increasing external stress. The latter is found to fit into the framework of a recently developed phenomenological universality (PUN) approach and, more specifically, the so-called U2 class. Analytical results are confirmed by numerical simulations based on a fiber-bundle model and statistically assigned local strengths at the microscale. The fits with numerical data prove, with an excellent degree of reliability, that the typical evolution of the damage indicator belongs to the aforementioned PUN class. Applications of this result are briefly discussed and suggested.

  7. New Rapid Evaluation for Long-Term Behavior in Deep Geological Repository by Geotechnical Centrifuge—Part 2: Numerical Simulation of Model Tests in Isothermal Condition

    NASA Astrophysics Data System (ADS)

    Sawada, Masataka; Nishimoto, Soshi; Okada, Tetsuji

    2017-01-01

    In high-level radioactive waste disposal repositories, there are long-term complex thermal, hydraulic, and mechanical (T-H-M) phenomena that involve the generation of heat from the waste, the infiltration of ground water, and swelling of the bentonite buffer. The ability to model such coupled phenomena is of particular importance to the repository design and assessments of its safety. We have developed a T-H-M-coupled analysis program that evaluates the long-term behavior around the repository (called "near-field"). We have also conducted centrifugal model tests that model the long-term T-H-M-coupled behavior in the near-field. In this study, we conduct H-M-coupled numerical simulations of the centrifugal near-field model tests. We compare numerical results with each other and with results obtained from the centrifugal model tests. From the comparison, we deduce that: (1) in the numerical simulation, water infiltration in the rock mass was in agreement with the experimental observation. (2) The constant-stress boundary condition in the centrifugal model tests may cause a larger expansion of the rock mass than in the in situ condition, but the mechanical boundary condition did not affect the buffer behavior in the deposition hole. (3) The numerical simulation broadly reproduced the measured bentonite pressure and the overpack displacement, but did not reproduce the decreasing trend of the bentonite pressure after 100 equivalent years. This indicates the effect of the time-dependent characteristics of the surrounding rock mass. Further investigations are needed to determine the effect of initial heterogeneity in the deposition hole and the time-dependent behavior of the surrounding rock mass.

  8. Measurement and Simulation of Low Frequency Impulse Noise and Ground Vibration from Airblasts

    NASA Astrophysics Data System (ADS)

    Hole, L. R.; Kaynia, A. M.; Madshus, C.

    1998-07-01

    This paper presents numerical simulations of low frequency ground vibration and dynamic overpressure in air using two different numerical models. The analysis is based on recordings from blast tests at the Haslemoen test site in Norway in June 1994. The collected airblast-induced overpressures and ground vibrations are used to assess the applicability of the two models. The first model is a computer code based on a global representation of ground and atmospheric layers, a so-called Fast Field Program (FFP); a viscoelastic and a poroelastic version of this model are used. The second model is a two-dimensional moving-load formulation for the propagation of airblast over ground. The poroelastic FFP gives the most complete and realistic reproduction of the processes involved, including the decay of peak overpressure amplitude and dominant frequency of signals with range. It turns out that the moving-load formulation does not provide a complete description of the physics involved when the speed of sound in air differs from the ground wave speeds.

  9. A Test of Maxwell's Z Model Using Inverse Modeling

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Schultz, P. H.; Heineck, T.

    2003-01-01

    In modeling impact craters a small region of energy and momentum deposition, commonly called a "point source", is often assumed. This assumption implies that an impact is the same as an explosion at some depth below the surface. Maxwell's Z Model, an empirical point-source model derived from explosion cratering, has previously been compared with numerical impact craters with vertical incidence angles, leading to two main inferences. First, the flowfield center of the Z Model must be placed below the target surface in order to replicate numerical impact craters. Second, for vertical impacts, the flow-field center cannot be stationary if the value of Z is held constant; rather, the flow-field center migrates downward as the crater grows. The work presented here evaluates the utility of the Z Model for reproducing both vertical and oblique experimental impact data obtained at the NASA Ames Vertical Gun Range (AVGR). Specifically, ejection angle data obtained through Three-Dimensional Particle Image Velocimetry (3D PIV) are used to constrain the parameters of Maxwell's Z Model, including the value of Z and the depth and position of the flow-field center via inverse modeling.
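
    For reference, in the classical surface-centered form of Maxwell's Z Model the subsurface flow speed decays as R^(-Z) and material leaves the surface at an angle psi above the horizontal satisfying tan(psi) = Z - 2, so Z = 3 gives 45 degrees. The snippet below evaluates only this textbook relation; the buried flow-field centers considered in the abstract modify these angles.

        import numpy as np

        def ejection_angle_deg(Z):
            """Ejection angle above the horizontal for Maxwell's Z-model with the
            flow-field center at the target surface: tan(psi) = Z - 2.
            (Classical surface-centered relation; the paper varies the center depth.)"""
            return np.degrees(np.arctan(Z - 2.0))

        for Z in (2.5, 2.7, 3.0, 3.5):
            print(f"Z = {Z:.1f}  ->  ejection angle = {ejection_angle_deg(Z):.1f} deg")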

  10. A new numerical approximation of the fractal ordinary differential equation

    NASA Astrophysics Data System (ADS)

    Atangana, Abdon; Jain, Sonal

    2018-02-01

    The concept of a fractal medium is present in several real-world problems, for instance in the geological formations that host the well-known subsurface water bodies called aquifers. However, little attention has been devoted to modeling, for instance, the flow of a fluid within these media. We deem it important to remind the reader that the concept of the fractal derivative is not meant to represent fractal shapes but to describe the movement of the fluid within these media. Since this class of ordinary differential equations is highly complex to solve analytically, we present a novel numerical scheme that allows one to solve fractal ordinary differential equations. An error analysis of the method is also presented. Application of the method and its numerical approximation are presented for a fractal-order differential equation. The stability and convergence of the numerical schemes are investigated in detail. Some exact solutions of fractal-order differential equations are also given, together with final numerical simulations.
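
    The paper's own scheme is not reproduced here, but a simple way to see how a fractal (Hausdorff-type) ODE can be integrated numerically is to use the identity dy/dt^alpha = (1/(alpha t^(alpha-1))) dy/dt and then apply a standard explicit step. The sketch below does exactly that, under those assumptions and with an illustrative test problem.

        import numpy as np

        def solve_fractal_ode(f, y0, t, alpha):
            """Explicit Euler on a fractal-order ODE dy/dt^alpha = f(t, y), using
            dy/dt = alpha * t**(alpha - 1) * f(t, y) (Hausdorff-derivative identity).
            Generic sketch; the paper proposes its own, more accurate scheme."""
            y = np.zeros(len(t))
            y[0] = y0
            for k in range(len(t) - 1):
                dt = t[k + 1] - t[k]
                y[k + 1] = y[k] + dt * alpha * t[k] ** (alpha - 1.0) * f(t[k], y[k])
            return y

        # toy usage: dy/dt^alpha = -y has the exact solution y0 * exp(-t**alpha)
        t = np.linspace(1e-6, 2.0, 2001)   # start just above zero to avoid t**(alpha-1) blowing up
        y = solve_fractal_ode(lambda t, y: -y, 1.0, t, alpha=0.8)
        exact = np.exp(-(t ** 0.8))
        print("max error:", np.abs(y - exact).max())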

  11. Understanding asteroid collisional history through experimental and numerical studies

    NASA Technical Reports Server (NTRS)

    Davis, Donald R.; Ryan, Eileen V.; Weidenschilling, S. J.

    1991-01-01

    Asteroids can lose angular momentum due to the so-called splash effect, the analog of the drain effect for cratering impacts. A numerical code incorporating the splash effect was applied to study the simultaneous evolution of asteroid sizes and spins. Results are presented on the spin changes of asteroids due to various physical effects that are incorporated in the described model. The goal was to understand the interplay between the evolution of sizes and spins over a wide and plausible range of model parameters. A single starting population was used for both the size distribution and the spin distribution of asteroids, and the changes in the spins were calculated over solar system history for different model parameters. It is shown that there is a strong coupling between the size and spin evolution, and that the observed relative spindown of asteroids approximately 100 km in diameter is likely to be the result of the angular momentum splash effect.

  12. Understanding asteroid collisional history through experimental and numerical studies

    NASA Astrophysics Data System (ADS)

    Davis, Donald R.; Ryan, Eileen V.; Weidenschilling, S. J.

    1991-06-01

    Asteroids can lose angular momentum due to the so-called splash effect, the analog of the drain effect for cratering impacts. A numerical code incorporating the splash effect was applied to study the simultaneous evolution of asteroid sizes and spins. Results are presented on the spin changes of asteroids due to various physical effects that are incorporated in the described model. The goal was to understand the interplay between the evolution of sizes and spins over a wide and plausible range of model parameters. A single starting population was used for both the size distribution and the spin distribution of asteroids, and the changes in the spins were calculated over solar system history for different model parameters. It is shown that there is a strong coupling between the size and spin evolution, and that the observed relative spindown of asteroids approximately 100 km in diameter is likely to be the result of the angular momentum splash effect.

  13. Gaussian variational ansatz in the problem of anomalous sea waves: Comparison with direct numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruban, V. P., E-mail: ruban@itp.ac.ru

    2015-05-15

    The nonlinear dynamics of an obliquely oriented wave packet on a sea surface is analyzed analytically and numerically for various initial parameters of the packet in relation to the problem of the so-called rogue waves. Within the Gaussian variational ansatz applied to the corresponding (1+2)-dimensional hyperbolic nonlinear Schrödinger equation (NLSE), a simplified Lagrangian system of differential equations is derived that describes the evolution of the coefficients of the real and imaginary quadratic forms appearing in the Gaussian. This model provides a semi-quantitative description of the process of nonlinear spatiotemporal focusing, which is one of the most probable mechanisms of rogue wave formation in random wave fields. The system of equations is integrated in quadratures, which allows one to better understand the qualitative differences between linear and nonlinear focusing regimes of a wave packet. Predictions of the Gaussian model are compared with the results of direct numerical simulation of fully nonlinear long-crested waves.

  14. Improving the Navy’s Passive Underwater Acoustic Monitoring of Marine Mammal Populations

    DTIC Science & Technology

    2013-09-30

    …passive acoustic monitoring: Correcting humpback whale call detections for site-specific and time-dependent environmental characteristics," JASA Exp. … marine mammal species using passive acoustic monitoring, with application to obtaining density estimates of transiting humpback whale populations in … minimize the variance of the density estimates, 3) to apply the numerical modeling methods for humpback whale vocalizations to understand distortions …

  15. How to Measure Qualitative Understanding of DC-Circuit Phenomena--Taking a Closer Look at the External Representations of 9-Year-Olds

    ERIC Educational Resources Information Center

    Kallunki, Veera

    2013-01-01

    Pupils' qualitative understanding of DC-circuit phenomena is reported to be weak. In numerous research reports lists of problems in understanding the functioning of simple DC-circuits have been presented. So-called mental model surveys have uncovered difficulties in different age groups, and in different phases of instruction. In this study, the…

  16. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    PubMed

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern-formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated by solving two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved, and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
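
    Control vector parameterization reduces the infinite-dimensional control to a finite set of decision variables, typically piecewise-constant segments, which can then be handed to a global or hybrid optimizer. The toy sketch below applies this idea to a scalar ODE using SciPy's differential evolution; the model, objective, and bounds are illustrative assumptions, not the chemotaxis or FitzHugh-Nagumo problems of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import differential_evolution

        T, n_seg = 1.0, 5                     # horizon and number of control segments

        def simulate(u_segments):
            """Integrate dx/dt = -x + u(t) with a piecewise-constant control."""
            edges = np.linspace(0.0, T, n_seg + 1)
            def u(t):
                idx = min(np.searchsorted(edges, t, side="right") - 1, n_seg - 1)
                return u_segments[idx]
            return solve_ivp(lambda t, x: -x + u(t), (0.0, T), [0.0], dense_output=True)

        def objective(u_segments):
            """Track the reference trajectory x_ref(t) = t, penalizing control effort."""
            sol = simulate(u_segments)
            tt = np.linspace(0.0, T, 50)
            x = sol.sol(tt)[0]
            return np.mean((x - tt) ** 2) + 1e-3 * np.mean(np.asarray(u_segments) ** 2)

        result = differential_evolution(objective, bounds=[(0.0, 3.0)] * n_seg,
                                        seed=0, maxiter=50, polish=True)
        print("optimal piecewise-constant control:", result.x)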

  17. Discrete effect on the halfway bounce-back boundary condition of multiple-relaxation-time lattice Boltzmann model for convection-diffusion equations.

    PubMed

    Cui, Shuqi; Hong, Ning; Shi, Baochang; Chai, Zhenhua

    2016-04-01

    In this paper, we will focus on the multiple-relaxation-time (MRT) lattice Boltzmann model for two-dimensional convection-diffusion equations (CDEs), and analyze the discrete effect on the halfway bounce-back (HBB) boundary condition (sometimes simply called the bounce-back boundary condition) of the MRT model, where three different discrete velocity models are considered. We first present a theoretical analysis of the discrete effect of the HBB boundary condition for simple problems with a parabolic distribution in the x or y direction, and a numerical slip proportional to the second order of the lattice spacing is observed at the boundary, which means that the MRT model has a second-order convergence rate in space. The theoretical analysis also shows that the numerical slip can be eliminated in the MRT model by tuning the free relaxation parameter corresponding to the second-order moment, while it cannot be removed in the single-relaxation-time model or the Bhatnagar-Gross-Krook model unless the relaxation parameter related to the diffusion coefficient is set to a special value. We then perform some simulations to confirm our theoretical results, and find that the numerical results are consistent with our theoretical analysis. Finally, we would also like to point out that the present analysis can be extended to other boundary conditions of lattice Boltzmann models for CDEs.

  18. Why does shear banding behave like first-order phase transitions? Derivation of a potential from a mechanical constitutive model.

    PubMed

    Sato, K; Yuan, X-F; Kawakatsu, T

    2010-02-01

    Abundant numerical and experimental evidence suggests that shear banding behavior resembles a first-order phase transition. In this paper, we demonstrate that this correspondence is actually established in the so-called non-local diffusive Johnson-Segalman model (the DJS model), a typical mechanical constitutive model that has been widely used for describing shear banding phenomena. In the neighborhood of the critical point, we apply the reduction procedure based on center manifold theory to the governing equations of the DJS model. As a result, we obtain a time evolution equation of the flow field that is equivalent to the time-dependent Ginzburg-Landau (TDGL) equations for modeling thermodynamic first-order phase transitions. This result, for the first time, provides a mathematical proof that there is an analogy between the mechanical instability and the thermodynamic phase transition, at least in the vicinity of the critical point of the shear banding of the DJS model. Within this framework, we can clearly distinguish the metastable branch in the stress-strain rate curve around the shear banding region from the globally stable branch. A simple extension of this analysis to a class of more general constitutive models is also discussed. Numerical simulations for the original DJS model and the reduced TDGL equation are performed to confirm the range of validity of our reduction theory.

  19. Mass and energy flows between the Solar chromosphere, transition region, and corona

    NASA Astrophysics Data System (ADS)

    Hansteen, V. H.

    2017-12-01

    A number of increasingly sophisticated numerical simulations spanning the convection zone to the corona have shed considerable insight into the role of the magnetic field in the structure and energetics of the Sun's outer atmosphere. This development is strengthened by the wealth of observational data now coming on-line from both ground-based and space-borne observatories. We discuss what numerical models can tell us about the mass and energy flows in the region of the upper chromosphere and lower corona, using a variety of tools, including direct comparison with data and the use of passive tracer particles (so-called 'corks') inserted into the simulated flows.

  20. Combining Thermal And Structural Analyses

    NASA Technical Reports Server (NTRS)

    Winegar, Steven R.

    1990-01-01

    Computer code makes programs compatible so that stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.

  1. Experimental and numerical study of drill bit drop tests on Kuru granite.

    PubMed

    Fourmeau, Marion; Kane, Alexandre; Hokka, Mikko

    2017-01-28

    This paper presents an experimental and numerical study of Kuru grey granite impacted with a seven-button drill bit mounted on an instrumented drop test machine. The force versus displacement curves during the impact, so-called bit-rock interaction (BRI) curves, were obtained using strain gauge measurements for two levels of impact energy. Moreover, the volume of removed rock after each drop test was evaluated by stereo-lithography (three-dimensional surface reconstruction). A modified version of the Holmquist-Johnson-Cook (MHJC) material model was calibrated using Kuru granite test results available from the literature. Numerical simulations of the single drop tests were carried out using the MHJC model available in the LS-DYNA explicit finite-element solver. The influence of the impact energy and additional confining pressure on the BRI curves and the volume of the removed rock is discussed. In addition, the influence of the rock surface shape before impact was evaluated using two different mesh geometries: a flat surface and a hyperbolic surface. The experimental and numerical results are compared and discussed in terms of drilling efficiency through the mechanical specific energy. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'. © 2016 The Author(s).

  2. Experimental and numerical study of drill bit drop tests on Kuru granite

    PubMed Central

    Kane, Alexandre; Hokka, Mikko

    2017-01-01

    This paper presents an experimental and numerical study of Kuru grey granite impacted with a seven-button drill bit mounted on an instrumented drop test machine. The force versus displacement curves during the impact, so-called bit–rock interaction (BRI) curves, were obtained using strain gauge measurements for two levels of impact energy. Moreover, the volume of removed rock after each drop test was evaluated by stereo-lithography (three-dimensional surface reconstruction). A modified version of the Holmquist–Johnson–Cook (MHJC) material model was calibrated using Kuru granite test results available from the literature. Numerical simulations of the single drop tests were carried out using the MHJC model available in the LS-DYNA explicit finite-element solver. The influence of the impact energy and additional confining pressure on the BRI curves and the volume of the removed rock is discussed. In addition, the influence of the rock surface shape before impact was evaluated using two different mesh geometries: a flat surface and a hyperbolic surface. The experimental and numerical results are compared and discussed in terms of drilling efficiency through the mechanical specific energy. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956511

  3. A quantitative comparison of precipitation forecasts between the storm-scale numerical weather prediction model and auto-nowcast system in Jiangsu, China

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping

    2016-11-01

    Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasts. The primary objective of this study is to verify precipitation forecasts and compare the performances of two nowcasting schemes: the Beijing Auto-Nowcast system (BJ-ANC), based on extrapolation techniques, and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performances of the two schemes were evaluated for the next 6 h at 1-h intervals using the gridpoint-based measures of critical success index, bias, index of agreement, and root mean square error, and using an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding the gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its forecast accuracy decreases rapidly with lead time and it performs worse than ARPS beyond 4-5 h from the initial forecast. Regarding the object-based verification method, most forecasts produced by BJ-ANC focus on the center of the diagram at the 1-h lead time and indicate high-quality forecasts. As the lead time increases, BJ-ANC overestimates the precipitation amount and produces widespread precipitation, especially at the 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly at first.
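
    The gridpoint-based scores mentioned above derive from the hit/miss/false-alarm contingency table at a chosen rain threshold. A minimal sketch follows; the threshold and the random test fields are arbitrary illustrations, not the study's data.

        import numpy as np

        def csi_and_bias(forecast, observed, threshold=1.0):
            """Critical success index and frequency bias from the contingency table
            of events exceeding `threshold` (e.g. 1 mm/h); grids must be co-located."""
            f = forecast >= threshold
            o = observed >= threshold
            hits = np.sum(f & o)
            misses = np.sum(~f & o)
            false_alarms = np.sum(f & ~o)
            csi = hits / (hits + misses + false_alarms) if (hits + misses + false_alarms) else np.nan
            bias = (hits + false_alarms) / (hits + misses) if (hits + misses) else np.nan
            return csi, bias

        # toy usage on random 1-h accumulation fields
        rng = np.random.default_rng(2)
        fc, ob = rng.gamma(1.0, 2.0, (100, 100)), rng.gamma(1.0, 2.0, (100, 100))
        print(csi_and_bias(fc, ob, threshold=5.0))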

  4. Global Simulations of Dynamo and Magnetorotational Instability in Madison Plasma Experiments and Astrophysical Disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebrahimi, Fatima

    2014-07-31

    Large-scale magnetic fields have been observed in widely different types of astrophysical objects. These magnetic fields are believed to be caused by the so-called dynamo effect. Could a large-scale magnetic field grow out of turbulence (i.e. the alpha dynamo effect)? How could the topological properties and the complexity of magnetic field as a global quantity, the so called magnetic helicity, be important in the dynamo effect? In addition to understanding the dynamo mechanism in astrophysical accretion disks, anomalous angular momentum transport has also been a longstanding problem in accretion disks and laboratory plasmas. To investigate both dynamo and momentum transport, we have performed both numerical modeling of laboratory experiments that are intended to simulate nature and modeling of configurations with direct relevance to astrophysical disks. Our simulations use fluid approximations (Magnetohydrodynamics - MHD model), where plasma is treated as a single fluid, or two fluids, in the presence of electromagnetic forces. Our major physics objective is to study the possibility of magnetic field generation (so called MRI small-scale and large-scale dynamos) and its role in Magneto-rotational Instability (MRI) saturation through nonlinear simulations in both MHD and Hall regimes.

  5. Numerical Convergence in the Dark Matter Halos Properties Using Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Mosquera-Escobar, X. E.; Muñoz-Cuartas, J. C.

    2017-07-01

    Nowadays, the accepted cosmological model is the so-called Λ-Cold Dark Matter (ΛCDM) model. In this model, the universe is considered to be homogeneous and isotropic, composed of diverse components such as dark matter and dark energy, the latter being the most abundant. Dark matter plays an important role because it is responsible for the generation of gravitational potential wells, commonly called dark matter halos. Dark matter halos are characterized by a set of parameters (mass, radius, concentration, spin parameter) that provide valuable information for different studies, such as galaxy formation and gravitational lensing. In this work we use the publicly available code Gadget2 to perform cosmological simulations and determine to what extent the numerical parameters of the simulations, such as the gravitational softening, integration time step, and force-calculation accuracy, affect the physical properties of the dark matter halos. We ran a suite of simulations in which these parameters were varied in a systematic way in order to explore their impact on the structural parameters of dark matter halos. We show that variations in the numerical parameters affect halo structural parameters such as the concentration and virial radius, and that these modifications emerge when structures become nonlinear (at redshift 2) for the scale of our simulations, so the variations affect the formation and evolution of halos mainly at later cosmic times. As a quantitative result, we propose the most appropriate values for the numerical parameters of the simulations, such that they do not affect the properties of the halos that form: a force-calculation accuracy smaller than or equal to 0.0001, an integration time step smaller than or equal to 0.005, and a gravitational softening equal to 1/60th of the mean interparticle distance; these correspond to the smallest values explored in the parameter variations. This is an important numerical exercise since, for instance, galaxy structural parameters are believed to depend strongly on dark matter halo structural parameters.

  6. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
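
    To make the time-step sensitivity concrete: vertex positions in such models are typically advanced with an overdamped explicit-Euler update, x <- x + dt * F(x)/eta, so dt is purely an implementation parameter. The toy sketch below uses a stand-in spring force rather than a real vertex-model energy gradient; with a small step the configuration relaxes, while with a large step it does not.

        import numpy as np

        def step_vertices(positions, force_fn, dt, drag=1.0):
            """One explicit-Euler update of overdamped vertex dynamics,
            x <- x + dt * F(x) / drag; dt is an implementation choice."""
            return positions + dt * force_fn(positions) / drag

        def toy_force(positions, rest_length=1.0, k=1.0):
            """Spring-like pull of every vertex toward a unit circle around the origin
            (a stand-in for the gradient of a real vertex-model energy)."""
            r = np.linalg.norm(positions, axis=1, keepdims=True)
            return -k * (r - rest_length) * positions / np.maximum(r, 1e-12)

        pos = np.random.default_rng(3).normal(size=(6, 2))
        for dt in (0.01, 2.5):                   # small vs. overly large time step
            p = pos.copy()
            for _ in range(200):
                p = step_vertices(p, toy_force, dt)
            # the small step relaxes all radii to ~1; the large step does not settle
            print(dt, np.linalg.norm(p, axis=1).round(3))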

  7. LES models for incompressible magnetohydrodynamics derived from the variational multiscale formulation

    NASA Astrophysics Data System (ADS)

    Sondak, David; Oberai, Assad

    2012-10-01

    Novel large eddy simulation (LES) models are developed for incompressible magnetohydrodynamics (MHD). These models include the application of the variational multiscale formulation (VMS) of LES to the equations of incompressible MHD, a new residual-based eddy viscosity model (RBEVM), and a mixed LES model that combines the strengths of both of these models. The new models result in a consistent numerical method that is relatively simple to implement; a dynamic procedure for determining model coefficients is no longer required. The new LES models are tested on a decaying Taylor-Green vortex generalized to MHD and benchmarked against classical and state-of-the-art LES turbulence models as well as direct numerical simulations (DNS). These new models are able to account for the essential MHD physics, which is demonstrated via comparisons of energy spectra. We also compare the performance of our models to a DNS simulation by A. Pouquet et al., for which the ratio of DNS modes to LES modes is 262,144. Additionally, we extend these models to a finite element setting in which boundary conditions play a role. A classic problem on which we test these models is turbulent channel flow, which in the case of MHD is called Hartmann flow.

  8. From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2009-12-01

    Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior: an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are “communities of trust” and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.

  9. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
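
    The essence of the SROM approach is to replace a random input by a small set of representative samples with associated probabilities, so that uncertainty propagation costs only a handful of deterministic model calls. The sketch below uses equal-probability quantiles as a crude stand-in for the optimized SROM samples and weights; the distribution and the toy model are illustrative assumptions, not the framework of the paper.

        import numpy as np
        from scipy.stats import lognorm

        def srom_like_samples(dist, m=5):
            """Pick m representative samples and (here, equal) weights for a random
            input.  A true SROM optimizes both samples and weights to match the
            input CDF and moments; equal-probability quantiles are a crude stand-in."""
            probs = np.full(m, 1.0 / m)
            quantiles = (np.arange(m) + 0.5) / m
            return dist.ppf(quantiles), probs

        def expensive_model(x):
            """Stand-in for a deterministic physics-based model evaluation."""
            return x ** 2 + 3.0 * x

        dist = lognorm(s=0.4, scale=1.0)
        samples, probs = srom_like_samples(dist, m=7)
        outputs = np.array([expensive_model(s) for s in samples])
        mean_estimate = np.dot(probs, outputs)    # propagated mean from 7 model calls
        mc_reference = expensive_model(dist.rvs(size=200000, random_state=0)).mean()
        print(mean_estimate, mc_reference)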

  10. Optical depth in particle-laden turbulent flows

    NASA Astrophysics Data System (ADS)

    Frankel, A.; Iaccarino, G.; Mani, A.

    2017-11-01

    Turbulent clustering of particles causes an increase in the radiation transmission through gas-particle mixtures. Attempts to capture the ensemble-averaged transmission lead to a closure problem called the turbulence-radiation interaction. A simple closure model based on the particle radial distribution function is proposed to capture the effect of turbulent fluctuations in the concentration on radiation intensity. The model is validated against a set of particle-resolved ray tracing experiments through particle fields from direct numerical simulations of particle-laden turbulence. The form of the closure model is generalizable to arbitrary stochastic media with known two-point correlation functions.
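
    The closure problem arises because the ensemble-averaged transmission <exp(-tau)> differs from the transmission of the mean field exp(-<tau>) whenever the optical depth fluctuates, and particle clustering enhances this difference. The toy sketch below shows only that qualitative effect for a randomly fluctuating optical depth; it is not the radial-distribution-function closure proposed in the abstract.

        import numpy as np

        rng = np.random.default_rng(4)

        # optical depth along a line of sight: mean value plus turbulent fluctuations
        tau_mean, sigma = 2.0, 1.0
        tau_samples = np.maximum(rng.normal(tau_mean, sigma, 100000), 0.0)

        mean_field = np.exp(-tau_mean)             # transmission of the mean field
        ensemble = np.mean(np.exp(-tau_samples))   # ensemble-averaged transmission
        print(f"exp(-<tau>) = {mean_field:.3f},  <exp(-tau)> = {ensemble:.3f}")
        # the ensemble average exceeds the mean-field value (Jensen's inequality),
        # which is the transmission enhancement a TRI closure must capture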

  11. The Contact Dynamics method: A nonsmooth story

    NASA Astrophysics Data System (ADS)

    Dubois, Frédéric; Acary, Vincent; Jean, Michel

    2018-03-01

    When velocity jumps occur, the dynamics is said to be nonsmooth. For instance, in collections of contacting rigid bodies, jumps are caused by shocks and dry friction. Without compliance at the interface, contact laws are not only non-differentiable in the usual sense but also multi-valued. Modeling contacting bodies is of interest in order to understand the behavior of numerous mechanical systems such as flexible multi-body systems, granular materials or masonry. These granular materials behave puzzlingly either like a solid or a fluid, and a description in the frame of classical continuum mechanics would be welcome, though it remains far from satisfactory nowadays. Jean-Jacques Moreau greatly contributed to convex analysis, functions of bounded variation, differential measure theory, and sweeping process theory, definitive mathematical tools to deal with nonsmooth dynamics. He converted all these underlying theoretical ideas into an original nonsmooth implicit numerical method called Contact Dynamics (CD): a robust and efficient method to simulate large collections of bodies with frictional contacts and impacts. The CD method offers a very interesting complementary alternative to the family of smoothed explicit numerical methods, often called the Distinct Element Method (DEM). In this paper, developments and improvements of the CD method are presented together with a critical comparative review of the advantages and drawbacks of both approaches.

  12. A critical comparison of second order closures with direct numerical simulation of homogeneous turbulence

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Lumley, John L.

    1991-01-01

    Recently, several second-order closure models have been proposed for closing the second moment equations, in which the velocity-pressure gradient (and scalar-pressure gradient) tensor and the dissipation rate tensor are two of the most important terms. In the literature, these correlation tensors are usually decomposed into a so-called rapid term and a return-to-isotropy term. Models of these terms have been used in global flow calculations together with other modeled terms. However, their individual behavior in different flows has not been fully examined because these terms are unmeasurable in the laboratory. Recently, the development of direct numerical simulation (DNS) of turbulence has given us the opportunity to do this kind of study. With direct numerical simulation, we may use the solution to exactly calculate the values of these correlation terms and then directly compare them with the values from their modeled formulations (models). Here, we make direct comparisons of five representative rapid models and eight return-to-isotropy models using the DNS data of forty-five homogeneous flows computed by Rogers et al. (1986) and Lee et al. (1985). The purpose of these direct comparisons is to explore the performance of these models in different flows and identify the ones which give the best performance. The modeling procedure, model constraints, and the various evaluated models are described. The detailed results of the direct comparisons are discussed, and a few concluding remarks on turbulence models are given.

  13. A discrete geometric approach for simulating the dynamics of thin viscous threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audoly, B., E-mail: audoly@lmm.jussieu.fr; Clauvelin, N.; Brun, P.-T.

    We present a numerical model for the dynamics of thin viscous threads based on a discrete, Lagrangian formulation of the smooth equations. The model makes use of a condensed set of coordinates, called the centerline/spin representation: the kinematic constraints linking the centerline's tangent to the orientation of the material frame is used to eliminate two out of three degrees of freedom associated with rotations. Based on a description of twist inspired from discrete differential geometry and from variational principles, we build a full-fledged discrete viscous thread model, which includes in particular a discrete representation of the internal viscous stress. Consistency of the discrete model with the classical, smooth equations for thin threads is established formally. Our numerical method is validated against reference solutions for steady coiling. The method makes it possible to simulate the unsteady behavior of thin viscous threads in a robust and efficient way, including the combined effects of inertia, stretching, bending, twisting, large rotations and surface tension.

  14. The Latest on the Venus Thermospheric General Circulation Model: Capabilities and Simulations

    NASA Technical Reports Server (NTRS)

    Brecht, A. S.; Bougher, S. W.; Parkinson, C. D.

    2017-01-01

    Venus has a complex and dynamic upper atmosphere. This has been observed many times by ground-based telescopes and by orbiters, probes, and fly-by missions going to other planets. Two over-arching questions are generally asked when examining the Venus upper atmosphere: (1) what creates the complex structure in the atmosphere, and (2) what drives the varying dynamics. Numerical modeling is a great way to interpret and connect observations in order to address these questions; in the case of the middle and upper atmosphere (above the cloud tops), a 3D hydrodynamic numerical model called the Venus Thermospheric General Circulation Model (VTGCM) can be used. The VTGCM can produce climatological averages of key features in comparison to observations (i.e., nightside temperature, O2 IR nightglow emission). More recently, the VTGCM has been expanded to include new chemical constituents and airglow emissions, as well as new parameterizations to address waves and their impact on the varying global circulation and corresponding airglow distributions.

  15. An Object-Oriented Finite Element Framework for Multiphysics Phase Field Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael R Tonks; Derek R Gaston; Paul C Millett

    2012-01-01

    The phase field approach is a powerful and popular method for modeling microstructure evolution. In this work, advanced numerical tools are used to create a phase field framework that facilitates rapid model development. This framework, called MARMOT, is based on Idaho National Laboratory's finite element Multiphysics Object-Oriented Simulation Environment. In MARMOT, the system of phase field partial differential equations (PDEs) is solved simultaneously with PDEs describing additional physics, such as solid mechanics and heat conduction, using the Jacobian-Free Newton-Krylov method. An object-oriented architecture takes advantage of commonalities in phase field models to facilitate the development of new models with very little additional code. In addition, MARMOT provides access to mesh and time step adaptivity, reducing the cost of performing simulations with large disparities in both spatial and temporal scales. In this work, phase separation simulations are used to show the numerical performance of MARMOT. Deformation-induced grain growth and void growth simulations are included to demonstrate the multiphysics capability.
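
    For readers unfamiliar with phase field simulation, the sketch below integrates the Cahn-Hilliard equation for spinodal phase separation with a simple explicit finite-difference scheme. It is a minimal illustration of the physics MARMOT targets, not of MARMOT itself, which uses implicit finite elements and JFNK; the double-well free energy, mobility, grid, and time step are assumptions chosen for the example.

```python
import numpy as np

def laplacian(field, dx):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
            np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4.0 * field) / dx**2

# Assumed parameters for a minimal spinodal-decomposition demo.
n, dx, dt = 128, 1.0, 0.01
mobility, kappa, well_height = 1.0, 1.0, 1.0
rng = np.random.default_rng(0)
c = 0.5 + 0.02 * (rng.random((n, n)) - 0.5)      # near-critical composition

for step in range(5000):
    # Chemical potential: derivative of W*c^2*(1-c)^2 minus the gradient-energy term.
    mu = 2.0 * well_height * c * (1.0 - c) * (1.0 - 2.0 * c) - kappa * laplacian(c, dx)
    c += dt * mobility * laplacian(mu, dx)        # explicit Cahn-Hilliard update

print("composition range after coarsening:", float(c.min()), float(c.max()))
```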

  16. Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows

    NASA Astrophysics Data System (ADS)

    Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.

    2014-12-01

    The U.S. Department of Energy (DOE) is investing in the development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, which include toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is the automated job launching and monitoring capability, which allows a user to submit and monitor simulation runs on high-performance, parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to users who may not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.

  17. The structure of a market containing boundedly rational firms

    NASA Astrophysics Data System (ADS)

    Ibrahim, Adyda; Zura, Nerda; Saaban, Azizan

    2017-11-01

    The structure of a market is determined by the number of active firms in it. Over time, this number is affected by the exit of existing firms, called incumbents, and the entry of new firms, called entrants. In this paper, we consider a market governed by the Cobb-Douglas utility function, so that the demand function is isoelastic. Each firm is assumed to produce a single homogeneous product at a constant unit cost. Furthermore, firms are assumed to be boundedly rational in adjusting their outputs at each period. A firm is considered to exit the market if its output becomes negative. In this paper, the market is assumed to have zero barriers to entry; therefore, an exiting firm can re-enter the market if its output becomes positive again, and new firms can enter the market easily. Based on these assumptions and rules, a mathematical model was developed and numerical simulations were run using Matlab. With certain parameter values, initial numerical simulations showed that in the long run the number of firms that manage to survive in the market varies between zero and 30. This initial result is consistent with the idea that zero barriers to entry may produce a perfectly competitive market.
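
    A minimal sketch of the kind of dynamics this abstract describes is given below: firms facing isoelastic demand and a constant unit cost adjust output with a boundedly rational gradient rule, exit when output turns negative, and may re-enter. The particular adjustment rule, the re-entry mechanism, and all parameter values are assumptions made for illustration; they are not the authors' model.

```python
import numpy as np

# Assumed parameters (illustrative, not from the paper).
n_firms, horizon = 30, 500
unit_cost, speed = 0.1, 0.3
rng = np.random.default_rng(1)
q = rng.uniform(0.0, 1.0, n_firms)          # outputs; q <= 0 means the firm is out of the market

for t in range(horizon):
    active = q > 0.0
    Q = q[active].sum()
    if Q <= 0.0:
        break                               # empty market
    # Isoelastic demand p = 1/Q, so profit_i = q_i/Q - c*q_i and
    # d(profit_i)/d(q_i) = (Q - q_i)/Q^2 - c.
    marginal_profit = (Q - q) / Q**2 - unit_cost
    # Boundedly rational (gradient) adjustment, proportional to current output for
    # incumbents; exited firms probe with a small assumed weight so they can re-enter.
    q = q + speed * np.where(active, q, 0.05) * marginal_profit

survivors = int((q > 0).sum())
print("firms surviving after", horizon, "periods:", survivors)
```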

  18. Changing Weather Extremes Call for Early Warning of Potential for Catastrophic Fire

    NASA Astrophysics Data System (ADS)

    Boer, Matthias M.; Nolan, Rachael H.; Resco De Dios, Víctor; Clarke, Hamish; Price, Owen F.; Bradstock, Ross A.

    2017-12-01

    Changing frequencies of extreme weather events and shifting fire seasons call for enhanced capability to forecast where and when forested landscapes switch from a nonflammable (i.e., wet fuel) state to the highly flammable (i.e., dry fuel) state required for catastrophic forest fires. Current forest fire danger indices used in Europe, North America, and Australia rate potential fire behavior by combining numerical indices of fuel moisture content, potential rate of fire spread, and fire intensity. These numerical rating systems lack the physical basis required to reliably quantify forest flammability outside the environments of their development or under novel climate conditions. Here, we argue that exceedance of critical forest flammability thresholds is a prerequisite for major forest fires and therefore early warning systems should be based on a reliable prediction of fuel moisture content plus a regionally calibrated model of how forest fire activity responds to variation in fuel moisture content. We demonstrate the potential of this approach through a case study in Portugal. We use a physically based fuel moisture model with historical weather and fire records to identify critical fuel moisture thresholds for forest fire activity and then show that the catastrophic June 2017 forest fires in central Portugal erupted shortly after fuels in the region dried out to historically unprecedented levels.

  19. An Open Simulation System Model for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Williams, Anthony D.

    1995-01-01

    A model for a generic and open environment for running multi-code or multi-application simulations - called the Open Simulation System Model (OSSM) - is proposed and defined. This model attempts to meet the requirements of complex systems like the Numerical Propulsion Simulator System (NPSS). OSSM places no restrictions on the types of applications that can be integrated at any stage of its evolution; this includes applications of different disciplines, fidelities, etc. An implementation strategy is proposed that starts with a basic prototype and evolves over time to accommodate an increasing number of applications. Potential (standard) software that may aid in the design and implementation of the system is also identified.

  20. Conformity and Dissonance in Generalized Voter Models

    NASA Astrophysics Data System (ADS)

    Page, Scott E.; Sander, Leonard M.; Schneider-Mizell, Casey M.

    2007-09-01

    We generalize the voter model to include social forces that produce conformity among voters and avoidance of cognitive dissonance of opinions within a voter. The time for both conformity and consistency (which we call the exit time) is, in general, much longer than for either process alone. We show that our generalized model can be applied quite widely: it is a form of Wright's island model of population genetics, and is related to problems in the physical sciences. We give scaling arguments, numerical simulations, and analytic estimates for the exit time for a range of relative strengths in the tendency to conform and to avoid dissonance.
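
    As a toy illustration of coupling conformity between voters with internal consistency within a voter, the sketch below simulates agents on a complete graph, each holding two binary opinions, and measures the time until full consensus with no internal dissonance (the "exit time" in the abstract's terminology). The update rules, rates, and network are assumptions made for the example and are not the authors' generalized voter model.

```python
import random

random.seed(42)
n_agents, p_conform = 20, 0.5
# Each agent holds two binary opinions; "dissonance" means the two slots disagree.
opinions = [[random.randint(0, 1), random.randint(0, 1)] for _ in range(n_agents)]

def consensus_reached(ops):
    """Exit condition: every agent is internally consistent and all agents agree."""
    first = ops[0]
    return all(o == first and o[0] == o[1] for o in ops)

steps = 0
while not consensus_reached(opinions):
    steps += 1
    i = random.randrange(n_agents)
    if random.random() < p_conform:
        # Conformity: copy one randomly chosen opinion slot from a random other agent.
        j = random.randrange(n_agents)
        slot = random.randint(0, 1)
        opinions[i][slot] = opinions[j][slot]
    else:
        # Dissonance avoidance: align one internal opinion with the other.
        slot = random.randint(0, 1)
        opinions[i][slot] = opinions[i][1 - slot]

print("exit time (update steps):", steps)
```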

  1. A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows

    NASA Technical Reports Server (NTRS)

    Montgomery, Matthew D.; Verdon, Joseph M.

    1996-01-01

    A three-dimensional, linearized, Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic response characteristics of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. In addition, a numerical model for linearized inviscid unsteady flow, which is based upon an existing nonlinear, implicit, wave-split, finite volume analysis, is described. These aerodynamic and numerical models have been implemented into an unsteady flow code, called LINFLUX. A preliminary version of the LINFLUX code is applied herein to selected benchmark three-dimensional, subsonic, unsteady flows to illustrate its current capabilities and to uncover existing problems and deficiencies. The numerical results indicate that good progress has been made toward developing a reliable and useful three-dimensional prediction capability. However, some problems, associated with the implementation of an unsteady displacement field and with numerical errors near solid boundaries, still exist. Also, accurate far-field conditions must be incorporated into the LINFLUX analysis so that it can be applied to unsteady flows driven by external aerodynamic excitations.

  2. Input-output relationship in social communications characterized by spike train analysis

    NASA Astrophysics Data System (ADS)

    Aoki, Takaaki; Takaguchi, Taro; Kobayashi, Ryota; Lambiotte, Renaud

    2016-10-01

    We study the dynamical properties of human communication through different channels, i.e., short messages, phone calls, and emails, adopting techniques from neuronal spike train analysis in order to characterize the temporal fluctuations of successive interevent times. We first measure the so-called local variation (LV) of incoming and outgoing event sequences of users and find that these in- and out-LV values are positively correlated for short messages and uncorrelated for phone calls and emails. Second, we analyze the response-time distribution after receiving a message to focus on the input-output relationship in each of these channels. We find that the time scales and amplitudes of response differ between the three channels. To understand the effects of the response-time distribution on the correlations between the LV values, we develop a point process model whose activity rate is modulated by incoming and outgoing events. Numerical simulations of the model indicate that a quick response to incoming events and a refractory effect after outgoing events are key factors to reproduce the positive LV correlations.
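
    The local variation statistic mentioned above has a standard definition in spike-train analysis; the sketch below computes it from a sequence of event times under the usual convention, LV = 3/(n-1) Σ ((T_i - T_{i+1})/(T_i + T_{i+1}))², which is assumed here to match the paper's usage. The example event sequences are synthetic.

```python
import numpy as np

def local_variation(event_times):
    """Local variation (LV) of a sequence of event times.

    LV compares successive interevent intervals T_i and T_{i+1}; it is close to 1
    for a Poisson process, below 1 for regular sequences, and above 1 for bursty ones.
    """
    t = np.sort(np.asarray(event_times, dtype=float))
    intervals = np.diff(t)
    if intervals.size < 2:
        raise ValueError("need at least three events")
    a, b = intervals[:-1], intervals[1:]
    return 3.0 * np.mean(((a - b) / (a + b)) ** 2)

# Example: a Poisson sequence gives LV near 1, a nearly regular one gives LV well below 1.
rng = np.random.default_rng(0)
poisson_times = np.cumsum(rng.exponential(1.0, 1000))
regular_times = np.arange(1000) + 0.05 * rng.standard_normal(1000)
print("LV (Poisson):", round(local_variation(poisson_times), 2))
print("LV (regular):", round(local_variation(regular_times), 2))
```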

  3. A stable numerical solution method in-plane loading of nonlinear viscoelastic laminated orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1989-01-01

    In response to the tremendous growth in the development of advanced materials, such as fiber-reinforced plastic (FRP) composite materials, a new numerical method is developed to analyze and predict the time-dependent properties of these materials. Basic concepts in viscoelasticity, laminated composites, and previous viscoelastic numerical methods are presented. A stable numerical method, called the nonlinear differential equation method (NDEM), is developed to calculate the in-plane stresses and strains over any time period for a general laminate constructed from nonlinear viscoelastic orthotropic plies. The method is implemented in an in-plane stress analysis computer program, called VCAP, to demonstrate its usefulness and to verify its accuracy. A number of actual experimental test results performed on Kevlar/epoxy composite laminates are compared to predictions calculated from the numerical method.

  4. Numerical modelling of vehicular pollution dispersion: The application of computational fluid dynamics techniques, a case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanderheyden, M.D.; Dajka, S.C.; Sinclair, R.

    1997-12-31

    Numerical modelling of vehicular emissions using the United States Environmental Protection Agency's CALINE4 and CAL3QHC dispersion models to predict air quality impacts in the vicinity of roadways is a widely accepted means of evaluating vehicular emissions impacts. These numerical models account for atmospheric dispersion in both open and suburban terrain. When assessing roadways in urban areas with numerous large buildings, however, the models are unable to account for the complex airflows and therefore do not provide satisfactory estimates of pollutant concentrations. Either wind tunnel modelling or Computational Fluid Dynamics (CFD) techniques can be used to assess the impact of vehicle emissions in an urban core. This paper presents a case study where CFD is used to predict worst-case air quality impacts for two development configurations: an existing roadway configuration and a proposed configuration with an elevated pedestrian walkway. In assessing these configurations, worst-case meteorology and traffic conditions are modeled to allow for the prediction of pollutant concentrations due to vehicular emissions on two major streets in Hong Kong. The CFD modelling domain is divided into thousands of control volumes. Each control volume has a central point, called a node, where velocities, pollutant concentrations and other auxiliary variables are calculated. The region of interest, the pedestrian link and its immediate surroundings, has a denser distribution of nodes in order to give a better resolution of local flow details. Separate CFD modelling runs were undertaken for each development configuration for wind direction increments of 15 degrees. For comparison of the development scenarios, pollutant concentrations (carbon monoxide, nitrogen dioxide and particulate matter) are predicted at up to 99 receptor nodes representing sensitive locations.

  5. Parallel numerical modeling of hybrid-dimensional compositional non-isothermal Darcy flows in fractured porous media

    NASA Astrophysics Data System (ADS)

    Xing, F.; Masson, R.; Lopez, S.

    2017-09-01

    This paper introduces a new discrete fracture model accounting for non-isothermal compositional multiphase Darcy flows and complex networks of fractures with intersecting, immersed and non-immersed fractures. The so-called hybrid-dimensional model, which couples a 2D model in the fractures with a 3D model in the matrix, is first derived rigorously starting from the equi-dimensional matrix-fracture model. Then, it is discretized using a fully implicit time integration combined with the Vertex Approximate Gradient (VAG) finite volume scheme, which is adapted to polyhedral meshes and anisotropic heterogeneous media. The fully coupled systems are assembled and solved in parallel using the Single Program Multiple Data (SPMD) paradigm with one layer of ghost cells. This strategy allows for a local assembly of the discrete systems. An efficient preconditioner is implemented to solve the linear systems at each time step and each Newton-type iteration of the simulation. The numerical efficiency of our approach is assessed on different meshes, fracture networks, and physical settings in terms of parallel scalability, nonlinear convergence and linear convergence.

  6. The 3-D viscous flow CFD analysis of the propeller effect on an advanced ducted propeller subsonic inlet

    NASA Technical Reports Server (NTRS)

    Iek, Chanthy; Boldman, Donald R.; Ibrahim, Mounir

    1993-01-01

    A time-marching Navier-Stokes code called PARC3D was used to study the 3-D viscous flow associated with an advanced ducted propeller (ADP) subsonic inlet at take-off operating conditions. At a free stream Mach number of 0.2, experimental data for the inlet-with-propeller test model indicated that the airflow was attached on the cowl windward lip at an angle of attack of 25 degrees, became unstable at 29 degrees, and separated at 30 degrees. An experimental study with a similar inlet and with no propeller (through-flow) indicated that flow separation occurred at an angle of attack a few degrees below the value observed when the inlet was tested with the propeller. This tends to indicate that the propeller exerts a favorable effect on the inlet performance. During the through-flow experiment a stationary blockage device was used to successfully simulate the propeller effect on the inlet flow field at angles of attack. In the present numerical study, this flow blockage was modeled via a PARC3D computational boundary condition (BC) called the screen BC. The formulation of this BC was based on one-and-a-half-dimensional actuator disk theory. The screen BC was applied at the propeller face station of the computational grid. Numerical results were obtained with and without the screen BC. The application of the screen BC in this numerical study provided results similar to those of past experimental efforts in which either the blockage device or the propeller was used.

  7. Directional ratio based on parabolic molecules and its application to the analysis of tubular structures

    NASA Astrophysics Data System (ADS)

    Labate, Demetrio; Negi, Pooran; Ozcan, Burcin; Papadakis, Manos

    2015-09-01

    As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called the Directional Ratio, which is especially effective at distinguishing isotropic from anisotropic structures. This task is especially useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body called the soma and many elongated processes called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.

  8. Flowfield characterization and model development in detonation tubes

    NASA Astrophysics Data System (ADS)

    Owens, Zachary Clark

    A series of experiments and numerical simulations are performed to advance the understanding of flowfield phenomena and impulse generation in detonation tubes. Experiments employing laser-based velocimetry, high-speed schlieren imaging and pressure measurements are used to construct a dataset against which numerical models can be validated. The numerical modeling culminates in the development of a two-dimensional, multi-species, finite-rate-chemistry, parallel, Navier-Stokes solver. The resulting model is specifically designed to assess unsteady, compressible, reacting flowfields, and its utility for studying multidimensional detonation structure is demonstrated. A reduced, quasi-one-dimensional model with source terms accounting for wall losses is also developed for rapid parametric assessment. Using these experimental and numerical tools, two primary objectives are pursued. The first objective is to gain an understanding of how nozzles affect unsteady, detonation flowfields and how they can be designed to maximize impulse in a detonation based propulsion system called a pulse detonation engine. It is shown that unlike conventional, steady-flow propulsion systems where converging-diverging nozzles generate optimal performance, unsteady detonation tube performance during a single-cycle is maximized using purely diverging nozzles. The second objective is to identify the primary underlying mechanisms that cause velocity and pressure measurements to deviate from idealized theory. An investigation of the influence of non-ideal losses including wall heat transfer, friction and condensation leads to the development of improved models that reconcile long-standing discrepancies between predicted and measured detonation tube performance. It is demonstrated for the first time that wall condensation of water vapor in the combustion products can cause significant deviations from ideal theory.

  9. The life of a meander bend: Connecting shape and dynamics via analysis of a numerical model

    NASA Astrophysics Data System (ADS)

    Schwenk, Jon; Lanzoni, Stefano; Foufoula-Georgiou, Efi

    2015-04-01

    Analysis of bend-scale meandering river dynamics is a problem of theoretical and practical interest. This work introduces a method for extracting and analyzing the history of individual meander bends from inception until cutoff (called "atoms") by tracking backward through time the set of two cutoff nodes in numerical meander migration models. Application of this method to a simplified yet physically based model provides access to previously unavailable bend-scale meander dynamics over long times and at high temporal resolutions. We find that before cutoffs, the intrinsic model dynamics invariably simulate a prototypical cutoff atom shape we dub simple. Once perturbations from cutoffs occur, two other archetypal cutoff planform shapes emerge called long and round that are distinguished by a stretching along their long and perpendicular axes, respectively. Three measures of meander migration—growth rate, average migration rate, and centroid migration rate—are introduced to capture the dynamic lives of individual bends and reveal that similar cutoff atom geometries share similar dynamic histories. Specifically, through the lens of the three shape types, simples are seen to have the highest growth and average migration rates, followed by rounds, and finally longs. Using the maximum average migration rate as a metric describing an atom's dynamic past, we show a strong connection between it and two metrics of cutoff geometry. This result suggests both that early formative dynamics may be inferred from static cutoff planforms and that there exists a critical period early in a meander bend's life when its dynamic trajectory is most sensitive to cutoff perturbations. An example of how these results could be applied to Mississippi River oxbow lakes with unknown historic dynamics is shown. The results characterize the underlying model and provide a framework for comparisons against more complex models and observed dynamics.

  10. Numerical studies of the KP line-solitons

    NASA Astrophysics Data System (ADS)

    Chakravarty, S.; McDowell, T.; Osborne, M.

    2017-03-01

    The Kadomtsev-Petviashvili (KP) equation admits a class of solitary wave solutions localized along distinct rays in the xy-plane, called line-solitons, which describe the interaction of shallow water waves on a flat surface. These wave interactions have been observed on long, flat beaches and have been recreated in laboratory experiments. In this paper, the line-solitons are investigated via direct numerical simulations of the KP equation, and the interactions of the evolved solitary wave patterns are studied. The objective is to obtain greater insight into solitary wave interactions in shallow water and to determine the extent to which the KP equation is a good model for describing these nonlinear interactions.
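
    For reference, the KP equation in one standard form found in the water-wave literature is reproduced below; the normalization and sign convention are assumed rather than taken from the paper. The case sigma^2 = +1 (KP-II) is the one usually associated with shallow-water line-solitons.

```latex
% Kadomtsev-Petviashvili equation (KP-II for \sigma^2 = +1, the shallow-water case).
\[
  \frac{\partial}{\partial x}\!\left( u_t + 6\,u\,u_x + u_{xxx} \right)
  + 3\,\sigma^2\, u_{yy} = 0 .
\]
```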

  11. Use of transport models for wildfire behavior simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linn, R.R.; Harlow, F.H.

    1998-01-01

    Investigators have attempted to describe the behavior of wildfires for over fifty years. Current models for numerical description are mainly algebraic and based on statistical or empirical ideas. The authors have developed a transport model called FIRETEC. The use of transport formulations connects the propagation rates to the full conservation equations for energy, momentum, species concentrations, mass, and turbulence. In this paper, highlights of the model formulation and results are described. The goal of the FIRETEC model is to describe the most probable average behavior of wildfires in a wide variety of conditions. FIRETEC represents the essence of the combination of many small-scale processes without resolving each process in complete detail.

  12. Investigation of the effects of aeroelastic deformations on the radar cross section of aircraft

    NASA Astrophysics Data System (ADS)

    McKenzie, Samuel D.

    1991-12-01

    The effects of aeroelastic deformations on the radar cross section (RCS) of a T-38 trainer jet and a C-5A transport aircraft are examined and characterized. Realistic representations of structural wing deformations are obtained from a mechanical/computer aided design software package called NASTRAN. NASTRAN is used to evaluate the structural parameters of the aircraft as well as the restraints and loads associated with realistic flight conditions. Geometries for both the non-deformed and deformed airframes are obtained from the NASTRAN models and translated into RCS models. The RCS is analyzed using a numerical modeling code called the Radar Cross Section - Basic Scattering Code, version 2 which was developed at the Ohio State University and is based on the uniform geometric theory of diffraction. The code is used to analyze the effects of aeroelastic deformations on the RCS of the aircraft by comparing the computed RCS representing the deformed airframe to that of the non-deformed airframe and characterizing the differences between them.

  13. An explicit closed-form analytical solution for European options under the CGMY model

    NASA Astrophysics Data System (ADS)

    Chen, Wenting; Du, Meiyu; Xu, Xiang

    2017-01-01

    In this paper, we consider the analytical pricing of European path-independent options under the CGMY model, which is a particular type of pure-jump Lévy process that agrees well with many observed properties of real market data by allowing the diffusions and jumps to have both finite and infinite activity and variation. It is shown that, under this model, the option price is governed by a fractional partial differential equation (FPDE) with both left-side and right-side spatial-fractional derivatives. In comparison with derivatives of integer order, fractional derivatives at a point involve not only properties of the function at that particular point, but also information about the function over a subset of the entire domain of definition. This "globalness" of the fractional derivatives adds an additional degree of difficulty when either analytical methods or numerical solutions are attempted. Albeit difficult, we have still managed to derive an explicit closed-form analytical solution for European options under the CGMY model. Based on our solution, the asymptotic behaviors of the option price and the put-call parity under the CGMY model are further discussed. Practically, a reliable numerical evaluation technique for the current formula is proposed. With the numerical results, some analyses of the impacts of four key parameters of the CGMY model on European option prices are also provided.

  14. Turbulent Concentration of mm-Size Particles in the Protoplanetary Nebula: Scale-Dependent Cascades

    NASA Technical Reports Server (NTRS)

    Cuzzi, J. N.; Hartlep, T.

    2015-01-01

    The initial accretion of primitive bodies (here, asteroids in particular) from freely-floating nebula particles remains problematic. Traditional growth-by-sticking models encounter a formidable "meter-size barrier" (or even a mm-to-cm-size barrier) in turbulent nebulae, making the preconditions for so-called "streaming instabilities" difficult to achieve even for so-called "lucky" particles. Even if growth by sticking could somehow breach the meter-size barrier, turbulent nebulae present further obstacles through the 1-10 km size range. On the other hand, nonturbulent nebulae form large asteroids too quickly to explain long spreads in formation times, or the dearth of melted asteroids. Theoretical understanding of nebula turbulence is itself in flux; recent models of MRI (magnetically-driven) turbulence favor low- or no-turbulence environments, but purely hydrodynamic turbulence is making a comeback, with two recently discovered mechanisms generating robust turbulence which do not rely on magnetic fields at all. An important clue regarding planetesimal formation is an apparent 100 km diameter peak in the pre-depletion, pre-erosion mass distribution of asteroids; scenarios leading directly from independent nebula particulates to large objects of this size, which avoid the problematic m-km size range, could be called "leapfrog" scenarios. The leapfrog scenario we have studied in detail involves the formation of dense clumps of aerodynamically selected, typically mm-size particles in turbulence, which can under certain conditions shrink inexorably on 100-1000 orbit timescales and form 10-100 km diameter sandpile planetesimals. There is evidence that at least the ordinary chondrite parent bodies were initially composed entirely of a homogeneous mix of such particles. Thus, while they are arcane, turbulent concentration models acting directly on chondrule-size particles are worthy of deeper study. The typical sizes of planetesimals and the rate of their formation can be estimated using a statistical model with properties inferred from large numerical simulations of turbulence. Nebula turbulence is described by its Reynolds number Re = (L/η)^(4/3), where L = Hα^(1/2) is the largest eddy scale, H is the nebula gas vertical scale height, α the turbulent viscosity parameter, and η is the Kolmogorov or smallest scale in turbulence (typically about 1 km), with eddy turnover time t_η. In the nebula, Re is far larger than any numerical simulation can handle, so some physical arguments are needed to extend the results of numerical simulations to nebula conditions. In this paper, we report new physics to be incorporated into our statistical models.
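
    The sketch below simply evaluates the Reynolds number scaling quoted in the abstract for a set of hypothetical nebula parameters; the numerical values of H, alpha, and eta are placeholders chosen only to illustrate the orders of magnitude involved, not values taken from the paper.

```python
# Hypothetical nebula parameters (placeholders for illustration only).
H_km = 5.0e6        # gas vertical scale height, km (assumed)
alpha = 1.0e-4      # turbulent viscosity parameter (assumed)
eta_km = 1.0        # Kolmogorov scale, km (order quoted in the abstract)

L_km = H_km * alpha**0.5            # largest eddy scale, L = H * alpha^(1/2)
Re = (L_km / eta_km) ** (4.0 / 3.0) # Reynolds number, Re = (L/eta)^(4/3)
print(f"largest eddy scale L ~ {L_km:.2e} km, Reynolds number Re ~ {Re:.2e}")
```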

  15. Investigating the impact of surface wave breaking on modeling the trajectories of drifters in the northern Adriatic Sea during a wind-storm event

    USGS Publications Warehouse

    Carniel, S.; Warner, J.C.; Chiggiato, J.; Sclavo, M.

    2009-01-01

    An accurate numerical prediction of the oceanic upper layer velocity is a demanding requirement for many applications at sea and is a function of several near-surface processes that need to be incorporated in a numerical model. Among them, we assess the effects of vertical resolution, different vertical mixing parameterizations (the so-called Generic Length Scale (GLS) set of k-ε, k-ω and gen, and the Mellor-Yamada scheme), and surface roughness values on turbulent kinetic energy (k) injection from breaking waves. First, we modified the GLS turbulence closure formulation in the Regional Ocean Modeling System (ROMS) to incorporate the surface flux of turbulent kinetic energy due to wave breaking. Then, we applied the model to idealized test cases, exploring the sensitivity to the above-mentioned factors. Last, the model was applied to a realistic situation in the Adriatic Sea driven by numerical meteorological forcings and river discharges. In this case, numerical drifters were released during an intense episode of Bora winds that occurred in mid-February 2003, and their trajectories compared to the displacement of satellite-tracked drifters deployed during the ADRIA02-03 sea-truth campaign. Results indicated that the inclusion of the wave breaking process helps improve the accuracy of the numerical simulations, subject to an increase in the typical value of the surface roughness z0. Specifically, the best performance was obtained using α_CH = 56,000 in the Charnock formula, with the wave breaking parameterization activated and one of the k-type GLS closures as the turbulence model. With these options, the relative error with respect to the average distance of the drifter was about 25% (5.5 km/day). The most sensitive factor in the model was found to be the value of α_CH, enhanced with respect to its standard value, followed by the adoption of the wave breaking parameterization and the particular turbulence closure model selected. © 2009 Elsevier Ltd.

  16. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV-wave artifact in seismic modeling of anisotropic media and maintains the stability of wavefield propagation for large time steps.
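
    As a much simpler illustration of structure-preserving time stepping for wave propagation, the sketch below advances the 1D acoustic wave equation with the leapfrog (Stoermer-Verlet) scheme, which is symplectic. It is not the paper's symplectic Fourier finite-difference method, and the grid, velocity, and initial pulse are assumptions chosen for the example.

```python
import numpy as np

# Assumed 1D setup (illustration only): constant velocity, Gaussian initial pulse.
nx, dx, c = 600, 5.0, 2000.0              # grid points, spacing [m], velocity [m/s]
dt = 0.8 * dx / c                         # time step satisfying the CFL condition
x = np.arange(nx) * dx
u = np.exp(-((x - 1500.0) / 60.0) ** 2)   # displacement field at t = 0
v = np.zeros(nx)                          # time derivative of u, staggered by dt/2

def laplacian_1d(f, dx):
    lap = np.zeros_like(f)
    lap[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return lap                            # fixed (zero) boundaries

for step in range(1000):
    # Leapfrog / Stoermer-Verlet update: "kick" the velocity, then "drift" the field.
    v += dt * c**2 * laplacian_1d(u, dx)
    u += dt * v

print("max |u| after propagation:", float(np.abs(u).max()))
```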

  17. Dipole excitation of surface plasmon on a conducting sheet: Finite element approximation and validation

    NASA Astrophysics Data System (ADS)

    Maier, Matthias; Margetis, Dionisios; Luskin, Mitchell

    2017-06-01

    We formulate and validate a finite element approach to the propagation of a slowly decaying electromagnetic wave, called surface plasmon-polariton, excited along a conducting sheet, e.g., a single-layer graphene sheet, by an electric Hertzian dipole. By using a suitably rescaled form of time-harmonic Maxwell's equations, we derive a variational formulation that enables a direct numerical treatment of the associated class of boundary value problems by appropriate curl-conforming finite elements. The conducting sheet is modeled as an idealized hypersurface with an effective electric conductivity. The requisite weak discontinuity for the tangential magnetic field across the hypersurface can be incorporated naturally into the variational formulation. We carry out numerical simulations for an infinite sheet with constant isotropic conductivity embedded in two spatial dimensions; and validate our numerics against the closed-form exact solution obtained by the Fourier transform in the tangential coordinate. Numerical aspects of our treatment such as an absorbing perfectly matched layer, as well as local refinement and a posteriori error control are discussed.

  18. The Computerized Anatomical Man (CAM) model

    NASA Technical Reports Server (NTRS)

    Billings, M. P.; Yucker, W. R.

    1973-01-01

    A computerized anatomical man (CAM) model, representing the most detailed and anatomically correct geometrical model of the human body yet prepared, has been developed for use in analyzing radiation dose distribution in man. This model of a 50-percentile standing USAF man comprises some 1100 unique geometric surfaces and some 2450 solid regions. Internal body geometry such as organs, voids, bones, and bone marrow are explicitly modeled. A computer program called CAMERA has also been developed for performing analyses with the model. Such analyses include tracing rays through the CAM geometry, placing results on magnetic tape in various forms, collapsing areal density data from ray tracing information to areal density distributions, preparing cross section views, etc. Numerous computer drawn cross sections through the CAM model are presented.

  19. The CALL-SLA Interface: Insights from a Second-Order Synthesis

    ERIC Educational Resources Information Center

    Plonsky, Luke; Ziegler, Nicole

    2016-01-01

    The relationship between computer-assisted language learning (CALL) and second language acquisition (SLA) has been studied both extensively, covering numerous subdomains, and intensively, resulting in hundreds of primary studies. It is therefore no surprise that CALL researchers, as in other areas of applied linguistics, have turned in recent…

  20. Physical modelling of LNG rollover in a depressurized container filled with water

    NASA Astrophysics Data System (ADS)

    Maksim, Dadonau; Denissenko, Petr; Hubert, Antoine; Dembele, Siaka; Wen, Jennifer

    2015-11-01

    Stable density stratification of multi-component Liquefied Natural Gas causes it to form distinct layers, with the upper layer having a higher fraction of the lighter components. Heat flux through the walls and base of the container results in buoyancy-driven convection accompanied by heat and mass transfer between the layers. The equilibration of the densities of the top and bottom layers, normally caused by the preferential evaporation of nitrogen, may induce an imbalance in the system and trigger a rapid mixing process, the so-called rollover. Numerical simulation of rollover is complicated and the codes require validation. Physical modelling of the phenomenon has been performed in a water-filled depressurized vessel. Reducing the gas pressure in the container to levels comparable to the hydrostatic pressure in the water column allows industrial reservoirs tens of metres tall to be modelled with a 20 cm laboratory setup. Additionally, it allows superheating of the base fluid layer to be modelled at temperatures close to room temperature. Flow visualizations and parametric studies are presented. Results are related to the outcomes of numerical modelling.

  1. Digital signal processing based on inverse scattering transform.

    PubMed

    Turitsyna, Elena G; Turitsyn, Sergei K

    2013-10-15

    Through numerical modeling, we illustrate the possibility of a new approach to digital signal processing in coherent optical communications based on the application of the so-called inverse scattering transform. Considering without loss of generality a fiber link with normal dispersion and quadrature phase shift keying signal modulation, we demonstrate how an initial information pattern can be recovered (without direct backward propagation) through the calculation of nonlinear spectral data of the received optical signal.

  2. DEM simulation of dendritic grain random packing: application to metal alloy solidification

    NASA Astrophysics Data System (ADS)

    Olmedilla, Antonio; Založnik, Miha; Combeau, Hervé

    2017-06-01

    The random packing of equiaxed dendritic grains in metal-alloy solidification is numerically simulated and validated against an experimental model. The phenomenon is characterized by a driving force induced by the solid-liquid density difference: the solid dendritic grains, nucleated in the melt, sediment and pack with a relatively low inertia-to-dissipation ratio, the so-called Stokes number. The characteristics of the packed particle porous structure, such as the solid packing fraction, affect the final solidified product. A multi-sphere clumping Discrete Element Method (DEM) approach is employed to predict the solid packing fraction as a function of the grain geometry under solidification conditions. Five different monodisperse, noncohesive, frictionless particle collections are numerically packed by means of a vertical acceleration: (a) three dendritic morphologies, (b) spheres, and (c) one ellipsoidal geometry. In order to validate the numerical results under solidification conditions, the sedimentation and packing of two monodisperse collections (spherical and dendritic) are experimentally carried out in a viscous quiescent medium. Hydrodynamic similarity between the actual phenomenon and the experimental model is respected, that is, a low Stokes number of order 10^-3. In this way, the experimental average solid packing fraction is used to validate the numerical model. Eventually, the average packing fraction is found to depend strongly on the equiaxed dendritic grain sphericity, with looser packings for lower sphericity.

  3. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    NASA Astrophysics Data System (ADS)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

    Zeotropic mixtures never have the same liquid and vapor composition in liquid-vapor equilibrium. Also, the bubble and dew points are separated; this gap is called the glide temperature (Tglide). Those characteristics have made these mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles. Zeotropic mixtures as working fluids improve the performance of JT cycles by an order of magnitude. Optimization of JT cycles has earned substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in those cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate the local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and applied locally to fully developed, two-phase annular flow in a duct with constant wall temperature. Numerical results have been obtained using this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016) and show good agreement.

  4. Numerical and experimental study of the 3D effect on connecting arm of vertical axis tidal current turbine

    NASA Astrophysics Data System (ADS)

    Guo, Wei; Kang, Hai-gui; Chen, Bing; Xie, Yu; Wang, Yin

    2016-03-01

    Vertical axis tidal current turbines are promising devices for extracting energy from ocean currents. One of the important components of the turbine is the connecting arm, which can have a significant effect on the pressure distribution along the span of the turbine blade; herein we call this the 3D effect. However, this effect has rarely been reported in the literature, particularly in numerical simulations. In the present study, a 3D numerical model of the turbine with the connecting arm was developed using the FLUENT software with a compiled UDF (User Defined Function). The simulation results show that the pressure distribution along the span of the blade with the connecting arm model is significantly different from that without the connecting arm. To facilitate validation of the numerical model, laboratory experiments were carried out using three different types of NACA aerofoil connecting arm and a circular-section connecting arm. The results show that the turbine with the NACA0012 connecting arm has the best start-up performance, at 0.346 m/s, and the peak power conversion coefficient is around 0.33. A further study concludes that the aerofoil shape and thickness of the connecting arm are the most important factors affecting the power conversion coefficient of the vertical axis tidal current turbine.

  5. A New Homotopy Perturbation Scheme for Solving Singular Boundary Value Problems Arising in Various Physical Models

    NASA Astrophysics Data System (ADS)

    Roul, Pradip; Warbhe, Ujwal

    2017-08-01

    The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999) is useful for obtaining the approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome the shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen-diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing seminumerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).

  6. Coarse-graining errors and numerical optimization using a relative entropy framework

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2011-03-01

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
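
    The sketch below gives a minimal numerical illustration of the idea: the relative entropy between a "fine-grained" reference distribution and a one-parameter family of coarse-grained models is evaluated on a histogram grid and minimized by a parameter scan. The distributions, the single coarse-graining parameter, and the histogram estimator are assumptions made for the example; the paper's Srel functional is defined over full configuration space rather than a scalar observable.

```python
import numpy as np

rng = np.random.default_rng(0)
# "Fine-grained" reference data: a bimodal distribution of some scalar observable.
reference = np.concatenate([rng.normal(-1.0, 0.4, 50000),
                            rng.normal(1.0, 0.4, 50000)])

bins = np.linspace(-4, 4, 161)
p_ref, _ = np.histogram(reference, bins=bins, density=True)
p_ref += 1e-12                                   # avoid log(0)

def relative_entropy(sigma):
    """D_KL(reference || Gaussian model with width sigma), estimated on the histogram grid."""
    model = rng.normal(0.0, sigma, 100000)
    p_mod, _ = np.histogram(model, bins=bins, density=True)
    p_mod += 1e-12
    dx = bins[1] - bins[0]
    return float(np.sum(p_ref * np.log(p_ref / p_mod)) * dx)

# Parameter scan: the coarse-grained model's only parameter is its width.
sigmas = np.linspace(0.5, 2.0, 16)
scores = [relative_entropy(s) for s in sigmas]
best = sigmas[int(np.argmin(scores))]
print("width minimizing the relative entropy:", round(float(best), 2))
```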

  7. Numerical and Experimental Studies on Impact Loaded Concrete Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo

    2006-07-01

    An experimental set-up has been constructed for medium-scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario in which an aircraft impacts a nuclear power plant. One goal is to develop and put into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). Loading and structural behaviour, such as the collapse mechanism and the damage grade, will be predicted by simple analytical methods and by the non-linear FE method. In the so-called Riera method, the behavior of the missile material is assumed to be rigid-plastic or rigid visco-plastic. Using elastic-plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, typically, the impact force time history, the velocity of the missile rear end and the missile shortening during the impact were recorded for comparison. (authors)
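
    For context, the Riera approach estimates the force exerted on a rigid target by a crushing missile as the sum of the instantaneous crushing (buckling) load and the momentum flux of the still-rigid portion. The commonly quoted form below is taken from the general impact literature, not from this report, and the notation is assumed.

```latex
% Riera impact-force estimate: crushing load plus momentum-flux term.
\[
  F(t) \;=\; P_c\!\left[x(t)\right] \;+\; \mu\!\left[x(t)\right]\, v(t)^{2},
\]
% where x(t) is the crushed length, P_c the crushing (buckling) force of the
% cross-section currently being crushed, mu the mass per unit length at that
% cross-section, and v(t) the velocity of the uncrushed (rigid) part of the missile.
```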

  8. Radiation breakage of DNA: a model based on random-walk chromatin structure

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Sachs, R. K.

    2001-01-01

    Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action; an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize the experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.

  9. Outbreak and Extinction Dynamics in a Stochastic Ebola Model

    NASA Astrophysics Data System (ADS)

    Nieddu, Garrett; Bianco, Simone; Billings, Lora; Forgoston, Eric; Kaufman, James

    A zoonotic disease is a disease that can be passed between animals and humans. In many cases zoonotic diseases can persist in the animal population even if there are no infections in the human population. In this case we call the infected animal population the reservoir for the disease. Ebola virus disease (EVD) and SARS are both notable examples of such diseases. There is little work devoted to understanding stochastic disease extinction and reintroduction in the presence of a reservoir. Here we build a stochastic model for EVD and explicitly consider the presence of an animal reservoir. Using a master equation approach and a WKB ansatz, we determine the associated Hamiltonian of the system. Hamilton's equations are then used to numerically compute the 12-dimensional optimal path to extinction, which is then used to estimate mean extinction times. We also numerically investigate the behavior of the model for dynamic population size. Our results provide an improved understanding of outbreak and extinction dynamics in diseases like EVD.
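
    The master-equation/WKB machinery mentioned above has a standard generic form, reproduced below from the general large-deviation literature; the paper's specific 12-dimensional EVD model is not shown, and the notation here is assumed. For a population of characteristic size N with transition rates W_r and jump vectors nu_r, one writes

```latex
% Generic WKB (eikonal) ansatz for a master equation and the resulting Hamiltonian.
\[
  P(\mathbf{x},t) \sim e^{-N\,S(\mathbf{x},t)}, \qquad
  H(\mathbf{x},\mathbf{p}) = \sum_{r} W_r(\mathbf{x})
      \left(e^{\mathbf{p}\cdot\boldsymbol{\nu}_r} - 1\right),
\]
\[
  \dot{\mathbf{x}} = \frac{\partial H}{\partial \mathbf{p}}, \qquad
  \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{x}} .
\]
% The optimal path to extinction is the zero-energy trajectory of these Hamilton
% equations connecting the endemic state to the extinct state, and the mean
% extinction time scales as exp(N S) with S the action along that path.
```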

  10. Non-idealities in the 3ω method for thermal characterization in the low- and high-frequency regimes

    NASA Astrophysics Data System (ADS)

    Jaber, Wassim; Chapuis, Pierre-Olivier

    2018-04-01

    This work is devoted to analytical and numerical studies of diffusive heat conduction in configurations considered in 3ω experiments, which aim at measuring the thermal conductivity of materials. The widespread 2D analytical model considers infinite media and translational invariance, a situation which in numerous cases cannot be met in practice because of the constraints in low-dimensional materials and systems. We investigate how the thermal boundary resistance between the heating wire and the sample, the native oxide, and the heating wire shape affect the temperature fields. 3D finite element modelling is also performed to account for the effect of the bonding pads and the 3D heat spreading down to a typical package. Emphasis is placed on the low-frequency regime, which is less well known than the so-called slope regime. These results will serve as guides for the design of ideal experiments where the 2D model can be applied and for the analysis of non-ideal ones.

  11. Effect of strong disorder on three-dimensional chiral topological insulators: Phase diagrams, maps of the bulk invariant, and existence of topological extended bulk states

    NASA Astrophysics Data System (ADS)

    Song, Juntao; Fine, Carolyn; Prodan, Emil

    2014-11-01

    The effect of strong disorder on chiral-symmetric three-dimensional lattice models is investigated via analytical and numerical methods. The phase diagrams of the models are computed using the noncommutative winding number, as functions of disorder strength and model's parameters. The localized/delocalized characteristic of the quantum states is probed with level statistics analysis. Our study reconfirms the accurate quantization of the noncommutative winding number in the presence of strong disorder, and its effectiveness as a numerical tool. Extended bulk states are detected above and below the Fermi level, which are observed to undergo the so-called "levitation and pair annihilation" process when the system is driven through a topological transition. This suggests that the bulk invariant is carried by these extended states, in stark contrast with the one-dimensional case where the extended states are completely absent and the bulk invariant is carried by the localized states.

  12. Observation of the pressure effect in simulations of droplets splashing on a dry surface

    NASA Astrophysics Data System (ADS)

    Boelens, A. M. P.; Latka, A.; de Pablo, J. J.

    2018-06-01

    At atmospheric pressure, a drop of ethanol impacting on a solid surface produces a splash. Reducing the ambient pressure below its atmospheric value suppresses this splash. The origin of this so-called pressure effect is not well understood, and this study presents an in-depth comparison between various theoretical models that aim to predict splashing and simulations. In this paper, the pressure effect is explored numerically by resolving the Navier-Stokes equations at a 3-nm resolution. In addition to reproducing numerous experimental observations, it is found that different models all provide elements of what is observed in the simulations. The skating droplet model correctly predicts the existence and scaling of a gas film under the droplet, the lamella formation theory is able to correctly predict the scaling of the lamella ejection velocity as a function of the impact velocity for liquids with different viscosity, and lastly, the dewetting theory's hypothesis of a lift force acting on the liquid sheet after ejection is consistent with our results.

  13. A splitting algorithm for a novel regularization of Perona-Malik and application to image restoration

    NASA Astrophysics Data System (ADS)

    Karami, Fahd; Ziad, Lamia; Sadik, Khadija

    2017-12-01

    In this paper, we focus on a numerical method for a problem called the Perona-Malik inequality, which we use for image denoising. This model is obtained as the limit of the Perona-Malik model and the p-Laplacian operator as p → ∞. In Atlas et al. (Nonlinear Anal. Real World Appl. 18:57-68, 2014), the authors proved the existence and uniqueness of the solution of the proposed model. However, they used an explicit numerical scheme for the approximated problem which depends strongly on the parameter p. To overcome this, we use an efficient algorithm which combines the classical additive operator splitting with a nonlinear relaxation algorithm. Finally, we present experimental results on image filtering which demonstrate the efficiency and effectiveness of our algorithm, and we compare it with the scheme previously presented in Atlas et al. (Nonlinear Anal. Real World Appl. 18:57-68, 2014).
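
    For orientation, the sketch below implements the classical explicit Perona-Malik diffusion on which this line of work builds; it is not the paper's AOS-plus-relaxation algorithm, and the conductance function, contrast parameter, and step sizes are conventional choices assumed for the example.

```python
import numpy as np

def perona_malik(image, n_iter=50, kappa=0.1, dt=0.2):
    """Classical explicit Perona-Malik anisotropic diffusion (4-neighbour scheme,
    periodic boundaries via np.roll; adequate for this toy example)."""
    def g(d):
        # Edge-stopping conductance g(s) = 1 / (1 + (s/kappa)^2).
        return 1.0 / (1.0 + (d / kappa) ** 2)

    u = image.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, 0) - u          # differences toward the four neighbours
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, 1, 1) - u
        dw = np.roll(u, -1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Toy example: a noisy step edge is smoothed while the edge itself is preserved.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = perona_malik(noisy)
print("RMS error before/after:",
      round(float(np.sqrt(((noisy - clean) ** 2).mean())), 3),
      round(float(np.sqrt(((denoised - clean) ** 2).mean())), 3))
```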

  14. Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

    USGS Publications Warehouse

    Safak, Erdal

    1989-01-01

    This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to above problems, and can be a useful tool in earthquake engineering.
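
    For reference, an ARMAX model relates an output y_t (for example, a recorded ground acceleration) to an exogenous input u_t and a white-noise sequence e_t through polynomials in the backshift operator q^{-1}. The form below is one common convention from the system-identification literature (a pure input delay can also be included); it is quoted for orientation rather than taken from this paper.

```latex
% Standard ARMAX(n_a, n_b, n_c) model in the backshift operator q^{-1}.
\[
  A(q^{-1})\,y_t = B(q^{-1})\,u_t + C(q^{-1})\,e_t,
\]
\[
  A(q^{-1}) = 1 + a_1 q^{-1} + \dots + a_{n_a} q^{-n_a}, \quad
  B(q^{-1}) = b_1 q^{-1} + \dots + b_{n_b} q^{-n_b}, \quad
  C(q^{-1}) = 1 + c_1 q^{-1} + \dots + c_{n_c} q^{-n_c}.
\]
```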

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patnaik, P. C.

    The SIGMET mesoscale meteorology simulation code represents an extension, in terms of physical modelling detail and numerical approach, of the work of Anthes (1972) and Anthes and Warner (1974). The code utilizes a finite difference technique to solve the so-called primitive equations which describe transient flow in the atmosphere. The SIGMET model contains all of the physics required to simulate the time dependent meteorology of a region with description of both the planetary boundary layer and upper level flow as they are affected by synoptic forcing and complex terrain. The mathematical formulation of the SIGMET model and the various physical effects incorporated into it are summarized.

  16. A Conserving Discretization for the Free Boundary in a Two-Dimensional Stefan Problem

    NASA Astrophysics Data System (ADS)

    Segal, Guus; Vuik, Kees; Vermolen, Fred

    1998-03-01

    The dissolution of a disk-like Al2Cu particle is considered. A characteristic property is that initially the particle has a nonsmooth boundary. The mathematical model of this dissolution process contains a description of the particle interface, of which the position varies in time. Such a model is called a Stefan problem. It is impossible to obtain an analytical solution for a general two-dimensional Stefan problem, so we use the finite element method to solve this problem numerically. First, we apply a classical moving mesh method. Computations show that after some time steps the predicted particle interface becomes very unrealistic. Therefore, we derive a new method for the displacement of the free boundary based on the balance of atoms. This method also leads to good results for nonsmooth boundaries. Some numerical experiments are given for the dissolution of an Al2Cu particle in an Al-Cu alloy.

  17. Geometric stability of topological lattice phases

    PubMed Central

    Jackson, T. S.; Möller, Gunnar; Roy, Rahul

    2015-01-01

    The fractional quantum Hall (FQH) effect illustrates the range of novel phenomena which can arise in a topologically ordered state in the presence of strong interactions. The possibility of realizing FQH-like phases in models with strong lattice effects has attracted intense interest as a more experimentally accessible venue for FQH phenomena which calls for more theoretical attention. Here we investigate the physical relevance of previously derived geometric conditions which quantify deviations from the Landau level physics of the FQHE. We conduct extensive numerical many-body simulations on several lattice models, obtaining new theoretical results in the process, and find remarkable correlation between these conditions and the many-body gap. These results indicate which physical factors are most relevant for the stability of FQH-like phases, a paradigm we refer to as the geometric stability hypothesis, and provide easily implementable guidelines for obtaining robust FQH-like phases in numerical or real-world experiments. PMID:26530311

  18. The puzzling Venusian polar atmospheric structure reproduced by a general circulation model

    PubMed Central

    Ando, Hiroki; Sugimoto, Norihiko; Takagi, Masahiro; Kashimura, Hiroki; Imamura, Takeshi; Matsuda, Yoshihisa

    2016-01-01

    Unlike the polar vortices observed in the Earth, Mars and Titan atmospheres, the observed Venus polar vortex is warmer than the midlatitudes at cloud-top levels (∼65 km). This warm polar vortex is zonally surrounded by a cold latitude band located at ∼60° latitude, which is a unique feature called ‘cold collar' in the Venus atmosphere. Although these structures have been observed in numerous previous observations, the formation mechanism is still unknown. Here we perform numerical simulations of the Venus atmospheric circulation using a general circulation model, and succeed in reproducing these puzzling features in close agreement with the observations. The cold collar and warm polar region are attributed to the residual mean meridional circulation enhanced by the thermal tide. The present results strongly suggest that the thermal tide is crucial for the structure of the Venus upper polar atmosphere at and above cloud levels. PMID:26832195

  19. Design, optimization and numerical modelling of a novel floating pendulum wave energy converter with tide adaptation

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Zhang, Da-hai; Chen, Ying; Liang, Hui; Tan, Ming; Li, Wei; Ma, Xian-dong

    2017-10-01

    A novel floating pendulum wave energy converter (WEC) with the ability to adapt to the tide is designed and presented in this paper. Aiming at high efficiency, the buoy's hydrodynamic shape is optimized by enumeration and comparison. Furthermore, in order to keep the buoy's well-designed leading edge always facing the incoming wave directly, a novel transmission mechanism is adopted, called the tidal adaptation mechanism in this paper. Time-domain numerical models of a floating pendulum WEC with and without the tide adaptation mechanism are built to compare their performance at various water levels. When the two WECs are compared in terms of their average output under a linear passive control strategy, the output power of the WEC with the tide adaptation mechanism is much steadier as the water level changes and is always larger than that of the WEC without it.

  20. Survivability of Deterministic Dynamical Systems

    PubMed Central

    Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen

    2016-01-01

    The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
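
    The paper's semi-analytic lower bound is not reproduced here; as a minimal illustration of the basic Monte-Carlo estimate of survivability (the fraction of random initial conditions whose transient stays inside a region of desirable states over a finite horizon), the sketch below uses a single-node swing equation as a power-grid-like toy. The region |omega| <= omega_max, the parameters and the horizon are arbitrary choices, not values from the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Swing equation for one machine: phi' = omega, omega' = -alpha*omega + P - K*sin(phi)
        alpha, P, K = 0.1, 0.5, 1.0
        omega_max = 10.0                      # desirable region: |omega| <= omega_max

        def rhs(t, y):
            phi, omega = y
            return [omega, -alpha * omega + P - K * np.sin(phi)]

        def survives(y0, t_max=100.0):
            def leave(t, y):                  # event fires at the boundary of the region
                return omega_max - abs(y[1])
            leave.terminal = True
            sol = solve_ivp(rhs, (0.0, t_max), y0, events=leave, max_step=0.1)
            return sol.t_events[0].size == 0  # no boundary crossing -> transient survived

        rng = np.random.default_rng(42)
        n_samples = 500
        y0s = np.column_stack([rng.uniform(-np.pi, np.pi, n_samples),
                               rng.uniform(-omega_max, omega_max, n_samples)])
        survivability = np.mean([survives(y0) for y0 in y0s])
        print(f"estimated survivability: {survivability:.2f}")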

  1. The HIRLAM fast radiation scheme for mesoscale numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Rontu, Laura; Gleeson, Emily; Räisänen, Petri; Pagh Nielsen, Kristian; Savijärvi, Hannu; Hansen Sass, Bent

    2017-07-01

    This paper provides an overview of the HLRADIA shortwave (SW) and longwave (LW) broadband radiation schemes used in the HIRLAM numerical weather prediction (NWP) model and available in the HARMONIE-AROME mesoscale NWP model. The advantage of broadband, over spectral, schemes is that they can be called more frequently within the model, without compromising on computational efficiency. In mesoscale models fast interactions between clouds and radiation and the surface and radiation can be of greater importance than accounting for the spectral details of clear-sky radiation; thus calling the routines more frequently can be of greater benefit than the deterioration due to loss of spectral details. Fast but physically based radiation parametrizations are expected to be valuable for high-resolution ensemble forecasting, because, in addition to the speed of their execution, they may provide realistic physical perturbations. Results from single-column diagnostic experiments based on CIRC benchmark cases and an evaluation of 10 years of radiation output from the FMI operational archive of HIRLAM forecasts indicate that HLRADIA performs sufficiently well with respect to the clear-sky downwelling SW and LW fluxes at the surface. In general, HLRADIA tends to overestimate surface fluxes, with the exception of LW fluxes under cold and dry conditions. The most obvious overestimation of the surface SW flux was seen in the cloudy cases in the 10-year comparison; this bias may be related to the use of a cloud inhomogeneity correction that was too large. According to the CIRC comparisons, the outgoing LW and SW fluxes at the top of the atmosphere are mostly overestimated by HLRADIA and the net LW flux is underestimated above clouds. The absorption of SW radiation by the atmosphere seems to be underestimated and LW absorption seems to be overestimated. Despite these issues, the overall results are satisfying, and work on improving HLRADIA for use in the HARMONIE-AROME NWP system is ongoing. In a HARMONIE-AROME 3-D forecast experiment we have shown that the frequency of the calls to the radiation parametrization and the choice of the parametrization scheme make a difference to the surface radiation fluxes and change the spatial distribution of the vertically integrated cloud cover and precipitation.

  2. Investigation of heat transfer and material flow of P-FSSW: Experimental and numerical study

    NASA Astrophysics Data System (ADS)

    Rezazadeh, Niki; Mosavizadeh, Seyed Mostafa; Azizi, Hamed

    2018-02-01

    Friction stir spot welding (FSSW) is a joining process that utilizes a rotating tool consisting of a shoulder and/or a probe. In this study, a novel FSSW method called protrusion friction stir spot welding (P-FSSW) is presented, and the effect of the shoulder diameter is studied numerically and experimentally with respect to weld quality, including the temperature field, velocity contour, material flow, bonding length, and the depth of the stirred area. The results show that the numerical findings are in good agreement with the experimental measurements. The present model could well predict the temperature distribution, velocity contour, depth of the stirred area, and bonding length. As the shoulder diameter increases, the temperature rises, which leads to an increase in stirred-area depth, bonding length, and material velocities, and therefore to a weld of higher quality.

  3. Pseudochaotic dynamics near global periodicity

    NASA Astrophysics Data System (ADS)

    Fan, Rong; Zaslavsky, George M.

    2007-09-01

    In this paper, we study a piecewise linear version of the kicked oscillator model: the saw-tooth map. A special case of global periodicity, in which every phase point belongs to a periodic orbit, is presented. With few analytic results known for the corresponding map on the torus, we numerically investigate transport properties and the statistical behavior of the Poincaré recurrence time in two cases of deviation from global periodicity. Non-KAM behavior of the system, as well as subdiffusion and superdiffusion, are observed through numerical simulations. The statistics of Poincaré recurrences show that the Kac lemma holds in the system and that there is a relation between the transport exponent and the Poincaré recurrence exponent. We also perform careful numerical computation of the capacity, information and correlation dimensions of the so-called exceptional set in both cases. Our results show that the fractal dimension of the exceptional set is strictly less than 2 and that the fractal structures are unifractal rather than multifractal.
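
    The exact saw-tooth map parameters are not given in the abstract; purely as a generic illustration of how Poincaré recurrence statistics are collected numerically for an area-preserving map on the torus, the sketch below uses the Chirikov standard map as a stand-in. The kick strength, box size and iteration count are arbitrary.

        import numpy as np

        def standard_map(theta, p, K):
            # Chirikov standard map on the 2-torus (area preserving); used here only
            # as a stand-in for the saw-tooth (kicked oscillator) map of the paper.
            p_new = (p + K * np.sin(theta)) % (2 * np.pi)
            theta_new = (theta + p_new) % (2 * np.pi)
            return theta_new, p_new

        def recurrence_times(theta0, p0, K, eps=0.3, n_iter=500_000):
            """Times between successive returns to the eps-box around (theta0, p0).
            (The box is assumed to sit away from the 0/2*pi seam, so no wrap handling.)"""
            times, last_visit = [], 0
            theta, p = theta0, p0
            for n in range(1, n_iter + 1):
                theta, p = standard_map(theta, p, K)
                if abs(theta - theta0) < eps and abs(p - p0) < eps:
                    times.append(n - last_visit)
                    last_visit = n
            return np.array(times)

        taus = recurrence_times(1.0, 1.0, K=1.5)
        if taus.size:
            # Kac's lemma: for an ergodic map, the mean recurrence time is the
            # inverse of the (normalized) measure of the return set.
            print("returns:", taus.size, " mean recurrence time:", taus.mean())
            # the recurrence exponent is usually read off the tail of P(tau > t)
            t_sorted = np.sort(taus)
            P_tail = 1.0 - np.arange(1, taus.size + 1) / taus.size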

  4. The contributions of Lewis Fry Richardson to drainage theory, soil physics, and the soil-plant-atmosphere continuum

    NASA Astrophysics Data System (ADS)

    Knight, John; Raats, Peter

    2016-04-01

    The EGU Division on Nonlinear Processes in Geophysics awards the Lewis Fry Richardson Medal. Richardson's significance is highlighted in http://www.egu.eu/awards-medals/portrait-lewis-fry-richardson/, but his contributions to soil physics and to numerical solutions of heat and diffusion equations are not mentioned. We would like to draw attention to those little known contributions. Lewis Fry Richardson (1881-1953) made important contributions to many fields including numerical weather prediction, finite difference solutions of partial differential equations, turbulent flow and diffusion, fractals, quantitative psychology and studies of conflict. He invented numerical weather prediction during World War I, although his methods were not successfully applied until 1950, after the invention of fast digital computers. In 1922 he published the book `Numerical weather prediction', of which few copies were sold and even fewer were read until the 1950s. To model heat and mass transfer in the atmosphere, he did much original work on turbulent flow and defined what is now known as the Richardson number. His technique for improving the convergence of a finite difference calculation is known as Richardson extrapolation, and was used by John Philip in his 1957 semi-analytical solution of the Richards equation for water movement in unsaturated soil. Richardson's first papers in 1908 concerned the numerical solution of the free surface problem of unconfined flow of water in saturated soil, arising in the design of drain spacing in peat. Later, for the lower boundary of his atmospheric model he needed to understand the movement of heat, liquid water and water vapor in what is now called the vadose zone and the soil plant atmosphere system, and to model coupled transfer of heat and flow of water in unsaturated soil. Finding little previous work, he formulated partial differential equations for transient, vertical flow of liquid water and for transfer of heat and water vapor. He paid considerable attention to the balances of water and energy at the soil-atmosphere and plant-atmosphere interfaces, making use of the concept of transfer resistance introduced by Brown and Escombe (1900) for leaf-atmosphere interfaces. He incorporated finite difference versions of all equations into his numerical weather forecasting model. From 1916, Richardson drove an ambulance in France in World War I, did weather computations in his spare time, and wrote a draft of his book. Later researchers such as L.A. Richards, D.A. de Vries and J.R. Philip from the 1930s to the 1950s were unaware that Richardson had anticipated many of their ideas on soil liquid water, heat, water vapor, and the soil-plant-atmosphere system. The Richards (1931) equation could rightly be called the Richardson (1922) equation! Richardson (1910) developed what we now call the Crank Nicolson implicit method for the heat or diffusion equation. To save effort, he used an explicit three level method after the first time step. Crank and Nicolson (1947) pointed out the instability in the explicit method, and used his implicit method for all time steps. Hanks and Bowers (1962) adapted the Crank Nicolson method to solve the Richards equation. So we could say that Hanks and Bowers used the Richardson finite difference method to solve the Richardson equation for soil water flow!

  5. Emergence of bursts and communities in evolving weighted networks.

    PubMed

    Jo, Hang-Hyun; Pan, Raj Kumar; Kaski, Kimmo

    2011-01-01

    Understanding the patterns of human dynamics and social interaction and the way they lead to the formation of an organized and functional society are important issues especially for techno-social development. Addressing these issues of social networks has recently become possible through large scale data analysis of mobile phone call records, which has revealed the existence of modular or community structure with many links between nodes of the same community and relatively few links between nodes of different communities. The weights of links, e.g., the number of calls between two users, and the network topology are found correlated such that intra-community links are stronger compared to the weak inter-community links. This feature is known as Granovetter's "The strength of weak ties" hypothesis. In addition to this inhomogeneous community structure, the temporal patterns of human dynamics turn out to be inhomogeneous or bursty, characterized by the heavy tailed distribution of time interval between two consecutive events, i.e., inter-event time. In this paper, we study how the community structure and the bursty dynamics emerge together in a simple evolving weighted network model. The principal mechanisms behind these patterns are social interaction by cyclic closure, i.e., links to friends of friends and the focal closure, links to individuals sharing similar attributes or interests, and human dynamics by task handling process. These three mechanisms have been implemented as a network model with local attachment, global attachment, and priority-based queuing processes. By comprehensive numerical simulations we show that the interplay of these mechanisms leads to the emergence of heavy tailed inter-event time distribution and the evolution of Granovetter-type community structure. Moreover, the numerical results are found to be in qualitative agreement with empirical analysis results from mobile phone call dataset.

  6. Numerical Polynomial Homotopy Continuation Method and String Vacua

    DOE PAGES

    Mehta, Dhagash

    2011-01-01

    Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications, and analyzing them according to their stability, is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the areas of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method is known to suffer from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes way beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated compactified M-theory model, which has an SU(3) structure, can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities between the two methods.
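
    The NPHC method as used in the paper tracks paths for multivariate polynomial systems with sophisticated step control; the sketch below only illustrates the core predictor-corrector idea for a single univariate polynomial, using a total-degree start system and the usual random "gamma trick". It is an illustration under those simplifying assumptions, not the authors' implementation.

        import numpy as np

        def track_roots(target_coeffs, n_steps=200, newton_iters=5):
            """Total-degree homotopy H(x,t) = (1-t)*gamma*(x^d - 1) + t*p(x), tracked t: 0 -> 1."""
            p = np.array(target_coeffs, dtype=complex)   # highest degree first (numpy convention)
            d = len(p) - 1
            start = np.zeros(d + 1, dtype=complex); start[0] = 1.0; start[-1] = -1.0   # x^d - 1
            gamma = np.exp(2j * np.pi * np.random.rand())  # random phase avoids singular paths
            dp, dstart = np.polyder(p), np.polyder(start)

            roots = np.exp(2j * np.pi * np.arange(d) / d)  # roots of the start system
            for k in range(n_steps):
                t0, t1 = k / n_steps, (k + 1) / n_steps
                dt = t1 - t0
                for i, x in enumerate(roots):
                    # Euler predictor: dx/dt = -(dH/dt) / (dH/dx) evaluated at (x, t0)
                    dH_dt = np.polyval(p, x) - gamma * np.polyval(start, x)
                    dH_dx = (1 - t0) * gamma * np.polyval(dstart, x) + t0 * np.polyval(dp, x)
                    x = x - dt * dH_dt / dH_dx
                    # Newton corrector on H(x, t1) = 0
                    for _ in range(newton_iters):
                        H  = (1 - t1) * gamma * np.polyval(start, x) + t1 * np.polyval(p, x)
                        Hx = (1 - t1) * gamma * np.polyval(dstart, x) + t1 * np.polyval(dp, x)
                        x -= H / Hx
                    roots[i] = x
            return roots

        # Example: all roots of p(x) = x^3 - 2x + 1 (compare with np.roots([1, 0, -2, 1]))
        print(track_roots([1, 0, -2, 1]))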

  7. Hybrid modeling of spatial continuity for application to numerical inverse problems

    USGS Publications Warehouse

    Friedel, Michael J.; Iwashita, Fabio

    2013-01-01

    A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
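
    The SOM training itself is not sketched here; as a small illustration of the second step only (fitting an analytical function to an experimental variogram before sequential Gaussian simulation), the code below computes a classical semivariogram from scattered data and fits an exponential model. The synthetic data, the exponential form and the parameter names are placeholders, not the authors' SOM estimates.

        import numpy as np
        from scipy.optimize import curve_fit

        def empirical_variogram(coords, values, n_bins=15):
            """Classical (Matheron) semivariogram estimate from scattered point data."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            sq = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices(len(values), k=1)
            dist, gamma = d[iu], sq[iu]
            bins = np.linspace(0, dist.max(), n_bins + 1)
            idx = np.digitize(dist, bins) - 1
            h = np.array([dist[idx == b].mean() for b in range(n_bins) if np.any(idx == b)])
            g = np.array([gamma[idx == b].mean() for b in range(n_bins) if np.any(idx == b)])
            return h, g

        def exp_model(h, nugget, sill, corr_range):
            # Exponential variogram model: nugget + sill * (1 - exp(-h / range))
            return nugget + sill * (1.0 - np.exp(-h / corr_range))

        # Synthetic stand-in for resampled point estimates of a hydrologic variable
        rng = np.random.default_rng(3)
        coords = rng.uniform(0, 1000, size=(300, 2))          # metres
        values = np.sin(coords[:, 0] / 200.0) + 0.3 * rng.standard_normal(300)

        h, g = empirical_variogram(coords, values)
        (nugget, sill, corr_range), _ = curve_fit(exp_model, h, g, p0=[0.1, 1.0, 200.0])
        print(f"nugget={nugget:.3f}  sill={sill:.3f}  range={corr_range:.1f} m")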

  8. Vocal behavior and risk assessment in wild chimpanzees

    NASA Astrophysics Data System (ADS)

    Wilson, Michael L.; Hauser, Marc D.; Wrangham, Richard W.

    2005-09-01

    If, as theory predicts, animal communication is designed to manipulate the behavior of others to personal advantage, then there will be certain contexts in which vocal behavior is profitable and other cases where silence is favored. Studies conducted in Kibale National Park, Uganda investigated whether chimpanzees modified their vocal behavior according to different levels of risk from intergroup aggression, including relative numerical strength and location in range. Playback experiments tested numerical assessment, and observations of chimpanzees throughout their range tested whether they called less frequently to avoid detection in border areas. Chimpanzees were more likely to call to playback of a stranger's call if they greatly outnumbered the stranger. Chimpanzees tended to reduce calling in border areas, but not in all locations. Chimpanzees most consistently remained silent when raiding crops: they almost never gave loud pant-hoot calls when raiding banana plantations outside the park, even though they normally give many pant-hoots on arrival at high-quality food resources. These findings indicate that chimpanzees have the capacity to reduce loud call production when appropriate, but that additional factors, such as advertising territory ownership, contribute to the costs and benefits of calling in border zones.

  9. A dynamic spar numerical model for passive shape change

    NASA Astrophysics Data System (ADS)

    Calogero, J. P.; Frecker, M. I.; Hasnain, Z.; Hubbard, J. E., Jr.

    2016-10-01

    A three-dimensional constraint-driven dynamic rigid-link numerical model of a flapping wing structure with compliant joints (CJs) called the dynamic spar numerical model is introduced and implemented. CJs are modeled as spherical joints with distributed mass and spring-dampers with coupled nonlinear spring and damping coefficients, which models compliant mechanisms spatially distributed in the structure while greatly reducing computation time compared to a finite element model. The constraints are established, followed by the formulation of a state model used in conjunction with a forward time integrator, an experiment to verify a rigid-link assumption and determine a flapping angle function, and finally several example runs. Modeling the CJs as coupled bi-linear springs shows that the wing is able to flex more during upstroke than downstroke. Coupling the spring stiffnesses allows an angular deformation about one axis to induce an angular deformation about another axis, where the magnitude is proportional to the coupling term. Modeling both the leading edge and diagonal spars shows that the diagonal spar changes the kinematics of the leading edge spar compared with considering the leading edge spar alone, causing much larger axial rotations in the leading edge spar. The kinematics are very sensitive to CJ location, where moving the CJ toward the wing root causes a stronger response, and adding multiple CJs on the leading edge spar with a CJ on the diagonal spar allows the wing to deform with larger magnitude in all directions. This model lays a framework for a tool which can be used to understand flapping wing flight.

  10. Tidal Debris from High-Velocity Collisions as Fake Dark Galaxies: A Numerical Model of VIRGOHI 21

    NASA Astrophysics Data System (ADS)

    Duc, Pierre-Alain; Bournaud, Frederic

    2008-02-01

    High-speed collisions, although common in clusters of galaxies, have long been neglected, as they are believed to cause little damage to galaxies except when they are repeated, a process called "harassment." In fact, they are able to produce faint but extended gaseous tails. Such low-mass, starless, tidal debris may become detached and appear as free-floating clouds in the very deep H I surveys that are currently being carried out. We show in this paper that these debris possess the same apparent properties as the so-called dark galaxies, objects originally detected in H I, with no optical counterpart, and presumably dark matter-dominated. We present a numerical model of the prototype of such dark galaxies—VIRGOHI 21—that is able to reproduce its main characteristics: the one-sided tail linking it to the spiral galaxy NGC 4254, the absence of stars, and above all the reversal of the velocity gradient along the tail originally attributed to rotation motions caused by a massive dark matter halo, which we find to be consistent with simple streaming motions plus projection effects. According to our numerical simulations, this tidal debris was expelled 750 Myr ago during a flyby at 1100 km s-1 of NGC 4254 by a massive companion that should now lie at a projected distance of about 400 kpc. A candidate for the intruder is discussed. The existence of galaxies that have never been able to form stars had already been challenged on the basis of theoretical and observational grounds. Tidal collisions, in particular those occurring at high speed, provide a much more simple explanation for the origin of such putative dark galaxies.

  11. A benchmark study of the sea-level equation in GIA modelling

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah

    2017-04-01

    The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of the SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to the load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solve the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison amongst the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al. (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent to the various algorithms. Literature: G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132 doi:10.1111/j.1365-

  12. Automated procedures for sizing aerospace vehicle structures /SAVES/

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Blackburn, C. L.; Dixon, S. C.

    1972-01-01

    Results from a continuing effort to develop automated methods for structural design are described. A system of computer programs presently under development called SAVES is intended to automate the preliminary structural design of a complete aerospace vehicle. Each step in the automated design process of the SAVES system of programs is discussed, with emphasis placed on use of automated routines for generation of finite-element models. The versatility of these routines is demonstrated by structural models generated for a space shuttle orbiter, an advanced technology transport, and a hydrogen-fueled Mach 3 transport. Illustrative numerical results are presented for the Mach 3 transport wing.

  13. Global linear gyrokinetic particle-in-cell simulations including electromagnetic effects in shaped plasmas

    NASA Astrophysics Data System (ADS)

    Mishchenko, A.; Borchardt, M.; Cole, M.; Hatzky, R.; Fehér, T.; Kleiber, R.; Könies, A.; Zocco, A.

    2015-05-01

    We give an overview of recent developments in electromagnetic simulations based on the gyrokinetic particle-in-cell codes GYGLES and EUTERPE. We present the gyrokinetic electromagnetic models implemented in the codes and discuss further improvements of the numerical algorithm, in particular the so-called pullback mitigation of the cancellation problem. The improved algorithm is employed to simulate linear electromagnetic instabilities in shaped tokamak and stellarator plasmas, which was previously impossible for the parameters considered.

  14. On the limits of probabilistic forecasting in nonlinear time series analysis II: Differential entropy.

    PubMed

    Amigó, José M; Hirata, Yoshito; Aihara, Kazuyuki

    2017-08-01

    In a previous paper, the authors studied the limits of probabilistic prediction in nonlinear time series analysis in a perfect model scenario, i.e., in the ideal case that the uncertainty of an otherwise deterministic model is due to only the finite precision of the observations. The model consisted of the symbolic dynamics of a measure-preserving transformation with respect to a finite partition of the state space, and the quality of the predictions was measured by the so-called ignorance score, which is a conditional entropy. In practice, though, partitions are dispensed with by considering numerical and experimental data to be continuous, which prompts us to trade off in this paper the Shannon entropy for the differential entropy. Despite technical differences, we show that the core of the previous results also hold in this extended scenario for sufficiently high precision. The corresponding imperfect model scenario will be revisited too because it is relevant for the applications. The theoretical part and its application to probabilistic forecasting are illustrated with numerical simulations and a new prediction algorithm.

  15. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    PubMed

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.

  16. The Neutral Islands during the Late Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Xu, Yidong; Yue, Bin; Chen, Xuelei

    2018-05-01

    The large-scale structure of the ionization field during the epoch of reionization (EoR) can be modeled by the excursion set theory. While the growth of ionized regions during the early stage is described by the "bubble model", the shrinking of neutral regions after the percolation of the ionized regions calls for an "island model". An excursion-set-based analytical model and a semi-numerical code (islandFAST) have been developed. The ionizing background and the bubbles inside the islands are also included in the treatment. With two kinds of absorbers of ionizing photons, i.e. the large-scale under-dense neutral islands and the small-scale over-dense clumps, the ionizing background is self-consistently evolved in the model.

  17. The Simple Metals and New Models of the Interacting-Electron-Gas Type: I. Anomalous Plasmon Dispersion Relations in Heavy Alkali Metals

    NASA Astrophysics Data System (ADS)

    Okuda, Takashi; Horio, Kohji; Ohmura, Yoshihiro; Mizuno, Yukio

    2018-06-01

    The well-known interacting-electron-gas model of metallic states is modified by replacing the Coulomb interaction with a truncated one to weaken the repulsive force between electrons at short distances. The new model is applied to the so-called simple metals and is found to be far superior to the old one. Most of the calculations are carried out successfully on the basis of the random-phase approximation (RPA), which is known to be much too poor for the old familiar model. In the present paper, the numerical value of the new parameter peculiar to the new model is determined systematically with the help of the observed plasmon spectrum for each metal.

  18. Minimizing Higgs potentials via numerical polynomial homotopy continuation

    NASA Astrophysics Data System (ADS)

    Maniatis, M.; Mehta, D.

    2012-08-01

    The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like nonlinearity. The detection of all stationary points reveals the structure of the potential, with maxima, metastable minima and saddle points besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets. Moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner basis approach.

  19. A numerical tool for reproducing driver behaviour: experiments and predictive simulations.

    PubMed

    Casucci, M; Marchitto, M; Cacciabue, P C

    2010-03-01

    This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions, contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a Virtual Reality full scale simulator for validating the simulation. Then the predictive potentiality of the tool is shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions.

  20. The iFlow modelling framework v2.4: a modular idealized process-based model for flow and transport in estuaries

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Brouwer, Ronald L.; Schuttelaars, Henk M.; Schramkowski, George P.

    2017-07-01

    The iFlow modelling framework is a width-averaged model for the systematic analysis of the water motion and sediment transport processes in estuaries and tidal rivers. The distinctive solution method, a mathematical perturbation method, used in the model allows for identification of the effect of individual physical processes on the water motion and sediment transport and study of the sensitivity of these processes to model parameters. This distinction between processes provides a unique tool for interpreting and explaining hydrodynamic interactions and sediment trapping. iFlow also includes a large number of options to configure the model geometry and multiple choices of turbulence and salinity models. Additionally, the model contains auxiliary components, including one that facilitates easy and fast sensitivity studies. iFlow has a modular structure, which makes it easy to include, exclude or change individual model components, called modules. Depending on the required functionality for the application at hand, modules can be selected to construct anything from very simple quasi-linear models to rather complex models involving multiple non-linear interactions. This way, the model complexity can be adjusted to the application. Once the modules containing the required functionality are selected, the underlying model structure automatically ensures modules are called in the correct order. The model inserts iteration loops over groups of modules that are mutually dependent. iFlow also ensures a smooth coupling of modules using analytical and numerical solution methods. This way the model combines the speed and accuracy of analytical solutions with the versatility of numerical solution methods. In this paper we present the modular structure, solution method and two examples of the use of iFlow. In the examples we present two case studies, of the Yangtze and Scheldt rivers, demonstrating how iFlow facilitates the analysis of model results, the understanding of the underlying physics and the testing of parameter sensitivity. A comparison of the model results to measurements shows a good qualitative agreement. iFlow is written in Python and is available as open source code under the LGPL license.
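
    iFlow's actual registry and module interface are documented with its source code and are not reproduced here; purely as an illustration of the general idea described above, namely selecting modules and letting the framework derive a valid call order from declared inputs and outputs, the toy sketch below uses Python's standard graphlib. The module names and variables are invented, and the iteration loops over mutually dependent modules are not shown.

        from graphlib import TopologicalSorter

        # Each toy "module" declares the variables it needs and the variables it produces.
        MODULES = {
            "geometry":   {"needs": [],                                 "makes": ["width", "depth"]},
            "turbulence": {"needs": ["depth"],                          "makes": ["eddy_viscosity"]},
            "hydro":      {"needs": ["width", "depth", "eddy_viscosity"], "makes": ["velocity", "zeta"]},
            "sediment":   {"needs": ["velocity", "zeta"],               "makes": ["concentration"]},
        }

        def call_order(selected):
            """Order the selected modules so each runs after the modules producing its inputs."""
            producers = {var: name for name, m in MODULES.items()
                         for var in m["makes"] if name in selected}
            graph = {name: {producers[v] for v in MODULES[name]["needs"] if v in producers}
                     for name in selected}
            return list(TopologicalSorter(graph).static_order())

        print(call_order(["sediment", "hydro", "geometry", "turbulence"]))
        # -> ['geometry', 'turbulence', 'hydro', 'sediment']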

  1. Communicating and Interacting: An Exploration of the Changing Roles of Media in CALL/CMC

    ERIC Educational Resources Information Center

    Hoven, Debra

    2006-01-01

    The sites of learning and teaching using CALL are shifting from CD-based, LAN-based, or stand-alone programs to the Internet. As this change occurs, pedagogical approaches to using CALL are also shifting to forms which better exploit the communication, collaboration, and negotiation aspects of the Internet. Numerous teachers and designers have…

  2. Interactive Modelling of Salinity Intrusion in the Rhine-Meuse Delta

    NASA Astrophysics Data System (ADS)

    Baart, F.; Kranenburg, W.; Luijendijk, A.

    2015-12-01

    In many deltas of the world, salinity intrusion imposes limits on fresh water availability. With increasing population and industry, the need for fresh water increases, while salinity intrusion is expected to increase due to changes in river discharge, sea level and storm characteristics. In the Rhine-Meuse delta, salt intrusion is also affected by human activities, such as the deepening of waterways and the reopening of delta branches closed earlier. All these developments call for an increased understanding of the system, but also for means for policy makers, coastal planners and engineers to assess the effects of changes and to explore and design measures. In our presentation we present the developments in interactive modelling of salinity intrusion in the Rhine-Meuse delta. In traditional process-based numerical modelling, impacts are investigated by researchers and engineers by following the steps of pre-defining scenarios, running the model and post-processing the results. Interactive modelling lets users adjust simulations while they are running. Users can, for instance, change river discharges or bed levels, and can add measures such as changes to the geometry. The model takes the adjustments into account immediately and directly computes the effect. In this way, a tool becomes available with which coastal planners, policy makers and engineers together can develop and evaluate ideas and designs by interacting with the numerical model. When developing interactive numerical engines, one of the challenges is to optimize the exchange of variables such as salt concentration. In our case we exchange variables on a 3D grid every time step. For this, the numerical model adheres to the Basic Model Interface (http://csdms.colorado.edu/wiki), which allows external control and the exchange of variables through pointers while the model is running. In our presentation we further explain our method and show examples of interactive design of salinity intrusion measures in the Rhine-Meuse delta.
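
    The actual model and its BMI binding are not reproduced here; as a minimal sketch of the interaction pattern described above, the class below follows the published Basic Model Interface naming conventions (initialize/update/get_value/set_value/finalize, with slightly simplified signatures) around a trivial 1-D advection toy. The "salinity" variable, grid and parameters are invented placeholders, not the Rhine-Meuse model.

        import numpy as np

        class ToySalinityBMI:
            """Minimal BMI-style wrapper: external code can step the model and swap variables."""

            def initialize(self, config=None):
                self.t, self.dt, self.u = 0.0, 60.0, 0.5            # s, s, m/s (toy values)
                self.x = np.linspace(0.0, 10_000.0, 201)            # 1-D channel, metres
                self.salinity = 30.0 * np.exp(-self.x / 2000.0)     # psu, initial intrusion profile

            def update(self):
                # upwind advection of the salinity profile by a constant velocity (toy physics)
                c = self.u * self.dt / (self.x[1] - self.x[0])
                self.salinity[1:] -= c * (self.salinity[1:] - self.salinity[:-1])
                self.t += self.dt

            def get_current_time(self):
                return self.t

            def get_value(self, name):
                if name == "salinity":
                    return self.salinity.copy()
                raise KeyError(name)

            def set_value(self, name, values):
                if name == "salinity":          # e.g. a user edit from the interactive front end
                    self.salinity[:] = values
                else:
                    raise KeyError(name)

            def finalize(self):
                pass

        # Interactive-style use: run, let a "user" modify the state, continue running.
        model = ToySalinityBMI()
        model.initialize()
        for _ in range(10):
            model.update()
        s = model.get_value("salinity")
        s[:20] = 0.0                            # pretend the user flushed the upstream reach
        model.set_value("salinity", s)
        for _ in range(10):
            model.update()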

  3. Numerical approach to constructing the lunar physical libration: results of the initial stage

    NASA Astrophysics Data System (ADS)

    Zagidullin, A.; Petrova, N.; Nefediev, Yu.; Usanin, V.; Glushkov, M.

    2015-10-01

    The so-called "main problem" is taken as the model for developing a numerical approach to the theory of lunar physical libration. For the chosen model, there are both a good methodological basis and results obtained at the Kazan University as an outcome of the construction of the analytic theory. Results of the first stage of the numerical approach are presented in this report. Three main limitations are adopted to describe the main problem: independent consideration of the orbital and rotational motion of the Moon; a rigid-body model of the lunar body, with its dynamical figure described by the inertia ellipsoid, which gives the mass distribution inside the Moon; and only gravitational interaction with the Earth and the Sun is considered. The development of the selenopotential is limited at this stage to the second harmonic only; inclusion of the 3rd- and 4th-order harmonics is the nearest task for the next stage. The full solution of the libration problem consists of removing the limitations specified above: consideration of the fine effects caused by planetary perturbations, by the visco-elastic properties of the lunar body, by the presence of a two-layer lunar core, by the Earth's obliquity, and by the rotation of the ecliptic, if it is taken as a reference plane.

  4. Igpet software for modeling igneous processes: examples of application using the open educational version

    NASA Astrophysics Data System (ADS)

    Carr, Michael J.; Gazel, Esteban

    2017-04-01

    We provide here an open version of Igpet software, called t-Igpet to emphasize its application for teaching and research in forward modeling of igneous geochemistry. There are three programs, a norm utility, a petrologic mixing program using least squares and Igpet, a graphics program that includes many forms of numerical modeling. Igpet is a multifaceted tool that provides the following basic capabilities: igneous rock identification using the IUGS (International Union of Geological Sciences) classification and several supplementary diagrams; tectonic discrimination diagrams; pseudo-quaternary projections; least squares fitting of lines, polynomials and hyperbolae; magma mixing using two endmembers, histograms, x-y plots, ternary plots and spider-diagrams. The advanced capabilities of Igpet are multi-element mixing and magma evolution modeling. Mixing models are particularly useful for understanding the isotopic variations in rock suites that evolved by mixing different sources. The important melting models include, batch melting, fractional melting and aggregated fractional melting. Crystallization models include equilibrium and fractional crystallization and AFC (assimilation and fractional crystallization). Theses, reports and proposals concerning igneous petrology are improved by numerical modeling. For reviewed publications some elements of modeling are practically a requirement. Our intention in providing this software is to facilitate improved communication and lower entry barriers to research, especially for students.

  5. Two improvements on numerical simulation of 2-DOF vortex-induced vibration with low mass ratio

    NASA Astrophysics Data System (ADS)

    Kang, Zhuang; Ni, Wen-chi; Zhang, Xu; Sun, Li-ping

    2017-12-01

    To date, there has been extensive research on the numerical simulation of vortex-induced vibration. Acceptable results have been obtained for fixed cylinders at low Reynolds numbers. However, for the response of 2-DOF vortex-induced vibration with low mass ratio, the accuracy is not satisfactory, especially for the maximum amplitudes. In Jauvtis and Williamson's work, the maximum amplitude of the cylinder with low mass ratio m*=2.6 can reach as much as 1.5D, the so-called "super-upper branch", but in the current literature few simulation results achieve such a value, and some even fail to capture the upper branch. Besides, it is found that the amplitude decays too fast in the lower branch with RANS-based turbulence models. The reason is likely to be the deficiencies of the turbulence model itself in the prediction of unsteady separated flows, as well as unreasonable settings of the numerical simulation parameters. Aiming at the above issues, a modified turbulence model is proposed in this paper, and the effect of the acceleration of the flow field on the response of vortex-induced vibration is studied based on OpenFOAM. By analyzing the responses of amplitude, phase, trajectory, frequency and vortex mode, it is shown that the vortex-induced vibration can be predicted accurately with the modified turbulence model under an appropriate flow field acceleration.

  6. Fast model updating coupling Bayesian inference and PGD model reduction

    NASA Astrophysics Data System (ADS)

    Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic

    2018-04-01

    The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory in order to address inverse problems and to deal with different sources of uncertainties (measurement and model errors, stochastic parameters). In order to do so with a reasonable CPU cost, the idea is to replace the direct model called for Monte-Carlo sampling by a PGD reduced model, and in some cases directly compute the probability density functions from the obtained analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued assembly example.
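
    The PGD construction itself is not reproduced here; as a generic sketch of the idea of replacing an expensive forward model by a cheap surrogate inside a Metropolis sampler, the code below precomputes snapshots of a stand-in "full model", interpolates them as a crude surrogate, and then samples a single parameter. The forward model, noise level, prior bounds and proposal width are all invented.

        import numpy as np

        def full_model(theta):
            # stand-in for an expensive forward simulation (e.g. a thermal FE solve)
            x = np.linspace(0.0, 1.0, 50)
            return np.exp(-theta * x) + 0.1 * np.sin(5 * x)

        # Offline stage: evaluate the full model on a coarse parameter grid and interpolate,
        # a crude surrogate standing in for a PGD reduced model.
        theta_grid = np.linspace(0.1, 5.0, 40)
        snapshots = np.array([full_model(th) for th in theta_grid])

        def surrogate(theta):
            # linear interpolation between snapshots, component by component
            return np.array([np.interp(theta, theta_grid, snapshots[:, j])
                             for j in range(snapshots.shape[1])])

        # Synthetic noisy measurement generated with the "true" parameter value
        rng = np.random.default_rng(7)
        theta_true, sigma = 2.3, 0.02
        data = full_model(theta_true) + sigma * rng.standard_normal(50)

        def log_post(theta):
            if not 0.1 <= theta <= 5.0:          # flat prior on [0.1, 5]
                return -np.inf
            r = data - surrogate(theta)
            return -0.5 * np.sum(r**2) / sigma**2

        # Online stage: random-walk Metropolis; every likelihood call uses the surrogate only.
        theta, lp = 1.0, log_post(1.0)
        chain = []
        for _ in range(20_000):
            prop = theta + 0.1 * rng.standard_normal()
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta)
        print("posterior mean:", np.mean(chain[5000:]))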

  7. The challenges of numerically simulating analogue brittle thrust wedges

    NASA Astrophysics Data System (ADS)

    Buiter, Susanne; Ellis, Susan

    2017-04-01

    Fold-and-thrust belts and accretionary wedges form when sedimentary and crustal rocks are compressed into thrusts and folds in the foreland of an orogen or at a subduction trench. For over a century, analogue models have been used to investigate the deformation characteristics of such brittle wedges. These models predict wedge shapes that agree with analytical critical taper theory and internal deformation structures that closely resemble natural observations. In a series of comparison experiments for thrust wedges, called the GeoMod2004 (1,2) and GeoMod2008 (3,4) experiments, it was shown that different numerical solution methods successfully reproduce sandbox thrust wedges. However, the GeoMod2008 benchmark also pointed to the difficulties of representing frictional boundary conditions and sharp velocity discontinuities with continuum numerical methods, in addition to the well-known challenges of numerical plasticity. Here we show how details in the numerical implementation of boundary conditions can substantially impact numerical wedge deformation. We consider experiment 1 of the GeoMod2008 brittle thrust wedge benchmarks. This experiment examines a triangular thrust wedge in the stable field of critical taper theory that should remain stable, that is, without internal deformation, when sliding over a basal frictional surface. The thrust wedge is translated by lateral displacement of a rigid mobile wall. The corner between the mobile wall and the subsurface is a velocity discontinuity. Using our finite-element code SULEC, we show how different approaches to implementing boundary friction (boundary layer or contact elements) and the velocity discontinuity (various smoothing schemes) can cause the wedge either to translate in a stable manner, as expected, or to undergo internal deformation (which constitutes a failed benchmark). We recommend that numerical studies of sandbox setups not only report the details of their implementation of boundary conditions, but also document the modelling attempts that failed. References 1. Buiter and the GeoMod2004 Team, 2006. The numerical sandbox: comparison of model results for a shortening and an extension experiment. Geol. Soc. Lond. Spec. Publ. 253, 29-64 2. Schreurs and the GeoMod2004 Team, 2006. Analogue benchmarks of shortening and extension experiments. Geol. Soc. Lond. Spec. Publ. 253, 1-27 3. Buiter, Schreurs and the GeoMod2008 Team, 2016. Benchmarking numerical models of brittle thrust wedges, J. Struct. Geol. 92, 140-177 4. Schreurs, Buiter and the GeoMod2008 Team, 2016. Benchmarking analogue models of brittle thrust wedges, J. Struct. Geol. 92, 116-13

  8. Numerical simulation of flood inundation using a well-balanced kinetic scheme for the shallow water equations with bulk recharge and discharge

    NASA Astrophysics Data System (ADS)

    Ersoy, Mehmet; Lakkis, Omar; Townsend, Philip

    2016-04-01

    The flow of water in rivers and oceans can, under general assumptions, be efficiently modelled using Saint-Venant's shallow water system of equations (SWE). SWE is a hyperbolic system of conservation laws (HSCL) which can be derived from a starting point of incompressible Navier-Stokes. A common difficulty in the numerical simulation of HSCLs is the conservation of physical entropy. Work by Audusse, Bristeau, Perthame (2000) and Perthame, Simeoni (2001), proposed numerical SWE solvers known as kinetic schemes (KSs), which can be shown to have desirable entropy-consistent properties, and are thus called well-balanced schemes. A KS is derived from kinetic equations that can be integrated into the SWE. In flood risk assessment models the SWE must be coupled with other equations describing interacting meteorological and hydrogeological phenomena such as rain and groundwater flows. The SWE must therefore be appropriately modified to accommodate source and sink terms, so kinetic schemes are no longer valid. While modifications of SWE in this direction have been recently proposed, e.g., Delestre (2010), we depart from the extant literature by proposing a novel model that is "entropy-consistent" and naturally extends the SWE by respecting its kinetic formulation connections. This allows us to derive a system of partial differential equations modelling flow of a one-dimensional river with both a precipitation term and a groundwater flow model to account for potential infiltration and recharge. We exhibit numerical simulations of the corresponding kinetic schemes. These simulations can be applied to both real world flood prediction and the tackling of wider issues on how climate and societal change are affecting flood risk.
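
    The well-balanced kinetic scheme of the paper is not reproduced here; purely to show where a bulk recharge (rain) term enters a shallow-water solver, the sketch below uses a plain first-order finite-volume scheme with a Rusanov flux for the homogeneous 1-D SWE over a flat bottom, adding a uniform rain rate as a mass source. All values and the dam-break setup are illustrative assumptions.

        import numpy as np

        g = 9.81

        def flux(h, hu):
            u = hu / np.maximum(h, 1e-8)
            return np.array([hu, hu * u + 0.5 * g * h**2])

        def step(h, hu, dx, rain, cfl=0.45):
            u = hu / np.maximum(h, 1e-8)
            a = np.abs(u) + np.sqrt(g * np.maximum(h, 1e-8))     # wave speeds
            dt = cfl * dx / a.max()

            # Rusanov (local Lax-Friedrichs) interface fluxes
            UL, UR = np.array([h[:-1], hu[:-1]]), np.array([h[1:], hu[1:]])
            amax = np.maximum(a[:-1], a[1:])
            F = 0.5 * (flux(*UL) + flux(*UR)) - 0.5 * amax * (UR - UL)

            h_new, hu_new = h.copy(), hu.copy()
            h_new[1:-1]  -= dt / dx * (F[0, 1:] - F[0, :-1])
            hu_new[1:-1] -= dt / dx * (F[1, 1:] - F[1, :-1])
            h_new += dt * rain            # bulk recharge: rain rate [m/s] adds water depth
            # simple transmissive boundaries
            h_new[[0, -1]], hu_new[[0, -1]] = h_new[[1, -2]], hu_new[[1, -2]]
            return h_new, hu_new, dt

        # Dam-break style initial condition under steady rain
        N, L = 400, 1000.0
        x = np.linspace(0, L, N); dx = x[1] - x[0]
        h = np.where(x < L / 2, 2.0, 1.0); hu = np.zeros(N)
        t, rain = 0.0, 1e-5                               # 1e-5 m/s = 36 mm/h
        while t < 100.0:
            h, hu, dt = step(h, hu, dx, rain)
            t += dt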

  9. Development of the mathematical model for design and verification of acoustic modal analysis methods

    NASA Astrophysics Data System (ADS)

    Siner, Alexander; Startseva, Maria

    2016-10-01

    To reduce turbofan noise it is necessary to develop methods, called modal analysis, for analysing the sound field generated by the blade machinery. Because modal analysis methods are very complex, and testing them against full-scale measurements is very expensive and tedious, it is necessary to construct mathematical models that allow modal analysis algorithms to be tested quickly and cheaply. In this work, a model that allows single modes to be set in the channel and the generated sound field to be analyzed is presented. Modal analysis of the sound generated by a ring array of point sound sources is performed. A comparison of experimental and numerical modal analysis results is presented in this work.

  10. On the mathematical analysis of Ebola hemorrhagic fever: deathly infection disease in West African countries.

    PubMed

    Atangana, Abdon; Goufo, Emile Franc Doungmo

    2014-01-01

    For a given West African country, we constructed a model describing the spread of the deadly disease called Ebola hemorrhagic fever. The model was first constructed using the classical derivative and then converted to the generalized version using the beta-derivative. We studied in detail the endemic equilibrium points and provided the associated eigenvalues using the Jacobian method. We furthered our investigation by solving the model numerically using an iteration method. The simulations were done in terms of time and beta. The study showed that, for a small proportion of infected individuals, the whole country could die out in a very short period of time if there is no effective prevention.
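
    The authors' beta-derivative formulation and iteration scheme are not reproduced here; purely for orientation on the classical-derivative side of the model, the sketch below integrates a simple SIR-type system with an added death compartment using scipy. All rates and initial fractions are invented placeholders.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Classical-derivative SIRD-type toy (the paper uses a beta-derivative instead).
        beta_c, gamma_r, mu = 0.35, 0.10, 0.05     # contact, recovery, disease-death rates (1/day)

        def rhs(t, y):
            S, I, R, D = y
            N = S + I + R                           # living population
            return [-beta_c * S * I / N,
                    beta_c * S * I / N - (gamma_r + mu) * I,
                    gamma_r * I,
                    mu * I]

        y0 = [0.999, 0.001, 0.0, 0.0]               # fractions of the initial population
        sol = solve_ivp(rhs, (0, 365), y0, max_step=1.0)
        print("final cumulative deaths (fraction):", sol.y[3, -1])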

  11. Terahertz frequency superconductor-nanocomposite photonic band gap

    NASA Astrophysics Data System (ADS)

    Elsayed, Hussein A.; Aly, Arafa H.

    2018-02-01

    In the present work, we discuss the transmittance properties of one-dimensional (1D) superconductor nanocomposite photonic crystals (PCs) in the THz frequency region. Our modeling is essentially based on the two-fluid model, the Maxwell-Garnett model and the characteristic matrix method. The numerical results show the appearance of the so-called cutoff frequency. We have obtained significant effects of parameters such as the volume fraction, the permittivity of the host material, the size of the nanoparticles and the permittivity of the superconductor material on the properties of the cutoff frequency. The present results may be useful in optical communications and photonic applications, acting as tunable THz antennas, reflectors and high-pass filters.
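
    The paper's two-fluid superconductor dispersion is left out here; as a minimal normal-incidence sketch of the other two ingredients, the code below computes the Maxwell-Garnett effective permittivity of a nanocomposite layer and the characteristic-matrix transmittance of a two-material periodic stack. The inclusion permittivity, layer thicknesses, indices and wavelength range are placeholder values, not those of the paper.

        import numpy as np

        def maxwell_garnett(eps_host, eps_incl, f):
            """Maxwell-Garnett effective permittivity for spherical inclusions (volume fraction f)."""
            num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
            den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
            return eps_host * num / den

        def transmittance(wavelength, layers, n_in=1.0, n_out=1.0):
            """Characteristic-matrix transmittance of a layer stack at normal incidence.
            layers: list of (refractive_index, thickness) tuples, input side first."""
            M = np.eye(2, dtype=complex)
            for n, d in layers:
                delta = 2 * np.pi * n * d / wavelength
                Mj = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                               [1j * n * np.sin(delta), np.cos(delta)]])
                M = M @ Mj
            B, C = M @ np.array([1.0, n_out])
            t = 2 * n_in / (n_in * B + C)
            return (n_out / n_in) * abs(t) ** 2

        # Periodic stack: (nanocomposite | dielectric) x 10, probed across THz wavelengths
        eps_eff = maxwell_garnett(eps_host=2.25, eps_incl=-20.0 + 0.5j, f=0.2)  # placeholder values
        n_comp, n_diel = np.sqrt(eps_eff), 1.9
        stack = [(n_comp, 20e-6), (n_diel, 30e-6)] * 10

        wavelengths = np.linspace(100e-6, 600e-6, 400)          # roughly 3 THz down to 0.5 THz
        T = [transmittance(w, stack) for w in wavelengths]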

  12. Integration of multi-objective structural optimization into cementless hip prosthesis design: Improved Austin-Moore model.

    PubMed

    Kharmanda, G

    2016-11-01

    A new strategy of multi-objective structural optimization is integrated into the Austin-Moore prosthesis in order to improve its performance. The resulting model is called the Improved Austin-Moore model. Topology optimization is used as a conceptual design stage to sketch several kinds of hollow stems according to the daily loading cases. Shape optimization constitutes the detailed design stage, considering several objectives. Here, a new multiplicative formulation is proposed as a performance scale in order to define the best compromise between several requirements. Numerical applications on 2D and 3D problems are carried out to show the advantages of the proposed model.

  13. Thermal measurement of brake pad lining surfaces during the braking process

    NASA Astrophysics Data System (ADS)

    Piątkowski, Tadeusz; Polakowski, Henryk; Kastek, Mariusz; Baranowski, Pawel; Damaziak, Krzysztof; Małachowski, Jerzy; Mazurkiewicz, Łukasz

    2012-06-01

    This paper presents the test campaign concept and definition and the analysis of the recorded measurements. Among the most important systems in cars and trucks are the brakes. The temperature on a lining surface during braking can rise above 500°C, which shows why the requirements on linings are so strict and continuously rising. Besides experimental tests, numerical analyses are a very supportive method for investigating the processes which occur on brake pad linings. Experimental tests were conducted on a test machine called IL-68. The main component of the IL-68 is the so-called frictional unit, which consists of a rotational head, which conveys the shaft torque and in which the counter-samples are placed, and a translational head, in which the coating samples are placed and pressed against the counter-samples. Due to the high rotational speeds, and thus the rapid changes in the temperature field, an infrared camera was used for testing. The paper presents results of the analysis of thermograms registered during tests under different conditions. Furthermore, based on this testing machine, a numerical model was developed. In order to avoid resource-demanding analyses, only the frictional unit (described above) was taken into consideration. First, the geometrical model was created using CAD techniques, which in the next stage served as a base for developing the finite element model. Material properties and boundary conditions correspond exactly to the experimental tests. Computations were performed using the dynamic LS-Dyna code, where heat generation was estimated assuming full (100%) conversion of the mechanical work done by friction forces. The paper also presents the results of the dynamic thermomechanical analysis, and these results were compared with the laboratory tests.

  14. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  15. Understanding bistability in yeast glycolysis using general properties of metabolic pathways.

    PubMed

    Planqué, Robert; Bruggeman, Frank J; Teusink, Bas; Hulshof, Josephus

    2014-09-01

    Glycolysis is the central pathway in energy metabolism in the majority of organisms. In a recent paper, van Heerden et al. showed experimentally and computationally that glycolysis can exist in two states, a global steady state and a so-called imbalanced state. In the imbalanced state, intermediary metabolites accumulate at low levels of ATP and inorganic phosphate. It was shown that Baker's yeast uses a peculiar regulatory mechanism--via trehalose metabolism--to ensure that most yeast cells reach the steady state and not the imbalanced state. Here we explore the apparent bistable behaviour in a core model of glycolysis that is based on a well-established detailed model, and study in great detail the bifurcation behaviour of solutions, without using any numerical information on parameter values. We uncover a rich suite of solutions, including so-called imbalanced states, bistability, and oscillatory behaviour. The techniques employed are generic, directly suitable for a wide class of biochemical pathways, and could lead to better analytical treatments of more detailed models. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Predicting chaos in memristive oscillator via harmonic balance method.

    PubMed

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, which is also called the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system under consideration into a Lur'e model and then predict the existence of chaotic behaviors. To ensure that the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.

  17. A Localized Ensemble Kalman Smoother

    NASA Technical Reports Server (NTRS)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are only indirectly related to the underlying unknown dynamic state, and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems is usually their high dimensionality, which renders standard statistical methods computationally intractable. This paper develops, and addresses the theoretical convergence of, a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
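    The abstract does not give the smoother's equations, so the following is only a generic stochastic ensemble Kalman analysis step (without localization or smoothing over past states) to illustrate the kind of update such a method builds on; all dimensions and the observation operator are illustrative.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic ensemble Kalman analysis step.

    X : (n, N) forecast ensemble of n-dimensional states (N members)
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Perturbed observations (stochastic EnKF variant)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))       # toy 10-dimensional state, 50 members
H = np.eye(3, 10)                   # observe the first 3 state components
R = 0.1 * np.eye(3)
y = rng.normal(size=3)
Xa = enkf_analysis(X, y, H, R, rng)
```

    A smoother would apply the same gain to ensemble states at earlier times, and a localized variant would taper the sample covariance with distance; neither refinement is shown here.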

  18. Large-Eddy Simulations of Dust Devils and Convective Vortices

    NASA Astrophysics Data System (ADS)

    Spiga, Aymeric; Barth, Erika; Gu, Zhaolin; Hoffmann, Fabian; Ito, Junshi; Jemmett-Smith, Bradley; Klose, Martina; Nishizawa, Seiya; Raasch, Siegfried; Rafkin, Scot; Takemi, Tetsuya; Tyler, Daniel; Wei, Wei

    2016-11-01

    In this review, we address the use of numerical computations called Large-Eddy Simulations (LES) to study dust devils, and the more general class of atmospheric phenomena they belong to (convective vortices). We describe the main elements of the LES methodology. We review the properties, statistics, and variability of dust devils and convective vortices resolved by LES in both terrestrial and Martian environments. The current challenges faced by modelers using LES for dust devils are also discussed in detail.

  19. Simulating the electrohydrodynamics of a viscous droplet

    NASA Astrophysics Data System (ADS)

    Theillard, Maxime; Saintillan, David

    2016-11-01

    We present a novel numerical approach for the simulation of a viscous drop placed in an electric field in two and three spatial dimensions. Our method is constructed as a stable projection method on Quad/Octree grids. Using a modified pressure correction, we are able to alleviate the standard time-step restriction incurred by capillary forces. In weak electric fields, our results match remarkably well the predictions of the Taylor-Melcher leaky dielectric model. In strong electric fields, the so-called Quincke rotation is correctly reproduced.

  20. Modeling the Kinetics of Root Gravireaction

    NASA Astrophysics Data System (ADS)

    Kondrachuk, Alexander V.; Starkov, Vyacheslav N.

    2011-02-01

    The known "sun-flower equation" (SFE), which was originally proposed to model root circumnutating, was used to describe the simplest tip root graviresponse. Two forms of the SFE (integro-differential and differential-delayed) were solved, analyzed and compared with each other. The numerical solutions of these equations were found to be matching with arbitrary accuracy. The analysis of the solutions focused on time-lag effects on the kinetics of tip root bending. The results of the modeling are in good correlation with an experiment at the initial stages of root tips graviresponse. Further development of the model calls for its systematic comparison with some specially designed experiments, which would include measuring the kinetics of root tip bending before gravistimulation over the period of time longer than the time lag.

  1. Computational modelling of cosmic rays in the neighbourhood of the Sun

    NASA Astrophysics Data System (ADS)

    Potgieter, M. S.; Strauss, R. D.

    2017-10-01

    The heliosphere is defined as the plasmatic influence sphere of the Sun and stretches far beyond the solar system. Cosmic rays, charged particles with energies between about 1 MeV and millions of GeV arriving from our own Galaxy and beyond, penetrate the heliosphere and encounter the solar wind and the embedded magnetic field, so that when observed they contain useful information about the basic features of the heliosphere. In order to interpret these observations, obtained on and near the Earth and farther away by several space missions, and to gain understanding of the underlying physics, called heliophysics, we need to simulate the heliosphere and the acceleration, propagation and transport of these astroparticles with numerical models. These models vary from magnetohydrodynamic-based approaches for simulating the heliosphere to standard finite-difference numerical schemes for solving transport-type partial differential equations of varying complexity. A large number of these models have been developed locally to do internationally competitive research and have as such become an important training tool for human capacity development in computational physics in South Africa. How these models are applied to various aspects of heliospheric space physics is discussed in this overview, with illustrative examples.

  2. Courant number and unsteady flow computation

    USGS Publications Warehouse

    Lai, Chintu; ,

    1993-01-01

    The Courant number C, the key to unsteady flow computation, is a ratio of physical wave velocity, λ, to computational signal-transmission velocity, σ, i.e., C = λ/σ. In this way, it uniquely relates a physical quantity to a mathematical quantity. Because most unsteady open-channel flows are describable by a set of n characteristic equations along n characteristic paths, each represented by a velocity λi, i = 1, 2, ..., n, there exist as many as n components for the numerator of C. To develop a numerical model, a numerical integration must be made on each characteristic curve from an earlier point to a later point on the curve. Different numerical methods are available in unsteady flow computation due to the different paths along which the numerical integration is actually performed. For the denominator of C, the σ defined as σ = σ0 = Δx/Δt has customarily been used; thus, the Courant number has the familiar form Cσ = λ/σ0. This form will be referred to as the "common Courant number" in this paper. The commonly used numerical criteria on Cσ for stability, neutral stability and instability are imprecise or not universal in the sense that σ0 does not always reflect the true maximum computational data-transmission speed of the scheme at hand, i.e., Cσ is no indication of the Courant constraint. In view of this, a new Courant number, called the "natural Courant number", Cn, that truly reflects the Courant constraint, has been defined. However, considering the numerous advantages inherent in the traditional Cσ, a useful and meaningful composite Courant number, denoted by Cσ*, has been formulated from Cσ. It is hoped that the new aspects of the Courant number discussed herein afford the hydraulician a broader perspective, consistent criteria, and unified guidelines with which to model various unsteady flows.
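    A minimal sketch of the common Courant number Cσ = λ/σ0 with σ0 = Δx/Δt, evaluated for several characteristic celerities. The step sizes and wave speeds are illustrative, and the C ≤ 1 check applies to typical explicit schemes rather than being the universal criterion the paper cautions about.

```python
import numpy as np

def courant_number(wave_speed, dx, dt):
    """Common Courant number C_sigma = lambda * dt / dx."""
    return wave_speed * dt / dx

dx, dt = 100.0, 5.0                        # grid spacing (m) and time step (s)
celerities = np.array([1.2, 4.5, 18.0])    # characteristic wave speeds (m/s)
C = courant_number(celerities, dx, dt)
print(C, "stable for a typical explicit scheme:", bool(np.all(C <= 1.0)))
```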

  3. Modeling and Analysis of Hybrid Cellular/WLAN Systems with Integrated Service-Based Vertical Handoff Schemes

    NASA Astrophysics Data System (ADS)

    Xia, Weiwei; Shen, Lianfeng

    We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.

  4. Fuzzy Modal Control Applied to Smart Composite Structure

    NASA Astrophysics Data System (ADS)

    Koroishi, E. H.; Faria, A. W.; Lara-Molina, F. A.; Steffen, V., Jr.

    2015-07-01

    This paper proposes an active vibration control technique, based on Fuzzy Modal Control, applied to a piezoelectric actuator bonded to a composite structure, forming a so-called smart composite structure. Fuzzy Modal Controllers were found to be well adapted for controlling structures with nonlinear behavior whose characteristics change considerably with respect to time. The smart composite structure was modelled by using a so-called mixed theory. This theory uses a single equivalent layer for the discretization of the mechanical displacement field and a layerwise representation of the electrical field. Temperature effects are neglected. For numerical reasons, it was necessary to reduce the size of the model of the smart composite structure so that the design of the controllers and the estimator could be performed. The role of the Kalman estimator in the present contribution is to estimate the modal states of the system, which are used by the Fuzzy Modal Controllers. Simulation results illustrate the effectiveness of the proposed vibration control methodology for composite structures.

  5. HELIOGate, a Portal for the Heliophysics Community

    NASA Astrophysics Data System (ADS)

    Pierantoni, Gabriele; Carley, Eoin

    2014-10-01

    Heliophysics is the branch of physics that investigates the interactions between the Sun and the other bodies of the solar system. Heliophysicists rely on data collected from numerous sources scattered across the Solar System. The data collected from these sources is processed to extract metadata, and the metadata extracted in this fashion is then used to build indexes of features and events called catalogues. Heliophysicists also develop conceptual and mathematical models of the phenomena and the environment of the Solar System. More specifically, they investigate the physical characteristics of the phenomena and they simulate how they propagate throughout the Solar System with mathematical and physical abstractions called propagation models. HELIOGate aims at addressing the need to combine and orchestrate existing web services in a flexible and easily configurable fashion to tackle different scientific questions. HELIOGate also offers a tool capable of connecting to sizeable computation and storage infrastructures to execute the data processing codes that are needed to calibrate raw data and to extract metadata.

  6. Numerical Analysis of AHSS Fracture in a Stretch-bending Test

    NASA Astrophysics Data System (ADS)

    Luo, Meng; Chen, Xiaoming; Shi, Ming F.; Shih, Hua-Chu

    2010-06-01

    Advanced High Strength Steels (AHSS) are increasingly used in the automotive industry due to their superior strength and substantial weight reduction advantage. However, their limited ductility gives rise to numerous manufacturing issues. One of them is the so-called `shear fracture' often observed on tight radii during stamping processes. Since traditional approaches, such as the Forming Limit Diagram (FLD), are unable to predict this type of fracture, efforts have been made to develop failure criteria that can predict shear fractures. In this paper, a recently developed Modified Mohr-Coulomb (MMC) ductile fracture criterion[1] is adopted to analyze the failure behavior of a Dual Phase (DP) steel sheet during stretch bending operations. The plasticity and ductile fracture of the present sheet are fully characterized by the Hill'48 orthotropic model and the MMC fracture model respectively. Finite Element models with three different element types (3D, shell and plane strain) were built for a Stretch Forming Simulator (SFS) test and numerical simulations with four different R/t ratios (die radius normalized by sheet thickness) were performed. It has been shown that the 3D and shell element models can accurately predict the failure location/mode, the upper die load-displacement responses as well as the wall stress and wrap angle at the onset of fracture for all R/t ratios. Furthermore, a series of parametric studies were conducted on the 3D element model, and the effects of tension level (clamping distance) and tooling friction on the failure modes/locations were investigated.

  7. Multiple-source multiple-harmonic active vibration control of variable section cylindrical structures: A numerical study

    NASA Astrophysics Data System (ADS)

    Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu

    2016-12-01

    Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable-section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of the noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable-section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a procedure similar to real-life control and can be easily extended to a physical model platform.

  8. A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows

    NASA Technical Reports Server (NTRS)

    Montgomery, Matthew D.; Verdon, Joseph M.

    1997-01-01

    A three-dimensional, linearized, Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic responses of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. A numerical model for linearized inviscid unsteady flows, which couples a near-field, implicit, wave-split, finite volume analysis to a far-field eigenanalysis, is also described. The linearized aerodynamic and numerical models have been implemented into a three-dimensional linearized unsteady flow code, called LINFLUX. This code has been applied to selected, benchmark, unsteady, subsonic flows to establish its accuracy and to demonstrate its current capabilities. The unsteady flows considered have been chosen to allow convenient comparisons between the LINFLUX results and those of well-known, two-dimensional, unsteady flow codes. Detailed numerical results for a helical fan and a three-dimensional version of the 10th Standard Cascade indicate that important progress has been made towards the development of a reliable and useful, three-dimensional prediction capability that can be used in aeroelastic and aeroacoustic design studies.

  9. PFEM-based modeling of industrial granular flows

    NASA Astrophysics Data System (ADS)

    Cante, J.; Dávalos, C.; Hernández, J. A.; Oliver, J.; Jonsén, P.; Gustafsson, G.; Häggblad, H.-Å.

    2014-05-01

    The potential of numerical methods for the solution and optimization of industrial granular flow problems is widely accepted by the industries of this field, the challenge being to promote their industrial practice effectively. In this paper, we attempt to make an exploratory step in this regard by using a numerical model based on continuum mechanics and on the so-called Particle Finite Element Method (PFEM). This goal is achieved by focusing on two specific industrial applications in the mining and pellet-manufacturing industries: silo discharge and calculation of power draw in tumbling mills. Both examples are representative of variations in the mechanical response of the granular material, ranging from a stagnant configuration to a flow condition. The silo discharge is validated using experimental data collected on a full-scale, flat-bottomed cylindrical silo. The simulation is conducted with the aim of characterizing and understanding the correlation between flow patterns and pressures for concentric discharges. In the second example, the potential of PFEM as a numerical tool to track the positions of the particles inside the drum is analyzed. Pressures and wall-pressure distributions are also studied. The power draw is also computed and validated against experiments in which the power is plotted in terms of the rotational speed of the drum.

  10. Coarse-graining errors and numerical optimization using a relative entropy framework.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2011-03-07

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
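    The variational idea can be illustrated on a one-dimensional toy problem: take a double-well reference distribution, restrict the coarse-grained (CG) model to a harmonic potential with stiffness k, and minimize S(rel)(k) over k. This is only a grid-based sketch of the principle; the paper's method estimates the relative entropy and its gradients from sampled configurations of real molecular systems.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Reference ("fully atomic") Boltzmann distribution on a 1D grid: a double well
x = np.linspace(-3.0, 3.0, 600)
dx = x[1] - x[0]
beta = 1.0
u_ref = 2.0 * (x**2 - 1.0) ** 2
p_ref = np.exp(-beta * u_ref)
p_ref /= p_ref.sum() * dx                       # normalize on the grid

def s_rel(k):
    """Relative entropy between the reference and a harmonic CG model U_cg = k x^2 / 2."""
    p_cg = np.exp(-beta * 0.5 * k * x**2)
    p_cg /= p_cg.sum() * dx
    return np.sum(p_ref * np.log(p_ref / p_cg)) * dx

best = minimize_scalar(s_rel, bounds=(0.01, 50.0), method="bounded")
print(best.x, best.fun)    # CG stiffness that loses the least configurational information
```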

  11. Numerical Simulations For the F-16XL Aircraft Configuration

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa A.; Abdol-Hamid, Khaled; Cavallo, Peter A.; Parlette, Edward B.

    2014-01-01

    Numerical simulations of flow around the F-16XL are presented as a contribution to the Cranked Arrow Wing Aerodynamic Project International II (CAWAPI-II). The NASA Tetrahedral Unstructured Software System (TetrUSS) is used to perform the numerical simulations. This CFD suite, developed and maintained by NASA Langley Research Center, includes an unstructured grid generation program called VGRID, a postprocessor named POSTGRID, and the flow solver USM3D. The CRISP CFD package is utilized to provide error estimates and grid adaption for verification of the USM3D results. A subsonic, high angle-of-attack case, flight condition (FC) 25, is computed and analyzed. Three turbulence models are used in the calculations: the one-equation Spalart-Allmaras (SA) model, the two-equation shear stress transport (SST) model and the k-epsilon turbulence model. Computational results and surface static pressure profiles are presented and compared with flight data. Solution verification is performed using formal grid refinement studies, the solution of Error Transport Equations, and adaptive mesh refinement. The current study shows that the USM3D solver coupled with CRISP CFD can be used in an engineering environment for predicting vortex-flow physics on a complex configuration at flight Reynolds numbers.

  12. The effects of spatially separated call components on phonotaxis in túngara frogs: evidence for auditory grouping.

    PubMed

    Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J

    2002-01-01

    Numerous animals across disparate taxa must identify and locate complex acoustic signals embedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in which auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that, once grouped, the separate call components are weighted differently in recognizing and locating the call, the so-called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boldyrev, Stanislav; Perez, Jean Carlos

    The complete project had two major goals: investigate MHD turbulence generated by counterpropagating Alfvén modes, and study such processes in the LAPD device. In order to study MHD turbulence in numerical simulations, two codes have been used: full MHD, and reduced MHD developed specially for this project. Quantitative numerical results are obtained through high-resolution simulations of strong MHD turbulence, performed through the 2010 DOE INCITE allocation. We addressed the questions of the spectrum of turbulence, its universality, and the value of the so-called Kolmogorov constant (the normalization coefficient of the spectrum). In these simulations we measured with unprecedented accuracy the energy spectra of magnetic and velocity fluctuations. We also studied the so-called residual energy, that is, the difference between kinetic and magnetic energies in turbulent fluctuations. In our analytic work we explained the generation of residual energy in weak MHD turbulence, in the process of random collisions of counterpropagating Alfvén waves. We then generalized these results for the case of strong MHD turbulence. The developed model explained the generation of residual energy in strong MHD turbulence, and the results were verified in numerical simulations. We then analyzed the imbalanced case, where more Alfvén waves propagate in one direction. We found that the spectral properties of the residual energy are similar for both balanced and imbalanced cases. We then compared strong MHD turbulence observed in the solar wind with turbulence generated in numerical simulations. The nonlinear interaction of Alfvén waves has been studied in the upgraded Large Plasma Device (LAPD). We have simulated the collision of Alfvén modes in settings close to the experiment. We created a train of wave packets with amplitudes close to those observed in the experiment and allowed them to collide. We then saw the generation of the second harmonic, resembling that observed in the experiment.

  14. Fast solver for large scale eddy current non-destructive evaluation problems

    NASA Astrophysics Data System (ADS)

    Lei, Naiguang

    Eddy current testing plays a very important role in the non-destructive evaluation of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material properties or defects in the test specimen, the induced eddy current paths are perturbed, and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of the test specimens and the inspection environments, the availability of theoretical simulation models is extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems, since they generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, such solutions are generally not obtainable, largely due to the complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, due to the huge time consumption in the case of large-scale problems, accelerations/fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. Validation of the accuracy of this model is demonstrated via comparison with experimental measurements of steam generator tube wall defects. These simulations, which generate two-dimensional raster-scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and a GPU-based implementation are also investigated in this research to reduce the computational time.

  15. Hydrodynamic model of temperature change in open ionic channels.

    PubMed Central

    Chen, D P; Eisenberg, R S; Jerome, J W; Shu, C W

    1995-01-01

    Most theories of open ionic channels ignore the heat generated by current flow, but that heat is known to be significant when analogous currents flow in semiconductors, so a generalization of the Poisson-Nernst-Planck theory of channels, called the hydrodynamic model, is needed. The hydrodynamic theory is a combination of the Poisson and Euler field equations of electrostatics and fluid dynamics, conservation laws that describe diffusive and convective flow of mass, heat, and charge (i.e., current), and their coupling. That is to say, it is a kinetic theory of solute and solvent flow, allowing heat and current flow as well, taking into account density changes, temperature changes, and electrical potential gradients. We integrate the equations with an essentially nonoscillatory shock-capturing numerical scheme previously shown to be stable and accurate. Our calculations show that 1) a significant amount of electrical energy is exchanged with the permeating ions; 2) the local temperature of the ions rises some tens of degrees, and this temperature rise significantly alters the ionic flux in a channel 25 Å long, such as gramicidin-A; and 3) a critical parameter, called the saturation velocity, determines whether ionic motion is overdamped (Poisson-Nernst-Planck theory), lies in an intermediate regime (called the adiabatic approximation in semiconductor theory), or is altogether unrestricted (requiring the full hydrodynamic model). It seems that significant temperature changes are likely to accompany current flow in the open ionic channel. PMID:8599638

  16. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
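    EPPES itself is embedded in an operational ensemble prediction system, so the snippet below is only a loose conceptual analogue of steps (i) and (ii): parameters are drawn from a Gaussian proposal, each draw is scored by a likelihood-type skill measure, and the proposal is refit from the weighted sample. The toy score function and all numbers are hypothetical.

```python
import numpy as np

def ensemble_parameter_update(mean, cov, score_fn, n_members=20, rng=None):
    """One highly simplified EPPES-style cycle: sample parameter values for an
    ensemble, score each member against verification, and refit the proposal."""
    rng = rng or np.random.default_rng()
    theta = rng.multivariate_normal(mean, cov, size=n_members)
    w = np.array([score_fn(t) for t in theta])       # likelihood-type weights
    w /= w.sum()
    new_mean = w @ theta
    diff = theta - new_mean
    new_cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(len(mean))
    return new_mean, new_cov

# Toy "forecast skill": members closer to the (unknown) optimum [1.0, 2.0] score higher
truth = np.array([1.0, 2.0])
score = lambda t: np.exp(-0.5 * np.sum((t - truth) ** 2) / 0.25)
mean, cov = np.array([0.0, 0.0]), np.eye(2)
for _ in range(30):
    mean, cov = ensemble_parameter_update(mean, cov, score)
print(mean)   # drifts toward the optimum as the proposal is refit
```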

  17. Statistical physics of vehicular traffic and some related systems

    NASA Astrophysics Data System (ADS)

    Chowdhury, Debashish; Santen, Ludger; Schadschneider, Andreas

    2000-05-01

    In the so-called “microscopic” models of vehicular traffic, attention is paid explicitly to each individual vehicle each of which is represented by a “particle”; the nature of the “interactions” among these particles is determined by the way the vehicles influence each others’ movement. Therefore, vehicular traffic, modeled as a system of interacting “particles” driven far from equilibrium, offers the possibility to study various fundamental aspects of truly nonequilibrium systems which are of current interest in statistical physics. Analytical as well as numerical techniques of statistical physics are being used to study these models to understand rich variety of physical phenomena exhibited by vehicular traffic. Some of these phenomena, observed in vehicular traffic under different circumstances, include transitions from one dynamical phase to another, criticality and self-organized criticality, metastability and hysteresis, phase-segregation, etc. In this critical review, written from the perspective of statistical physics, we explain the guiding principles behind all the main theoretical approaches. But we present detailed discussions on the results obtained mainly from the so-called “particle-hopping” models, particularly emphasizing those which have been formulated in recent years using the language of cellular automata.

  18. An efficient and guaranteed stable numerical method for continuous modeling of infiltration and redistribution with a shallow dynamic water table

    NASA Astrophysics Data System (ADS)

    Lai, Wencong; Ogden, Fred L.; Steinke, Robert C.; Talbot, Cary A.

    2015-03-01

    We have developed a one-dimensional numerical method to simulate infiltration and redistribution in the presence of a shallow dynamic water table. This method builds upon the Green-Ampt infiltration with Redistribution (GAR) model and incorporates features from the Talbot-Ogden (T-O) infiltration and redistribution method in a discretized moisture content domain. The redistribution scheme is more physically meaningful than the capillary-weighted redistribution scheme in the T-O method. Groundwater dynamics are considered in this new method instead of a hydrostatic groundwater front. It is also computationally more efficient than the T-O method. Motion of water in the vadose zone due to infiltration, redistribution, and interactions with capillary groundwater is described by ordinary differential equations. Numerical solutions to these equations are computationally less expensive than solutions of the highly nonlinear Richards' (1931) partial differential equation. We present results from numerical tests on 11 soil types using multiple rain pulses with different boundary conditions, with and without a shallow water table, and compare against the numerical solution of Richards' equation (RE). Results from the new method are in satisfactory agreement with RE solutions in terms of ponding time, deponding time, infiltration rate, and cumulative infiltrated depth. The new method, which we call "GARTO", can be used as an alternative to the RE for 1-D coupled surface and groundwater models in general situations with homogeneous soils and a dynamic water table. The GARTO method represents a significant advance in simulating groundwater-surface water interactions because it very closely matches the RE solution while being computationally efficient, with guaranteed mass conservation and none of the stability limitations that can affect RE solvers in the case of a near-surface water table.
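    The GAR/GARTO scheme adds redistribution and groundwater coupling that are not reproduced here, but its starting point, the Green-Ampt infiltration-capacity relation advanced as an ordinary differential equation, can be sketched with a simple explicit loop; the soil parameters and rainfall series below are illustrative.

```python
import numpy as np

def green_ampt(rain, dt, Ks, psi, dtheta, F0=1e-6):
    """Explicit Green-Ampt infiltration under a rainfall time series.

    rain   : array of rainfall rates (m/s)
    Ks     : saturated hydraulic conductivity (m/s)
    psi    : wetting-front capillary suction head (m)
    dtheta : available moisture deficit (-)
    """
    F = F0                                    # cumulative infiltrated depth (m)
    f_out = np.zeros_like(rain)
    for k, r in enumerate(rain):
        fc = Ks * (1.0 + psi * dtheta / F)    # infiltration capacity
        f = min(r, fc)                        # supply-limited or capacity-limited
        F += f * dt
        f_out[k] = f
    return f_out

dt = 60.0                                     # time step (s)
rain = np.full(120, 2.0e-6)                   # ~7.2 mm/h for 2 hours
f = green_ampt(rain, dt, Ks=1.0e-6, psi=0.11, dtheta=0.3)
```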

  19. Numerous Seasonal Lineae on Coprates Montes, Mars

    NASA Image and Video Library

    2016-07-07

    The white arrows indicate locations in this scene where numerous seasonal dark streaks have been identified in the Coprates Montes area of Mars' Valles Marineris by repeated observations from orbit. The streaks, called recurring slope lineae or RSL, extend downslope during a warm season, fade in the colder part of the year, and repeat the process the next Martian year. They are regarded as the strongest evidence for the possibility of liquid water on the surface of modern Mars. This oblique perspective for this view uses a three-dimensional terrain model derived from a stereo pair of observations by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The scene covers an area approximately 1.6 miles (2.5 kilometers) wide. http://photojournal.jpl.nasa.gov/catalog/PIA20757

  20. Feasibility study for a numerical aerodynamic simulation facility. Volume 3: FMP language specification/user manual

    NASA Technical Reports Server (NTRS)

    Kenner, B. G.; Lincoln, N. R.

    1979-01-01

    The manual is intended to show the revisions and additions to the current STAR FORTRAN. The changes are made to incorporate an FMP (Flow Model Processor) for use in the Numerical Aerodynamic Simulation Facility (NASF) for the purpose of simulating fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The FORTRAN programming language for the STAR-100 computer contains both CDC and unique STAR extensions to standard FORTRAN. Several of the STAR FORTRAN extensions to standard FORTRAN allow the FORTRAN user to exploit the vector processing capabilities of the STAR computer. In STAR FORTRAN, vectors can be expressed with an explicit notation, functions are provided that return vector results, and special call statements enable access to any machine instruction.

  1. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten

    2016-06-08

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.

  2. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
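    A minimal sketch of the first of the two sensitivity measures, the Elementary Effects, using a one-at-a-time design on a stand-in surrogate function. The real study evaluates a metamodel of the laser-drilling process and additionally computes Sobol indices by variance decomposition, which are not shown; the toy surrogate and bounds below are hypothetical.

```python
import numpy as np

def elementary_effects(f, bounds, r=20, delta=0.1, seed=0):
    """One-at-a-time (Morris-style) elementary effects of a scalar model f.

    bounds : (k, 2) array of parameter lower/upper bounds
    r      : number of random base points
    """
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    ee = np.zeros((r, k))
    for j in range(r):
        x = lo + rng.random(k) * (hi - lo) * (1 - delta)   # leave room for the step
        fx = f(x)
        for i in range(k):
            xi = x.copy()
            xi[i] += delta * (hi[i] - lo[i])
            ee[j, i] = (f(xi) - fx) / delta
    return ee.mean(axis=0), np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy metamodel standing in for the drilling-process surrogate (illustrative only)
surrogate = lambda x: x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2]
mu, mu_star, sigma = elementary_effects(surrogate, np.array([[0.0, 1.0]] * 3))
print(mu_star)   # large mu_star flags influential parameters for screening
```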

  3. A curious relationship between Potts glass models

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Chiaki

    2015-08-01

    A Potts glass model proposed by Nishimori and Stephen [H. Nishimori, M.J. Stephen, Phys. Rev. B 27, 5644 (1983)] is analyzed by means of the replica mean-field theory. This model is a discrete model, has a gauge symmetry, and is called the Potts gauge glass model. By comparing the present results with those for the conventional Potts glass model, we identify the coincidences and differences between the two models. One coincidence is that the properties of the Potts glass phase in this model agree with those of the conventional model at the mean-field level. One difference is that, unlike the conventional p-state Potts glass model, this system for large p does not become ferromagnetic at low temperature under a concentration of ferromagnetic interactions. The present results support numerical investigation of the present model for the study of the Potts glass phase in finite dimensions.

  4. Processing of Antenna-Array Signals on the Basis of the Interference Model Including a Rank-Deficient Correlation Matrix

    NASA Astrophysics Data System (ADS)

    Rodionov, A. A.; Turchin, V. I.

    2017-06-01

    We propose a new method of signal processing in antenna arrays, which is called Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method yields a variance of the estimated arrival angle of the plane wave that is close to the Cramér-Rao lower bound, and that it is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be efficiently used for estimating the time dependence of the useful signal.
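    The proposed Maximum-Likelihood Signal Classification method is not specified in enough detail to reproduce here, but the MUSIC baseline it is compared against can be sketched for a uniform linear array: eigendecompose the sample covariance, keep the noise subspace, and scan a steering vector over angle. The array size, element spacing and test scenario below are illustrative.

```python
import numpy as np

def music_spectrum(X, n_sources, d_over_lambda=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudo-spectrum for a uniform linear array.

    X : (M, K) array of K snapshots from an M-element array
    """
    M, K = X.shape
    R = X @ X.conj().T / K                      # sample covariance
    w, v = np.linalg.eigh(R)                    # eigenvalues in ascending order
    En = v[:, : M - n_sources]                  # noise subspace
    p = np.zeros(len(angles))
    for i, th in enumerate(np.deg2rad(angles)):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(th))
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, p

# Toy data: one plane wave at +20 degrees plus noise on an 8-element array
rng = np.random.default_rng(2)
M, K, th0 = 8, 200, np.deg2rad(20.0)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(th0))
X = np.outer(a0, rng.normal(size=K)) \
    + 0.1 * (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K)))
ang, p = music_spectrum(X, n_sources=1)
print(ang[np.argmax(p)])   # peak of the pseudo-spectrum near 20 degrees
```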

  5. Critical exponents of the disorder-driven superfluid-insulator transition in one-dimensional Bose-Einstein condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cestari, J. C. C.; Foerster, A.; Gusmao, M. A.

    2011-11-15

    We investigate the nature of the superfluid-insulator quantum phase transition driven by disorder for noninteracting ultracold atoms on one-dimensional lattices. We consider two different cases: Anderson-type disorder, with local energies randomly distributed, and pseudodisorder due to a potential incommensurate with the lattice, which is usually called the Aubry-Andre model. A scaling analysis of numerical data for the superfluid fraction for different lattice sizes allows us to determine quantum critical exponents characterizing the disorder-driven superfluid-insulator transition. We also briefly discuss the effect of interactions close to the noninteracting quantum critical point of the Aubry-Andre model.
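    A minimal sketch of the pseudodisordered lattice referred to above: build the one-dimensional Aubry-Andre Hamiltonian for noninteracting particles and track a localization diagnostic (here the inverse participation ratio, standing in for the superfluid fraction used in the paper) across the self-dual transition at disorder strength twice the hopping amplitude.

```python
import numpy as np

def aubry_andre_ipr(L=200, t=1.0, lam=1.5, beta=(np.sqrt(5) - 1) / 2, phase=0.0):
    """Inverse participation ratio of the Aubry-Andre ground state.

    H = -t * sum_i (|i><i+1| + h.c.) + lam * sum_i cos(2*pi*beta*i + phase) |i><i|
    For the noninteracting self-dual model the localization transition sits at lam = 2t.
    """
    H = np.zeros((L, L))
    for i in range(L - 1):
        H[i, i + 1] = H[i + 1, i] = -t
    H += np.diag(lam * np.cos(2 * np.pi * beta * np.arange(L) + phase))
    w, v = np.linalg.eigh(H)
    psi = v[:, 0]                       # ground state
    return np.sum(np.abs(psi) ** 4)     # ~1/L when extended, O(1) when localized

for lam in (0.5, 2.0, 3.5):
    print(lam, aubry_andre_ipr(lam=lam))
```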

  6. Non linear dynamics of flame cusps: from experiments to modeling

    NASA Astrophysics Data System (ADS)

    Almarcha, Christophe; Radisson, Basile; Al-Sarraf, Elias; Quinard, Joel; Villermaux, Emmanuel; Denet, Bruno; Joulin, Guy

    2016-11-01

    The propagation of premixed flames in a medium initially at rest exhibits the appearance and competition of elementary local singularities called cusps. We investigate this problem both experimentally and numerically. An analytical solution of the two-dimensional Michelson-Sivashinsky equation is obtained as a composition of pole solutions and is compared with experimental flame fronts propagating between glass plates separated by a thin gap. We demonstrate that the front dynamics can be reproduced numerically with good accuracy, from the linear stages of destabilization to the late-time evolution, using this model equation. In particular, the model accounts for the experimentally observed steady distribution of distances between cusps, which is well described by a one-parameter Gamma distribution, reflecting the aggregation-type interaction between the cusps. A modification of the Michelson-Sivashinsky equation taking gravity into account allows some other special features of these fronts to be reproduced.

  7. Numerical analysis on the cutting and finishing efficiency of MRAFF process

    NASA Astrophysics Data System (ADS)

    Lih, F. L.

    2016-03-01

    The aim of the present research is to conduct a numerical study of the characteristics of a two-phase magnetorheological fluid under different operating conditions, using the finite volume method called SIMPLE with an add-on MHD code.

  8. Pitch glide effect induced by a nonlinear string-barrier interaction

    NASA Astrophysics Data System (ADS)

    Kartofelev, Dmitri; Stulov, Anatoli; Välimäki, Vesa

    2015-10-01

    Interactions of a vibrating string with its supports and other spatially distributed barriers play a significant role in the physics of many stringed musical instruments. It is well known that the tone of the string vibrations is determined by the string supports, and that the boundary conditions at the string termination may cause a short-lasting initial shift of the fundamental frequency. Generally, this phenomenon is associated with the nonlinear modulation of the stiff string tension. The aim of this paper is to study the initial frequency glide phenomenon that is induced only by the string-barrier interaction, apart from other possible physical causes, and without the interfering effects of dissipation and dispersion. From a numerical simulation perspective, this highly nonlinear problem may present various difficulties, not the least of which is the risk of numerical instability. We propose a numerically stable and purely kinematic model of the string-barrier interaction, which is based on the travelling-wave solution of the ideal string vibration. The model is capable of reproducing the motion of the vibrating string exhibiting the initial fundamental frequency glide, which is caused solely by the complex nonlinear interaction of the string with its termination. The results presented in this paper can expand our knowledge and understanding of the timbre evolution and the physical principles of sound generation of numerous stringed instruments, such as the lutes called the tambura, sitar and biwa.

  9. Numerical simulation of photocurrent generation in bilayer organic solar cells: Comparison of master equation and kinetic Monte Carlo approaches

    NASA Astrophysics Data System (ADS)

    Casalegno, Mosè; Bernardi, Andrea; Raos, Guido

    2013-07-01

    Numerical approaches can provide useful information about the microscopic processes underlying photocurrent generation in organic solar cells (OSCs). Among them, the Kinetic Monte Carlo (KMC) method is conceptually the simplest, but computationally the most intensive. A less demanding alternative is potentially represented by so-called Master Equation (ME) approaches, where the equations describing particle dynamics rely on the mean-field approximation and their solution is attained numerically, rather than stochastically. The description of charge separation dynamics, the treatment of electrostatic interactions and numerical stability are some of the key issues which have prevented the application of these methods to OSC modelling, despite their successes in the study of charge transport in disordered systems. Here we describe a three-dimensional ME approach to photocurrent generation in OSCs which attempts to deal with these issues. The reliability of the proposed method is tested against reference KMC simulations on bilayer heterojunction solar cells. Comparison of the current-voltage curves shows that the model approximates the exact result well for most devices. The largest deviations in current densities are mainly due to the adoption of the mean-field approximation for electrostatic interactions. The presence of deep traps, in devices characterized by strong energy disorder, may also affect the quality of the results. Comparison of the simulation times reveals that the ME algorithm runs, on average, one order of magnitude faster than KMC.

  10. An analytic-geometric model of the effect of spherically distributed injection errors for Galileo and Ulysses spacecraft - The multi-stage problem

    NASA Technical Reports Server (NTRS)

    Longuski, James M.; Mcronald, Angus D.

    1988-01-01

    In previous work the problem of injecting the Galileo and Ulysses spacecraft from low earth orbit into their respective interplanetary trajectories has been discussed for the single stage (Centaur) vehicle. The central issue, in the event of spherically distributed injection errors, is what happens to the vehicle? The difficulties addressed in this paper involve the multi-stage problem since both Galileo and Ulysses will be utilizing the two-stage IUS system. Ulysses will also include a third stage: the PAM-S. The solution is expressed in terms of probabilities for total percentage of escape, orbit decay and reentry trajectories. Analytic solutions are found for Hill's Equations of Relative Motion (more recently called Clohessy-Wiltshire Equations) for multi-stage injections. These solutions are interpreted geometrically on the injection sphere. The analytic-geometric models compare well with numerical solutions, provide insight into the behavior of trajectories mapped on the injection sphere and simplify the numerical two-dimensional search for trajectory families.
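    For reference, the Clohessy-Wiltshire (Hill) equations mentioned above can be written as a small ODE system and integrated directly; the paper instead maps analytic solutions onto the injection sphere, and the mean motion and initial relative state below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cw_rhs(t, s, n):
    """Clohessy-Wiltshire (Hill) equations: x radial, y along-track, z cross-track."""
    x, y, z, vx, vy, vz = s
    return [vx, vy, vz,
            3 * n**2 * x + 2 * n * vy,
            -2 * n * vx,
            -n**2 * z]

n = 2 * np.pi / 5400.0                       # mean motion for a ~90-minute orbit (rad/s)
s0 = [100.0, 0.0, 0.0, 0.0, -0.2, 0.05]      # illustrative relative state (m, m/s)
sol = solve_ivp(cw_rhs, (0.0, 5400.0), s0, args=(n,), max_step=10.0)
print(sol.y[:3, -1])                         # relative position after one orbit
```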

  11. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which the initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for linear models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.

  12. Generalized Predictive Control of Dynamic Systems with Rigid-Body Modes

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.

    2013-01-01

    Numerical simulations to assess the effectiveness of Generalized Predictive Control (GPC) for active control of dynamic systems having rigid-body modes are presented. GPC is a linear, time-invariant, multi-input/multi-output predictive control method that uses an ARX model to characterize the system and to design the controller. Although the method can accommodate both embedded (implicit) and explicit feedforward paths for incorporation of disturbance effects, only the case of embedded feedforward, in which the disturbances are assumed to be unknown, is considered here. Results from numerical simulations using mathematical models of both a free-free three-degree-of-freedom mass-spring-dashpot system and the XV-15 tiltrotor research aircraft are presented. In regulation-mode operation, which calls for zero system response in the presence of disturbances, the simulations showed response reductions of nearly 100%. In tracking-mode operation, where the system is commanded to follow a specified path, the GPC controllers produced the desired responses, even in the presence of disturbances.
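    GPC designs its controller from an identified ARX model, and that identification step can be sketched as an ordinary least-squares fit of the ARX coefficients from input/output data. The toy second-order plant below is hypothetical, and the receding-horizon optimization that GPC performs on top of the identified model is not shown.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of an ARX model
       y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n0 = max(na, nb)
    rows = []
    for k in range(n0, len(y)):
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
    return theta[:na], theta[na:]          # AR and input coefficients

rng = np.random.default_rng(1)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(2, 500):                    # toy second-order plant with small noise
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1] + 0.01 * rng.normal()
a, b = fit_arx(y, u)
print(a, b)                                # recovers roughly [1.5, -0.7] and [0.5, ~0]
```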

  13. Numerical modeling of the tensile strength of a biological granular aggregate: Effect of the particle size distribution

    NASA Astrophysics Data System (ADS)

    Heinze, Karsta; Frank, Xavier; Lullien-Pellerin, Valérie; George, Matthieu; Radjai, Farhang; Delenne, Jean-Yves

    2017-06-01

    Wheat grains can be considered a natural cemented granular material. They are milled under high forces to produce food products such as flour. The major part of the grain is the so-called starchy endosperm. It contains stiff starch granules, which show a multi-modal size distribution, and a softer protein matrix that surrounds the granules. Experimental milling studies and numerical simulations go hand in hand to better understand the fragmentation behavior of this biological material and to improve milling performance. We present a numerical study of the effect of the granule size distribution on the strength of such a cemented granular material. Samples with a bi-modal starch granule size distribution were created and subjected to uniaxial tension, using a peridynamics method. We show that, when compared to the effects of starch-protein interface adhesion and voids, the granule size distribution has a limited effect on the samples' yield stress.

  14. Numerical Propulsion System Simulation Architecture

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia G.

    2004-01-01

    The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.

  15. A comprehensive validation toolbox for regional ocean models - Outline, implementation and application to the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Jandt, Simon; Laagemaa, Priidik; Janssen, Frank

    2014-05-01

    The systematic and objective comparison between output from a numerical ocean model and a set of observations, called validation in the context of this presentation, is a beneficial activity at several stages, starting from early steps in model development and ending at the quality control of model-based products delivered to customers. Even though the importance of this kind of validation work is widely acknowledged, it is often not among the most popular tasks in ocean modelling. In order to ease the validation work, a comprehensive toolbox has been developed in the framework of the MyOcean-2 project. The objective of this toolbox is to carry out validation integrating different data sources, e.g. time series at stations, vertical profiles, surface fields or along-track satellite data, with one single program call. The validation toolbox, implemented in MATLAB, features all parts of the validation process - ranging from read-in procedures of datasets to the graphical and numerical output of statistical metrics of the comparison. The basic idea is to have only one well-defined validation schedule for all applications, in which all parts of the validation process are executed. Each part, e.g. read-in procedures, forms a module in which all available functions of this particular part are collected. The interface between the functions, the module and the validation schedule is highly standardized. Functions of a module are set up for certain validation tasks, and new functions can be implemented into the appropriate module without affecting the functionality of the toolbox. The functions are assigned for each validation task in user-specific settings, which are externally stored in so-called namelists and gather all information on the used datasets as well as paths and metadata. In the framework of the MyOcean-2 project the toolbox is frequently used to validate the forecast products of the Baltic Sea Marine Forecasting Centre. In this way, the performance of any new product version is compared with that of the previous version. Although the toolbox has so far been tested mainly for the Baltic Sea, it can easily be adapted to different datasets and parameters, regardless of the geographic region. In this presentation the usability of the toolbox is demonstrated along with several results of the validation process.
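
    As a rough illustration of the kind of model-observation comparison such a toolbox automates, the Python sketch below computes a few common statistical metrics for one co-located time series. The actual toolbox is written in MATLAB and driven by namelists; the function name, metric selection, and example values here are assumptions for illustration only.

```python
# Minimal sketch: bias, RMSE and correlation between model output and observations.
import numpy as np

def validation_metrics(model, obs):
    """Bias, RMSE and Pearson correlation between co-located series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    mask = ~np.isnan(model) & ~np.isnan(obs)      # drop missing observations
    m, o = model[mask], obs[mask]
    return {
        "bias": float(np.mean(m - o)),
        "rmse": float(np.sqrt(np.mean((m - o) ** 2))),
        "corr": float(np.corrcoef(m, o)[0, 1]),
        "n": int(mask.sum()),
    }

# Example: modelled vs observed sea-surface temperature at one station.
model_sst = [6.1, 6.4, 7.0, 7.8, 8.3]
obs_sst   = [6.0, 6.5, 7.2, np.nan, 8.0]
print(validation_metrics(model_sst, obs_sst))
```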

  16. Model-Driven Useware Engineering

    NASA Astrophysics Data System (ADS)

    Meixner, Gerrit; Seissler, Marc; Breiner, Kai

    User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.

  17. Numerical approach to model independently reconstruct f (R ) functions through cosmographic data

    NASA Astrophysics Data System (ADS)

    Pizza, Liberato

    2015-06-01

    The challenging issue of determining the correct f (R ) among several possibilities is revised here by means of numerical reconstructions of the modified Friedmann equations around the redshift interval z ∈[0 ,1 ] . Frequently, a severe degeneracy between f (R ) approaches occurs, since different paradigms correctly explain present time dynamics. To set the initial conditions on the f (R ) functions, we involve the use of the so-called cosmography of the Universe, i.e., the technique of fixing constraints on the observable Universe by comparing expanded observables with current data. This powerful approach is essentially model independent, and correspondingly we got a model-independent reconstruction of f (R (z )) classes within the interval z ∈[0 ,1 ]. To allow the Hubble rate to evolve around z ≤1 , we considered three relevant frameworks of effective cosmological dynamics, i.e., the Λ CDM model, the Chevallier-Polarski-Linder parametrization, and a polynomial approach to dark energy. Finally, cumbersome algebra permits passing from f (z ) to f (R ), and the general outcome of our work is the determination of a viable f (R ) function, which effectively describes the observed Universe dynamics.

  18. Automated red blood cells extraction from holographic images using fully convolutional neural networks.

    PubMed

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-10-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.

  19. Automated red blood cells extraction from holographic images using fully convolutional neural networks

    PubMed Central

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-01-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm. PMID:29082078
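
    The second model combines FCN foreground predictions with a marker-controlled watershed. The sketch below illustrates that post-processing idea on a synthetic probability map: confident foreground pixels become internal markers and the watershed separates touching cells. The thresholds and the test image are assumptions, and the trained FCN itself is not reproduced here.

```python
# Minimal sketch of marker-controlled watershed on an FCN-style probability map.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def split_touching_cells(prob_map, marker_thresh=0.8, mask_thresh=0.5):
    """Separate touching cells in a foreground probability map."""
    mask = prob_map > mask_thresh                      # cell vs background
    markers, _ = ndi.label(prob_map > marker_thresh)   # confident cell cores
    # Flood from the markers over the inverted probability "landscape".
    return watershed(-prob_map, markers, mask=mask)

# Synthetic probability map with two overlapping blobs (illustration only).
yy, xx = np.mgrid[0:100, 0:100]
blob = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 100.0)
prob = np.clip(blob(50, 40) + blob(50, 62), 0.0, 1.0)

labels = split_touching_cells(prob)
print("number of cells found:", labels.max())
```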

  20. Formation factor in Bentheimer and Fontainebleau sandstones: Theory compared with pore-scale numerical simulations

    NASA Astrophysics Data System (ADS)

    Ghanbarian, Behzad; Berg, Carl F.

    2017-09-01

    Accurate quantification of formation resistivity factor F (also called formation factor) provides useful insight into connectivity and pore space topology in fully saturated porous media. In particular the formation factor has been extensively used to estimate permeability in reservoir rocks. One of the widely applied models to estimate F is Archie's law (F = ϕ^(-m), in which ϕ is total porosity and m is cementation exponent) that is known to be valid in rocks with negligible clay content, such as clean sandstones. In this study we compare formation factors determined by percolation and effective-medium theories as well as Archie's law with numerical simulations of electrical resistivity on digital rock models. These digital models represent Bentheimer and Fontainebleau sandstones and are derived either by reconstruction or directly from micro-tomographic images. Results show that the universal quadratic power law from percolation theory accurately estimates the calculated formation factor values in network models over the entire range of porosity. However, it crosses over to the linear scaling from the effective-medium approximation at the porosity of 0.75 in grid models. We also show that the effect of critical porosity, disregarded in Archie's law, is nontrivial, and the Archie model inaccurately estimates the formation factor in low-porosity homogeneous sandstones.
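
    To make the two scalings discussed above concrete, the sketch below evaluates Archie's law F = ϕ^(-m) alongside the universal quadratic percolation form F ∝ (ϕ - ϕc)^(-2). The cementation exponent, critical porosity, and prefactor used here are illustrative assumptions, not fits to the Bentheimer or Fontainebleau data.

```python
# Minimal sketch: Archie's law versus the quadratic percolation scaling.
import numpy as np

def archie(phi, m=2.0):
    """Formation factor from Archie's law, F = phi**(-m)."""
    return phi ** (-m)

def percolation(phi, phi_c=0.02, prefactor=1.0):
    """Universal quadratic percolation scaling above the critical porosity."""
    phi = np.asarray(phi, dtype=float)
    return np.where(phi > phi_c, prefactor * (phi - phi_c) ** (-2.0), np.inf)

for phi in (0.05, 0.10, 0.20):
    print(f"phi={phi:.2f}  Archie F={archie(phi):7.1f}  "
          f"percolation F={float(percolation(phi)):7.1f}")
```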

  1. Injection-Sensitive Mechanics of Hydraulic Fracture Interaction with Discontinuities

    NASA Astrophysics Data System (ADS)

    Chuprakov, D.; Melchaeva, O.; Prioul, R.

    2014-09-01

    We develop a new analytical model, called OpenT, that solves the elasticity problem of a hydraulic fracture (HF) in contact with a pre-existing discontinuity such as a natural fracture (NF) and the condition for HF re-initiation at the NF. The model also accounts for fluid penetration into permeable NFs. For any angle of fracture intersection, the elastic problem of a blunted dislocation discontinuity is solved for the opening and sliding generated at the discontinuity. The sites and orientations of new tensile crack nucleation are determined based on a mixed stress and energy criterion. In the case of tilted fracture intersection, the finite offset of the new crack initiation point along the discontinuity is computed. We show that, aside from known controlling parameters such as the stress contrast, the cohesional and frictional properties of the NFs and the angle of intersection, fluid injection parameters such as the injection rate and the fluid viscosity are of first-order importance for the crossing behavior. The model is compared to three independent laboratory experiments, the analytical criteria of Blanton and extended Renshaw-Pollard, as well as fully coupled numerical simulations. The relative computational efficiency of the OpenT model (compared to the numerical models) makes the model attractive for implementation in modern engineering tools simulating hydraulic fracture propagation in naturally fractured environments.

  2. Optimal and robust control of a class of nonlinear systems using dynamically re-optimised single network adaptive critic design

    NASA Astrophysics Data System (ADS)

    Tiwari, Shivendra N.; Padhi, Radhakant

    2018-01-01

    Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Then, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including a comparison study with the closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.

  3. A Secular Variation Model for Igrf-12 Based on Swarm Data and Inverse Geodynamo Modelling

    NASA Astrophysics Data System (ADS)

    Fournier, A.; Aubert, J.; Erwan, T.

    2014-12-01

    We are proposing a secular variation candidate model for the 12th generation of the international geomagnetic reference field, spanning the years 2015-2020. The novelty of our approach lies in the initialization of a 5-yr long integration of a numerical model of Earth's dynamo by means of inverse geodynamo modelling, as introduced by Aubert (GJI, 2014). This inverse technique combines the information coming from the observations (in the form of an instantaneous estimate of the Gauss coefficients for the magnetic field and its secular variation) with that coming from the multivariate statistics of a free run of a numerical model of the geodynamo. The Gauss coefficients and their error covariance properties are determined from Swarm data along the lines detailed by Thébault et al. (EPS, 2010). The numerical model of the geodynamo is the so-called Coupled Earth Dynamo model (Aubert et al., Nature, 2013), whose variability possesses a strong level of similarity with that of the geomagnetic field. We illustrate and assess the potential of this methodology by applying it to recent time intervals, with an initialization based on CHAMP data, and conclude by presenting our SV candidate, whose initialization is based on the 1st year of Swarm data. This work is supported by the French "Agence Nationale de la Recherche" under the grant ANR-11-BS56-011 (http://avsgeomag.ipgp.fr) and by the CNES. References: Aubert, J., Geophys. J. Int. 197, 1321-1334, 2014, doi: 10.1093/gji/ggu064; Aubert, J., Finlay, C., Fournier, F., Nature 502, 219-223, 2013, doi: 10.1038/nature12574; Thébault, E., A. Chulliat, S. Maus, G. Hulot, B. Langais, A. Chambodut and M. Menvielle, Earth Planets Space, Vol. 62 (No. 10), pp. 753-763, 2010.

  4. On the Use and Validation of Mosaic Heterogeneity in Atmospheric Numerical Models

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Atlas, Robert M. (Technical Monitor)

    2001-01-01

    The mosaic land modeling approach allows for the representation of multiple surface types in a single atmospheric general circulation model grid box. The surface types, collectively called 'tiles', correspond to different sets of surface characteristics (e.g., grass, crop or forest). Typically, the tile-space data are averaged to grid space by weighting the tiles with their fractional cover. While grid-space data are routinely evaluated, little attention has been given to the tile-space data. The present paper explores uses of the tile-space surface data in validation against station observations. The results indicate the limitations that the mosaic heterogeneity parameterization has in reproducing variations observed between stations at the Atmospheric Radiation Measurement Southern Great Plains field site.
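
    The tile-to-grid averaging mentioned above is simply a fractional-cover-weighted mean. A minimal sketch follows, assuming illustrative tile names and numbers rather than the model's actual tiles.

```python
# Minimal sketch: grid-box mean of a surface quantity as a cover-weighted tile average.
import numpy as np

def grid_mean(tile_values, tile_fractions):
    """Area-weighted average of tile-space values onto one grid box."""
    f = np.asarray(tile_fractions, dtype=float)
    v = np.asarray(tile_values, dtype=float)
    assert abs(f.sum() - 1.0) < 1e-6, "fractional covers must sum to 1"
    return float(np.sum(f * v))

# Example: sensible heat flux (W m^-2) for grass, crop and forest tiles.
print(grid_mean([120.0, 90.0, 60.0], [0.5, 0.3, 0.2]))   # -> 99.0
```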

  5. A modified homogeneous relaxation model for CO2 two-phase flow in vapour ejector

    NASA Astrophysics Data System (ADS)

    Haida, M.; Palacz, M.; Smolka, J.; Nowak, A. J.; Hafner, A.; Banasiak, K.

    2016-09-01

    In this study, the homogeneous relaxation model (HRM) for CO2 flow in a two-phase ejector was modified in order to increase the accuracy of the numerical simulations. The two-phase flow model was implemented in the effective computational tool called ejectorPL for fully automated and systematic computations of various ejector shapes and operating conditions. The modification of the HRM consisted of changing the relaxation time and the constants included in the relaxation time equation, based on experimental results under operating conditions typical of supermarket refrigeration systems. The modified HRM was compared to the homogeneous equilibrium model (HEM) results on the basis of the motive nozzle and suction nozzle mass flow rates.
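
    The HRM closes the two-phase flow equations by relaxing the vapour quality toward its equilibrium value over a finite relaxation time. The sketch below integrates that generic relaxation law only; the constant relaxation times and equilibrium quality are illustrative assumptions, whereas the paper's modification concerns the correlation and constants that determine the relaxation time for CO2 ejector conditions.

```python
# Minimal sketch of the generic HRM closure: dx/dt = (x_eq - x) / theta.
import numpy as np

def relax_quality(x0, x_eq, theta, t_end, dt=1e-6):
    """Integrate dx/dt = (x_eq - x)/theta with forward Euler."""
    steps = int(t_end / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for n in range(steps):
        x[n + 1] = x[n] + dt * (x_eq - x[n]) / theta
    return x

# Shorter relaxation time -> the flow stays closer to equilibrium (the HEM limit).
for theta in (1e-3, 1e-4, 1e-5):
    x = relax_quality(x0=0.0, x_eq=0.3, theta=theta, t_end=1e-3)
    print(f"theta={theta:.0e} s  x(t_end)={x[-1]:.3f}")
```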

  6. An efficient approach to the analysis of rail surface irregularities accounting for dynamic train-track interaction and inelastic deformations

    NASA Astrophysics Data System (ADS)

    Andersson, Robin; Torstensson, Peter T.; Kabo, Elena; Larsson, Fredrik

    2015-11-01

    A two-dimensional computational model for assessment of rolling contact fatigue induced by discrete rail surface irregularities, especially in the context of so-called squats, is presented. Dynamic excitation in a wide frequency range is considered in computationally efficient time-domain simulations of high-frequency dynamic vehicle-track interaction accounting for transient non-Hertzian wheel-rail contact. Results from dynamic simulations are mapped onto a finite element model to resolve the cyclic, elastoplastic stress response in the rail. Ratcheting under multiple wheel passages is quantified. In addition, low cycle fatigue impact is quantified using the Jiang-Sehitoglu fatigue parameter. The functionality of the model is demonstrated by numerical examples.

  7. Virtual Power Electronics: Novel Software Tools for Design, Modeling and Education

    NASA Astrophysics Data System (ADS)

    Hamar, Janos; Nagy, István; Funato, Hirohito; Ogasawara, Satoshi; Dranga, Octavian; Nishida, Yasuyuki

    The current paper is dedicated to presenting browser-based, multimedia-rich software tools and e-learning curricula to support the design and modeling process of power electronics circuits and to explain sometimes rather sophisticated phenomena. Two projects will be discussed. The so-called Inetele project is financed by the Leonardo da Vinci program of the European Union (EU). It is a collaborative project between numerous EU universities and institutes to develop a state-of-the-art curriculum in Electrical Engineering. Another cooperative project, with the participation of Japanese, European and Australian institutes, focuses especially on developing e-learning curricula and interactive design and modeling tools, and furthermore on the development of a virtual laboratory. Snapshots from these two projects will be presented.

  8. Edgeworth expansions of stochastic trading time

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; De Schepper, Ann

    2010-08-01

    Under most local and stochastic volatility models, the underlying forward is assumed to be a positive function of a time-changed Brownian motion. This nicely relates the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in the path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.

  9. On the convergence of a fully discrete scheme of LES type to physically relevant solutions of the incompressible Navier-Stokes

    NASA Astrophysics Data System (ADS)

    Berselli, Luigi C.; Spirito, Stefano

    2018-06-01

    Obtaining reliable numerical simulations of turbulent fluids is a challenging problem in computational fluid mechanics. The large eddy simulation (LES) models are efficient tools to approximate turbulent fluids, and an important step in the validation of these models is the ability to reproduce relevant properties of the flow. In this paper, we consider a fully discrete approximation of the Navier-Stokes-Voigt model by an implicit Euler algorithm (with respect to the time variable) and a Fourier-Galerkin method (in the space variables). We prove the convergence to weak solutions of the incompressible Navier-Stokes equations satisfying the natural local entropy condition, hence selecting the so-called physically relevant solutions.

  10. Spatiotemporal variability and sound characterization in silver croaker Plagioscion squamosissimus (Sciaenidae) in the Central Amazon.

    PubMed

    Borie, Alfredo; Mok, Hin-Kiu; Chao, Ning L; Fine, Michael L

    2014-01-01

    The fish family Sciaenidae has numerous species that produce sounds with superfast muscles that vibrate the swimbladder. These muscles form post embryonically and undergo seasonal hypertrophy-atrophy cycles. The family has been the focus of numerous passive acoustic studies to localize spatial and temporal occurrence of spawning aggregations. Fishes produce disturbance calls when hand-held, and males form aggregations in late afternoon and produce advertisement calls to attract females for mating. Previous studies on five continents have been confined to temperate species. Here we examine the calls of the silver croaker Plagioscion squamosissimus, a freshwater equatorial species, which experiences constant photoperiod, minimal temperature variation but seasonal changes in water depth and color, pH and conductivity. Dissections indicate that sonic muscles are present exclusively in males and that muscles are thicker and redder during the mating season. Disturbance calls were recorded in hand-held fish during the low-water mating season and high-water period outside of the mating season. Advertisement calls were recorded from wild fish that formed aggregations in both periods but only during the mating season from fish in large cages. Disturbance calls consist of a series of short individual pulses in mature males. Advertisement calls start with single and paired pulses followed by greater amplitude multi-pulse bursts with higher peak frequencies than in disturbance calls. Advertisement-like calls also occur in aggregations during the off season, but bursts are shorter with fewer pulses. Silver croaker produce complex advertisement calls that vary in amplitude, number of cycles per burst and burst duration of their calls. Unlike temperate sciaenids, which only call during the spawning season, silver croaker produce advertisement calls in both seasons. Sonic muscles are thinner, and bursts are shorter than at the spawning peak, but males still produce complex calls outside of the mating season.

  11. Spatiotemporal Variability and Sound Characterization in Silver Croaker Plagioscion squamosissimus (Sciaenidae) in the Central Amazon

    PubMed Central

    Borie, Alfredo; Mok, Hin-Kiu; Chao, Ning L.; Fine, Michael L.

    2014-01-01

    Background The fish family Sciaenidae has numerous species that produce sounds with superfast muscles that vibrate the swimbladder. These muscles form post embryonically and undergo seasonal hypertrophy-atrophy cycles. The family has been the focus of numerous passive acoustic studies to localize spatial and temporal occurrence of spawning aggregations. Fishes produce disturbance calls when hand-held, and males form aggregations in late afternoon and produce advertisement calls to attract females for mating. Previous studies on five continents have been confined to temperate species. Here we examine the calls of the silver croaker Plagioscion squamosissimus, a freshwater equatorial species, which experiences constant photoperiod, minimal temperature variation but seasonal changes in water depth and color, pH and conductivity. Methods and Principal Findings Dissections indicate that sonic muscles are present exclusively in males and that muscles are thicker and redder during the mating season. Disturbance calls were recorded in hand-held fish during the low-water mating season and high-water period outside of the mating season. Advertisement calls were recorded from wild fish that formed aggregations in both periods but only during the mating season from fish in large cages. Disturbance calls consist of a series of short individual pulses in mature males. Advertisement calls start with single and paired pulses followed by greater amplitude multi-pulse bursts with higher peak frequencies than in disturbance calls. Advertisement-like calls also occur in aggregations during the off season, but bursts are shorter with fewer pulses. Conclusions and Significance Silver croaker produce complex advertisement calls that vary in amplitude, number of cycles per burst and burst duration of their calls. Unlike temperate sciaenids, which only call during the spawning season, silver croaker produce advertisement calls in both seasons. Sonic muscles are thinner, and bursts are shorter than at the spawning peak, but males still produce complex calls outside of the mating season. PMID:25098347

  12. Analytical investigation of the faster-is-slower effect with a simplified phenomenological model

    NASA Astrophysics Data System (ADS)

    Suzuno, K.; Tomoeda, A.; Ueyama, D.

    2013-11-01

    We analytically investigate the mechanism of the phenomenon called the “faster-is-slower” effect in pedestrian flow studies with a simplified phenomenological model. It is well known that the flow rate is maximized at a certain strength of the driving force in simulations using the social force model when we consider the discharge of self-driven particles through a bottleneck. In this study, we propose a phenomenological and analytical model, based on mechanics-based modeling, to reveal the mechanism of the phenomenon. We show that our reduced system, with only a few degrees of freedom, still has similar properties to the original many-particle system and that the effect comes from the competition between the driving force and the nonlinear friction in the model. Moreover, we qualitatively predict the parameter dependences of the effect from our model, and they are confirmed numerically by using the social force model.

  13. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    NASA Astrophysics Data System (ADS)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.

  14. Thermal runaway and microwave heating in thin cylindrical domains

    NASA Astrophysics Data System (ADS)

    Ward, Michael J.

    2002-04-01

    The behaviour of the solution to two nonlinear heating problems in a thin cylinder of revolution of variable cross-sectional area is analysed using asymptotic and numerical methods. The first problem is to calculate the fold point, corresponding to the onset of thermal runaway, for a steady-state nonlinear elliptic equation that arises in combustion theory. In the limit of thin cylindrical domains, it is shown that the onset of thermal runaway can be delayed when a circular cylindrical domain is perturbed into a dumbbell shape. Numerical values for the fold point for different domain shapes are obtained asymptotically and numerically. The second problem that is analysed is a nonlinear parabolic equation modelling the microwave heating of a ceramic cylinder by a known electric field. The basic model in a thin circular cylindrical domain was analysed in Booty & Kriegsmann (Meth. Appl. Anal. 4 (1994) p. 403). Their analysis is extended to treat thin cylindrical domains of variable cross-section. It is shown that the steady-state and dynamic behaviours of localized regions of high temperature, called hot-spots, depend on a competition between the maxima of the electric field and the maximum deformation of the circular cylinder. For a dumbbell-shaped region it is shown that two disconnected hot-spot regions can occur. Depending on the parameters in the model, these regions ultimately either merge as time increases or else remain disconnected for all time.

  15. Hygrothermal behavior for a clay brick wall

    NASA Astrophysics Data System (ADS)

    Allam, R.; Issaadi, N.; Belarbi, R.; El-Meligy, M.; Altahrany, A.

    2018-06-01

    In Egypt, clay brick is a commonly used building material. By studying the behavior of clay brick walls with respect to heat and moisture transfer, the efficient use of clay brick can be achieved. This research therefore studies the hygrothermal transfer in this material by measuring its hygrothermal properties and performing experimental tests on a constructed clay brick wall. We present a model for the hygrothermal transfer in the clay brick that takes the temperature and the vapor pressure as driving potentials, and we compare the presented model with previous models. By constructing the clay brick wall between two climate chambers with different boundary conditions, we can validate the numerical model and analyze the hygrothermal transfer in the wall. The temperature and relative humidity profiles within the material are measured experimentally and determined numerically. The numerical and experimental results agree well, with a 3.5% difference. The surface boundary conditions, the ground effect, the infiltration from the closed chambers and the material heterogeneity affect the results. Thermal transfer in the clay brick wall reaches steady state much more rapidly than the moisture transfer. This means that using only an external brick wall in a hot climate, without increasing the thermal resistance of the wall, will add energy losses in clay brick buildings. Also, the heat and mass transfer behavior of the wall calls for a three-dimensional analysis of the whole building to capture its real behavior.

  16. Hygrothermal behavior for a clay brick wall

    NASA Astrophysics Data System (ADS)

    Allam, R.; Issaadi, N.; Belarbi, R.; El-Meligy, M.; Altahrany, A.

    2018-01-01

    In Egypt, clay brick is a commonly used building material. By studying the behavior of clay brick walls with respect to heat and moisture transfer, the efficient use of clay brick can be achieved. This research therefore studies the hygrothermal transfer in this material by measuring its hygrothermal properties and performing experimental tests on a constructed clay brick wall. We present a model for the hygrothermal transfer in the clay brick that takes the temperature and the vapor pressure as driving potentials, and we compare the presented model with previous models. By constructing the clay brick wall between two climate chambers with different boundary conditions, we can validate the numerical model and analyze the hygrothermal transfer in the wall. The temperature and relative humidity profiles within the material are measured experimentally and determined numerically. The numerical and experimental results agree well, with a 3.5% difference. The surface boundary conditions, the ground effect, the infiltration from the closed chambers and the material heterogeneity affect the results. Thermal transfer in the clay brick wall reaches steady state much more rapidly than the moisture transfer. This means that using only an external brick wall in a hot climate, without increasing the thermal resistance of the wall, will add energy losses in clay brick buildings. Also, the heat and mass transfer behavior of the wall calls for a three-dimensional analysis of the whole building to capture its real behavior.

  17. The effects on the ionosphere of inertia in the high latitude neutral thermosphere

    NASA Technical Reports Server (NTRS)

    Burns, Alan; Killeen, Timothy

    1993-01-01

    High-latitude ionospheric currents, plasma temperatures, densities, and composition are all affected by the time-dependent response of the neutral thermosphere to ion drag and Joule heating through a variety of complex feedback processes. These processes can best be studied numerically using the appropriate nonlinear numerical modeling techniques in conjunction with experimental case studies. In particular, the basic physics of these processes can be understood using a model, and these concepts can then be applied to more complex realistic situations by developing the appropriate simulations of real events. Finally, these model results can be compared with satellite-derived data from the thermosphere. We used numerical simulations from the National Center for Atmospheric Research Thermosphere/Ionosphere General Circulation Model (NCAR TIGCM) and data from the Dynamics Explorer 2 (DE 2) satellite to study the time-dependent effects of the inertia of the neutral thermosphere on ionospheric currents, plasma temperatures, densities, and composition. One particular case of these inertial effects is the so-called 'fly-wheel effect'. This effect occurs when the neutral gas, which has been spun up by the large ionospheric winds associated with a geomagnetic storm, moves faster than the ions in the period after the end of the main phase of the storm. In these circumstances, the neutral gas can drag the ions along with it. It is this last effect, which is described in the next section, that we have studied under this grant.

  18. SENR /NRPy + : Numerical relativity in singular curvilinear coordinate systems

    NASA Astrophysics Data System (ADS)

    Ruchlin, Ian; Etienne, Zachariah B.; Baumgarte, Thomas W.

    2018-03-01

    We report on a new open-source, user-friendly numerical relativity code package called SENR /NRPy + . Our code extends previous implementations of the BSSN reference-metric formulation to a much broader class of curvilinear coordinate systems, making it ideally suited to modeling physical configurations with approximate or exact symmetries. In the context of modeling black hole dynamics, it is orders of magnitude more efficient than other widely used open-source numerical relativity codes. NRPy + provides a Python-based interface in which equations are written in natural tensorial form and output at arbitrary finite difference order as highly efficient C code, putting complex tensorial equations at the scientist's fingertips without the need for an expensive software license. SENR provides the algorithmic framework that combines the C codes generated by NRPy + into a functioning numerical relativity code. We validate against two other established, state-of-the-art codes, and achieve excellent agreement. For the first time—in the context of moving puncture black hole evolutions—we demonstrate nearly exponential convergence of constraint violation and gravitational waveform errors to zero as the order of spatial finite difference derivatives is increased, while fixing the numerical grids at moderate resolution in a singular coordinate system. Such behavior outside the horizons is remarkable, as numerical errors do not converge to zero near punctures, and all points along the polar axis are coordinate singularities. The formulation addresses such coordinate singularities via cell-centered grids and a simple change of basis that analytically regularizes tensor components with respect to the coordinates. Future plans include extending this formulation to allow dynamical coordinate grids and bispherical-like distribution of points to efficiently capture orbiting compact binary dynamics.

  19. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... as follows: Normal modification Provisioned items order (reserved for exclusive use by the Air Force... supplementary number will be ARZ998, and on down as needed. (6) Each office authorized to issue modifications...) Modifications to calls or orders. Use a two position alpha-numeric suffix, known as a call or order modification...

  20. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... as follows: Normal modification Provisioned items order (reserved for exclusive use by the Air Force... supplementary number will be ARZ998, and on down as needed. (6) Each office authorized to issue modifications...) Modifications to calls or orders. Use a two position alpha-numeric suffix, known as a call or order modification...

  1. Enhancing the Design and Analysis of Flipped Learning Strategies

    ERIC Educational Resources Information Center

    Jenkins, Martin; Bokosmaty, Rena; Brown, Melanie; Browne, Chris; Gao, Qi; Hanson, Julie; Kupatadze, Ketevan

    2017-01-01

    There are numerous calls in the literature for research into the flipped learning approach to match the flood of popular media articles praising its impact on student learning and educational outcomes. This paper addresses those calls by proposing pedagogical strategies that promote active learning in "flipped" approaches and improved…

  2. Effect Sizes in Gifted Education Research

    ERIC Educational Resources Information Center

    Gentry, Marcia; Peters, Scott J.

    2009-01-01

    Recent calls for reporting and interpreting effect sizes have been numerous, with the 5th edition of the "Publication Manual of the American Psychological Association" (2001) calling for the inclusion of effect sizes to interpret quantitative findings. Many top journals have required that effect sizes accompany claims of statistical significance.…

  3. Numerical Analyses of Subsoil-structure Interaction in Original Non-commercial Software based on FEM

    NASA Astrophysics Data System (ADS)

    Cajka, R.; Vaskova, J.; Vasek, J.

    2018-04-01

    For decades, attention has been paid to the interaction of foundation structures with the subsoil and to the development of interaction models. Given that analytical solutions of subsoil-structure interaction can be deduced only for some simple load shapes, analytical solutions are increasingly being replaced by numerical ones (e.g., FEM, the finite element method). Numerical analysis offers greater possibilities for taking into account the real factors involved in subsoil-structure interaction and is also used in this article. This makes it possible to design foundation structures more efficiently while keeping them reliable and secure. Several software packages can currently deal with the interaction of foundations and subsoil. It has been demonstrated that the non-commercial software called MKPINTER (created by Cajka) provides results appropriately close to actual measured values. In MKPINTER, the stress-strain analysis of the elastic half-space is carried out by means of Gauss numerical integration and the Jacobian of the transformation. Input data for the numerical analysis were obtained from an experimental loading test of a concrete slab. The loading was performed using unique experimental equipment constructed on the premises of the Faculty of Civil Engineering, VŠB-TU Ostrava. The purpose of this paper is to compare the resulting deformation of the slab with values observed during the experimental loading test.
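
    To give a flavour of the elastic half-space evaluation mentioned above, the sketch below integrates the classical Boussinesq point-load solution over a uniformly loaded rectangle with Gauss-Legendre quadrature to obtain the vertical stress at depth. This is a generic textbook calculation under assumed load and geometry, not the MKPINTER implementation.

```python
# Minimal sketch: vertical stress under a uniformly loaded rectangle on an
# elastic half-space, by Gauss-Legendre integration of the Boussinesq solution
#   sigma_z(point load Q) = 3*Q*z**3 / (2*pi*R**5).
import numpy as np

def sigma_z(x, y, z, q=100.0, a=2.0, b=3.0, n_gauss=8):
    """Vertical stress (kPa) at (x, y, z) under a q-loaded a-by-b rectangle."""
    nodes, weights = np.polynomial.legendre.leggauss(n_gauss)
    xi = 0.5 * a * (nodes + 1.0)          # map [-1, 1] -> [0, a]
    eta = 0.5 * b * (nodes + 1.0)         # map [-1, 1] -> [0, b]
    total = 0.0
    for wi, xi_i in zip(weights, xi):
        for wj, eta_j in zip(weights, eta):
            R2 = (x - xi_i) ** 2 + (y - eta_j) ** 2 + z ** 2
            total += wi * wj * 3.0 * z ** 3 / (2.0 * np.pi * R2 ** 2.5)
    return q * total * (0.5 * a) * (0.5 * b)   # Jacobian of the mapping

# Stress below the centre of the loaded area at 1 m depth.
print(sigma_z(x=1.0, y=1.5, z=1.0))
```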

  4. Field and bioassay indicators for internal dose intervention therapy.

    PubMed

    Carbaugh, Eugene H

    2007-05-01

    Guidance is presented that is used at the U.S. Department of Energy Hanford Site to identify the potential need for medical intervention in response to intakes of radioactivity. The guidance, based on ICRP Publication 30 models and committed effective dose equivalents of 20 mSv and 200 mSv, is expressed as numerical workplace measurements and derived first-day bioassay results for large intakes. It is used by facility radiation protection staff and on-call dosimetry support staff during the first few days following an intake.

  5. Field and Bioassay Indicators for Internal Dose Intervention Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, Eugene H.

    2007-05-01

    Guidance is presented that is used at the U.S. Department of Energy Hanford Site to identify the potential need for medical intervention in response to intakes of radioactivity. The guidance, based on ICRP Publication 30 models and committed effective dose equivalents of 20 mSv and 200 mSv, is expressed as numerical workplace measurements and derived first-day bioassay results for large intakes. It is used by facility radiation protection staff and on-call dosimetry support staff during the first few days following an intake.

  6. Computations in turbulent flows and off-design performance predictions for airframe-integrated scramjets

    NASA Technical Reports Server (NTRS)

    Goglia, G. L.; Spiegler, E.

    1977-01-01

    The research activity focused on two main tasks: (1) the further development of the SCRAM program and, in particular, the addition of a procedure for modeling the mechanism of the internal adjustment process of the flow, in response to the imposed thermal load across the combustor and (2) the development of a numerical code for the computation of the variation of concentrations throughout a turbulent field, where finite-rate reactions occur. The code also includes an estimation of the effect of the phenomenon called 'unmixedness'.

  7. An Unconditionally Stable, Positivity-Preserving Splitting Scheme for Nonlinear Black-Scholes Equation with Transaction Costs

    PubMed Central

    Guo, Jianqiang; Wang, Wansheng

    2014-01-01

    This paper deals with the numerical analysis of nonlinear Black-Scholes equation with transaction costs. An unconditionally stable and monotone splitting method, ensuring positive numerical solution and avoiding unstable oscillations, is proposed. This numerical method is based on the LOD-Backward Euler method which allows us to solve the discrete equation explicitly. The numerical results for vanilla call option and for European butterfly spread are provided. It turns out that the proposed scheme is efficient and reliable. PMID:24895653

  8. An unconditionally stable, positivity-preserving splitting scheme for nonlinear Black-Scholes equation with transaction costs.

    PubMed

    Guo, Jianqiang; Wang, Wansheng

    2014-01-01

    This paper deals with the numerical analysis of nonlinear Black-Scholes equation with transaction costs. An unconditionally stable and monotone splitting method, ensuring positive numerical solution and avoiding unstable oscillations, is proposed. This numerical method is based on the LOD-Backward Euler method which allows us to solve the discrete equation explicitly. The numerical results for vanilla call option and for European butterfly spread are provided. It turns out that the proposed scheme is efficient and reliable.
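
    The abstracts above describe an LOD-Backward Euler splitting for the nonlinear Black-Scholes equation with transaction costs. As a much simpler point of reference, the sketch below applies a plain backward (implicit) Euler finite-difference step to the linear Black-Scholes equation for a vanilla European call. The grid sizes, parameters, and boundary treatment are illustrative assumptions; this is not the authors' splitting scheme.

```python
# Minimal sketch: implicit Euler finite differences for the *linear*
# Black-Scholes equation, V_t + 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V = 0,
# marched backward in time for a European call payoff.
import numpy as np

def bs_call_implicit_euler(S_max=300.0, K=100.0, r=0.05, sigma=0.2,
                           T=1.0, M=200, N=200):
    S = np.linspace(0.0, S_max, M + 1)
    dS, dt = S_max / M, T / N
    V = np.maximum(S - K, 0.0)                      # payoff at maturity

    # Interior-node coefficients of the linear system (constant in time).
    i = np.arange(1, M)
    a = 0.5 * dt * (sigma**2 * i**2 - r * i)        # couples to V[i-1]
    b = -dt * (sigma**2 * i**2 + r)                 # diagonal contribution
    c = 0.5 * dt * (sigma**2 * i**2 + r * i)        # couples to V[i+1]
    A = np.diag(1.0 - b) - np.diag(a[1:], -1) - np.diag(c[:-1], 1)

    for n in range(N):                              # march from T back to 0
        t = T - (n + 1) * dt
        rhs = V[1:-1].copy()
        # Dirichlet boundaries: V(0,t)=0, V(S_max,t)=S_max - K*exp(-r*(T-t)).
        V[0], V[-1] = 0.0, S_max - K * np.exp(-r * (T - t))
        rhs[0] += a[0] * V[0]
        rhs[-1] += c[-1] * V[-1]
        V[1:-1] = np.linalg.solve(A, rhs)
    return S, V

S, V = bs_call_implicit_euler()
print("price at S=100:", np.interp(100.0, S, V))
```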

  9. Numerical simulations of compact intracloud discharges as the Relativistic Runaway Electron Avalanche-Extensive Air Shower process

    NASA Astrophysics Data System (ADS)

    Arabshahi, S.; Dwyer, J. R.; Nag, A.; Rakov, V. A.; Rassoul, H. K.

    2014-01-01

    Compact intracloud discharges (CIDs) are sources of the powerful, often isolated radio pulses emitted by thunderstorms. The VLF-LF radio pulses are called narrow bipolar pulses (NBPs). It is still not clear how CIDs are produced, but two categories of theoretical models that have previously been considered are the Transmission Line (TL) model and the Relativistic Runaway Electron Avalanche-Extensive Air Showers (RREA-EAS) model. In this paper, we perform numerical calculations of RREA-EASs for various electric field configurations inside thunderstorms. The results of these calculations are compared to results from the other models and to the experimental data. Our analysis shows that different theoretical models predict different fundamental characteristics for CIDs. Therefore, many previously published properties of CIDs are highly model dependent. This is because of the fact that measurements of the radiation field usually provide information about the current moment of the source, and different physical models with different discharge currents could have the same current moment. We have also found that although the RREA-EAS model could explain the current moments of CIDs, the required electric fields in the thundercloud are rather large and may not be realistic. Furthermore, the production of NBPs from RREA-EAS requires very energetic primary cosmic ray particles, not observed in nature. If such ultrahigh-energy particles were responsible for NBPs, then they should be far less frequent than is actually observed.

  10. The Numerical Simulation of Coupling Behavior of Soil with Chemical Pollutant Effects

    NASA Astrophysics Data System (ADS)

    Liu, Z. J.; Li, X. K.; Tang, L. Q.

    2010-05-01

    The coupling behavior of clay plays a role in the integrity of clay barriers used in landfills. The clay barriers are subjected to mechanical and thermal effects coupled with hydraulic behavior; also, if the leachates come into contact with the clay liner, chemical effects may lead to drastic changes in the properties of the clay. A numerical method to simulate the coupling behavior of soil with chemical pollutant effects is presented. Within the framework of the Gens-Alonso model for the constitutive behavior of unsaturated clay presented in reference [1], and building on the work of Wu [2] and Hueckel [3], a constitutive model describing the chemo-thermo-hydro-mechanical (CTHM) coupling behavior of clays in contact with a single organic contaminant is presented. Thermal softening and chemical softening are considered in the presented model. The strain arising in the material due to chemical and thermal effects can be decomposed into two parts: elastic expansion and plastic compaction. The chemical effects are described in terms of the mass concentration of the contaminant. Increases in temperature and contaminant concentration cause decreases of the pre-consolidation pressure and the cohesion; these mechanisms are called thermal softening and chemical softening. The presented coupled CTHM constitutive model has been integrated into a coupled thermo-hydro-mechanical mathematical model including contaminant transport in porous media. To solve the equilibrium equations, a finite element program is developed with a staggered algorithm. The mechanisms taking place due to the coupling behaviour of the clay with a single contaminant solute are analysed with the presented numerical method.

  11. Design of Installing Check Dam Using RAMMS Model in Seorak National Park of South Korea

    NASA Astrophysics Data System (ADS)

    Jun, K.; Tak, W.; JUN, B. H.; Lee, H. J.; KIM, S. D.

    2016-12-01

    As more than 64% of the land in South Korea is mountainous, many regions are exposed to the danger of landslides and debris flow. It is therefore important to understand the behavior of debris flow in mountainous terrain, and various methods and models based on mathematical concepts are being developed for this purpose. The purpose of this study is to investigate regions that experienced debris flow due to the typhoon called Ewiniar and to perform numerical modeling for the design and layout of check dams to reduce debris flow damage. To support the numerical modeling, on-site measurements of the research area were conducted, including topographic investigation, a survey of bridges downstream, and precision LiDAR 3D scanning to compose the basic data for the model. The numerical simulation was performed using the RAMMS (Rapid Mass Movements Simulation) model for the analysis of the debris flow. The model was applied to check dam configurations installed in the upstream, midstream, and downstream sections. Considering the reduction effect on the debris flow, the spreading of the debris flow, and the influence on the bridges downstream, a proper location for the check dam was designated. The numerical results showed that when the check dam was installed in the downstream section, 50 m above the bridge, the reduction effect on the debris flow was higher than when the check dam was installed in other sections. Key words: debris flow, LiDAR, check dam, RAMMS. Acknowledgements: This research was supported by a grant [MPSS-NH-2014-74] through the Disaster and Safety Management Institute funded by the Ministry of Public Safety and Security of the Korean government.

  12. A comparison of the primal and semi-dual variational formats of gradient-extended crystal inelasticity

    NASA Astrophysics Data System (ADS)

    Carlsson, Kristoffer; Runesson, Kenneth; Larsson, Fredrik; Ekh, Magnus

    2017-10-01

    In this paper we discuss issues related to the theoretical as well as the computational format of gradient-extended crystal viscoplasticity. The so-called primal format uses the displacements, the slip of each slip system and the dissipative stresses as the primary unknown fields. An alternative format is coined the semi-dual format, which in addition includes energetic microstresses among the primary unknown fields. We compare the primal and semi-dual variational formats in terms of advantages and disadvantages from modeling as well as numerical viewpoints. Finally, we perform a series of representative numerical tests to investigate the rate of convergence with finite element mesh refinement. In particular, it is shown that the commonly adopted microhard boundary condition poses a challenge in the special case that the slip direction is parallel to a grain boundary.

  13. An improved version of NCOREL: A computer program for 3-D nonlinear supersonic potential flow computations

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1988-01-01

    A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model, and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Other new features include improved internal grid generation using analytic conformal mappings, supported by a simple geometric Harris wave-drag input (originally developed for panel methods) and an internal geometry package.

  14. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but are interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information depends on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that a sensible parameter can be chosen by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process, which is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted

  15. Electrodiffusion Models of Neurons and Extracellular Space Using the Poisson-Nernst-Planck Equations—Numerical Simulation of the Intra- and Extracellular Potential for an Axon Model

    PubMed Central

    Pods, Jurgis; Schönke, Johannes; Bastian, Peter

    2013-01-01

    In neurophysiology, extracellular signals—as measured by local field potentials (LFP) or electroencephalography—are of great significance. Their exact biophysical basis is, however, still not fully understood. We present a three-dimensional model exploiting the cylinder symmetry of a single axon in extracellular fluid based on the Poisson-Nernst-Planck equations of electrodiffusion. The propagation of an action potential along the axonal membrane is investigated by means of numerical simulations. Special attention is paid to the Debye layer, the region with strong concentration gradients close to the membrane, which is explicitly resolved by the computational mesh. We focus on the evolution of the extracellular electric potential. A characteristic up-down-up LFP waveform in the far-field is found. Close to the membrane, the potential shows a more intricate shape. A comparison with the widely used line source approximation reveals similarities and demonstrates the strong influence of membrane currents. However, the electrodiffusion model shows another signal component stemming directly from the intracellular electric field, called the action potential echo. Depending on the neuronal configuration, this might have a significant effect on the LFP. In these situations, electrodiffusion models should be used for quantitative comparisons with experimental data. PMID:23823244
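
    For reference, the Poisson-Nernst-Planck system underlying this class of electrodiffusion models couples a Nernst-Planck continuity equation for each ionic species to the Poisson equation for the electric potential. The standard form is sketched below with generic notation (concentrations as number densities); it is not tied to the particular discretization used in the paper.

```latex
% Poisson-Nernst-Planck system (generic form): for each ionic species i with
% number density c_i, valence z_i and diffusivity D_i, and potential \phi:
\begin{align}
  \frac{\partial c_i}{\partial t} &= \nabla \cdot \left[ D_i \left( \nabla c_i
      + \frac{z_i e}{k_B T}\, c_i \,\nabla \phi \right) \right], \\
  -\nabla \cdot \left( \varepsilon \nabla \phi \right) &= \sum_i z_i e\, c_i + \rho_{\mathrm{fixed}} .
\end{align}
```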

  16. Explicit simulation of ice particle habits in a Numerical Weather Prediction Model

    NASA Astrophysics Data System (ADS)

    Hashino, Tempei

    2007-05-01

    This study developed a scheme for explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and the goal is to retain the growth history of ice particles in the Eulerian dynamics framework. It diagnoses characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterizations and traditional bin models is not necessary, so that errors that stem from the categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequency and growth rate, and simulates the habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in a NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observations. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysical processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.
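
    The stochastic collection equation referred to above is usually written in the following standard continuous form for a number density n(m) and collection kernel K; this is a generic statement, not SHIPS' bin-discretized, habit-dependent version.

```latex
% Stochastic collection (Smoluchowski coagulation) equation for the particle
% mass distribution n(m,t) with collection kernel K(m, m'):
\begin{equation}
  \frac{\partial n(m,t)}{\partial t}
  = \frac{1}{2}\int_0^{m} K(m - m', m')\, n(m - m', t)\, n(m', t)\, \mathrm{d}m'
  - n(m,t)\int_0^{\infty} K(m, m')\, n(m', t)\, \mathrm{d}m' .
\end{equation}
```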

  17. Common Analysis Tool Being Developed for Aeropropulsion: The National Cycle Program Within the Numerical Propulsion System Simulation Environment

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia G.

    1999-01-01

    The NASA Lewis Research Center is developing an environment for analyzing and designing aircraft engines-the Numerical Propulsion System Simulation (NPSS). NPSS will integrate multiple disciplines, such as aerodynamics, structure, and heat transfer, and will make use of numerical "zooming" on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS uses the latest computing and communication technologies to capture complex physical processes in a timely, cost-effective manner. The vision of NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Through the NASA/Industry Cooperative Effort agreement, NASA Lewis and industry partners are developing a new engine simulation called the National Cycle Program (NCP). NCP, which is the first step toward NPSS and is its initial framework, supports the aerothermodynamic system simulation process for the full life cycle of an engine. U.S. aircraft and airframe companies recognize NCP as the future industry standard common analysis tool for aeropropulsion system modeling. The estimated potential payoff for NCP is a $50 million/yr savings to industry through improved engineering productivity.

  18. Unified aeroacoustics analysis for high speed turboprop aerodynamics and noise. Volume 4: Computer user's manual for UAAP turboprop aeroacoustic code

    NASA Astrophysics Data System (ADS)

    Menthe, R. W.; McColgan, C. J.; Ladden, R. M.

    1991-05-01

    The Unified AeroAcoustic Program (UAAP) code calculates the airloads on a single rotation prop-fan, or propeller, and couples these airloads with an acoustic radiation theory, to provide estimates of near-field or far-field noise levels. The steady airloads can also be used to calculate the nonuniform velocity components in the propeller wake. The airloads are calculated using a three dimensional compressible panel method which considers the effects of thin, cambered, multiple blades which may be highly swept. These airloads may be either steady or unsteady. The acoustic model uses the blade thickness distribution and the steady or unsteady aerodynamic loads to calculate the acoustic radiation. The users manual for the UAAP code is divided into five sections: general code description; input description; output description; system description; and error codes. The user must have access to IMSL10 libraries (MATH and SFUN) for numerous calls made for Bessel functions and matrix inversion. For plotted output users must modify the dummy calls to plotting routines included in the code to system-specific calls appropriate to the user's installation.

  19. Unified aeroacoustics analysis for high speed turboprop aerodynamics and noise. Volume 4: Computer user's manual for UAAP turboprop aeroacoustic code

    NASA Technical Reports Server (NTRS)

    Menthe, R. W.; Mccolgan, C. J.; Ladden, R. M.

    1991-01-01

    The Unified AeroAcoustic Program (UAAP) code calculates the airloads on a single rotation prop-fan, or propeller, and couples these airloads with an acoustic radiation theory, to provide estimates of near-field or far-field noise levels. The steady airloads can also be used to calculate the nonuniform velocity components in the propeller wake. The airloads are calculated using a three dimensional compressible panel method which considers the effects of thin, cambered, multiple blades which may be highly swept. These airloads may be either steady or unsteady. The acoustic model uses the blade thickness distribution and the steady or unsteady aerodynamic loads to calculate the acoustic radiation. The users manual for the UAAP code is divided into five sections: general code description; input description; output description; system description; and error codes. The user must have access to IMSL10 libraries (MATH and SFUN) for numerous calls made for Bessel functions and matrix inversion. For plotted output users must modify the dummy calls to plotting routines included in the code to system-specific calls appropriate to the user's installation.

  20. Thermo-Mechanical Modeling of Laser-Mig Hybrid Welding (lmhw)

    NASA Astrophysics Data System (ADS)

    Kounde, Ludovic; Engel, Thierry; Bergheau, Jean-Michel; Boisselier, Didier

    2011-01-01

    Hybrid welding is a combination of two different technologies such as laser (Nd:YAG, CO2…) and electric arc welding (MIG, MAG / TIG …) developed to assemble thick metal sheets (over 3 mm) in order to reduce the required laser power. As a matter of fact, hybrid welding is also used in the welding of thin materials to benefit from the advantages of the process, such as deep penetration and gap tolerance. But the thermo-mechanical behaviour of thin parts assembled by LMHW technology for railway car production is far from being fully controlled; modeling and simulation contribute to the assessment of the causes and effects of the thermo-mechanical behaviour of the assembled parts. In order to reproduce the morphology of the melted and heat-affected zones, two analytic functions were combined to model the heat source of LMHW. On one hand, we applied a so-called "diaboloïd" (DB), which is a modified hyperboloid based on experimental parameters and the analysis of macrographs of the welds. On the other hand, we used a so-called "double ellipsoïd" (DE), which takes the MIG-only contribution, including the bead, into account. The comparison between experimental and numerical results shows good agreement.
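
    The "double ellipsoid" contribution mentioned above is conventionally written in the Goldak form; a generic statement of the front-quadrant power density is given below (the paper's calibrated "diaboloïd" term and its fitted parameters are not reproduced here).

```latex
% Goldak double-ellipsoid heat source: power density in the front quadrant
% (semi-axis c_f); the rear quadrant uses c_r and fraction f_r, with f_f + f_r = 2.
\begin{equation}
  q_f(x, y, z) = \frac{6\sqrt{3}\, f_f\, Q}{a\, b\, c_f\, \pi\sqrt{\pi}}
  \exp\!\left( -\frac{3x^2}{a^2} - \frac{3y^2}{b^2} - \frac{3z^2}{c_f^2} \right)
\end{equation}
```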

  1. Bilinear effect in complex systems

    NASA Astrophysics Data System (ADS)

    Lam, Lui; Bellavia, David C.; Han, Xiao-Pu; Alston Liu, Chih-Hui; Shu, Chang-Qing; Wei, Zhengjin; Zhou, Tao; Zhu, Jichen

    2010-09-01

    The distribution of the lifetime of Chinese dynasties (as well as that of the British Isles and Japan) in a linear Zipf plot is found to consist of two straight lines intersecting at a transition point. This two-section piecewise-linear distribution is different from the power law or the stretched exponent distribution, and is called the Bilinear Effect for short. With assumptions mimicking the organization of ancient Chinese regimes, a 3-layer network model is constructed. Numerical results of this model show the bilinear effect, providing a plausible explanation of the historical data. The bilinear effect in two other social systems is presented, indicating that such a piecewise-linear effect is widespread in social systems.
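
    A minimal sketch of how such a bilinear (two-segment piecewise-linear) Zipf plot can be detected in lifetime data: sort lifetimes by rank, scan candidate breakpoints, fit a straight line on each side, and keep the split with the smallest total squared error. The synthetic data and thresholds below are illustrative assumptions, not the historical dynasty data.

```python
# Toy detection of a "bilinear effect": fit two straight lines to a linear
# rank-vs-lifetime (Zipf) plot and pick the breakpoint with the smallest
# total squared error.  The synthetic data below are illustrative.
import numpy as np

rng = np.random.default_rng(1)

ranks = np.arange(1, 101, dtype=float)
lifetimes = np.where(ranks <= 40, 300.0 - 5.0 * ranks, 160.0 - 1.5 * ranks)
lifetimes = lifetimes + rng.normal(0.0, 3.0, ranks.size)

def two_segment_fit(x, y, min_pts=5):
    """Return (breakpoint index, SSE, (left fit, right fit)) for the best split."""
    best = None
    for k in range(min_pts, len(x) - min_pts):
        p1 = np.polyfit(x[:k], y[:k], 1)
        p2 = np.polyfit(x[k:], y[k:], 1)
        sse = (np.sum((np.polyval(p1, x[:k]) - y[:k]) ** 2)
               + np.sum((np.polyval(p2, x[k:]) - y[k:]) ** 2))
        if best is None or sse < best[1]:
            best = (k, sse, (p1, p2))
    return best

k, sse, (p1, p2) = two_segment_fit(ranks, lifetimes)
print("breakpoint near rank %.0f, slopes %.2f and %.2f" % (ranks[k], p1[0], p2[0]))
```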

  2. Study of photon strength functions via (γ→, γ', γ″) reactions at the γ3-setup

    NASA Astrophysics Data System (ADS)

    Isaak, Johann; Savran, Deniz; Beck, Tobias; Gayer, Udo; Krishichayan; Löher, Bastian; Pietralla, Norbert; Scheck, Marcus; Tornow, Werner; Werner, Volker; Zilges, Andreas

    2018-05-01

    One of the basic ingredients for the modelling of the nucleosynthesis of heavy elements are so-called photon strength functions and the assumption of the Brink-Axel hypothesis. This hypothesis has been studied for many years by numerous experiments using different and complementary reactions. The present manuscript aims to introduce a model-independent approach to study photon strength functions via γ-γ coincidence spectroscopy of photoexcited states in 128Te. The experimental results provide evidence that the photon strength function extracted from photoabsorption cross sections is not in an overall agreement with the one determined from direct transitions to low-lying excited states.

  3. Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian

    2011-01-01

    Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capability of VSP is demonstrated for component-based point definition geometries in a conceptual analysis and design framework.

  4. Hydrodynamics at Mouth of Colorado River, Texas, Project. Numerical Model Investigation

    DTIC Science & Technology

    1992-09-01

    in detail by Thomas and McAnally (1985). 2. The three basic components of the system are as follows: a. "A Two-Dimensional Model for Free Surface...into smaller subareas, which are called elements. The dependent variables (e.g., water-surface elevations and sediment ... $h\,\frac{\partial u}{\partial t} + hu\,\frac{\partial u}{\partial x} + hv\,\frac{\partial u}{\partial y} + \dots + \frac{g u n^2}{(1.486\,h^{1/6})^2}\,(u^2 + v^2)^{1/2} - 2h\omega v \sin\phi = 0$ (A1) ...

  5. 17 CFR 17.00 - Information to be furnished by futures commission merchants, clearing members and foreign brokers.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 2 AN Exchange Code. 30 1 AN Put or Call. 31 5 AN Commodity Code (1). 36 8 AN Expiration Date (1). 44... Commodity Code (2). 71 8 AN Expiration Date (2). 79 2 Reserved. 80 1 AN Record Type. 1 AN—Alpha—numeric, N—Numeric, S—Signed numeric. (2) Field definitions are as follows: (i) Report type. This report format will...

  6. Mathematical modeling of the Stirling engine in terms of applying the composition of the power complex containing non-conventional and renewable energy

    NASA Astrophysics Data System (ADS)

    Gaponenko, A. M.; Kagramanova, A. A.

    2017-11-01

    The possibility of using a Stirling engine with non-conventional and renewable sources of energy is considered, together with the advantages of such use. An expression for the thermal efficiency of the Stirling engine is derived. It is shown that the work per cycle is proportional to the quantity of matter, and hence to the pressure of the working fluid, and to the temperature difference, and depends to a lesser extent on the expansion coefficient; the efficiency of the ideal Stirling cycle coincides with the efficiency of an ideal engine working on the Carnot cycle, which distinguishes the Stirling cycle from the Otto and Diesel cycles underlying internal combustion engines. It has been established that, of the four input parameters, the only parameter which can be easily changed during operation, and which effectively affects the operation of the engine, is the phase difference. The dependence of the work per cycle on the phase difference, called the phase characteristic, visually illustrates the mode of operation of the Stirling engine. A mathematical model of the Schmidt cycle is presented, and the operation of the Stirling engine is analysed in the Schmidt approximation with the aid of numerical analysis. A program was written in MATLAB to conduct the numerical experiments. The results of the numerical experiments are illustrated by graphical charts.
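
    As an illustration of the phase characteristic discussed above, a minimal isothermal (Schmidt-type) sketch is given below: the working-gas pressure follows from the ideal gas law with sinusoidal expansion and compression volumes, and the work per cycle is evaluated numerically while sweeping the phase difference. All numerical values are illustrative assumptions, not the authors' MATLAB program.

```python
# Isothermal (Schmidt-type) Stirling model: work per cycle as a function of
# the phase difference between expansion and compression volumes.
# All parameter values are illustrative assumptions.
import numpy as np

R = 287.0            # specific gas constant of air, J/(kg K)
m_gas = 1.0e-3       # working-gas mass, kg
T_h, T_c = 900.0, 300.0              # expansion / compression space temperatures, K
V_swe, V_swc = 1.0e-4, 1.0e-4        # swept volumes, m^3
V_de, V_dc, V_r = 2e-5, 2e-5, 3e-5   # dead volumes, m^3
T_r = (T_h - T_c) / np.log(T_h / T_c)  # effective regenerator temperature

theta = np.linspace(0.0, 2.0 * np.pi, 2001)

def work_per_cycle(alpha):
    """Numerically evaluate the closed-cycle integral of p dV for phase angle alpha."""
    V_e = 0.5 * V_swe * (1.0 + np.cos(theta)) + V_de          # hot space leads
    V_c = 0.5 * V_swc * (1.0 + np.cos(theta - alpha)) + V_dc  # cold space lags
    p = m_gas * R / (V_e / T_h + V_r / T_r + V_c / T_c)       # isothermal spaces
    V_tot = V_e + V_c + V_r
    return np.trapz(p, V_tot)    # path integral around the closed cycle

alphas = np.radians(np.linspace(10.0, 170.0, 17))
W = np.array([work_per_cycle(a) for a in alphas])
best = alphas[np.argmax(W)]
print("phase angle of maximum work: %.0f deg, W = %.1f J" % (np.degrees(best), W.max()))
```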

  7. Numerical simulation of the nonlinear response of composite plates under combined thermal and acoustic loading

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Moorthy, Jayashree

    1995-01-01

    A time-domain study of the random response of a laminated plate subjected to combined acoustic and thermal loads is carried out. The features of this problem also include given uniform static inplane forces. The formulation takes into consideration a possible initial imperfection in the flatness of the plate. High-decibel sound pressure levels along with high thermal gradients across the thickness drive the plate response into nonlinear regimes. This calls for the analysis to use von Karman large-deflection strain-displacement relationships. A finite element model that combines the von Karman strains with the first-order shear deformation plate theory is developed. The analytical model can accommodate an anisotropic composite laminate built up of uniformly thick layers of orthotropic, linearly elastic laminae. The global system of finite element equations is then reduced to a modal system of equations. Numerical simulation using a single-step algorithm in the time domain is then carried out to solve for the modal coordinates. Nonlinear algebraic equations within each time step are solved by the Newton-Raphson method. The random Gaussian filtered white-noise load is generated using Monte Carlo simulation. The acoustic pressure distribution over the plate is capable of accounting for a grazing-incidence wavefront. Numerical results are presented to study a variety of cases.
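
    A minimal sketch of this kind of time-domain scheme: a single nonlinear modal equation with a cubic (von Karman-type) stiffness term, driven by filtered white noise and advanced with an implicit single-step Newmark (average-acceleration) scheme with a Newton-Raphson iteration inside each step. The modal equation, load, and parameters are illustrative assumptions, not the paper's finite element model.

```python
# Time-domain response of a single nonlinear mode with a cubic (von Karman
# type) stiffness term under random loading, advanced with an implicit
# single-step Newmark (average acceleration) scheme and a Newton-Raphson
# iteration within each time step.  All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)

omega = 2.0 * np.pi * 50.0    # modal frequency, rad/s
zeta = 0.02                   # modal damping ratio
beta = 1.0e9                  # cubic stiffness coefficient
dt = 1.0e-4
n_steps = 20000

# Lightly smoothed Gaussian white-noise modal load (per unit modal mass).
f = rng.normal(0.0, 5.0e3, n_steps + 1)
f = np.convolve(f, np.ones(5) / 5.0, mode="same")

q = np.zeros(n_steps + 1)
v = 0.0
a = f[0]                      # consistent initial acceleration (q = v = 0)

for n in range(n_steps):
    q_new = q[n]                                    # predictor
    for _ in range(20):                             # Newton-Raphson loop
        a_new = 4.0 / dt**2 * (q_new - q[n] - dt * v) - a
        v_new = v + 0.5 * dt * (a + a_new)
        res = (a_new + 2.0 * zeta * omega * v_new
               + omega**2 * q_new + beta * q_new**3 - f[n + 1])
        jac = (4.0 / dt**2 + 2.0 * zeta * omega * (2.0 / dt)
               + omega**2 + 3.0 * beta * q_new**2)
        dq = -res / jac
        q_new += dq
        if abs(dq) < 1e-15:
            break
    a_new = 4.0 / dt**2 * (q_new - q[n] - dt * v) - a
    v = v + 0.5 * dt * (a + a_new)
    a = a_new
    q[n + 1] = q_new

print("RMS modal displacement:", q.std())
```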

  8. Numerical solution of stiff systems of ordinary differential equations with applications to electronic circuits

    NASA Technical Reports Server (NTRS)

    Rosenbaum, J. S.

    1971-01-01

    Systems of ordinary differential equations in which the magnitudes of the eigenvalues (or time constants) vary greatly are commonly called stiff. Such systems of equations arise in nuclear reactor kinetics, the flow of chemically reacting gas, dynamics, control theory, circuit analysis and other fields. The research reported develops an A-stable numerical integration technique for solving stiff systems of ordinary differential equations. The method, which is called the generalized trapezoidal rule, is a modification of the trapezoidal rule. However, the method is computationally more efficient than the trapezoidal rule when the almost-discontinuous segments of the solution are being calculated.
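
    For context, the classical (unmodified) trapezoidal rule applied to a stiff linear system looks as follows; the report's generalized trapezoidal rule modifies this basic scheme and is not reproduced here. The test matrix and step size are illustrative assumptions.

```python
# A-stable implicit trapezoidal rule applied to a stiff linear test system
# y' = A y.  The eigenvalues of A differ by several orders of magnitude, which
# would force an explicit method to take prohibitively small steps.
import numpy as np

A = np.array([[-1.0e4, 0.0],
              [1.0, -0.5]])          # stiff: eigenvalues -1e4 and -0.5
y = np.array([1.0, 1.0])
dt = 0.01                            # far larger than 2/|lambda_max| = 2e-4
I = np.eye(2)

# Trapezoidal rule: (I - dt/2 A) y_{n+1} = (I + dt/2 A) y_n
lhs = I - 0.5 * dt * A
rhs_mat = I + 0.5 * dt * A
for n in range(1000):
    y = np.linalg.solve(lhs, rhs_mat @ y)

print("solution at t = 10:", y)      # decays smoothly despite the large step
```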

  9. Evaluating Blended and Flipped Instruction in Numerical Methods at Multiple Engineering Schools

    ERIC Educational Resources Information Center

    Clark, Renee; Kaw, Autar; Lou, Yingyan; Scott, Andrew; Besterfield-Sacre, Mary

    2018-01-01

    With the literature calling for comparisons among technology-enhanced or active-learning pedagogies, a blended versus flipped instructional comparison was made for numerical methods coursework using three engineering schools with diverse student demographics. This study contributes to needed comparisons of enhanced instructional approaches in STEM…

  10. A novel epidemic spreading model with decreasing infection rate based on infection times

    NASA Astrophysics Data System (ADS)

    Huang, Yunhan; Ding, Li; Feng, Yun

    2016-02-01

    A new epidemic spreading model where individuals can be infected repeatedly is proposed in this paper. The infection rate of an individual decreases according to the number of times it has been infected before. This phenomenon may be caused by immunity or heightened alertness of individuals. We introduce a new parameter called the decay factor to evaluate the decrease of the infection rate. Our model bridges the Susceptible-Infected-Susceptible (SIS) model and the Susceptible-Infected-Recovered (SIR) model by this parameter. The proposed model has been studied by Monte Carlo numerical simulation. It is found that the initial infection rate has a greater impact on the peak value than the decay factor. The effect of the decay factor on the final density and the outbreak threshold is dominant but weakens significantly when birth and death rates are considered. Besides, simulation results show that the influence of birth and death rates on the final density is non-monotonic in some circumstances.
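
    A minimal Monte Carlo sketch of the mechanism described above: each individual's susceptibility is reduced by a decay factor raised to the number of previous infections, interpolating between SIS-like behaviour (decay factor 1) and SIR-like behaviour (decay factor 0) in a well-mixed population. The population size, rates, and exact decay law are illustrative assumptions, not the authors' model.

```python
# Well-mixed Monte Carlo epidemic in which the probability of reinfection
# decays with the number of past infections: p_i = beta0 * prevalence * delta**k_i,
# where k_i counts previous infections and delta is the decay factor.
import numpy as np

rng = np.random.default_rng(3)

N = 10_000            # population size
beta0 = 0.3           # baseline per-step infection probability
gamma = 0.1           # per-step recovery probability
delta = 0.5           # decay factor (1 -> SIS-like, 0 -> SIR-like)
steps = 300

infected = np.zeros(N, dtype=bool)
infected[rng.choice(N, 50, replace=False)] = True   # initial seeds
times_infected = infected.astype(int)

prevalence = []
for t in range(steps):
    frac_inf = infected.mean()
    # susceptible individuals are infected with a probability that shrinks
    # with their personal infection history
    p_inf = beta0 * frac_inf * delta ** times_infected
    new_inf = (~infected) & (rng.random(N) < p_inf)
    recover = infected & (rng.random(N) < gamma)
    infected = (infected | new_inf) & ~recover
    times_infected += new_inf
    prevalence.append(infected.mean())

print("peak prevalence:", max(prevalence))
print("final prevalence:", prevalence[-1])
```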

  11. Development and application of a complex numerical model and software for the computation of dose conversion factors for radon progenies.

    PubMed

    Farkas, Árpád; Balásházy, Imre

    2015-04-01

    A more exact determination of dose conversion factors associated with radon progeny inhalation has become possible due to advancements in epidemiological health risk estimates in recent years. The enhancement of computational power and the development of numerical techniques allow computing dose conversion factors with increasing reliability. The objective of this study was to develop an integrated model and software based on a self-developed airway deposition code, the authors' own bronchial dosimetry model and the computational methods accepted by the International Commission on Radiological Protection (ICRP) to calculate dose conversion coefficients for different exposure conditions. The model was tested by its application for exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM(-1) for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named the PM-A model), and 9 and 17 mSv WLM(-1) when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called the PM-B model). User-friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses.

  12. New, Improved Bulk-microphysical Schemes for Studying Precipitation Processes in WRF. Part 1; Comparisons with Other Schemes

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Shi, J.; Chen, S. S.; Lang, S.; Hong, S.-Y.; Thompson, G.; Peters-Lidard, C.; Hou, A.; Braun, S.

    2007-01-01

    Advances in computing power allow atmospheric prediction models to be run at progressively finer scales of resolution, using increasingly more sophisticated physical parameterizations and numerical methods. The representation of cloud microphysical processes is a key component of these models; over the past decade both research and operational numerical weather prediction models have started using more complex microphysical schemes that were originally developed for high-resolution cloud-resolving models (CRMs). A recent report to the United States Weather Research Program (USWRP) Science Steering Committee specifically calls for the replacement of implicit cumulus parameterization schemes with explicit bulk schemes in numerical weather prediction (NWP) as part of a community effort to improve quantitative precipitation forecasts (QPF). An improved Goddard bulk microphysical parameterization is implemented into a state-of-the-art, next-generation Weather Research and Forecasting (WRF) model. High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The 3ICE scheme with a cloud ice-snow-hail configuration led to better agreement with observations in terms of the simulated narrow convective line and rainfall intensity. This is because the 3ICE-hail scheme includes dense precipitating ice (hail) particles with very fast fall speeds (over 10 m/s). For the Atlantic hurricane case, varying the microphysical schemes had no significant impact on the track forecast but did affect the intensity (important for air-sea interaction).

  13. Using SpF to Achieve Petascale for Legacy Pseudospectral Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Jiang, Weiyuan

    2014-01-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF, as well as present preliminary performance results provided by the improved scalability.

  14. Evolution of the single-mode Rayleigh-Taylor instability under the influence of time-dependent accelerations

    NASA Astrophysics Data System (ADS)

    Ramaprabhu, P.; Karkhanis, V.; Banerjee, R.; Varshochi, H.; Khan, M.; Lawrie, A. G. W.

    2016-01-01

    From nonlinear models and direct numerical simulations we report on several findings of relevance to the single-mode Rayleigh-Taylor (RT) instability driven by time-varying acceleration histories. The incompressible, direct numerical simulations (DNSs) were performed in two (2D) and three dimensions (3D), and at a range of density ratios of the fluid combinations (characterized by the Atwood number). We investigated several acceleration histories, including acceleration profiles of the general form g(t) ∼ t^n, with n ≥ 0, and acceleration histories reminiscent of the linear electric motor experiments. For the 2D flow, results from numerical simulations compare well with a 2D potential flow model and solutions to a drag-buoyancy model reported as part of this work. When the simulations are extended to three dimensions, bubble and spike growth rates are in agreement with the so-called level 2 and level 3 models of Mikaelian [K. O. Mikaelian, Phys. Rev. E 79, 065303(R) (2009), 10.1103/PhysRevE.79.065303], and with corresponding 3D drag-buoyancy model solutions derived in this article. Our generalization of the RT problem to study variable g(t) affords us the opportunity to investigate the appropriate scaling for bubble and spike amplitudes under these conditions. We consider two candidates, the displacement Z and width s2, but find the appropriate scaling is dependent on the density ratios between the fluids—at low density ratios, bubble and spike amplitudes are explained by both s2 and Z, while at large density differences the displacement collapses the spike data. Finally, for all the acceleration profiles studied here, spikes enter a free-fall regime at lower Atwood numbers than predicted by all the models.
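
    Drag-buoyancy (buoyancy-drag) models of the type referenced above are commonly written in the following generic form for the bubble front under a time-dependent acceleration g(t) ∝ t^n; coefficient and density-subscript conventions vary between formulations, and the specific 2D/3D coefficients used in the paper are not reproduced here.

```latex
% Generic buoyancy-drag model for a bubble of wavelength \lambda penetrating
% the heavy fluid (density \rho_h) from the light fluid (\rho_l); C_a is an
% added-mass coefficient and C_d a drag coefficient, both geometry-dependent.
\begin{equation}
  (\rho_l + C_a\,\rho_h)\,\frac{\mathrm{d}v_b}{\mathrm{d}t}
  = (\rho_h - \rho_l)\,g(t) - \frac{C_d\,\rho_h}{\lambda}\,v_b^{\,2},
  \qquad g(t) \sim t^{\,n},\ n \ge 0 .
\end{equation}
```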

  15. Prospect of Using Numerical Dynamo Model for Prediction of Geomagnetic Secular Variation

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2003-01-01

    Modeling of the Earth's core has reached a level of maturity where the incorporation of observations into the simulations through data assimilation has become feasible. Data assimilation is a method by which observations of a system are combined with a model output (or forecast) to obtain a best guess of the state of the system, called the analysis. The analysis is then used as an initial condition for the next forecast. By doing assimilation, not only shall we be able to partially predict the secular variation of the core field, we could also use observations to further our understanding of dynamical states in the Earth's core. One of the first steps in the development of an assimilation system is a comparison between the observations and the model solution. The highly turbulent nature of core dynamics, along with the absence of any regular external forcing and constraint (which occurs in atmospheric dynamics, for example), means that short-time comparisons (approx. 1000 years) cannot be made between model and observations. In order to make sensible comparisons, a direct insertion assimilation method has been implemented. In this approach, magnetic field observations at the Earth's surface have been substituted into the numerical model, such that the ratio of the multipole components to the dipole component from observation is adjusted at the core-mantle boundary and extended to the interior of the core, while the total magnetic energy remains unchanged. This adjusted magnetic field is then used as the initial field for a new simulation. In this way, a time-tagged simulation is created which can then be compared directly with observations. We present numerical solutions with and without data insertion and discuss their implications for the development of a more rigorous assimilation system.

  16. Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent

    PubMed Central

    De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle

    2018-01-01

    Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
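
    A toy sketch of the low-precision idea analysed above: model weights are stored as 8-bit integers with a fixed scale, gradients are computed from the dequantized weights, and updates are applied with unbiased stochastic rounding. This is an illustrative single-threaded sketch, not the Buckwild! implementation or the DMGC parameterization; all values below are assumptions.

```python
# Toy low-precision SGD for least-squares regression: weights live in int8
# with a fixed scale, and updates are applied with stochastic rounding so the
# quantization error is unbiased on average.
import numpy as np

rng = np.random.default_rng(4)

# Synthetic linear regression problem.
n, d = 2000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

scale = 1.0 / 32.0                  # value represented by one int8 step
w_q = np.zeros(d, dtype=np.int8)    # low-precision weights
lr = 0.05

def stochastic_round(x):
    """Round to a neighbouring integer, up or down with probability given by the fraction."""
    floor = np.floor(x)
    return (floor + (rng.random(x.shape) < (x - floor))).astype(np.int64)

for step in range(2000):
    i = rng.integers(n)
    w = w_q.astype(np.float64) * scale          # dequantize for the gradient
    grad = (X[i] @ w - y[i]) * X[i]             # stochastic gradient (1 sample)
    update = -lr * grad / scale                 # update expressed in int8 steps
    w_q = np.clip(w_q.astype(np.int64) + stochastic_round(update),
                  -127, 127).astype(np.int8)

w = w_q.astype(np.float64) * scale
print("parameter error:", np.linalg.norm(w - w_true))
```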

  17. Artificial Neural Identification and LMI Transformation for Model Reduction-Based Control of the Buck Switch-Mode Regulator

    NASA Astrophysics Data System (ADS)

    Al-Rabadi, Anas N.

    2009-10-01

    This research introduces a new method of intelligent control of the Buck converter using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then, a numerical algorithm used in robust control, the linear matrix inequality (LMI) optimization technique, is used to determine the permutation matrix [P] so that a complete system transformation {[B˜], [C˜], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.

  18. Modeling colony collapse disorder in honeybees as a contagion.

    PubMed

    Kribs-Zaleta, Christopher M; Mitchell, Christopher

    2014-12-01

    Honeybee pollination accounts annually for over $14 billion in United States agriculture alone. Within the past decade there has been a mysterious mass die-off of honeybees, an estimated 10 million beehives and sometimes as much as 90% of an apiary. There is still no consensus on what causes this phenomenon, called Colony Collapse Disorder, or CCD. Several mathematical models have studied CCD by only focusing on infection dynamics. We created a model to account for both healthy hive dynamics and hive extinction due to CCD, modeling CCD via a transmissible infection brought to the hive by foragers. The system of three ordinary differential equations accounts for multiple hive population behaviors including Allee effects and colony collapse. Numerical analysis leads to critical hive sizes for multiple scenarios and highlights the role of accelerated forager recruitment in emptying hives during colony collapse.

  19. Charge Transfer Inefficiency in Pinned Photodiode CMOS image sensors: Simple Montecarlo modeling and experimental measurement based on a pulsed storage-gate method

    NASA Astrophysics Data System (ADS)

    Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart

    2016-11-01

    The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Montecarlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable to compare transfer efficiency performances for different pixel geometries.
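
    A minimal Monte Carlo sketch in the spirit of the diffusion part of the model described above: photo-generated electrons random-walk along the photodiode until they reach the transfer-gate end, and the time for a given fraction of the charge to transfer is recorded. The geometry, diffusion constant, and absence of potential obstacles are illustrative assumptions, not the authors' calibrated model.

```python
# 1-D Monte Carlo sketch of charge transfer in a pinned photodiode: electrons
# diffuse as random walkers along the PPD length and are collected when they
# reach the transfer-gate end (x = 0).  Values are illustrative only.
import numpy as np

rng = np.random.default_rng(5)

L = 10e-6            # photodiode length, m
D = 35e-4            # electron diffusion coefficient, m^2/s (~35 cm^2/s)
dt = 5e-12           # time step, s
n_e = 5000           # number of simulated electrons
sigma = np.sqrt(2.0 * D * dt)   # RMS step of the random walk

x = rng.uniform(0.0, L, n_e)    # electrons start uniformly along the PPD
collected = np.zeros(n_e, dtype=bool)
t_collect = np.full(n_e, np.nan)

t = 0.0
while collected.mean() < 0.99:          # run until 99% of the charge is out
    t += dt
    active = ~collected
    step = rng.normal(0.0, sigma, active.sum())
    x_new = x[active] + step
    x_new = np.where(x_new > L, 2.0 * L - x_new, x_new)   # reflect at the far end
    x[active] = x_new
    done = x_new <= 0.0                                   # reached the transfer gate
    idx = np.flatnonzero(active)[done]
    collected[idx] = True
    t_collect[idx] = t

print("time to transfer 99%% of the charge: %.1f ns" % (t * 1e9))
print("median collection time: %.1f ns" % (np.nanmedian(t_collect) * 1e9))
```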

  20. A mathematical model of transmission of rice tungro disease by Nephotettix Virescens

    NASA Astrophysics Data System (ADS)

    Blas, Nikki T.; Addawe, Joel M.; David, Guido

    2016-11-01

    One of the major threats in rice agriculture is the Tungro virus, which is transmitted semi-persistently to rice plants via green rice leafhoppers called Nephotettix Virescens. Tungro is a polycyclic and complex disease of rice associated with dual infection by Rice Tungro Bacilliform Virus (RTBV) and Rice Tungro Spherical Virus (RTSV). Interaction of the two viruses results in the degeneration of the host. In this paper, we used a plant-vector system of ordinary differential equations to model the spread of the disease in a model rice field. Parameter values were obtained from studies on the entomology of Nephotettix Virescens and infection rates of RTSV and RTBV. The system was analyzed for equilibrium solutions, and solved numerically for a susceptible rice variety (Taichung Native 1).

  1. Sunspot: A program to model the behavior of hypervelocity impact damaged multilayer insulation in the Sunspot thermal vacuum chamber of Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Rule, W. K.; Hayashida, K. B.

    1992-01-01

    The development of a computer program to predict the degradation of the insulating capabilities of the multilayer insulation (MLI) blanket of Space Station Freedom due to a hypervelocity impact with a space debris particle is described. A finite difference scheme is used for the calculations. The computer program was written in Microsoft BASIC. Also described is a test program that was undertaken to validate the numerical model. Twelve MLI specimens were impacted at hypervelocities with simulated debris particles using a light gas gun at Marshall Space Flight Center. The impact-damaged MLI specimens were then tested for insulating capability in the space environment of the Sunspot thermal vacuum chamber at MSFC. Two undamaged MLI specimens were also tested for comparison with the test results of the damaged specimens. The numerical model was found to adequately predict behavior of the MLI specimens in the Sunspot chamber. A parameter, called diameter ratio, was developed to relate the nominal MLI impact damage to the apparent (for thermal analysis purposes) impact damage based on the hypervelocity impact conditions of a specimen.

  2. Early Earth plume-lid tectonics: A high-resolution 3D numerical modelling approach

    NASA Astrophysics Data System (ADS)

    Fischer, R.; Gerya, T.

    2016-10-01

    Geological-geochemical evidence points towards a higher mantle potential temperature and a different type of tectonics (global plume-lid tectonics) in the early Earth (>3.2 Ga) compared to the present day (global plate tectonics). In order to investigate tectono-magmatic processes associated with plume-lid tectonics and crustal growth under hotter mantle temperature conditions, we conduct a series of 3D high-resolution magmatic-thermomechanical models with the finite-difference code I3ELVIS. No external plate tectonic forces are applied, to isolate 3D effects of various plume-lithosphere and crust-mantle interactions. Results of the numerical experiments show two distinct phases in coupled crust-mantle evolution: (1) a longer (80-100 Myr) and relatively quiet 'growth phase' which is marked by growth of crust and lithosphere, followed by (2) a short (∼20 Myr) and catastrophic 'removal phase', where unstable parts of the crust and mantle lithosphere are removed by eclogitic dripping and later delamination. This modelling suggests that the early Earth plume-lid tectonic regime followed a pattern of episodic growth and removal, also called episodic overturn, with a periodicity of ∼100 Myr.

  3. Efficient computation of the joint sample frequency spectra for multiple populations.

    PubMed

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.

  4. Efficient computation of the joint sample frequency spectra for multiple populations

    PubMed Central

    Kamm, John A.; Terhorst, Jonathan; Song, Yun S.

    2016-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248

  5. Coupled electric fields in photorefractive driven liquid crystal hybrid cells - theory and numerical simulation

    NASA Astrophysics Data System (ADS)

    Moszczyński, P.; Walczak, A.; Marciniak, P.

    2016-12-01

    In previously published articles we described and analysed self-organized light fibres inside a liquid crystal (LC) cell containing a photosensitive polymer (PP) layer. We call such an asymmetric LC cell a hybrid LC cell. A light fibre arises along a laser beam path directed in the plane of the LC cell, which means that the laser beam is parallel to the photosensitive layer. We observed an asymmetric LC cell response to the polarization of an external driving field. The observation was first made for an AC field. For this reason we decided to carry out a detailed study with a DC driving field, to obtain the LC cell response step by step. The LC cell was properly prepared with an isolating layer and removal of residual ions. We show, by means of a physical model as well as a numerical simulation, that the asymmetric LC response strongly depends on the junction barriers between the PP and LC layers. A new parametric model for the junction barrier at the PP/LC boundary is proposed. Such a model is very useful because of the lack of reliable conductivity, charge-carrier and band-structure data for the LC material.

  6. Modelling and scale-up of chemical flooding: Second annual report for the period October 1986--September 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Lake, L.W.; Sepehrnoori, K.

    1988-11-01

    The objective of this research is to develop, validate, and apply a comprehensive chemical flooding simulator for chemical recovery processes involving surfactants, polymers, and alkaline chemicals in various combinations. This integrated program includes components of laboratory experiments, physical property modelling, scale-up theory, and numerical analysis as necessary and integral components of the simulation activity. Developing, testing and applying the chemical flooding simulator (UTCHEM) to a wide variety of laboratory and reservoir problems involving tracers, polymers, polymer gels, surfactants, and alkaline agents has been continued. Improvements in both the physical-chemical and numerical aspects of UTCHEM have been made which enhance its versatility, accuracy and speed. Supporting experimental studies during the past year include relative permeability and trapping of microemulsion, tracer flow studies, oil recovery in cores using alcohol-free surfactant slugs, and microemulsion viscosity measurements. These have enabled model improvement and simulator testing. Another code called PROPACK has also been developed which is used as a preprocessor for UTCHEM. Specifically, it is used to evaluate input to UTCHEM by computing and plotting key physical properties such as phase behavior and interfacial tension.

  7. Numerical Analysis of the Dynamics of Nonlinear Solids and Structures

    DTIC Science & Technology

    2008-08-01

    to arrive at a new numerical scheme that exhibits rigorously the dissipative character of the so-called canonical free energy characteristic of...UCLA), February 14 2006. 5. "Numerical Integration of the Nonlinear Dynamics of Elastoplastic Solids," keynote lecture, 3rd European Conference on...Computational Mechanics (ECCM 3), Lisbon, Portugal, June 5-9 2006. 6. "Energy-Momentum Schemes for Finite Strain Plasticity," keynote lecture, 7th

  8. Comparison of theory and direct numerical simulations of drag reduction by rodlike polymers in turbulent channel flows.

    PubMed

    Benzi, Roberto; Ching, Emily S C; De Angelis, Elisabetta; Procaccia, Itamar

    2008-04-01

    Numerical simulations of turbulent channel flows, with or without additives, are limited in the extent of the Reynolds number (Re) and Deborah number (De). The comparison of such simulations to theories of drag reduction, which are usually derived for asymptotically high Re and De, calls for some care. In this paper we present a study of drag reduction by rodlike polymers in a turbulent channel flow using direct numerical simulation and illustrate how these numerical results should be related to the recently developed theory.

  9. A neurophysiologically plausible population code model for feature integration explains visual crowding.

    PubMed

    van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W

    2010-01-22

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.

  10. Algorithms for radiative transfer simulations for aerosol retrieval

    NASA Astrophysics Data System (ADS)

    Mukai, Sonoyo; Sano, Itaru; Nakata, Makiko

    2012-11-01

    Aerosol retrieval work from satellite data, i.e. aerosol remote sensing, is divided into three parts: satellite data analysis, aerosol modeling, and multiple light scattering calculation in the atmosphere model, which is called radiative transfer simulation. The aerosol model is compiled from measurements accumulated over more than ten years by the worldwide aerosol monitoring network (AERONET). The radiative transfer simulations take into account Rayleigh scattering by molecules and Mie scattering by aerosols in the atmosphere, and reflection by the Earth's surface. Thus the aerosol properties are estimated by comparing satellite measurements with the numerical values of radiation simulations in the Earth-atmosphere-surface model. It is reasonable to consider that the precise simulation of multiple light-scattering processes is necessary, and it needs a long computational time, especially in an optically thick atmosphere model. Therefore efficient algorithms for radiative transfer problems are indispensable for retrieving aerosols from space.

  11. Modeling human mobility responses to the large-scale spreading of infectious diseases.

    PubMed

    Meloni, Sandro; Perra, Nicola; Arenas, Alex; Gómez, Sergio; Moreno, Yamir; Vespignani, Alessandro

    2011-01-01

    Current modeling of infectious diseases allows for the study of realistic scenarios that include population heterogeneity, social structures, and mobility processes down to the individual level. The advances in the realism of epidemic description call for the explicit modeling of individual behavioral responses to the presence of disease within modeling frameworks. Here we formulate and analyze a metapopulation model that incorporates several scenarios of self-initiated behavioral changes into the mobility patterns of individuals. We find that prevalence-based travel limitations do not alter the epidemic invasion threshold. Strikingly, we observe in both synthetic and data-driven numerical simulations that when travelers decide to avoid locations with high levels of prevalence, this self-initiated behavioral change may enhance disease spreading. Our results point out that the real-time availability of information on the disease and the ensuing behavioral changes in the population may produce a negative impact on disease containment and mitigation.

  12. Analytical Model for Estimating Terrestrial Cosmic Ray Fluxes Nearly Anytime and Anywhere in the World: Extension of PARMA/EXPACS.

    PubMed

    Sato, Tatsuhiko

    2015-01-01

    By extending our previously established model, here we present a new model called "PHITS-based Analytical Radiation Model in the Atmosphere (PARMA) version 3.0," which can instantaneously estimate terrestrial cosmic ray fluxes of neutrons, protons, ions with charge up to 28 (Ni), muons, electrons, positrons, and photons nearly anytime and anywhere in the Earth's atmosphere. The model comprises numerous analytical functions with parameters whose numerical values were fitted to reproduce the results of the extensive air shower (EAS) simulation performed by Particle and Heavy Ion Transport code System (PHITS). The accuracy of the EAS simulation was well verified using various experimental data, while that of PARMA3.0 was confirmed by the high R2 values of the fit. The models to be used for estimating radiation doses due to cosmic ray exposure, cosmic ray induced ionization rates, and count rates of neutron monitors were validated by investigating their capability to reproduce those quantities measured under various conditions. PARMA3.0 is available freely and is easy to use, as implemented in an open-access software program EXcel-based Program for Calculating Atmospheric Cosmic ray Spectrum (EXPACS). Because of these features, the new version of PARMA/EXPACS can be an important tool in various research fields such as geosciences, cosmic ray physics, and radiation research.

  13. Analytical Model for Estimating Terrestrial Cosmic Ray Fluxes Nearly Anytime and Anywhere in the World: Extension of PARMA/EXPACS

    PubMed Central

    Sato, Tatsuhiko

    2015-01-01

    By extending our previously established model, here we present a new model called “PHITS-based Analytical Radiation Model in the Atmosphere (PARMA) version 3.0,” which can instantaneously estimate terrestrial cosmic ray fluxes of neutrons, protons, ions with charge up to 28 (Ni), muons, electrons, positrons, and photons nearly anytime and anywhere in the Earth’s atmosphere. The model comprises numerous analytical functions with parameters whose numerical values were fitted to reproduce the results of the extensive air shower (EAS) simulation performed by Particle and Heavy Ion Transport code System (PHITS). The accuracy of the EAS simulation was well verified using various experimental data, while that of PARMA3.0 was confirmed by the high R 2 values of the fit. The models to be used for estimating radiation doses due to cosmic ray exposure, cosmic ray induced ionization rates, and count rates of neutron monitors were validated by investigating their capability to reproduce those quantities measured under various conditions. PARMA3.0 is available freely and is easy to use, as implemented in an open-access software program EXcel-based Program for Calculating Atmospheric Cosmic ray Spectrum (EXPACS). Because of these features, the new version of PARMA/EXPACS can be an important tool in various research fields such as geosciences, cosmic ray physics, and radiation research. PMID:26674183

  14. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... agreements using a six position alpha-numeric added to the basic PII number. (2) Position 1. Identify the...) Positions 2 through 3. These are the first two digits in a serial number. They may be either alpha or... orders issued by the office issuing the contract or agreement. Use a four position alpha-numeric call or...

  15. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... agreements using a six position alpha-numeric added to the basic PII number. (2) Position 1. Identify the...) Positions 2 through 3. These are the first two digits in a serial number. They may be either alpha or... orders issued by the office issuing the contract or agreement. Use a four position alpha-numeric call or...

  16. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... agreements using a six position alpha-numeric added to the basic PII number. (2) Position 1. Identify the...) Positions 2 through 3. These are the first two digits in a serial number. They may be either alpha or... orders issued by the office issuing the contract or agreement. Use a four position alpha-numeric call or...

  17. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One of the methods for numerically calculating them is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely the initial point) to low-residual points (referred to as reference points of the focal locus). The method has no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters for obtaining loci with satisfactory completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome its shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; hence all the focal locus segments are calculated separately with the minimum traveltime tree algorithm for tracing rays, by repeatedly assigning the minimum-residual reference point among those that have not been traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method becomes capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.

  18. Numerical and Experimental study of secondary flows in a rotating two-phase flow: the tea leaf paradox

    NASA Astrophysics Data System (ADS)

    Calderer, Antoni; Neal, Douglas; Prevost, Richard; Mayrhofer, Arno; Lawrenz, Alan; Foss, John; Sotiropoulos, Fotis

    2015-11-01

    Secondary flows in a rotating flow in a cylinder, resulting in the so called ``tea leaf paradox'', are fundamental for understanding atmospheric pressure systems, developing techniques for separating red blood cells from the plasma, and even separating coagulated trub in the beer brewing process. We seek to gain deeper insights into this phenomenon by integrating numerical simulations and experiments. We employ the Curvilinear Immersed Boundary method (CURVIB) of Calderer et al. (J. Comp. Physics 2014), which is a two-phase flow solver based on the level set method, to simulate rotating free-surface flow in a cylinder partially filled with water, as in the tea leaf paradox flow. We first demonstrate the validity of the numerical model by simulating a cylinder with a rotating base filled with a single fluid, obtaining results in excellent agreement with available experimental data. Then, we present results for the cylinder case with a free surface, investigate the complex formation of secondary flow patterns, and show comparisons with new experimental data for this flow obtained by LaVision. Computational resources were provided by the Minnesota Supercomputing Institute.

  19. PREFACE: Special section featuring selected papers from the 3rd International Workshop on Numerical Modelling of High Temperature Superconductors Special section featuring selected papers from the 3rd International Workshop on Numerical Modelling of High Temperature Superconductors

    NASA Astrophysics Data System (ADS)

    Granados, Xavier; Sánchez, Àlvar; López-López, Josep

    2012-10-01

    The development of superconducting applications and superconducting engineering requires the support of consistent tools which can provide models for obtaining a good understanding of the behaviour of the systems and for predicting novel features. These models aim to compute the behaviour of superconducting systems, design superconducting devices and systems, and understand and test the behaviour of the superconducting parts. 50 years ago, in 1962, Charles Bean provided the superconducting community with a model efficient enough to allow the computation of the response of a superconductor to external magnetic fields and currents flowing through it, in an understandable way: the so-called critical-state model. Since then, in addition to the pioneering critical-state approach, other tools have been devised for designing operative superconducting systems, allowing integration of the superconducting design in nearly standard electromagnetic computer-aided design systems by modelling the superconducting parts with consideration of time-dependent processes. In April 2012, Barcelona hosted the 3rd International Workshop on Numerical Modelling of High Temperature Superconductors (HTS), the third in a series of workshops started in Lausanne in 2010 and followed by Cambridge in 2011. The workshop reflected the state of the art and the new initiatives of HTS modelling, considering mathematical, physical and technological aspects within a wide and interdisciplinary scope. Superconductor Science and Technology is now publishing a selection of papers from the workshop which have been selected for their high quality. The selection comprises seven papers covering mathematical, physical and technological topics which contribute to an improvement in the development of procedures, understanding of phenomena and development of applications. We hope that they provide a perspective on the relevance and growth that the modelling of HTS superconductors has achieved in the past 25 years.

  20. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who introduced the Gröbner basis. In this method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system frequently consists of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
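
    As a minimal illustration of the differential-elimination step described above, the sketch below uses sympy and a Gröbner basis to eliminate an unobserved variable from a hypothetical two-step cascade model (not the authors' exact system); the resulting relation involves only the observed variable and its derivatives.

    # Minimal sketch of differential elimination via a Groebner basis (sympy).
    # Hypothetical cascade model (not the authors' exact system):
    #   x' = -k1*x,   y' = k1*x - k2*y,   with only y observed.
    # Eliminating x and x' yields an input-output relation in y alone.
    import sympy as sp

    k1, k2, x, x1, y, y1, y2 = sp.symbols('k1 k2 x x1 y y1 y2')  # x1 = x', y1 = y', y2 = y''

    polys = [
        x1 + k1 * x,             # x' + k1*x = 0
        y1 - k1 * x + k2 * y,    # y' - k1*x + k2*y = 0
        y2 - k1 * x1 + k2 * y1,  # y'' - k1*x' + k2*y' = 0 (derivative of the previous)
    ]

    # Lexicographic order with x1, x first pushes them out of part of the basis.
    G = sp.groebner(polys, x1, x, y2, y1, y, k1, k2, order='lex')
    io_relations = [g for g in G.exprs if not g.has(x) and not g.has(x1)]
    print(io_relations)   # expect a multiple of y2 + (k1 + k2)*y1 + k1*k2*y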

  1. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD) which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have recently been derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, actually show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
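
    For readers who want to reproduce the kind of direct integration the mean-field results are compared against, the sketch below integrates a small, noisy, all-to-all coupled FitzHugh-Nagumo network with the Euler-Maruyama scheme; the parameters, coupling form and noise level are illustrative assumptions rather than the paper's settings.

    # Minimal sketch: Euler-Maruyama integration of N noisy, all-to-all coupled
    # FitzHugh-Nagumo neurons (illustrative parameters, not the paper's exact model).
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, dt = 500, 200.0, 0.01
    a, b, eps = 0.7, 0.8, 0.08          # FitzHugh-Nagumo parameters (assumed)
    I_ext, J, sigma = 0.5, 0.1, 0.04    # drive, coupling strength, noise amplitude (assumed)

    v = rng.normal(-1.0, 0.1, N)        # membrane-like variable
    w = rng.normal(-0.5, 0.1, N)        # recovery variable

    steps = int(T / dt)
    mean_v = np.empty(steps)
    for k in range(steps):
        coupling = J * (v.mean() - v)                     # simple mean-field (all-to-all) coupling
        dv = v - v**3 / 3.0 - w + I_ext + coupling
        dw = eps * (v + a - b * w)
        v += dt * dv + sigma * np.sqrt(dt) * rng.standard_normal(N)
        w += dt * dw
        mean_v[k] = v.mean()

    # Population statistics of the kind compared against the mean-field (PPD) solution:
    print("time-averaged population mean:", mean_v[steps // 2:].mean())
    print("variance of population mean:  ", mean_v[steps // 2:].var())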

  2. A Theoretical Model to Predict Both Horizontal Displacement and Vertical Displacement for Electromagnetic Induction-Based Deep Displacement Sensors

    PubMed Central

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

    Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement, and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary appreciably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) is proposed to quantitatively depict the II-type sensor’s mutual inductance properties with respect to the predicted horizontal and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, the NIELA-based mutual inductance and the EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for characterization of II-type sensors. The NIELA model is widely applicable for II-type sensors’ monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467
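
    The NIELA formulation itself is not reproduced in the abstract; as a hedged illustration of the equivalent-loop style of computation it refers to, the sketch below evaluates the mutual inductance of two coaxial circular loops with Maxwell's elliptic-integral formula and shows how it varies with axial separation (the quantity a vertical-displacement measurement would be tied to). The geometry and radii are invented for the example.

    # Generic mutual inductance between two coaxial circular loops (Maxwell's formula),
    # as an illustration of equivalent-loop style computation; NOT the paper's NIELA model.
    import numpy as np
    from scipy.special import ellipk, ellipe

    MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

    def mutual_inductance(r1, r2, d):
        """Mutual inductance [H] of two coaxial loops of radii r1, r2 separated by d [m]."""
        k2 = 4.0 * r1 * r2 / ((r1 + r2) ** 2 + d ** 2)   # elliptic parameter m = k^2
        k = np.sqrt(k2)
        return MU0 * np.sqrt(r1 * r2) * ((2.0 / k - k) * ellipk(k2) - (2.0 / k) * ellipe(k2))

    # Mutual inductance versus axial separation (illustrative radii of 5 cm):
    for d in (0.02, 0.05, 0.10, 0.20):   # separations in metres
        print(f"d = {d:5.2f} m   M = {mutual_inductance(0.05, 0.05, d):.3e} H")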

  3. Rapid prototyping and stereolithography in dentistry

    PubMed Central

    Nayar, Sanjna; Bhuminathan, S.; Bhat, Wasim Manzoor

    2015-01-01

    The term rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large-scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna. PMID:26015715

  4. A theoretical model to predict both horizontal displacement and vertical displacement for electromagnetic induction-based deep displacement sensors.

    PubMed

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

    Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement, and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary appreciably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) is proposed to quantitatively depict the II-type sensor's mutual inductance properties with respect to the predicted horizontal and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, the NIELA-based mutual inductance and the EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for characterization of II-type sensors. The NIELA model is widely applicable for II-type sensors' monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency.

  5. Rapid prototyping and stereolithography in dentistry.

    PubMed

    Nayar, Sanjna; Bhuminathan, S; Bhat, Wasim Manzoor

    2015-04-01

    The term rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large-scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna.

  6. On computing special functions in marine engineering

    NASA Astrophysics Data System (ADS)

    Constantinescu, E.; Bogdan, M.

    2015-11-01

    Important modeling applications in marine engineering lead us to a special class of solutions of difficult differential equations with variable coefficients. In order to solve and implement such models (in wave theory, in acoustics, in hydrodynamics, in electromagnetic waves, but also in many other engineering fields), it is necessary to compute so-called special functions: Bessel functions, modified Bessel functions, spherical Bessel functions, Hankel functions. The aim of this paper is to develop numerical solutions in Matlab for the above-mentioned special functions. Taking into account the main properties of Bessel and modified Bessel functions, we briefly present analytical solutions (where possible) in the form of series. In particular, the behavior of these special functions is studied using Matlab facilities: numerical solution and plotting. Finally, the behavior of the special functions is compared and other directions for investigating the properties of Bessel and spherical Bessel functions are pointed out. The asymptotic forms of Bessel functions and modified Bessel functions allow determination of important properties of these functions. The modified Bessel functions tend to look more like decaying and growing exponentials.
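
    The paper works in Matlab; the equivalent computation is equally straightforward in Python with scipy.special, as in the sketch below, which evaluates ordinary, modified and spherical Bessel functions and checks the large-argument asymptotic forms mentioned above.

    # Computing Bessel and modified Bessel functions (here with scipy.special rather
    # than the Matlab facilities used in the paper) and checking large-x asymptotics.
    import numpy as np
    from scipy.special import jv, iv, kv, spherical_jn

    x = np.linspace(1.0, 30.0, 5)
    nu = 1  # order

    print("J_1(x)      :", jv(nu, x))
    print("I_1(x)      :", iv(nu, x))
    print("K_1(x)      :", kv(nu, x))
    print("j_1(x) sph. :", spherical_jn(nu, x))

    # Asymptotics for large x:
    #   J_nu(x) ~ sqrt(2/(pi x)) * cos(x - nu*pi/2 - pi/4)   (oscillatory, decaying)
    #   I_nu(x) ~ exp(x)/sqrt(2*pi*x)                        (growing exponential)
    #   K_nu(x) ~ sqrt(pi/(2x)) * exp(-x)                    (decaying exponential)
    x_big = 30.0
    print(jv(nu, x_big), np.sqrt(2 / (np.pi * x_big)) * np.cos(x_big - nu * np.pi / 2 - np.pi / 4))
    print(kv(nu, x_big), np.sqrt(np.pi / (2 * x_big)) * np.exp(-x_big))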

  7. CaveMan Enterprise version 1.0 Software Validation and Verification.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, David

    The U.S. Department of Energy Strategic Petroleum Reserve stores crude oil in caverns solution-mined in salt domes along the Gulf Coast of Louisiana and Texas. The CaveMan software program has been used since the late 1990s as one tool to analyze pressure measurements monitored at each cavern. The purpose of this monitoring is to catch potential cavern integrity issues as soon as possible. The CaveMan software was written in Microsoft Visual Basic, and embedded in a Microsoft Excel workbook; this method of running the CaveMan software is no longer sustainable. As such, a new version called CaveMan Enterprise has been developed. CaveMan Enterprise version 1.0 does not have any changes to the CaveMan numerical models. CaveMan Enterprise represents, instead, a change from desktop-managed workbooks to an enterprise framework, moving data management into coordinated databases and porting the numerical modeling codes into the Python programming language. This document provides a report of the code validation and verification testing.

  8. Thermofluid Analysis of Magnetocaloric Refrigeration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdelaziz, Omar; Gluesenkamp, Kyle R; Vineyard, Edward Allan

    While there have been extensive studies on thermofluid characteristics of different magnetocaloric refrigeration systems, a conclusive optimization study using non-dimensional parameters which can be applied to a generic system has not been reported yet. In this study, a numerical model has been developed for optimization of an active magnetic refrigerator (AMR). This model is computationally efficient and robust, making it appropriate for running the thousands of simulations required for parametric study and optimization. The governing equations have been non-dimensionalized and numerically solved using the finite difference method. A parametric study on a wide range of non-dimensional numbers has been performed. While the goal of AMR systems is to improve the performance of competitive parameters including COP, cooling capacity and temperature span, new parameters called AMR performance index-1 have been introduced in order to perform multi-objective optimization and simultaneously exploit all these parameters. The multi-objective optimization is carried out for a wide range of the non-dimensional parameters. The results of this study will provide general guidelines for designing high performance AMR systems.

  9. Stability and bifurcation analysis of three-species predator-prey model with non-monotonic delayed predator response

    NASA Astrophysics Data System (ADS)

    Balilo, Aldrin T.; Collera, Juancho A.

    2018-03-01

    In this paper, we consider a delayed three-species predator-prey model with non-monotonic functional response where two predator populations feed on a single prey population. The response function for both predator populations includes a time delay which represents the gestation period of the predator populations. We call a positive equilibrium solution of the form E*_S = (x*, y*, y*) a symmetric equilibrium. The goal of this paper is to determine the effect of the difference in gestation periods of the predator populations on the local dynamics of symmetric equilibria. Our results include conditions on the existence of equilibrium solutions, and stability and bifurcations of symmetric equilibria as the gestation periods of the predator populations are varied. A numerical bifurcation analysis tool is also used to illustrate our results. A stability switch occurs at a Hopf bifurcation. Moreover, a branch of stable periodic solutions, obtained using numerical continuation, emerges from the Hopf bifurcation. This shows that the predator population with the longer gestation period oscillates higher than the predator population with the shorter gestation period.

  10. Multi-domain boundary element method for axi-symmetric layered linear acoustic systems

    NASA Astrophysics Data System (ADS)

    Reiter, Paul; Ziegelwanger, Harald

    2017-12-01

    Homogeneous porous materials like rock wool or synthetic foam are the main tool for acoustic absorption. The conventional absorbing structure for sound-proofing consists of one or multiple absorbers placed in front of a rigid wall, with or without air gaps in between. Various models exist to describe these so-called multi-layered acoustic systems mathematically for incoming plane waves. However, there is no efficient method to calculate the sound field in a half space above a multi-layered acoustic system for an incoming spherical wave. In this work, an axi-symmetric multi-domain boundary element method (BEM) for absorbing multi-layered acoustic systems and incoming spherical waves is introduced. In the proposed BEM formulation, a complex wave number is used to model absorbing materials as a fluid, and a coordinate transformation is introduced which simplifies the singular integrals of the conventional BEM to non-singular radial and angular integrals. The radial and angular parts are integrated analytically and numerically, respectively. The output of the method can be interpreted as a numerical half-space Green's function for grounds consisting of layered materials.

  11. A theoretical analysis of the electromagnetic environment of the AS330 super Puma helicopter external and internal coupling

    NASA Technical Reports Server (NTRS)

    Flourens, F.; Morel, T.; Gauthier, D.; Serafin, D.

    1991-01-01

    Numerical techniques such as Finite Difference Time Domain (FDTD) computer programs, which were first developed to analyze the external electromagnetic environment of an aircraft during a wave illumination, a lightning event, or any kind of current injection, are now very powerful investigative tools. The program, called GORFF-VE, was extended to compute the inner electromagnetic fields that are generated by the penetration of the outer fields through large apertures made in the all-metallic body. The internal fields can then drive the electrical response of a cable network. The coupling between the inside and the outside of the helicopter is implemented using Huygens' principle. Moreover, the spectacular increase in computer resources, such as calculation speed and memory capacity, allows structures as complex as those of helicopters to be modelled accurately. This numerical model was exploited, first, to analyze the electromagnetic environment of an in-flight helicopter for several injection configurations, and second, to design a coaxial return path to simulate the lightning-aircraft interaction with a strong current injection. The E-field and current mappings are the result of these calculations.

  12. Detailed numerical investigation of the Bohm limit in cosmic ray diffusion theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hussein, M.; Shalchi, A., E-mail: m_hussein@physics.umanitoba.ca, E-mail: andreasm4@yahoo.com

    2014-04-10

    A standard model in cosmic ray diffusion theory is the so-called Bohm limit in which the particle mean free path is assumed to be equal to the Larmor radius. This type of diffusion is often employed to model the propagation and acceleration of energetic particles. However, recent analytical and numerical work has shown that standard Bohm diffusion is not realistic. In the present paper, we perform test-particle simulations to explore particle diffusion in the strong turbulence limit in which the wave field is much stronger than the mean magnetic field. We show that there is indeed a lower limit of the particle mean free path along the mean field. In this limit, the mean free path is directly proportional to the unperturbed Larmor radius, like in the traditional Bohm limit, but it is reduced by the factor δB/B_0, where B_0 is the mean field and δB the turbulent field. Although we focus on parallel diffusion, we also explore diffusion across the mean field in the strong turbulence limit.

  13. Economic Consequence Analysis of Disasters: The ECAT Software Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, Adam; Prager, Fynn; Chen, Zhenhua

    This study develops a methodology for rapidly obtaining approximate estimates of the economic consequences from numerous natural, man-made and technological threats. This software tool is intended for use by various decision makers and analysts to obtain estimates rapidly. It is programmed in Excel and Visual Basic for Applications (VBA) to facilitate its use. This tool is called E-CAT (Economic Consequence Analysis Tool) and accounts for the cumulative direct and indirect impacts (including resilience and behavioral factors that significantly affect base estimates) on the U.S. economy. E-CAT is intended to be a major step toward advancing the current state of economic consequence analysis (ECA) and also contributing to and developing interest in further research into complex but rapid turnaround approaches. The essence of the methodology involves running numerous simulations in a computable general equilibrium (CGE) model for each threat, yielding synthetic data for the estimation of a single regression equation based on the identification of key explanatory variables (threat characteristics and background conditions). This transforms the results of a complex model, which is beyond the reach of most users, into a "reduced form" model that is readily comprehensible. Functionality has been built into E-CAT so that its users can switch various consequence categories on and off in order to create customized profiles of economic consequences of numerous risk events. E-CAT incorporates uncertainty on both the input and output side in the course of the analysis.

  14. Numerical method for computing Maass cusp forms on triply punctured two-sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, K. T.; Kamari, H. M.; Zainuddin, H.

    2014-03-05

    A quantum mechanical system on a punctured surface modeled on hyperbolic space has always been an important subject of research in mathematics and physics. The corresponding quantum system is governed by the Schrödinger equation whose solutions are the Maass waveforms. Spectral studies on these Maass waveforms are known to contain both continuous and discrete eigenvalues. The discrete eigenfunctions are usually called the Maass Cusp Forms (MCF), and their discrete eigenvalues are not known analytically. We introduce a numerical method based on the Hejhal and Then algorithm, using GridMathematica, for computing MCF on a punctured surface with three cusps, namely the triply punctured two-sphere. We also report on a pullback algorithm for the punctured surface and a point locater algorithm to facilitate the complete pullback, which are essential parts of the main algorithm.

  15. Inhibition of stimulated Raman scattering due to the excitation of stimulated Brillouin scattering

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; Yu, Lu-Le; Weng, Su-Ming; Ren, Chuang; Liu, Chuan-Sheng; Sheng, Zheng-Ming

    2017-09-01

    The nonlinear coupling between stimulated Raman scattering (SRS) and stimulated Brillouin scattering (SBS) of an intense laser in underdense plasma is studied theoretically and numerically. Based upon the fluid model, their coupling equations are derived, and a threshold condition on plasma density perturbations due to SBS for the inhibition of SRS is given. Particle-in-cell simulations show that this condition can be achieved easily by SBS in the so-called fluid regime with k_L λ_D < 0.15, where k_L is the Langmuir wave number and λ_D is the Debye length [Kline et al., Phys. Plasmas 13, 055906 (2006)]. SBS can reduce the saturation level of SRS and the temperature of electrons in both homogeneous and inhomogeneous plasma. Numerical simulations also show that this reduced SRS saturation is retained even if the fluid regime condition mentioned above is violated at a later time due to plasma heating.

  16. Memory is relevant in the symmetric phase of the minority game

    NASA Astrophysics Data System (ADS)

    Ho, K. H.; Man, W. C.; Chow, F. K.; Chau, H. F.

    2005-06-01

    The minority game is a simple-minded econophysical model capturing the cooperative behavior among selfish players. Previous investigations, which were based on numerical simulations of up to about 100 players for a certain parameter α in the range 0.1≲α≲1, suggested that memory is irrelevant to the cooperative behavior of the minority game in the so-called symmetric phase. Here, using large-scale numerical simulations of up to about 3000 players in the parameter range 0.01≲α≲1, we show that the mean variance of the attendance in the minority game actually depends on the memory in the symmetric phase. We explain this dependence in the framework of crowd-anticrowd theory. Our findings conclude that one should not overlook the feedback mechanism buried under the correlation in the history time series in the study of the minority game.
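
    A compact simulation of the standard minority game, of the kind such studies are based on, is sketched below; it measures the scaled attendance variance σ²/N for a given memory m and number of agents N. The scales are toy-sized, far below the 3000-player runs reported in the paper.

    # Compact minority-game simulation: measures the attendance variance sigma^2/N
    # for given memory m and N agents (toy scales, far below the paper's runs).
    import numpy as np

    def minority_game(N=301, m=4, S=2, steps=5000, seed=0):
        rng = np.random.default_rng(seed)
        P = 2 ** m                                   # number of possible histories
        strategies = rng.choice([-1, 1], size=(N, S, P))
        scores = np.zeros((N, S))
        history = rng.integers(P)                    # last m outcomes encoded as an integer
        attendance = np.empty(steps)

        for t in range(steps):
            best = scores.argmax(axis=1)                         # each agent's best strategy
            actions = strategies[np.arange(N), best, history]    # chosen actions (+1/-1)
            A = actions.sum()
            attendance[t] = A
            winning = -np.sign(A) if A != 0 else rng.choice([-1, 1])
            scores += (strategies[:, :, history] == winning)     # reward predictors of the minority
            # shift the binary history and append the winning outcome
            history = ((history << 1) & (P - 1)) | (1 if winning > 0 else 0)

        return attendance.var() / N                  # the quantity plotted against alpha = P/N

    print("sigma^2/N at m=4, N=301:", minority_game())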

  17. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
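
    A minimal sketch of the Bayesian step described above is given below: a normal prior on linear basis coefficients, random-walk Metropolis sampling of the posterior, and pointwise credible intervals. A tiny fixed basis stands in for the overcomplete OBSM basis, and all settings are illustrative assumptions.

    # Normal prior on basis coefficients + MCMC posterior samples + credible band.
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic data from an unknown smooth response
    x = np.linspace(0.0, 1.0, 40)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

    # Small basis (columns): constant, x, sin, cos  (stand-in for an overcomplete basis)
    B = np.column_stack([np.ones_like(x), x, np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])
    p = B.shape[1]
    sigma, tau = 0.1, 2.0          # noise std (assumed known) and prior std of coefficients

    def log_post(beta):
        resid = y - B @ beta
        return -0.5 * (resid @ resid) / sigma**2 - 0.5 * (beta @ beta) / tau**2

    beta, lp, samples = np.zeros(p), None, []
    lp = log_post(beta)
    for it in range(20000):
        prop = beta + 0.05 * rng.standard_normal(p)     # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        if it > 5000 and it % 10 == 0:                  # burn-in and thinning
            samples.append(beta.copy())

    samples = np.array(samples)
    pred = samples @ B.T                                # posterior samples of the fitted surface
    lo, hi = np.percentile(pred, [2.5, 97.5], axis=0)   # 95% pointwise credible band
    print("credible band width at x=0.5:", (hi - lo)[x.size // 2])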

  18. Numerical simulations of particle orbits around 2060 Chiron

    NASA Technical Reports Server (NTRS)

    Stern, S. A.; Jackson, A. A.; Boice, D. C.

    1994-01-01

    Scattered light from orbiting or coorbiting dust is a primary signature by which Earth-based observers study the activity and atmosphere of the unusual outer solar system object 2060 Chiron. Therefore, it is important to understand the lifetime, dynamics, and loss rates of dust in its coma. We report here dynamical simulations of particles in Chiron's collisionless coma. The orbits of 17,920 dust particles were numerically integrated under the gravitational influence of Chiron, the Sun, and solar radiation pressure. These simulations show that particles ejected from Chiron are more likely to follow suborbital trajectories, or to escape altogether, than to enter quasistable orbits. Significant orbital lifetimes can only be achieved for very specific launch conditions. These results call into question models of a long-term, bound coma generated by discrete outbursts, and instead suggest that Chiron's coma state is closely coupled to the nearly instantaneous level of Chiron's surface activity.
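
    As a much-simplified illustration of this kind of orbit integration, the sketch below propagates a single dust grain under point-mass gravity of the nucleus plus a constant radiation-pressure acceleration using scipy; the GM, acceleration and launch values are placeholders, not the values used in the study.

    # Much-simplified dust-grain orbit: point-mass gravity of the nucleus plus a
    # constant solar radiation-pressure acceleration along +x.  GM, a_rad and the
    # launch state are placeholders, not the values used in the study.
    import numpy as np
    from scipy.integrate import solve_ivp

    GM = 3.0e8        # nucleus GM [m^3/s^2]                     (illustrative only)
    a_rad = 2.0e-5    # radiation-pressure acceleration [m/s^2]  (illustrative only)

    def rhs(t, s):
        x, y, vx, vy = s
        r3 = (x * x + y * y) ** 1.5
        return [vx, vy, -GM * x / r3 + a_rad, -GM * y / r3]

    s0 = [2.0e5, 0.0, 0.0, 30.0]          # launch 200 km out with a tangential kick
    sol = solve_ivp(rhs, (0.0, 2.0e6), s0, rtol=1e-9, atol=1e-3, max_step=100.0)

    r = np.hypot(sol.y[0], sol.y[1])
    print("min / max distance from nucleus [km]:", r.min() / 1e3, r.max() / 1e3)
    print("still bound at the end of the run?  ", r[-1] < 10.0 * r[0])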

  19. A new numerical method for calculating extrema of received power for polarimetric SAR

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Lu, Z.; Gong, W.

    2009-01-01

    A numerical method called cross-step iteration is proposed to calculate the maximal/minimal received power for polarized imagery based on a target's Kennaugh matrix. This method is much more efficient than the systematic method, which searches for the extrema of received power by varying the polarization ellipse angles of receiving and transmitting polarizations. It is also more advantageous than the Schuler method, which has been adopted by the PolSARPro package, because the cross-step iteration method requires less computation time and can derive both the maximal and minimal received powers, whereas the Schuler method is designed to work out only the maximal received power. The analytical model of received-power optimization indicates that the first eigenvalue of the Kennaugh matrix is the supremum of the maximal received power. The difference between these two parameters reflects the depolarization effect of the target's backscattering, which might be useful for target discrimination. © 2009 IEEE.
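
    The abstract does not give the algorithm in detail; one plausible reading of a cross-step scheme is the alternating update sketched below, in which the received power P = 0.5 * g_r^T K g_t (with Stokes vectors g = [1, s], |s| = 1) is increased by fixing one polarization and solving for the other in closed form at each half-step. The example Kennaugh matrix is arbitrary, and the code is an illustration rather than the authors' implementation.

    # Alternating ("cross-step") search for the maximum received power.
    # For a fixed transmit vector, the optimal receive polarization part is parallel
    # to the vector part of K g_t (and vice versa with K^T), so each half-step is exact.
    import numpy as np

    def cross_step_max(K, iters=100, tol=1e-12):
        s_t = np.array([1.0, 0.0, 0.0])              # initial transmit polarization part
        p_old = -np.inf
        for _ in range(iters):
            v = K @ np.concatenate(([1.0], s_t))     # receive side: maximize over s_r
            s_r = v[1:] / np.linalg.norm(v[1:])
            w = K.T @ np.concatenate(([1.0], s_r))   # transmit side: maximize over s_t
            s_t = w[1:] / np.linalg.norm(w[1:])
            p = 0.5 * np.concatenate(([1.0], s_r)) @ K @ np.concatenate(([1.0], s_t))
            if abs(p - p_old) < tol:
                break
            p_old = p
        return p, s_r, s_t

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    K = 0.5 * (A + A.T)            # placeholder symmetric "Kennaugh" matrix, not real data
    K[0, 0] = abs(K).sum()         # keep the received power positive in this toy example

    p_max, s_r, s_t = cross_step_max(K)
    print("maximal received power:", p_max)
    print("largest eigenvalue    :", np.linalg.eigvalsh(K).max())  # the bound noted in the abstract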

  20. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE PAGES

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    2017-04-12

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.

  1. Re-Gendering the Social Work Curriculum: New Realities and Complexities

    ERIC Educational Resources Information Center

    McPhail, Beverly A.

    2008-01-01

    With the advent of the 2nd wave of the women's movement, numerous voices within social work academia called for the inclusion of gendered content in the curriculum. The subsequent addition of content on women was a pivotal achievement for the social work profession. However, gender is an increasingly slippery concept. A current call for…

  2. The Urban School Reform Opera: The Obstructions to Transforming School Counseling Practices

    ERIC Educational Resources Information Center

    Militello, Matthew; Janson, Christopher

    2014-01-01

    Over the past 20 years, there have been numerous calls to reform the practices of school counselors. Some have situated these calls for school counseling reform within the context of urban schooling. This study examined the practices of school counselors in one urban school district, and how those practices aligned with the school district's…

  3. Some analytical and numerical approaches to understanding trap counts resulting from pest insect immigration.

    PubMed

    Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei

    2015-05-01

    Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and their subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms such as multiplication or immigration. Identification of the true factor causing an increase in trap counts is important as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is actually available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem where different types of boundary forcing describe different scenarios of pest insect immigration, and reveal the corresponding patterns in the trap count growth. Copyright © 2015 Elsevier Inc. All rights reserved.
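
    A one-dimensional finite-difference version of this trapping model is sketched below: the trap is an absorbing boundary, immigration is represented by holding the far boundary at the ambient density, and the trap count is the time-integrated diffusive flux into the trap. The realistic case treated in the paper is two-dimensional; this is only a minimal sketch.

    # Minimal 1D finite-difference diffusion trapping model: absorbing boundary at
    # x = 0 (the trap); trap count = time-integrated diffusive flux into it.
    import numpy as np

    D, L, nx = 1.0, 10.0, 201          # diffusivity, domain length, grid points
    dx = L / (nx - 1)
    dt = 0.25 * dx**2 / D              # explicit-scheme stability limit (with margin)
    u = np.ones(nx)                    # initial population density (uniform)
    u[0] = 0.0                         # absorbing trap boundary
    u[-1] = 1.0                        # far boundary held at ambient density ("immigration")

    trap_count, t, t_end = 0.0, 0.0, 5.0
    while t < t_end:
        flux_in = D * (u[1] - u[0]) / dx                  # diffusive flux into the trap
        trap_count += flux_in * dt
        u[1:-1] += dt * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u[0], u[-1] = 0.0, 1.0                            # re-impose boundary conditions
        t += dt

    print("trap count after t = 5:", trap_count)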

  4. Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model

    NASA Astrophysics Data System (ADS)

    Vila, J.; Fernández-Sáez, J.; Zaera, R.

    2018-04-01

    In this paper we study the coupled axial-transverse nonlinear vibrations of a kind of one-dimensional structured solid by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both linear and nonlinear regimes, reproducing the behavior of the 1D nonlinear discrete model adequately.

  5. Thermoviscoelastic characterization and prediction of Kevlar/epoxy composite laminates

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1990-01-01

    The thermoviscoelastic characterization of Kevlar 49/Fiberite 7714A epoxy composite lamina and the development of a numerical procedure to predict the viscoelastic response of any general laminate constructed from the same material were studied. The four orthotropic material properties, S_11, S_12, S_22, and S_66, were characterized by 20-minute static creep tests on unidirectional (0)_8, (10)_8, and (90)_16 lamina specimens. The Time-Temperature Superposition Principle (TTSP) was used successfully to accelerate the characterization process. A nonlinear constitutive model was developed to describe the stress-dependent viscoelastic response for each of the material properties. A numerical procedure to predict long-term laminate properties from lamina properties (obtained experimentally) was developed. Numerical instabilities and time constraints associated with viscoelastic numerical techniques were discussed and solved. The numerical procedure was incorporated into a user-friendly microcomputer program called the Viscoelastic Composite Analysis Program (VCAP), which is available for IBM PC type computers. The program was designed for ease of use. The final phase involved testing actual laminates constructed from the characterized material, Kevlar/epoxy, at various temperatures and load levels for 4 to 5 weeks. These results were compared with the VCAP program predictions to verify the testing procedure and to check the numerical procedure used in the program. The actual tests and predictions agreed for all test cases, which included 1, 2, 3, and 4 fiber-direction laminates.

  6. Anisotropic mesh adaptation for marine ice-sheet modelling

    NASA Astrophysics Data System (ADS)

    Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier

    2017-04-01

    Improving forecasts of the ice-sheet contribution to sea-level rise requires, amongst other things, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component to obtain reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target for the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice to control the numerical error, and it has some flexibility, such as its ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables allows the number of mesh nodes to be reduced by more than one order of magnitude, for the same numerical accuracy, when compared to uniform mesh refinement. For transient solutions where the GL is moving, we have implemented an algorithm in which the computation is reiterated, allowing the GL displacement to be anticipated and the mesh to be adapted to the transient solution. We discuss the performance and robustness of this algorithm.

  7. Comparison of updated Lagrangian FEM with arbitrary Lagrangian Eulerian method for 3D thermo-mechanical extrusion of a tube profile

    NASA Astrophysics Data System (ADS)

    Kronsteiner, J.; Horwatitsch, D.; Zeman, K.

    2017-10-01

    Thermo-mechanical numerical modelling and simulation of extrusion processes faces several serious challenges. Large plastic deformations in combination with a strong coupling of thermal and mechanical effects lead to a high numerical demand for the solution as well as for the handling of mesh distortions. The two numerical methods presented in this paper also reflect two different ways of dealing with mesh distortions. Lagrangian Finite Element Methods (FEM) tackle distorted elements by building a new mesh (called re-meshing), whereas Arbitrary Lagrangian Eulerian (ALE) methods use an "advection" step to remap the solution from the distorted to the undistorted mesh. Another difference between conventional Lagrangian and ALE methods is the separate treatment of material and mesh in ALE, allowing the definition of individual velocity fields. In theory, an ALE formulation contains, as subsets, both the Eulerian formulation and the Lagrangian description of the material. The investigations presented in this paper deal with the direct extrusion of a tube profile using EN-AW 6082 aluminum alloy and a comparison of experimental with Lagrangian and ALE results. The numerical simulations cover the billet upsetting and last until one-third of the billet length is extruded. A good qualitative correlation between experimental and numerical results was found; however, major differences between the Lagrangian and ALE methods concerning thermo-mechanical coupling lead to deviations in the thermal results.

  8. Numerical simulation of the observed near-surface East India Coastal Current on the continental slope

    NASA Astrophysics Data System (ADS)

    Mukherjee, A.; Shankar, D.; Chatterjee, Abhisek; Vinayachandran, P. N.

    2018-06-01

    We simulate the East India Coastal Current (EICC) using two numerical models (resolution 0.1° × 0.1°), an oceanic general circulation model (OGCM) called Modular Ocean Model and a simpler, linear, continuously stratified (LCS) model, and compare the simulated current with observations from moorings equipped with acoustic Doppler current profilers deployed on the continental slope in the western Bay of Bengal (BoB). We also carry out numerical experiments to analyse the processes. Both models simulate well the annual cycle of the EICC, but the performance degrades for the intra-annual and intraseasonal components. In a model-resolution experiment, both models (run at a coarser resolution of 0.25° × 0.25°) simulate well the currents in the equatorial Indian Ocean (EIO), but the performance of the high-resolution LCS model as well as the coarse-resolution OGCM, which is good in the EICC regime, degrades in the eastern and northern BoB. An experiment on forcing mechanisms shows that the annual EICC is largely forced by the local alongshore winds in the western BoB and remote forcing due to Ekman pumping over the BoB, but forcing from the EIO has a strong impact on the intra-annual EICC. At intraseasonal periods, local (equatorial) forcing dominates in the south (north) because the Kelvin wave propagates equatorward in the western BoB. A stratification experiment with the LCS model shows that changing the background stratification from EIO to BoB leads to a stronger surface EICC owing to strong coupling of higher order vertical modes with wind forcing for the BoB profiles. These high-order modes, which lead to energy propagating down into the ocean in the form of beams, are important only for the current and do not contribute significantly to the sea level.

  9. Piezothermal effect in a spinning gas

    NASA Astrophysics Data System (ADS)

    Geyko, V. I.; Fisch, N. J.

    2016-10-01

    A spinning gas, heated adiabatically through axial compression, is known to exhibit a rotation-dependent heat capacity. However, as equilibrium is approached, an effect is identified here wherein the temperature does not grow homogeneously in the radial direction, but develops a temperature differential with the hottest region on axis, at the maximum of the centrifugal potential energy. This phenomenon, which we call a piezothermal effect, is shown to grow bilinearly with the compression rate and the amplitude of the potential. Numerical simulations confirm a simple model of this effect, which can be generalized to other forms of potential energy and methods of heating.

  10. A Numerical Model for the Computation of Radiance Distributions in Natural Waters with Wind-Roughened Surfaces, Part 2: User’s Guide and Code Listing

    DTIC Science & Technology

    1988-07-01


  11. Development of a numerical model for the electric current in burner-stabilised methane-air flames

    NASA Astrophysics Data System (ADS)

    Speelman, N.; de Goey, L. P. H.; van Oijen, J. A.

    2015-03-01

    This study presents a new model to simulate the electric behaviour of one-dimensional ionised flames and to predict the electric currents in these flames. The model utilises Poisson's equation to compute the electric potential. A multi-component diffusion model, including the influence of an electric field, is used to model the diffusion of neutral and charged species. The model is incorporated into the existing CHEM1D flame simulation software. A comparison between the computed electric currents and experimental values from the literature shows good qualitative agreement for the voltage-current characteristic. Physical phenomena, such as saturation and the diodic effect, are captured by the model. The dependence of the saturation current on the equivalence ratio is also captured well for equivalence ratios between 0.6 and 1.2. Simulations show a clear relation between the saturation current and the total number of charged particles created. The model shows that the potential at which the electric field saturates is strongly dependent on the recombination rate and the diffusivity of the charged particles. The onset of saturation occurs because most created charged particles are withdrawn from the flame and because the electric field effects start dominating over mass based diffusion. It is shown that this knowledge can be used to optimise ionisation chemistry mechanisms. It is shown numerically that the so-called diodic effect is caused primarily by the distance the heavier cations have to travel to the cathode.
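
    The coupled flame model is implemented in CHEM1D and is not reproduced here; the sketch below shows only the Poisson piece, solving for the electric potential on a one-dimensional grid between two electrodes for a prescribed (and purely illustrative) net charge density.

    # The Poisson piece only: solve d^2(phi)/dx^2 = -rho/eps0 on a 1D grid with fixed
    # electrode potentials, by finite differences.  The charge density profile is an
    # illustrative Gaussian, not the flame chemistry of the paper's coupled model.
    import numpy as np

    eps0 = 8.854e-12
    L, n = 0.01, 201                       # 1 cm gap, grid points
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    rho = 1e-6 * np.exp(-((x - 0.002) / 5e-4) ** 2)   # net charge density [C/m^3], assumed

    phi_anode, phi_cathode = 100.0, 0.0    # applied potentials [V], assumed

    # Tridiagonal system for interior nodes: (phi[i-1] - 2*phi[i] + phi[i+1])/dx^2 = -rho/eps0
    A = np.diag(-2.0 * np.ones(n - 2)) + np.diag(np.ones(n - 3), 1) + np.diag(np.ones(n - 3), -1)
    b = -rho[1:-1] * dx**2 / eps0
    b[0] -= phi_anode                      # move known boundary values to the right-hand side
    b[-1] -= phi_cathode
    phi = np.empty(n)
    phi[0], phi[-1] = phi_anode, phi_cathode
    phi[1:-1] = np.linalg.solve(A, b)

    E = -np.gradient(phi, x)               # electric field, which drives the charged-species drift
    print("field at the cathode [V/m]:", E[-1])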

  12. Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth

    NASA Astrophysics Data System (ADS)

    Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.

    2017-12-01

    We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.

  13. Revealing the physical insight of a length-scale parameter in metamaterials by exploiting the variational formulation

    NASA Astrophysics Data System (ADS)

    Abali, B. Emek

    2018-04-01

    For micro-architectured materials with a substructure, called metamaterials, we can perform a direct numerical simulation at the microscale by using classical mechanics. This method is accurate but computationally costly. Instead, a solution of the same problem at the macroscale is possible by means of generalized mechanics. In this case, no detailed modeling of the substructure is necessary; however, new parameters emerge. A physical interpretation of these metamaterial parameters is challenging, leading to a lack of experimental strategies for their determination. In this work, we exploit the variational formulation based on action principles and obtain a direct relation between a parameter used in the kinetic energy and a metamaterial parameter in the case of a viscoelastic model.

  14. Interacting Winds in Eclipsing Symbiotic Systems - The Case Study of EG Andromedae

    NASA Astrophysics Data System (ADS)

    Calabrò, Emanuele

    2014-03-01

    We report the mathematical representation of the so-called eccentric eclipse model, whose numerical solutions can be used to obtain the physical parameters of a quiescent eclipsing symbiotic system. Indeed, the nebular region produced by the collision of the stellar winds should be shifted with respect to the orbital axis because of the orbital motion of the system. This mechanism is not negligible, and it led us to modify the classical concept of an eclipse. The orbital elements obtained from spectroscopy and photometry of the symbiotic EG Andromedae were used to test the eccentric eclipse model. Consistent values for the unknown orbital elements of this symbiotic were obtained. The physical parameters are in agreement with those obtained by means of other simulations for this system.

  15. Coupled modeling and simulation of electro-elastic materials at large strains

    NASA Astrophysics Data System (ADS)

    Possart, Gunnar; Steinmann, Paul; Vu, Duc-Khoi

    2006-03-01

    In recent years, various novel materials have been developed that respond to the application of electrical loading with large strains. An example is the class of so-called electro-active polymers (EAP). Certainly, these materials are technologically very interesting, e.g. for the design of actuators in mechatronics or in the area of artificial tissues. This work focuses on the phenomenological modeling of such materials within the setting of continuum electrodynamics, specialized to the case of electro-hyperelastostatics, and the corresponding computational setting. Thereby a highly nonlinear coupled problem for the deformation and the electric potential has to be considered. The finite element method is applied to solve the underlying equations numerically, and some exemplary applications are presented.

  16. Concordance measure and discriminatory accuracy in transformation cure models.

    PubMed

    Zhang, Yilong; Shao, Yongzhao

    2018-01-01

    Many populations of early-stage cancer patients have non-negligible latent cure fractions that can be modeled using transformation cure models. However, there is a lack of statistical metrics to evaluate the prognostic utility of biomarkers in this context, due to the challenges associated with unknown cure status and heavy censoring. In this article, we develop general concordance measures as evaluation metrics for the discriminatory accuracy of transformation cure models, including the so-called promotion time cure models and mixture cure models. We introduce explicit formulas for consistent estimates of the concordance measures, and show that their asymptotically normal distributions do not depend on the unknown censoring distribution. The estimates work for parametric and semiparametric transformation models as well as transformation cure models. Numerical feasibility of the estimates and their robustness to the censoring distributions are illustrated via simulation studies and demonstrated using a melanoma data set. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
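
    As a baseline for the kind of discrimination metric discussed above, the sketch below computes a generic Harrell-type concordance index for right-censored data; the cure-model-specific concordance measures of the paper, and their censoring-free asymptotics, are more involved than this.

    # Generic Harrell-type concordance (C-index) for right-censored data.
    import numpy as np

    def harrell_c(time, event, risk):
        """time: observed times; event: 1 = event, 0 = censored; risk: predicted risk score."""
        time, event, risk = map(np.asarray, (time, event, risk))
        concordant, comparable = 0.0, 0
        n = len(time)
        for i in range(n):
            if event[i] != 1:
                continue                       # a pair is usable only if the earlier time is an event
            for j in range(n):
                if time[j] > time[i]:
                    comparable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / comparable

    # Toy example: higher risk scores should go with shorter survival
    time  = [2.0, 5.0, 3.5, 8.0, 1.0, 6.0]
    event = [1,   0,   1,   1,   1,   0]
    risk  = [0.9, 0.2, 0.6, 0.1, 0.95, 0.3]
    print("Harrell's C:", harrell_c(time, event, risk))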

  17. Analytical formulae for computing the critical current of an Nb3Sn strand under bending

    NASA Astrophysics Data System (ADS)

    Ciazynski, D.; Torre, A.

    2010-12-01

    Work on bending strain in Nb3Sn wires was initiated in support of the 'react-and-wind' technique used to manufacture superconducting coils. More recently, the bending strains of Nb3Sn strands in cable-in-conduit conductors (CICC) under high Lorentz forces have been thought to be partly responsible for the degradation of conductor performance in terms of critical current and n index, particularly for the international thermonuclear experimental reactor (ITER) conductors. This has led to a new wave of experiments and modelling on this subject. The computation of the current transport capability of an Nb3Sn wire under uniform bending used to be carried out through the so-called Ekin models, and more recently through numerical simulations with electric networks. The flaws of Ekin's models are that they consider only two extreme cases or limits, namely the so-called long twist pitch (LTP) and short twist pitch (STP) cases, and that these models only allow computation of a value for the critical current without reference to the n index of the superconducting filaments (i.e. this index is implicitly assumed to be infinite). Although the numerical models allow a fine description of the wire under operation and can take into account the filament's n index, they need a refined meshing to be accurate enough and their results may be sensitive to boundary conditions (i.e. current injection into the wire); moreover, general intrinsic parameters cannot be easily identified. In this paper, we propose to go clearly beyond Ekin's models by developing, from a homogeneous model and Maxwell's equations, an analytical model to establish the general equation governing the evolution of the electric field inside an Nb3Sn strand under uniform bending (with possible longitudinal strain). Within the usual strand fabrication limits, this equation allows the definition of a single parameter to discriminate between the STP and LTP cases. It is also shown that, whereas Ekin's LTP model corresponds well to a limiting solution of the problem when the transverse resistivity tends toward zero (or the twist pitch tends towards infinity), Ekin's STP model must be modified (improved) when the filament's n index is finite. Since the general equation cannot be solved analytically, we start from the LTP model and develop a first-order correction to be applied when the transverse resistivity, the twist pitch and the filament's n index are finite. Using a simple but realistic law for depicting the strain dependence of the critical current density in the Nb3Sn filaments, we can fully compute the corrected expression and give the result as a general analytical formula for a strand subjected to both bending and compressive/tensile strains. The results are then compared, in two different cases, with those obtained with the numerical code CARMEN (based on an electrical network) developed at CEA. A semi-empirical formula has also been developed to evolve continuously from the LTP limit to the improved STP limit when the transverse resistivity evolves from zero to infinity. The results given by this formula are again compared with the numerical simulations in two different cases. Finally, comparisons with experimental results are discussed.

  18. Being Numerate: What Counts? A Fresh Look at the Basics.

    ERIC Educational Resources Information Center

    Willis, Sue, Ed.

    To be numerate is to be able to function mathematically in one's daily life. The kinds of mathematics skills and understandings necessary to function effectively in daily life are changing. Despite an awareness in Australia of new skills necessary for the information age and calls that the schools should be instrumental in preparing students with…

  19. Direct pore-scale reactive transport modelling of dynamic wettability changes induced by surface complexation

    NASA Astrophysics Data System (ADS)

    Maes, Julien; Geiger, Sebastian

    2018-01-01

    Laboratory experiments have shown that oil production from sandstone and carbonate reservoirs by waterflooding could be significantly increased by manipulating the composition of the injected water (e.g. by lowering the ionic strength). Recent studies suggest that a change of wettability induced by a change in surface charge is likely to be one of the driving mechanisms of the so-called low-salinity effect. In this case, the potential increase of oil recovery during waterflooding at low ionic strength would be strongly impacted by the inter-relations between flow, transport and chemical reaction at the pore-scale. Hence, a new numerical model that includes two-phase flow, solute reactive transport and wettability alteration is implemented based on the Direct Numerical Simulation of the Navier-Stokes equations and surface complexation modelling. Our model is first used to match experimental results of oil droplet detachment from clay patches. We then study the effect of wettability change on the pore-scale displacement for simple 2D calcite micro-models and evaluate the impact of several parameters such as water composition and injected velocity. Finally, we repeat the simulation experiments on a larger and more complex pore geometry representing a carbonate rock. Our simulations highlight two different effects of low-salinity on oil production from carbonate rocks: a smaller number of oil clusters left in the pores after invasion, and a greater number of pores invaded.

  20. DIATOM (Data Initialization and Modification) Library Version 7.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, David A.; Schmitt, Robert G.; Hensinger, David M.

    DIATOM is a library that provides numerical simulation software with a computational geometry front end that can be used to build up complex problem geometries from collections of simpler shapes. The library provides a parser which allows for application-independent geometry descriptions to be embedded in simulation software input decks. Descriptions take the form of collections of primitive shapes and/or CAD input files and material properties that can be used to describe complex spatial and temporal distributions of numerical quantities (often called “database variables” or “fields”) to help define starting conditions for numerical simulations. The capability is designed to be general purpose, robust and computationally efficient. By using a combination of computational geometry and recursive divide-and-conquer approximation techniques, a wide range of primitive shapes are supported to arbitrary degrees of fidelity, controllable through user input and limited only by machine resources. Through the use of call-back functions, numerical simulation software can request the value of a field at any time or location in the problem domain. Typically, this is used only for defining initial conditions, but the capability is not limited to just that use. The most recent version of DIATOM provides the ability to import the solution field from one numerical solution as input for another.
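
    DIATOM's actual interfaces are not reproduced here; the following hypothetical Python sketch only illustrates the general pattern the description refers to, namely primitive shapes composed into a geometry plus a call-back that returns a field ("database variable") value at a requested point. All class and method names are invented.

    ```python
    from dataclasses import dataclass

    # Hypothetical illustration of the "geometry front end + call-back" pattern;
    # the names below are invented and are not DIATOM's API.

    @dataclass
    class Sphere:
        cx: float
        cy: float
        cz: float
        r: float
        def contains(self, x, y, z):
            return (x - self.cx) ** 2 + (y - self.cy) ** 2 + (z - self.cz) ** 2 <= self.r ** 2

    @dataclass
    class Box:
        lo: tuple
        hi: tuple
        def contains(self, x, y, z):
            return all(l <= v <= h for v, l, h in zip((x, y, z), self.lo, self.hi))

    class Geometry:
        """Ordered list of (shape, field values); later insertions override earlier ones."""
        def __init__(self, background):
            self.background = background
            self.inserts = []
        def add(self, shape, **fields):
            self.inserts.append((shape, fields))
        def field(self, name, x, y, z):
            """Call-back a host code would use to query a field value at a point."""
            value = self.background.get(name)
            for shape, fields in self.inserts:
                if shape.contains(x, y, z) and name in fields:
                    value = fields[name]
            return value

    geom = Geometry(background={"density": 1.2e-3, "pressure": 1.0e5})
    geom.add(Box(lo=(0.0, 0.0, 0.0), hi=(1.0, 1.0, 1.0)), density=2.7)
    geom.add(Sphere(0.5, 0.5, 0.5, 0.2), density=8.9, pressure=5.0e5)

    # the simulation would call this while filling its initial-condition arrays
    print(geom.field("density", 0.5, 0.5, 0.6))   # inside the sphere -> 8.9
    print(geom.field("pressure", 0.9, 0.9, 0.9))  # box defines no pressure -> background
    ```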

  1. Numerical simulations of relativistic heavy-ion reactions

    NASA Astrophysics Data System (ADS)

    Daffin, Frank Cecil

    Bulk quantities of nuclear matter exist only in the compact bodies of the universe. There the crushing gravitational forces overcome the Coulomb repulsion in massive stellar collapses. Nuclear matter is subjected to high pressures and temperatures as shock waves propagate and burn their way through stellar cores. The bulk properties of nuclear matter are important parameters in the evolution of these collapses, some of which lead to nucleosynthesis. The nucleus is rich in physical phenomena. Above the Coulomb barrier, complex interactions lead to the distortion of, and as collision energies increase, the destruction of the nuclear volume. Of critical importance to the understanding of these events is an understanding of the aggregate microscopic processes which govern them. In an effort to understand relativistic heavy-ion reactions, the Boltzmann-Uehling-Uhlenbeck (Ueh33) (BUU) transport equation is used as the framework for a numerical model. In the years since its introduction, the numerical model has been instrumental in providing a coherent, microscopic, physical description of these complex, highly non-linear events. This treatise describes the background leading to the creation of our numerical model of the BUU transport equation, details of its numerical implementation, its application to the study of relativistic heavy-ion collisions, and some of the experimental observables used to compare calculated results to empirical results. The formalism evolves the one-body Wigner phase-space distribution of nucleons in time under the influence of a single-particle nuclear mean field interaction and a collision source term. This is essentially the familiar Boltzmann transport equation whose source term has been modified to address the Pauli exclusion principle. Two elements of the model allow extrapolation from the study of nuclear collisions to bulk quantities of nuclear matter: the modification of nucleon scattering cross sections in nuclear matter, and the compressibility of nuclear matter. Both are primarily subject to the short-range portion of the inter-nucleon potential, and do not show strong finite-size effects. To that end, several useful observables are introduced and their behavior, as BUU model parameters are changed, explored. The average, directed, in-plane, transverse momentum distribution in rapidity is the oldest of the observables presented in this work. Its slope at mid-rapidity is called the flow of the event, and well characterizes the interplay of repulsive and attractive elements of the dynamics of the events. The BUU model has been quite successful in its role of illuminating the physics of intermediate energy heavy-ion collisions. Though current numerical implementations suffer from some shortcomings, they have nonetheless served the community well.

  2. ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1990-01-01

    For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
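
    As a minimal illustration of the flux-limiting idea (a high-order face value constrained so that the update stays non-oscillatory), the sketch below advects a step profile in 1D with a classical minmod-limited scheme. This is a generic TVD limiter, not the ULTRA-SHARP universal limiter itself.

    ```python
    import numpy as np

    # Minimal 1D flux-limited advection sketch (minmod limiter) illustrating the general
    # non-oscillatory flux-limiting idea; this is a classical TVD construction, not the
    # ULTRA-SHARP universal limiter described in the abstract.

    def minmod_phi(r):
        return np.maximum(0.0, np.minimum(1.0, r))

    def advect(u, courant, nsteps):
        """Advect u at unit speed on a periodic grid with a limited Lax-Wendroff flux."""
        for _ in range(nsteps):
            du = np.roll(u, -1) - u                        # u[i+1] - u[i]
            du_up = u - np.roll(u, 1)                      # u[i]   - u[i-1]
            denom = np.where(np.abs(du) < 1e-12, 1e-12, du)
            phi = minmod_phi(du_up / denom)                # smoothness-dependent limiter
            face = u + 0.5 * (1.0 - courant) * phi * du    # limited face value u_{i+1/2}
            u = u - courant * (face - np.roll(face, 1))    # conservative update
        return u

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)         # step profile
    u1 = advect(u0.copy(), courant=0.5, nsteps=200)
    print("min/max after advection:", u1.min(), u1.max())  # no spurious over/undershoots
    ```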

  3. Development and Validation of an NPSS Model of a Small Turbojet Engine

    NASA Astrophysics Data System (ADS)

    Vannoy, Stephen Michael

    Recent studies have shown that integrated gas turbine engine (GT)/solid oxide fuel cell (SOFC) systems for combined propulsion and power on aircraft offer a promising method for more efficient onboard electrical power generation. However, it appears that nobody has actually attempted to construct a hybrid GT/SOFC prototype for combined propulsion and electrical power generation. This thesis contributes to this ambition by developing an experimentally validated thermodynamic model of a small gas turbine (~230 N thrust) platform for a bench-scale GT/SOFC system. The thermodynamic model is implemented in a NASA-developed software environment called Numerical Propulsion System Simulation (NPSS). An indoor test facility was constructed to measure the engine's performance parameters: thrust, air flow rate, fuel flow rate, engine speed (RPM), and all axial stage stagnation temperatures and pressures. The NPSS model predictions are compared to the measured performance parameters for steady state engine operation.

  4. Swimming Behavior and Flow Geometry: A Fluid Mechanical Study of the Feeding Currents in Calanoid Copepods

    NASA Astrophysics Data System (ADS)

    Jiang, Houshuo; Meneveau, Charles; Osborn, Thomas R.

    2003-11-01

    Copepods are small crustaceans living in oceans and fresh waters and play an important role in the marine and freshwater food webs. As they constitute the largest biomass in the oceans, some call them "the insects of the sea". Previous laboratory observations have shown that the fluid mechanical phenomena occurring at the copepod body scale are crucial for the survival of copepods. One of the interesting phenomena is that many calanoid copepods display various behaviors to create the feeding currents for the purpose of capturing food particles. We have developed a fluid mechanical model to study the feeding currents. The model is a self-propelled body model in that the Navier-Stokes equations are properly coupled with the dynamic equations for the copepod's body. The model has been solved both analytically using the Stokes approximation with a spherical body shape and numerically using CFD with a realistic body shape.

  5. Modelling proteins' hidden conformations to predict antibiotic resistance

    NASA Astrophysics Data System (ADS)

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-10-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.

  6. Singularities in x-ray spectra of metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahan, G.D.

    1987-08-01

    The x-ray spectroscopies discussed are absorption, emission, and photoemission. The singularities show up in each of them in a different manner. In absorption and emission they show up as power law singularities at the threshold frequencies. This review will emphasize two themes. First, a simple model is proposed to describe this phenomenon, which is now called the MND model after MAHAN-NOZIERES-DeDOMINICIS. Exact analytical solutions are now available for this model for the three spectroscopies discussed above. These analytical models can be evaluated numerically in a simple way. The second theme of this review is that great care must be used when comparing the theory to experiment. A number of factors influence the edge shapes in x-ray spectroscopy. The edge singularities play an important role, and are observed in many metals. Quantitative fits of the theory to experiment require the consideration of other factors. 51 refs.

  7. Experimental Investigation of the Formation of Complex Craters

    NASA Astrophysics Data System (ADS)

    Martellato, E.; Dörfler, M. A.; Schuster, B.; Wünnemman, K.; Kenkmann, T.

    2017-09-01

    The formation of complex impact craters is still poorly understood, because standard material models fail to explain the gravity-driven collapse, at the observed size range, of a bowl-shaped transient crater into a flat-floored crater structure with a central peak or ring and a terraced rim. To explain such a collapse, the so-called Acoustic Fluidization (AF) model has been proposed. The AF model assumes that heavily fractured target rocks surrounding the transient crater are temporarily softened by an acoustic field in the wake of an expanding shock wave generated upon impact. The AF has been successfully employed in numerous modeling studies of complex crater formation; however, there is no clear relationship between model parameters and observables. In this study, we present preliminary results of laboratory experiments aiming at relating the AF parameters to observables such as the grain size, the average wavelength of the acoustic field and its decay time τ relative to the crater formation time.

  8. When push comes to shove: Exclusion processes with nonlocal consequences

    NASA Astrophysics Data System (ADS)

    Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.

    2015-11-01

    Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.
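
    For readers unfamiliar with the baseline mechanism, the sketch below simulates a plain symmetric simple exclusion process on a periodic 1D lattice. The shoving variant introduced in the paper is not implemented here; moves into occupied sites are simply aborted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simple_exclusion_step(lattice):
        """One Monte Carlo sweep of a symmetric simple exclusion process on a
        periodic 1D lattice (1 = occupied, 0 = empty). Shoving is NOT modelled:
        a move is simply aborted when the target site is occupied."""
        n = lattice.size
        for _ in range(n):                       # n attempted moves per sweep
            i = rng.integers(n)
            if lattice[i] == 0:
                continue
            j = (i + rng.choice((-1, 1))) % n    # nearest-neighbour target site
            if lattice[j] == 0:                  # exclusion: at most one agent per site
                lattice[i], lattice[j] = 0, 1
        return lattice

    # start with agents packed in the middle and let the population spread
    lattice = np.zeros(200, dtype=int)
    lattice[80:120] = 1
    for _ in range(500):
        simple_exclusion_step(lattice)
    print("agents:", lattice.sum(), "occupied span:", np.flatnonzero(lattice).ptp())
    ```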

  9. Lumpy investment, sectoral propagation, and business cycles (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Nirei, Makoto

    2005-05-01

    This paper proposes a model of endogenous fluctuations in investment. A monopolistic producer has an incentive to invest when the aggregate demand is high. The investment at the firm level is also known to exhibit a threshold behavior called an (S,s) policy. These two facts lead us to consider that the fluctuation in aggregate investment is generated by the global coupling of the non-linear oscillators. From this perspective, we characterize the probability distribution of the investment clustering in a partial equilibrium of product markets, and show that its variance can be large enough to match the observed investment fluctuations. We then implement this mechanism in a dynamic general equilibrium model to explore an investment-driven business cycle. By calibrating the model with the SIC 4-digit level industry data, we numerically show that the model replicates the basic structure of the business cycles.
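
    As a toy illustration of the (S,s) threshold behaviour mentioned above (not the paper's calibrated general-equilibrium model), a firm might let its capital depreciate until it crosses a lower threshold and then invest in a lump back to a target level; all numbers below are invented.

    ```python
    def ss_policy_path(k0=1.0, s_low=0.8, s_high=1.2, delta=0.05, periods=60):
        """Toy (S,s) rule: capital depreciates every period; once it falls below the
        lower threshold s_low, the firm invests in a lump back up to s_high.
        All numbers are illustrative, not calibrated to the paper's model."""
        k, bursts, path = k0, [], []
        for t in range(periods):
            k *= 1.0 - delta               # depreciation erodes the capital stock
            if k < s_low:                  # threshold crossing triggers a lumpy adjustment
                bursts.append((t, s_high - k))
                k = s_high
            path.append(k)
        return path, bursts

    path, bursts = ss_policy_path()
    print(f"{len(bursts)} lumpy investment episodes over {len(path)} periods")
    ```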

  10. Quasiperiodicity route to chaos in cardiac conduction model

    NASA Astrophysics Data System (ADS)

    Quiroz-Juárez, M. A.; Vázquez-Medina, R.; Ryzhii, E.; Ryzhii, M.; Aragón, J. L.

    2017-01-01

    It has been suggested that cardiac arrhythmias are instances of chaos; in particular, that ventricular fibrillation is a form of spatio-temporal chaos that arises from normal rhythm through a quasi-periodicity, or Ruelle-Takens-Newhouse, route to chaos. In this work, we modify the heterogeneous oscillator model of the cardiac conduction system proposed in Ref. [Ryzhii E, Ryzhii M. A heterogeneous coupled oscillator model for simulation of ECG signals. Comput Meth Prog Bio 2014;117(1):40-49. doi:10.1016/j.cmpb.2014.04.009.], by including an ectopic pacemaker that stimulates the ventricular muscle to model arrhythmias. With this modification, the transition from normal rhythm to ventricular fibrillation is controlled by a single parameter. We show that this transition follows the so-called torus, or quasi-periodic, route to chaos, as verified by using numerical tools such as the power spectrum and the largest Lyapunov exponent.
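
    The largest Lyapunov exponent mentioned as a diagnostic can be illustrated on a textbook system; the sketch below estimates it for the logistic map (not for the cardiac conduction model of the paper) by averaging the logarithm of the local derivative along an orbit.

    ```python
    import math

    def logistic_lyapunov(r, x0=0.2, n_transient=1000, n_iter=100000):
        """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
        estimated as the orbit average of ln|f'(x)| = ln|r (1 - 2x)|."""
        x = x0
        for _ in range(n_transient):            # discard the transient
            x = r * x * (1.0 - x)
        acc = 0.0
        for _ in range(n_iter):
            x = r * x * (1.0 - x)
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
        return acc / n_iter

    print(logistic_lyapunov(3.5))   # periodic regime: negative exponent
    print(logistic_lyapunov(4.0))   # chaotic regime: close to ln 2 (about 0.693)
    ```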

  11. An EOQ model of time quadratic and inventory dependent demand for deteriorated items with partially backlogged shortages under trade credit

    NASA Astrophysics Data System (ADS)

    Singh, Pushpinder; Mishra, Nitin Kumar; Singh, Vikramjeet; Saxena, Seema

    2017-07-01

    In this paper, a single-buyer, single-supplier inventory model with time-quadratic and stock-dependent demand over a finite planning horizon is studied. A single deteriorating item subject to shortages, with partial backlogging and some lost sales, is considered. The model is divided into two scenarios: one without a permissible delay in payment and the other with a permissible delay in payment. The latter is called the centralized system, in which the supplier offers trade credit to the retailer and the cost saving is shared between the two. The objective is to study the difference in the minimum costs borne by the retailer and the supplier under the two scenarios, including the above-mentioned parameters. To obtain the optimal solution, the model is solved analytically. A numerical example and a comparative study are then discussed, supported by a sensitivity analysis of each parameter.
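
    For orientation only, the classical economic order quantity formula underlying such inventory models is sketched below; the paper's model additionally includes deterioration, time-quadratic and stock-dependent demand, partial backlogging and trade credit, none of which appear in this baseline.

    ```python
    from math import sqrt

    def classical_eoq(demand_rate, order_cost, holding_cost):
        """Classical economic order quantity: Q* = sqrt(2 D K / h), with the
        corresponding minimum annual cost sqrt(2 D K h). Textbook baseline only."""
        q_star = sqrt(2.0 * demand_rate * order_cost / holding_cost)
        total_cost = sqrt(2.0 * demand_rate * order_cost * holding_cost)
        return q_star, total_cost

    q, cost = classical_eoq(demand_rate=1200, order_cost=50.0, holding_cost=2.0)
    print(f"optimal lot size {q:.1f} units, minimum annual cost {cost:.1f}")
    ```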

  12. An advanced three-phase physical, experimental and numerical method for tsunami induced boulder transport

    NASA Astrophysics Data System (ADS)

    Oetjen, Jan; Engel, Max; Prasad Pudasaini, Shiva; Schüttrumpf, Holger; Brückner, Helmut

    2017-04-01

    Coasts around the world are affected by high-energy wave events like storm surges or tsunamis depending on their regional climatological and geological settings. By focusing on tsunami impacts, we combine the abilities and experiences of different scientific fields aiming at improved insights into near- and onshore tsunami hydrodynamics. We investigate the transport of coarse clasts - so-called boulders - due to tsunami impacts by a multi-methodology approach of numerical modelling, laboratory experiments, and sedimentary field records. Coupled numerical hydrodynamic and boulder transport models (BTM) are widely applied for analysing the impact characteristics of the transport by tsunami, such as wave height and flow velocity. Numerical models able to simulate past tsunami events and the corresponding boulder transport patterns with high accuracy and acceptable computational effort can be utilized as powerful forecasting models predicting the impact of a coast-approaching tsunami. We have conducted small-scale physical experiments in the tilting flume with real-shaped boulder models. Utilizing the structure-from-motion technique (Westoby et al., 2012), we reconstructed real boulders from a field study on the Island of Bonaire (Lesser Antilles, Caribbean Sea; Engel & May, 2012). The obtained three-dimensional boulder meshes are utilized for creating downscaled replicas of the real boulders for the physical experiments. The results for the irregularly shaped boulder are compared to experiments with regularly shaped boulder models to achieve better insight into the shape-related influence on transport patterns. The numerical model is based on the general two-phase mass flow model by Pudasaini (2012) enhanced for boulder transport simulations. The boulder is implemented using the immersed boundary technique (Peskin, 2002) and the direct forcing approach. In this method, Cartesian grids (fluid and particle phase) and Lagrangian meshes (boulder) are combined. By applying the immersed boundary method we can compute the interactions between fluid, particles and arbitrary boulder shapes. We are able to reproduce the exact physical experiment for calibration and verification of the tsunami boulder transport phenomena. First results of the study will be presented. Engel, M.; May, S.M.: Bonaire's boulder fields revisited: evidence for Holocene tsunami impact on the Leeward Antilles. Quaternary Science Reviews 54, 126-141, 2012. Peskin, C.S.: The immersed boundary method. Acta Numerica, 479-517, 2002. Pudasaini, S. P.: A general two-phase debris flow model. J. Geophys. Res. Earth Surf., 117, F03010, 2012. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M.: 'Structure-from-Motion' photogrammetry - a low-cost, effective tool for geoscience applications. Geomorphology 179, 300-314, 2012.

  13. Reinforcement learning for resource allocation in LEO satellite networks.

    PubMed

    Usaha, Wipawee; Barria, Javier A

    2007-06-01

    In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.
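
    As a generic illustration of temporal-difference learning (not the actor-critic or optimistic-TD algorithms developed in the paper), the sketch below runs tabular TD(0) on an invented toy admission-control chain with a fixed threshold policy; all parameters are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Tabular TD(0) on a toy admission-control chain with a fixed threshold policy.
    # This is a generic illustration of temporal-difference learning only; the
    # numbers and the chain itself are invented.

    C, THRESHOLD = 10, 8           # channels, and the fixed admission threshold
    P_ARRIVAL, P_DEPART = 0.6, 0.1
    GAMMA, ALPHA = 0.95, 0.05

    values = np.zeros(C + 1)       # state = number of busy channels
    state = 0
    for _ in range(200_000):
        prev_state, reward = state, 0.0
        if rng.random() < P_ARRIVAL:
            if state < THRESHOLD:                     # accept the call and earn revenue
                state += 1
                reward = 1.0
            # otherwise the call is blocked and earns nothing
        state -= rng.binomial(state, P_DEPART)        # independent call departures
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
        values[prev_state] += ALPHA * (reward + GAMMA * values[state] - values[prev_state])

    print(np.round(values, 2))     # learned state values under the fixed policy
    ```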

  14. Rheological separation of the megathrust seismogenic zone and episodic tremor and slip

    NASA Astrophysics Data System (ADS)

    Gao, Xiang; Wang, Kelin

    2017-03-01

    Episodic tremor and accompanying slow slip, together called ETS, is most often observed in subduction zones of young and warm subducting slabs. ETS should help us to understand the mechanics of subduction megathrusts, but its mechanism is still unclear. It is commonly assumed that ETS represents a transition from seismic to aseismic behaviour of the megathrust with increasing depth, but this assumption is in contradiction with an observed spatial separation between the seismogenic zone and the ETS zone. Here we propose a unifying model for the necessary geological condition of ETS that explains the relationship between the two zones. By developing numerical thermal models, we examine the governing role of thermo-petrologically controlled fault zone rheology (frictional versus viscous shear). High temperatures in the warm-slab environment cause the megathrust seismogenic zone to terminate before reaching the depth of the intersection of the continental Mohorovičić discontinuity (Moho) and the subduction interface, called the mantle wedge corner. High pore-fluid pressures around the mantle wedge corner give rise to an isolated friction zone responsible for ETS. Separating the two zones is a segment of semi-frictional or viscous behaviour. The new model reconciles a wide range of seemingly disparate observations and defines a conceptual framework for the study of slip behaviour and the seismogenesis of major faults.

  15. Development of hybrid computer plasma models for different pressure regimes

    NASA Astrophysics Data System (ADS)

    Hromadka, Jakub; Ibehej, Tomas; Hrach, Rudolf

    2016-09-01

    With the increased performance of contemporary computers during the last decades, numerical simulations have become a very powerful tool that is also applicable in plasma physics research. Plasma is generally an ensemble of mutually interacting particles that is out of thermodynamic equilibrium, and for this reason fluid computer plasma models give results with only limited accuracy. On the other hand, much more precise particle models are often limited to 2D problems because of their huge demands on computer resources. Our contribution is devoted to hybrid modelling techniques that combine the advantages of both approaches mentioned above, particularly to their so-called iterative version. The study is focused on the mutual relations between fluid and particle models, which are demonstrated on calculations of the sheath structures of low-temperature argon plasma near a cylindrical Langmuir probe for medium and higher pressures. Results of a simple iterative hybrid plasma computer model are also given. The authors acknowledge the support of the Grant Agency of Charles University in Prague (project 220215).

  16. Comparison of BrainTool to other UML modeling and model transformation tools

    NASA Astrophysics Data System (ADS)

    Nikiforova, Oksana; Gusarovs, Konstantins

    2017-07-01

    In the last 30 years, numerous model-generated software systems have been offered to address problems with development productivity and the resulting software quality. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) makes similar arguments in regard to Unified Modeling Language (UML) models at different levels of abstraction. It is claimed that software development using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features, starting with a model editor and a model repository for the traditional ones and ending with a code generator (possibly using a scripting or domain-specific (DSL) language), a transformation tool to produce new artifacts from the manually created ones, and a transformation definition editor to define new transformations for the most advanced ones. The present paper contains the results of a CASE tool (mainly UML editor) comparison against the level of automation they offer.

  17. A Novel Model to Simulate Flexural Complements in Compliant Sensor Systems

    PubMed Central

    Tang, Hongyan; Zhang, Dan; Guo, Sheng; Qu, Haibo

    2018-01-01

    The main challenge in analyzing compliant sensor systems is how to calculate the large deformation of flexural complements. Our study proposes a new model that is called the spline pseudo-rigid-body model (spline PRBM). It combines dynamic spline and the pseudo-rigid-body model (PRBM) to simulate the flexural complements. The axial deformations of flexural complements are modeled by using dynamic spline. This makes it possible to consider the nonlinear compliance of the system using four control points. Three rigid rods connected by two revolute (R) pins with two torsion springs replace the three lines connecting the four control points. The kinematic behavior of the system is described using Lagrange equations. Both the optimization and the numerical fitting methods are used for resolving the characteristic parameters of the new model. An example of a compliant mechanism is given to verify the accuracy of the model. The spline PRBM is important in expanding the applications of the PRBM to the design and simulation of flexural force sensors. PMID:29596377

  18. Kinetic simulations and reduced modeling of longitudinal sideband instabilities in non-linear electron plasma waves

    DOE PAGES

    Brunner, S.; Berger, R. L.; Cohen, B. I.; ...

    2014-10-01

    Kinetic Vlasov simulations of one-dimensional finite amplitude Electron Plasma Waves are performed in a multi-wavelength long system. A systematic study of the most unstable linear sideband mode, in particular its growth rate γ and quasi-wavenumber δk, is carried out by scanning the amplitude and wavenumber of the initial wave. Simulation results are successfully compared against numerical and analytical solutions to the reduced model by Kruer et al. [Phys. Rev. Lett. 23, 838 (1969)] for the Trapped Particle Instability (TPI). A model recently suggested by Dodin et al. [Phys. Rev. Lett. 110, 215006 (2013)], which in addition to the TPI accounts for the so-called Negative Mass Instability because of a more detailed representation of the trapped particle dynamics, is also studied and compared with simulations.

  19. Inflation from periodic extra dimensions

    NASA Astrophysics Data System (ADS)

    Higaki, Tetsutaro; Tatsuta, Yoshiyuki

    2017-07-01

    We discuss a realization of a small field inflation based on string inspired supergravities. In theories accompanying extra dimensions, compactification of them with small radii is required for realistic situations. Since the extra dimension can have a periodicity, there will appear (quasi-)periodic functions under transformations of moduli of the extra dimensions in low energy scales. Such a periodic property can lead to a UV completion of the so-called multi-natural inflation model, where the inflaton potential consists of a sum of multiple sinusoidal functions with a decay constant smaller than the Planck scale. As an illustration, we construct a SUSY breaking model, and then show that such an inflaton potential can be generated by a sum of world sheet instantons in intersecting brane models on extra dimensions containing orbifolds. We also show predictions of cosmic observables from numerical analyses.

  20. Mesoscale Numerical Simulations of the IAS Circulation

    NASA Astrophysics Data System (ADS)

    Mooers, C. N.; Ko, D.

    2008-05-01

    Real-time nowcasts and forecasts of the IAS circulation have been made for several years with mesoscale resolution using the Navy Coastal Ocean Model (NCOM) implemented for the IAS. It is commonly called IASNFS and is driven by the lower resolution Global NCOM on the open boundaries, synoptic atmospheric forcing obtained from the Navy Global Atmospheric Prediction System (NOGAPS), and assimilated satellite-derived sea surface height anomalies and sea surface temperature. Here, examples of the model output are demonstrated; e.g., Gulf of Mexico Loop Current eddy shedding events and the meandering Caribbean Current jet and associated eddies. Overall, IASNFS is ready for further analysis, application to a variety of studies, and downscaling to even higher resolution shelf models. Its output fields are available online through NOAA's National Coastal Data Development Center (NCDDC), located at the Stennis Space Center.

  1. Conditioning of high voltage radio frequency cavities by using fuzzy logic in connection with rule based programming

    NASA Astrophysics Data System (ADS)

    Perreard, S.; Wildner, E.

    1994-12-01

    Many processes are controlled by experts using some kind of mental model to decide on actions and make conclusions. This model, based on heuristic knowledge, can often be represented by rules and does not have to be particularly accurate. Such is the case for the problem of conditioning high voltage RF cavities; the expert has to decide, by observing some criteria, whether to increase or to decrease the voltage and by how much. A program has been implemented which can be applied to a class of similar problems. The kernel of the program is a small rule base, which is independent of the kind of cavity. To model a specific cavity, we use fuzzy logic which is implemented as a separate routine called by the rule base, to translate from numeric to symbolic information.
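
    The numeric-to-symbolic translation step can be illustrated with triangular fuzzy membership functions; the sketch below is hypothetical (the criterion, labels and breakpoints are invented) and is not the routine described in the paper.

    ```python
    # Hypothetical sketch of translating a numeric reading (e.g. a normalized
    # conditioning criterion) into symbolic labels via triangular fuzzy membership
    # functions, as a rule base would consume them. Labels and breakpoints are invented.

    def triangular(x, left, peak, right):
        """Membership of x in a triangular fuzzy set defined by (left, peak, right)."""
        if x <= left or x >= right:
            return 0.0
        if x <= peak:
            return (x - left) / (peak - left)
        return (right - x) / (right - peak)

    LABELS = {
        "LOW":    (-0.1, 0.0, 0.4),
        "MEDIUM": ( 0.2, 0.5, 0.8),
        "HIGH":   ( 0.6, 1.0, 1.1),
    }

    def fuzzify(x):
        """Return the membership degree of x in each symbolic label."""
        return {label: triangular(x, *abc) for label, abc in LABELS.items()}

    reading = 0.65                     # normalized criterion observed during conditioning
    memberships = fuzzify(reading)
    symbol = max(memberships, key=memberships.get)
    print(memberships, "->", symbol)   # the rule base would then act on 'symbol'
    ```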

  2. The Equilibrium State of Colliding Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warnock, R

    2003-12-12

    We study a nonlinear integral equation that is a necessary condition on the equilibrium phase space distribution function of stored, colliding electron beams. It is analogous to the Haissinski equation, being derived from Vlasov-Fokker-Planck theory, but is quite different in form. The equation is analyzed for the case of the Chao-Ruth model of the beam-beam interaction in one degree of freedom, a so-called strong-strong model with nonlinear beam-beam force. We prove existence of a unique solution, for sufficiently small beam current, by an application of the implicit function theorem. We have not yet proved that this solution is positive, as would be required to establish existence of an equilibrium. There is, however, numerical evidence of a positive solution. We expect that our analysis can be extended to more realistic models.

  3. Inflation from periodic extra dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higaki, Tetsutaro; Tatsuta, Yoshiyuki, E-mail: thigaki@rk.phys.keio.ac.jp, E-mail: y_tatsuta@akane.waseda.jp

    We discuss a realization of a small field inflation based on string inspired supergravities. In theories accompanying extra dimensions, compactification of them with small radii is required for realistic situations. Since the extra dimension can have a periodicity, there will appear (quasi-)periodic functions under transformations of moduli of the extra dimensions in low energy scales. Such a periodic property can lead to a UV completion of the so-called multi-natural inflation model, where the inflaton potential consists of a sum of multiple sinusoidal functions with a decay constant smaller than the Planck scale. As an illustration, we construct a SUSY breaking model, and then show that such an inflaton potential can be generated by a sum of world sheet instantons in intersecting brane models on extra dimensions containing orbifolds. We also show predictions of cosmic observables from numerical analyses.

  4. Diffusion in random networks: Asymptotic properties, and numerical and engineering approximations

    NASA Astrophysics Data System (ADS)

    Padrino, Juan C.; Zhang, Duan Z.

    2016-11-01

    The ensemble phase averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of a set of pockets connected by tortuous channels. Inside a channel, we assume that fluid transport is governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pore mass density. The so-called dual porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain, whose solution is sought numerically. Because of the required time to establish the linear concentration profile inside a channel, for early times the similarity variable is xt^{-1/4} rather than xt^{-1/2} as in the traditional theory. This early-time sub-diffusive similarity can be explained by random walk theory through the network. In addition, by applying concepts of fractional calculus, we show that, for small time, the governing equation reduces to a fractional diffusion equation with known solution. We recast this solution in terms of special functions easier to compute. Comparison of the numerical and exact solutions shows excellent agreement.
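
    The early-time xt^{-1/4} similarity can be mimicked with a generic continuous-time random walk whose waiting times have a heavy (exponent 1/2) tail, so that the mean squared displacement grows like t^{1/2}; the sketch below is only a schematic analogue of the channel-delay mechanism, not the paper's ensemble phase-averaged network model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Continuous-time random walk with Pareto-tailed waiting times (tail exponent 1/2)
    # between unit jumps: the mean squared displacement then grows like t^(1/2), i.e.
    # the similarity variable is x t^(-1/4). A schematic analogue only.

    ALPHA = 0.5
    N_WALKERS = 4000
    CHECKPOINTS = np.array([1e2, 1e3, 1e4])

    msd = np.zeros(len(CHECKPOINTS))
    for _ in range(N_WALKERS):
        t, x = 0.0, 0
        recorded = np.zeros(len(CHECKPOINTS))
        ptr = 0
        while ptr < len(CHECKPOINTS):
            wait = (1.0 - rng.random()) ** (-1.0 / ALPHA)    # Pareto waiting time >= 1
            t += wait
            while ptr < len(CHECKPOINTS) and CHECKPOINTS[ptr] <= t:
                recorded[ptr] = x                            # position held while waiting
                ptr += 1
            x += rng.choice((-1, 1))                         # jump after the wait
        msd += recorded**2
    msd /= N_WALKERS

    for tc, m in zip(CHECKPOINTS, msd):
        print(f"t = {tc:>7.0f}   <x^2> = {m:8.2f}   <x^2>/sqrt(t) = {m / np.sqrt(tc):.3f}")
    # the last column is roughly constant across the decades, consistent with sub-diffusion
    ```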

  5. Precise and Fast Computation of the Gravitational Field of a General Finite Body and Its Application to the Gravitational Study of Asteroid Eros

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukushima, Toshio, E-mail: Toshio.Fukushima@nao.ac.jp

    In order to obtain the gravitational field of a general finite body inside its Brillouin sphere, we developed a new method to compute the field accurately. First, the body is assumed to consist of some layers in a certain spherical polar coordinate system and the volume mass density of each layer is expanded as a Maclaurin series of the radial coordinate. Second, the line integral with respect to the radial coordinate is analytically evaluated in a closed form. Third, the resulting surface integrals are numerically integrated by the split quadrature method using the double exponential rule. Finally, the associated gravitational acceleration vector is obtained by numerically differentiating the numerically integrated potential. Numerical experiments confirmed that the new method is capable of computing the gravitational field independently of the location of the evaluation point, namely whether inside, on the surface of, or outside the body. It can also provide sufficiently precise field values, say of 14–15 digits for the potential and of 9–10 digits for the acceleration. Furthermore, its computational efficiency is better than that of the polyhedron approximation. This is because the computational error of the new method decreases much faster than that of the polyhedron models when the number of required transcendental function calls increases. As an application, we obtained the gravitational field of 433 Eros from its shape model expressed as the 24 × 24 spherical harmonic expansion by assuming homogeneity of the object.
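
    The double exponential rule mentioned above is the tanh-sinh quadrature; a minimal sketch of that rule on [-1, 1] is given below (it is not the paper's split-quadrature implementation, and the node and step choices are arbitrary).

    ```python
    import numpy as np

    def tanh_sinh_quad(f, n=30, h=0.1):
        """Double exponential (tanh-sinh) quadrature of f over [-1, 1]:
        substitute x = tanh((pi/2) sinh t) and apply the trapezoidal rule in t."""
        t = h * np.arange(-n, n + 1)
        x = np.tanh(0.5 * np.pi * np.sinh(t))
        w = h * 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
        return np.sum(w * f(x))

    # the weights decay doubly exponentially towards the end points, so even an
    # integrand with integrable end-point singularities is handled gracefully
    approx = tanh_sinh_quad(lambda x: 1.0 / np.sqrt(1.0 - x**2))
    print(approx, "vs pi =", np.pi)   # agrees to several digits with these modest settings
    ```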

  6. Combined Numerical/Analytical Perturbation Solutions of the Navier-Stokes Equations for Aerodynamic Ejector/Mixer Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DeChant, Lawrence Justin

    1998-01-01

    In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development are reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary-layer methods but provides important two-dimensional information not available using quasi-1-d approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models, which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast so that it may be used either as a subroutine or called by a design optimization routine. Models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M. These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.

  7. Trash Diverter Orientation Angle Optimization at Run-Off River Type Hydro-power Plant using CFD

    NASA Astrophysics Data System (ADS)

    Munisamy, Kannan M.; Kamal, Ahmad; Shuaib, Norshah Hafeez; Yusoff, Mohd. Zamri; Hasini, Hasril; Rashid, Azri Zainol; Thangaraju, Savithry K.; Hamid, Hazha

    2010-06-01

    Tenom Pangi Hydro Power Station in Tenom, Sabah is suffering from poor river quality with a large amount of suspended trash. This problem necessitates a trash diverter to divert the trash away from the intake region. Previously, a trash diverter (called Trash Diverter I) was installed at the site but survived only for a short period of time due to an impact with a huge log as a result of a heavy flood. In the current project, a second trash diverter structure is designed (called Trash Diverter II) with improved features compared to Trash Diverter I. A Computational Fluid Dynamics (CFD) analysis is done to evaluate the river flow interaction with the trash diverter from the fluid flow point of view. Computational Fluid Dynamics is a numerical approach to solving the fluid flow profile for different inlet conditions. In this work, the river geometry is modeled using the commercial CFD code FLUENT®. The computational model consists of the Reynolds-Averaged Navier-Stokes (RANS) equations coupled with other related models using the properties of the fluids under investigation. The model is validated with site measurements done at Tenom Pangi Hydro Power Station. Different operating conditions of river flow rate and weir opening are also considered. The optimum angle is determined in this simulation, and the data are further used for 3D simulation and structural analysis.

  8. Optically inspired biomechanical model of the human eyeball.

    PubMed

    Sródka, Wieslaw; Iskander, D Robert

    2008-01-01

    Currently available biomechanical models of the human eyeball focus mainly on the geometries and material properties of its components while little attention has been given to its optics--the eye's primary function. We postulate that in the evolution process, the mechanical structure of the eyeball has been influenced by its optical functions. We develop a numerical finite element analysis-based model in which the eyeball geometry and its material properties are linked to the optical functions of the eye. This is achieved by controlling in the model all essential optical functions while still choosing material properties from a range of clinically available data. In particular, it is assumed that in a certain range of intraocular pressures, the eye is able to maintain focus. This so-called property of optical self-adjustments provides a more constrained set of numerical solutions in which the number of free model parameters significantly decreases, leading to models that are more robust. Further, we investigate two specific cases of a model that satisfies optical self-adjustment: (1) a full model in which the cornea is flexibly attached to sclera at the limbus, and (2) a fixed cornea model in which the cornea is not allowed to move at the limbus. We conclude that for a biomechanical model of the eyeball to mimic the optical function of a real eye, it is crucial that the cornea is allowed to move at the limbal junction, that the materials used for the cornea and sclera are strongly nonlinear, and that their moduli of elasticity remain in a very close relationship.

  9. Projection methods for line radiative transfer in spherical media.

    NASA Astrophysics Data System (ADS)

    Anusha, L. S.; Nagendra, K. N.

    An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method called Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space ℝ^n called Krylov subspaces. The methods are shown to be faster in terms of convergence rate compared to the contemporary iterative methods such as Jacobi, Gauss-Seidel and Successive Over Relaxation (SOR).
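
    SciPy ships a stabilized bi-conjugate gradient solver, so the flavour of a preconditioned BiCG-STAB solve can be sketched on a generic sparse non-symmetric system; the matrix below is a simple stand-in, not the radiative transfer operator of the paper.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

    # Generic illustration of a preconditioned BiCG-STAB solve with SciPy; the matrix
    # is a simple non-symmetric tridiagonal stencil standing in for (not reproducing)
    # the radiative transfer operator.

    n = 200
    main = 2.0 * np.ones(n)
    lower = -1.2 * np.ones(n - 1)   # asymmetric off-diagonals make the system non-symmetric
    upper = -0.8 * np.ones(n - 1)
    A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
    b = np.ones(n)

    ilu = spilu(A)                                  # incomplete-LU preconditioner
    M = LinearOperator(A.shape, ilu.solve)

    x, info = bicgstab(A, b, M=M)
    print("converged" if info == 0 else f"info={info}",
          "residual:", np.linalg.norm(A @ x - b))
    ```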

  10. Numerically solving the relativistic Grad-Shafranov equation in Kerr spacetimes: numerical techniques

    NASA Astrophysics Data System (ADS)

    Mahlmann, J. F.; Cerdá-Durán, P.; Aloy, M. A.

    2018-07-01

    The study of the electrodynamics of static, axisymmetric, and force-free Kerr magnetospheres relies vastly on solutions of the so-called relativistic Grad-Shafranov equation (GSE). Different numerical approaches to the solution of the GSE have been introduced in the literature, but none of them has been fully assessed from the numerical point of view in terms of efficiency and quality of the solutions found. We present a generalization of these algorithms and give a detailed background on the algorithmic implementation. We assess the numerical stability of the implemented algorithms and quantify the convergence of the presented methodology for the most established set-ups (split-monopole, paraboloidal, BH disc, uniform).

  11. Numerically solving the relativistic Grad-Shafranov equation in Kerr spacetimes: Numerical techniques

    NASA Astrophysics Data System (ADS)

    Mahlmann, J. F.; Cerdá-Durán, P.; Aloy, M. A.

    2018-04-01

    The study of the electrodynamics of static, axisymmetric and force-free Kerr magnetospheres relies vastly on solutions of the so-called relativistic Grad-Shafranov equation (GSE). Different numerical approaches to the solution of the GSE have been introduced in the literature, but none of them has been fully assessed from the numerical point of view in terms of efficiency and quality of the solutions found. We present a generalization of these algorithms and give a detailed background on the algorithmic implementation. We assess the numerical stability of the implemented algorithms and quantify the convergence of the presented methodology for the most established setups (split-monopole, paraboloidal, BH-disk, uniform).

  12. Models and numerical methods for the simulation of loss-of-coolant accidents in nuclear reactors

    NASA Astrophysics Data System (ADS)

    Seguin, Nicolas

    2014-05-01

    In view of the simulation of the water flows in pressurized water reactors (PWR), many models are available in the literature and their complexity deeply depends on the required accuracy, see for instance [1]. The loss-of-coolant accident (LOCA) may appear when a pipe is broken through. The coolant is composed by light water in its liquid form at very high temperature and pressure (around 300 °C and 155 bar), it then flashes and becomes instantaneously vapor in case of LOCA. A front of liquid/vapor phase transition appears in the pipes and may propagate towards the critical parts of the PWR. It is crucial to propose accurate models for the whole phenomenon, but also sufficiently robust to obtain relevant numerical results. Due to the application we have in mind, a complete description of the two-phase flow (with all the bubbles, droplets, interfaces…) is out of reach and irrelevant. We investigate averaged models, based on the use of void fractions for each phase, which represent the probability of presence of a phase at a given position and at a given time. The most accurate averaged model, based on the so-called Baer-Nunziato model, describes separately each phase by its own density, velocity and pressure. The two phases are coupled by non-conservative terms due to gradients of the void fractions and by source terms for mechanical relaxation, drag force and mass transfer. With appropriate closure laws, it has been proved [2] that this model complies with all the expected physical requirements: positivity of densities and temperatures, maximum principle for the void fraction, conservation of the mixture quantities, decrease of the global entropy… On the basis of this model, it is possible to derive simpler models, which can be used where the flow is still, see [3]. From the numerical point of view, we develop new Finite Volume schemes in [4], which also satisfy the requirements mentioned above. Since they are based on a partial linearization of the physical model, this numerical scheme is also efficient in terms of CPU time. Eventually, simpler models can locally replace the more complex model in order to simplify the overall computation, using some appropriate local error indicators developed in [5], without reducing the accuracy. References 1. Ishii, M., Hibiki, T., Thermo-fluid dynamics of two-phase flow, Springer, New-York, 2006. 2. Gallouët, T. and Hérard, J.-M., Seguin, N., Numerical modeling of two-phase flows using the two-fluid two-pressure approach, Math. Models Methods Appl. Sci., Vol. 14, 2004. 3. Seguin, N., Étude d'équations aux dérivées partielles hyperboliques en mécanique des fluides, Habilitation à diriger des recherches, UPMC-Paris 6, 2011. 4. Coquel, F., Hérard, J-M., Saleh, K., Seguin, N., A Robust Entropy-Satisfying Finite Volume Scheme for the Isentropic Baer-Nunziato Model, ESAIM: Mathematical Modelling and Numerical Analysis, Vol. 48, 2013. 5. Mathis, H., Cancès, C., Godlewski, E., Seguin, N., Dynamic model adaptation for multiscale simulation of hyperbolic systems with relaxation, preprint, 2013.

  13. High order ADER schemes for a unified first order hyperbolic formulation of continuum mechanics: Viscous heat-conducting fluids and elastic solids

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Peshkov, Ilya; Romenski, Evgeniy; Zanotti, Olindo

    2016-06-01

    This paper is concerned with the numerical solution of the unified first order hyperbolic formulation of continuum mechanics recently proposed by Peshkov and Romenski [110], further denoted as HPR model. In that framework, the viscous stresses are computed from the so-called distortion tensor A, which is one of the primary state variables in the proposed first order system. A very important key feature of the HPR model is its ability to describe at the same time the behavior of inviscid and viscous compressible Newtonian and non-Newtonian fluids with heat conduction, as well as the behavior of elastic and visco-plastic solids. Actually, the model treats viscous and inviscid fluids as generalized visco-plastic solids. This is achieved via a stiff source term that accounts for strain relaxation in the evolution equations of A. Also heat conduction is included via a first order hyperbolic system for the thermal impulse, from which the heat flux is computed. The governing PDE system is hyperbolic and fully consistent with the first and the second principle of thermodynamics. It is also fundamentally different from first order Maxwell-Cattaneo-type relaxation models based on extended irreversible thermodynamics. The HPR model represents therefore a novel and unified description of continuum mechanics, which applies at the same time to fluid mechanics and solid mechanics. In this paper, the direct connection between the HPR model and the classical hyperbolic-parabolic Navier-Stokes-Fourier theory is established for the first time via a formal asymptotic analysis in the stiff relaxation limit. From a numerical point of view, the governing partial differential equations are very challenging, since they form a large nonlinear hyperbolic PDE system that includes stiff source terms and non-conservative products. We apply the successful family of one-step ADER-WENO finite volume (FV) and ADER discontinuous Galerkin (DG) finite element schemes to the HPR model in the stiff relaxation limit, and compare the numerical results with exact or numerical reference solutions obtained for the Euler and Navier-Stokes equations. Numerical convergence results are also provided. To show the universality of the HPR model, the paper is rounded-off with an application to wave propagation in elastic solids, for which one only needs to switch off the strain relaxation source term in the governing PDE system. We provide various examples showing that for the purpose of flow visualization, the distortion tensor A seems to be particularly useful.

  14. An improved numerical method to compute neutron/gamma deexcitation cascades starting from a high spin state

    DOE PAGES

    Regnier, D.; Litaize, O.; Serot, O.

    2015-12-23

    Numerous nuclear processes involve the deexcitation of a compound nucleus through the emission of several neutrons, gamma-rays and/or conversion electrons. The characteristics of such a deexcitation are commonly derived from a total statistical framework often called the “Hauser–Feshbach” method. In this work, we highlight a numerical limitation of this kind of method in the case of the deexcitation of a high spin initial state. To circumvent this issue, an improved technique called the Fluctuating Structure Properties (FSP) method is presented. Two FSP algorithms are derived and benchmarked on the calculation of the total radiative width for a thermal neutron capture on 238U. We compare the standard method with these FSP algorithms for the prediction of particle multiplicities in the deexcitation of a high spin level of 143Ba. The gamma multiplicity turns out to be very sensitive to the numerical method. The bias between the two techniques can reach 1.5 γ per cascade. Lastly, the uncertainty of these calculations coming from the lack of knowledge on nuclear structure is estimated via the FSP method.

  15. Prize of the best thesis 2015: Study of debris discs through state-of-the-art numerical modelling

    NASA Astrophysics Data System (ADS)

    Kral, Q.; Thébault, P.

    2015-12-01

    This proceeding summarises the thesis entitled ``Study of debris discs with a new generation numerical model'' by Quentin Kral, for which he obtained the prize of the best thesis in 2015. The thesis brought major contributions to the field of debris disc modelling. The main achievement is to have created, almost ex nihilo, the first truly self-consistent numerical model able to simultaneously follow the coupled collisional and dynamical evolutions of debris discs. Such a code has been thought of as the ``Holy Grail'' of disc modellers for the past decade, and while several codes with partial dynamics/collisions coupling have been presented, the code developed in this thesis, called ``LIDT-DD'', is the first to achieve a full coupling. The LIDT-DD model, which is the first of a new generation of fully self-consistent debris disc models, is able to handle both planetesimals and dust and to create new fragments after each collision. The main idea of the LIDT-DD development was to merge into one code two approaches that were so far used separately in disc modelling, that is, an N-body algorithm to investigate the dynamics, and a statistical scheme to explore the collisional evolution. This complex scheme is not straightforward to develop as there are major difficulties to overcome: 1) collisions in debris discs are highly destructive and produce clouds of small fragments after each single impact, 2) the smallest (and most numerous) of these fragments have a strongly size-dependent dynamics because of the radiation pressure, and 3) the dust usually observed in discs is precisely these smallest grains. These extreme constraints had so far prevented all previous attempts at developing self-consistent disc models from succeeding. The thesis contains many examples of the use of LIDT-DD that are not yet published, but the case of the collision between two asteroid-like bodies is studied in detail. In particular, LIDT-DD is able to predict the different stages that should be observed after such massive collisions, which happen mainly in the latest stages of planetary formation. Some giant impact signatures and observability predictions for VLT/SPHERE and JWST/MIRI are given. JWST should be able to detect many such impacts and would make it possible to see ongoing planetary formation in dozens of planetary systems.

  16. Non-steady state simulation of BOM removal in drinking water biofilters: model development.

    PubMed

    Hozalski, R M; Bouwer, E J

    2001-01-01

    A numerical model was developed to simulate the non-steady-state behavior of biologically-active filters used for drinking water treatment. The biofilter simulation model called "BIOFILT" simulates the substrate (biodegradable organic matter or BOM) and biomass (both attached and suspended) profiles in a biofilter as a function of time. One of the innovative features of BIOFILT compared to previous biofilm models is the ability to simulate the effects of a sudden loss in attached biomass or biofilm due to filter backwash on substrate removal performance. A sensitivity analysis of the model input parameters indicated that the model simulations were most sensitive to the values of parameters that controlled substrate degradation and biofilm growth and accumulation including the substrate diffusion coefficient, the maximum rate of substrate degradation, the microbial yield coefficient, and a dimensionless shear loss coefficient. Variation of the hydraulic loading rate or other parameters that controlled the deposition of biomass via filtration did not significantly impact the simulation results.

  17. Some observations on the mechanism of aircraft wing rock

    NASA Technical Reports Server (NTRS)

    Hwang, C.; Pi, W. S.

    1979-01-01

    A scale model of the Northrop F-5A was tested in the NASA Ames Research Center Eleven-Foot Transonic Tunnel to simulate the wing rock oscillations in a transonic maneuver. For this purpose, a flexible model support device was designed and fabricated, which allowed the model to oscillate in roll at the scaled wing rock frequency. Two tunnel entries were performed to acquire the pressure (steady state and fluctuating) and response data when the model was held fixed and when it was excited by the flow to oscillate in roll. Based on these data, a limit cycle mechanism was identified which supplied energy to the aircraft model and caused the Dutch-roll-type oscillations commonly called wing rock. The major origin of the fluctuating pressures that contributed to the limit cycle was traced to the wing surface leading-edge stall and the subsequent lift recovery. For typical wing rock oscillations, the energy balance between the pressure work input and the energy consumed by the model's aerodynamic and mechanical damping was formulated and numerical data are presented.

  18. Some observations on the mechanism of aircraft wing rock

    NASA Technical Reports Server (NTRS)

    Hwang, C.; Pi, W. S.

    1978-01-01

    A pressure scale model of the Northrop F-5A was tested in the NASA Ames Research Center Eleven-Foot Transonic Tunnel to simulate the wing rock oscillations in a transonic maneuver. For this purpose, a flexible model support device was designed and fabricated which allowed the model to oscillate in roll at the scaled wing rock frequency. Two tunnel entries were performed to acquire the pressure (steady state and fluctuating) and response data when the model was held fixed and when it was excited by the flow to oscillate in roll. Based on these data, a limit cycle mechanism was identified which supplied energy to the aircraft model and caused the Dutch-roll-type oscillations commonly called wing rock. The major origin of the fluctuating pressures which contributed to the limit cycle was traced to the wing surface leading-edge stall and the subsequent lift recovery. For typical wing rock oscillations, the energy balance between the pressure work input and the energy consumed by the model's aerodynamic and mechanical damping was formulated and numerical data are presented.

  19. Characterizing super-spreading in microblog: An epidemic-based information propagation model

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wang, Bai; Wu, Bin; Shang, Suiming; Zhang, Yunlei; Shi, Chuan

    2016-12-01

    As microblogging services become ever more prominent in the everyday life of users on Online Social Networks (OSNs), hot topics and breaking news can attract wide attention faster than ever before, giving rise to so-called "super-spreading events". In the information diffusion process of these super-spreading events, messages are passed on from one user to another, and numerous individuals are influenced by a relatively small portion of users, a.k.a. super-spreaders. An awareness of super-spreading phenomena and an understanding of the patterns of wide-ranging information propagation benefit several social media data mining tasks, such as hot topic detection, prediction of information propagation, and harmful information monitoring and intervention. Since super-spreading in information diffusion is analogous to super-spreading of a contagious disease, in this study we build a parameterized model, the SAIR model, based on well-known epidemic models, to characterize the super-spreading phenomenon in tweet information propagation accompanied by super-spreaders. To ground the model, empirical observations are carried out on a real-world Weibo dataset. Both a steady-state analysis of the equilibrium and a validation of the proposed model on the real-world Weibo dataset are conducted. The case study that validates the proposed model shows that the SAIR model is much more promising than the conventional SIR model in characterizing a super-spreading event of information propagation. In addition, numerical simulations are carried out and discussed to discover how sensitively the parameters affect the information propagation process.
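
    A minimal sketch of the idea behind such compartment models is given below: an SIR-style system extended with a super-spreader class A, integrated with explicit Euler steps. The transition rates, the role of A and the initial conditions are illustrative assumptions, not the calibrated SAIR model of the paper.

```python
import numpy as np

def sair_step(S, A, I, R, beta=0.3, beta_s=3.0, p=0.05, gamma=0.2, dt=0.1):
    """One Euler step of an SIR-style compartment model extended with a
    super-spreader class A. Rates and the meaning of A are illustrative
    assumptions, not the calibrated SAIR model of the paper."""
    N = S + A + I + R
    new_inf = (beta * I + beta_s * A) * S / N      # force of infection
    dS = -new_inf
    dA = p * new_inf - gamma * A                   # fraction p become super-spreaders
    dI = (1 - p) * new_inf - gamma * I
    dR = gamma * (A + I)
    return S + dt * dS, A + dt * dA, I + dt * dI, R + dt * dR

S, A, I, R = 9999.0, 0.0, 1.0, 0.0
for _ in range(2000):
    S, A, I, R = sair_step(S, A, I, R)
print(round(R))   # final number of users reached by the message
```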

  20. Designing a light fabric metamaterial being highly macroscopically tough under directional extension: first experimental evidence

    NASA Astrophysics Data System (ADS)

    dell'Isola, Francesco; Lekszycki, Tomasz; Pawlikowski, Marek; Grygoruk, Roman; Greco, Leopoldo

    2015-12-01

    In this paper, we study a metamaterial constructed from an isotropic material organized according to a geometric structure which we call a pantographic lattice. This relatively complex fabric, studied with a continuous model (which we call a pantographic sheet) by Rivlin and Pipkin, consists of two families of flexible fibers connected by internal pivots which are, in the reference configuration, orthogonal. A rectangular specimen having one side three times longer than the other is cut at 45° with respect to the fibers in the reference configuration and is subjected to large-deformation plane-extension bias tests imposing a relative displacement of the shorter sides. The continuum model used, the presented numerical models and the extraordinary advancements in 3D-printing technology allowed for the design of some first experiments, whose preliminary results are shown and seem to be rather promising. Experimental evidence shows three distinct deformation regimes. In the first regime, the equilibrium total deformation energy depends quadratically on the relative displacement of the terminal specimen sides: the applied resultant force depends linearly on the relative displacement. In the second regime, the applied force varies nonlinearly with the relative displacement, but the behavior remains elastic. In the third regime, damage phenomena start to occur until total failure, but the exerted resultant force continues to increase and reaches a value up to several times larger than the maximum shown in the linear regime before failure actually occurs. Moreover, the total energy needed to reach structural failure is larger than the maximum stored elastic energy. Finally, the volume occupied by the material in the fabric is a small fraction of the total volume, so that the weight-to-extensional-resistance ratio is very advantageous. The results seem to require a refinement of the theoretical and numerical methods used in order to transform the presented concept into a promising technological prototype.

  1. Optimal design of composite hip implants using NASA technology

    NASA Technical Reports Server (NTRS)

    Blake, T. A.; Saravanos, D. A.; Davy, D. T.; Waters, S. A.; Hopkins, D. A.

    1993-01-01

    Using an adaptation of NASA software, we have investigated the use of numerical optimization techniques for the shape and material optimization of fiber composite hip implants. The original NASA in-house codes were developed for the optimization of aerospace structures. The adapted code, called OPORIM, couples numerical optimization algorithms with finite element analysis and composite laminate theory to perform design optimization using both shape and material design variables. The external and internal geometry of the implant and the surrounding bone is described with quintic spline curves. This geometric representation is then used to create an equivalent 2-D finite element model of the structure. Using laminate theory and the 3-D geometric information, equivalent stiffnesses are generated for each element of the 2-D finite element model, so that the 3-D stiffness of the structure can be approximated. The geometric information used to construct the model of the femur was obtained from a CT scan. A variety of test cases were examined, incorporating several implant constructions and design variable sets. Typically the code was able to produce optimized shape and/or material parameters that substantially reduced stress concentrations in the bone adjacent to the implant. The results indicate that this technology can provide meaningful insight into the design of fiber composite hip implants.

  2. Strongly interacting dynamics beyond the standard model on a space-time lattice.

    PubMed

    Lucini, Biagio

    2010-08-13

    Strong theoretical arguments suggest that the Higgs sector of the standard model of electroweak interactions is an effective low-energy theory, with a more fundamental theory expected to emerge at an energy scale of the order of a teraelectronvolt. One possibility is that the more fundamental theory is strongly interacting and the Higgs sector is given by the low-energy dynamics of the underlying theory. I review recent works aimed at determining observable quantities by numerical simulations of strongly interacting theories proposed in the literature to explain the electroweak symmetry-breaking mechanism. These investigations are based on Monte Carlo simulations of the theory formulated on a space-time lattice. I focus on the so-called minimal walking technicolour scenario, an SU(2) gauge theory with two flavours of fermions in the adjoint representation. The emerging picture is that this theory has an infrared fixed point that dominates the large-distance physics. I shall discuss the first numerical determinations of quantities of phenomenological interest for this theory and analyse future directions of quantitative studies of strongly interacting theories beyond the standard model with lattice techniques. In particular, I report on a finite-size-scaling determination of the chiral condensate anomalous dimension γ, for which 0.05 ≤ γ ≤ 0.25.

  3. A Collaborative Extensible User Environment for Simulation and Knowledge Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Lansing, Carina S.; Porter, Ellen A.

    2015-06-01

    In scientific simulation, scientists use measured data to create numerical models, execute simulations and analyze results from advanced simulators executing on high performance computing platforms. This process usually requires a team of scientists collaborating on data collection, model creation and analysis, and on authorship of publications and data. This paper shows that scientific teams can benefit from a user environment called Akuna that permits subsurface scientists in disparate locations to collaborate on numerical modeling and analysis projects. The Akuna user environment is built on the Velo framework that provides both a rich client environment for conducting and analyzing simulations and a Web environment for data sharing and annotation. Akuna is an extensible toolset that integrates with Velo, and is designed to support any type of simulator. This is achieved through data-driven user interface generation, use of a customizable knowledge management platform, and an extensible framework for simulation execution, monitoring and analysis. This paper describes how the customized Velo content management system and the Akuna toolset are used to integrate and enhance an effective collaborative research and application environment. The extensible architecture of Akuna is also described and demonstrates its usage for creation and execution of a 3D subsurface simulation.

  4. A fast algorithm for forward-modeling of gravitational fields in spherical coordinates with 3D Gauss-Legendre quadrature

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.

    2017-12-01

    Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and highly accurate 3D GLQ integration based on the equivalence of kernel matrices, adaptive discretization and parallelization using OpenMP. The kernel-matrix-equivalence strategy increases efficiency and reduces memory consumption by calculating and storing the repeated elements of each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimized strategies. High-accuracy results can also be guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
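
    The following sketch shows a plain 3-D Gauss-Legendre quadrature of Newton's integral for a single tesseroid, without the kernel-equivalence, adaptive-discretization or OpenMP optimizations described in the abstract; the density, quadrature order and geometry are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

G = 6.674e-11  # m^3 kg^-1 s^-2

def tesseroid_potential(obs, bounds, rho=2670.0, order=4):
    """Gravitational potential of a tesseroid at an external point by 3-D
    Gauss-Legendre quadrature of Newton's integral in spherical coordinates.
    obs    = (r, lat, lon) of the observation point [m, rad, rad]
    bounds = (r1, r2, lat1, lat2, lon1, lon2) of the tesseroid."""
    r, lat, lon = obs
    r1, r2, lat1, lat2, lon1, lon2 = bounds
    x, w = leggauss(order)                      # nodes/weights on [-1, 1]
    # map the 1-D nodes to each integration interval
    rp   = 0.5 * (r2 - r1) * x + 0.5 * (r2 + r1)
    latp = 0.5 * (lat2 - lat1) * x + 0.5 * (lat2 + lat1)
    lonp = 0.5 * (lon2 - lon1) * x + 0.5 * (lon2 + lon1)
    jac = 0.125 * (r2 - r1) * (lat2 - lat1) * (lon2 - lon1)
    V = 0.0
    for i, wi in enumerate(w):
        for j, wj in enumerate(w):
            for k, wk in enumerate(w):
                cospsi = (np.sin(lat) * np.sin(latp[j])
                          + np.cos(lat) * np.cos(latp[j]) * np.cos(lon - lonp[k]))
                ell = np.sqrt(r**2 + rp[i]**2 - 2.0 * r * rp[i] * cospsi)
                V += wi * wj * wk * rp[i]**2 * np.cos(latp[j]) / ell
    return G * rho * jac * V

# observation point 10 km above a 1-degree x 1-degree, 10-km-thick tesseroid
print(tesseroid_potential((6381e3, 0.0, 0.0),
                          (6361e3, 6371e3, -0.00873, 0.00873, -0.00873, 0.00873)))
```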

  5. Prediction Of The Fracture Due To Mannesmann Effect In Tube Piercing

    NASA Astrophysics Data System (ADS)

    Fanini, S.; Ghiotti, A.; Bruschi, S.

    2007-05-01

    The Mannesmann piercing process is a well-known hot rolling process used for seamless tube production. Its special feature is the so-called Mannesmann effect, that is, the formation of a cavity in the center of the cylindrical billet and its propagation along the axis due to the stress state caused by the rolls in the early stages of the process. The cavity is then expanded and its internal diameter sized by an incoming plug. The industrial requirement is to know quite precisely the characteristics of the cavity, especially its location along the billet axis, in order to minimize plug wear and oxidation of the pierced bar. However, scientific knowledge about the fracture mechanism leading to the Mannesmann effect is still limited, even if several theories have been proposed; this gap makes the design and optimization of the process through numerical simulation a challenging task. The aim of this work is therefore to develop a suitably calibrated FE model of the piercing process in its first stage, before the plug arrival, in order to investigate the Mannesmann effect using different damage criteria. Hot tensile tests, capable of reproducing the industrial conditions in terms of temperature, strain rate and stress state, are carried out to investigate the material workability and to determine the parameters of the damage models on specimens machined from continuous-casting steel billets. The calculated parameters are implemented in the numerical model of the process and a sensitivity analysis with respect to the different criteria is carried out, comparing numerical results with non-plug piercing tests conducted in the industrial plant.

  6. A numerical investigation of wave-breaking-induced turbulent coherent structure under a solitary wave

    NASA Astrophysics Data System (ADS)

    Zhou, Zheyu; Sangermano, Jacob; Hsu, Tian-Jian; Ting, Francis C. K.

    2014-10-01

    To better understand the effect of wave-breaking-induced turbulence on the bed, we report a 3-D large-eddy simulation (LES) study of a breaking solitary wave under spilling conditions. Using a turbulence-resolving approach, we study the generation and the fate of wave-breaking-induced turbulent coherent structures, commonly known as obliquely descending eddies (ODEs). Specifically, we focus on how these eddies may impinge onto the bed. The numerical model is implemented using an open-source CFD library of solvers, called OpenFOAM, in which the incompressible 3-D filtered Navier-Stokes equations for the water and air phases are solved with a finite volume scheme. The evolution of the water-air interface is approximated with a volume of fluid method. Using the dynamic Smagorinsky closure, the numerical model has been validated against wave flume experiments of solitary wave breaking over a 1/50 sloping beach. Simulation results show that during the initial overturning of the breaking wave, 2-D horizontal rollers are generated, accelerated, and further evolve into a couple of 3-D hairpin vortices. Some of these vortices are sufficiently intense to impinge onto the bed. These hairpin vortices possess counter-rotating and downburst features, which are key characteristics of ODEs observed in earlier laboratory studies using Particle Image Velocimetry. Model results also suggest that those ODEs that impinge onto the bed can induce strong near-bed turbulence and bottom stress. The intensity and locations of these near-bed turbulent events could not be parameterized by near-surface (or depth-integrated) turbulence except in very shallow depths.

  7. Issues related to the Fermion mass problem

    NASA Astrophysics Data System (ADS)

    Murakowski, Janusz Adam

    1998-09-01

    This thesis is divided into three parts. Each illustrates a different aspect of the fermion mass issue in elementary particle physics. In the first part, the possibility of chiral symmetry breaking in the presence of uniform magnetic and electric fields is investigated. The system is studied nonperturbatively with the use of basis functions compatible with the external field configuration, the parabolic cylinder functions. It is found that chiral symmetry, broken by a uniform magnetic field, is restored by an electric field. The obtained result is nonperturbative in nature: even the tiniest deviation of the electric field from zero restores chiral symmetry. In the second part, heavy quarkonium systems are investigated. To study these systems, a phenomenological nonrelativistic model is built. Approximate solutions to this model are found with the use of a specially designed Pade approximation and by direct numerical integration of the Schrodinger equation. The results are compared with experimental measurements of the respective meson masses. Good agreement between theoretical calculations and experimental results is found. Advantages and shortcomings of the new approximation method are analysed. In the third part, an extension of the standard model of elementary particles is studied. The extension, called the aspon model, was originally introduced to cure the so-called strong CP problem. In addition to fulfilling its original purpose, the aspon model modifies the couplings of the standard model quarks to the Z boson. As a result, the decay rates of the Z boson to quarks are altered. By using the recent precise measurements of the decay rates Z → bb̄ and Z → cc̄, new constraints on the aspon model parameters are found.

  8. SoftWAXS: a computational tool for modeling wide-angle X-ray solution scattering from biomolecules.

    PubMed

    Bardhan, Jaydeep; Park, Sanghyun; Makowski, Lee

    2009-10-01

    This paper describes a computational approach to estimating wide-angle X-ray solution scattering (WAXS) from proteins, which has been implemented in a computer program called SoftWAXS. The accuracy and efficiency of SoftWAXS are analyzed for analytically solvable model problems as well as for proteins. Key features of the approach include a numerical procedure for performing the required spherical averaging and explicit representation of the solute-solvent boundary and the surface of the hydration layer. These features allow the Fourier transform of the excluded volume and hydration layer to be computed directly and with high accuracy. This approach will allow future investigation of different treatments of the electron density in the hydration shell. Numerical results illustrate the differences between this approach to modeling the excluded volume and a widely used model that treats the excluded-volume function as a sum of Gaussians representing the individual atomic excluded volumes. Comparison of the results obtained here with those from explicit-solvent molecular dynamics clarifies shortcomings inherent to the representation of solvent as a time-averaged electron-density profile. In addition, an assessment is made of how the calculated scattering patterns depend on input parameters such as the solute-atom radii, the width of the hydration shell and the hydration-layer contrast. These results suggest that obtaining predictive calculations of high-resolution WAXS patterns may require sophisticated treatments of solvent.
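
    For readers unfamiliar with orientationally averaged solution scattering, the sketch below evaluates the textbook Debye formula for a rigid set of point scatterers in vacuo; it illustrates only the spherical-averaging step and omits the excluded-volume and hydration-layer terms that SoftWAXS represents explicitly, and the coordinates and form factors are dummy values.

```python
import numpy as np

def debye_intensity(coords, f, q):
    """Orientationally averaged scattering intensity via the Debye formula,
        I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij).
    coords : (N, 3) scatterer positions in angstroms
    f      : (N,) scattering factors (taken q-independent here for brevity)
    q      : (M,) scattering vector magnitudes in 1/angstrom"""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    qr = q[:, None, None] * d[None, :, :]
    ff = f[:, None] * f[None, :]
    return (ff * np.sinc(qr / np.pi)).sum(axis=(1, 2))   # np.sinc(x) = sin(pi x)/(pi x)

# three dummy scatterers
xyz = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(debye_intensity(xyz, np.ones(3), np.linspace(0.01, 1.0, 5)))
```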

  9. Catalytic Ignition and Upstream Reaction Propagation in Monolith Reactors

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Dietrich, Daniel L.; Miller, Fletcher J.; T'ien, James S.

    2007-01-01

    Using numerical simulations, this work demonstrates a concept called back-end ignition for lighting-off and pre-heating a catalytic monolith in a power generation system. In this concept, a downstream heat source (e.g. a flame) or resistive heating in the downstream portion of the monolith initiates a localized catalytic reaction which subsequently propagates upstream and heats the entire monolith. The simulations used a transient numerical model of a single catalytic channel which characterizes the behavior of the entire monolith. The model treats both the gas and solid phases and includes detailed homogeneous and heterogeneous reactions. An important parameter in the model for back-end ignition is upstream heat conduction along the solid. The simulations used both dry and wet CO chemistry as a model fuel for the proof-of-concept calculations; the presence of water vapor can trigger homogenous reactions, provided that gas-phase temperatures are adequately high and there is sufficient fuel remaining after surface reactions. With sufficiently high inlet equivalence ratio, back-end ignition occurs using the thermophysical properties of both a ceramic and metal monolith (coated with platinum in both cases), with the heat-up times significantly faster for the metal monolith. For lower equivalence ratios, back-end ignition occurs without upstream propagation. Once light-off and propagation occur, the inlet equivalence ratio could be reduced significantly while still maintaining an ignited monolith as demonstrated by calculations using complete monolith heating.

  10. On dynamics of integrate-and-fire neural networks with conductance based synapses.

    PubMed

    Cessac, Bruno; Viéville, Thierry

    2008-01-01

    We present a mathematical analysis of networks of integrate-and-fire (IF) neurons with conductance-based synapses. Taking into account the realistic fact that the spike time is only known within some finite precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale delta, where delta can be arbitrarily small (in particular, well beyond the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic weights space traditionally called the "edge of chaos", a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the raster plot. This shows that the neural code is entirely "in the spikes" in this case. As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, providing a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky IF models and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed for both extensions.
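
    A minimal sketch of a single leaky integrate-and-fire neuron with a conductance-based input, in which emitted spike times are only registered on a grid of spacing delta (the finite spike-time precision discussed above), is given below; the parameter values and the single-neuron restriction are illustrative assumptions rather than the network model of the paper.

```python
import numpy as np

def run_if_neuron(T=1.0, dt=1e-4, delta=1e-3, tau=0.02, v_th=1.0, v_reset=0.0,
                  g_in=0.8, e_syn=3.0):
    """Single leaky integrate-and-fire neuron with a conductance-based input,
    where spike times are snapped to multiples of delta. Illustrative only."""
    n = int(T / dt)
    v = 0.0
    spikes = []
    for i in range(n):
        # conductance-based drive: the input current depends on the membrane potential
        dv = (-v + g_in * (e_syn - v)) / tau
        v += dt * dv
        if v >= v_th:
            t_spike = i * dt
            spikes.append(delta * round(t_spike / delta))   # snap to the delta grid
            v = v_reset
    return spikes

print(len(run_if_neuron()), "spikes")
```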

  11. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  12. Application of Gaussian beam ray-equivalent model and back-propagation artificial neural network in laser diode fast axis collimator assembly.

    PubMed

    Yu, Hao; Rossi, Giammarco; Braglia, Andrea; Perrone, Guido

    2016-08-10

    The paper presents the development of a tool based on a back-propagation artificial neural network to assist in the accurate positioning of the lenses used to collimate the beam from semiconductor laser diodes along the so-called fast axis. After training using a Gaussian beam ray-equivalent model, the network is capable of indicating the tilt, decenter, and defocus of such lenses from the measured field distribution, so the operator can determine the errors with respect to the actual lens position and optimize the diode assembly procedure. An experimental validation using a typical configuration exploited in multi-emitter diode module assembly and fast axis collimating lenses with different focal lengths and numerical apertures is reported.

  13. Electric field computation and measurements in the electroporation of inhomogeneous samples

    NASA Astrophysics Data System (ADS)

    Bernardis, Alessia; Bullo, Marco; Campana, Luca Giovanni; Di Barba, Paolo; Dughiero, Fabrizio; Forzan, Michele; Mognaschi, Maria Evelina; Sgarbossa, Paolo; Sieni, Elisabetta

    2017-12-01

    In clinical treatments of a class of tumors, e.g. skin tumors, the drug uptake of tumor tissue is enhanced by means of a pulsed electric field, which permeabilizes the cell membranes. This technique, called electroporation, exploits the conductivity of the tissues; however, the tumor tissue can contain inhomogeneous areas, potentially causing a non-uniform distribution of current. In this paper, the authors propose a field model to predict the effect of tissue inhomogeneity, which can affect the current density distribution. In particular, finite-element simulations considering a non-linear conductivity-versus-field relationship are developed. Measurements on a set of samples subject to controlled inhomogeneity make it possible to assess the numerical model with a view to identifying the equivalent resistance between pairs of electrodes.

  14. Investigation occurrences of turing pattern in Schnakenberg and Gierer-Meinhardt equation

    NASA Astrophysics Data System (ADS)

    Nurahmi, Annisa Fitri; Putra, Prama Setia; Nuraini, Nuning

    2018-03-01

    Several types of animals display unusual, varied patterns on their skin, governed by the animal's skin pigmentation system. On the theoretical side, in the 1950s Alan Turing formulated the mathematical theory of morphogenesis, in which reaction-diffusion models can give rise to spatial patterns, so-called Turing patterns. This research discusses the identification of Turing-type models that can produce animal skin patterns. Investigations were conducted on two equations: Schnakenberg (1979) and Gierer-Meinhardt (1972). Parameters were explored to produce Turing patterns in both equations. The numerical simulations in this research were performed using homogeneous Neumann and homogeneous Dirichlet boundary conditions. The investigation of the Schnakenberg equation yielded poison dart frog (Andinobates dorisswansonae) and ladybird (Coccinellidae septempunctata) patterns, while fish skin patterns were produced by the Gierer-Meinhardt equation.
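
    A minimal sketch of the kind of simulation described above is given below: an explicit finite-difference integration of the Schnakenberg system with zero-flux (homogeneous Neumann) boundaries. The parameter values are a common Turing-unstable choice assumed for illustration, not necessarily those used in the paper.

```python
import numpy as np

def schnakenberg(n=100, steps=40000, dt=0.005, a=0.1, b=0.9, d_u=1.0, d_v=40.0):
    """Explicit finite-difference simulation of the Schnakenberg system
        u_t = d_u * lap(u) + a - u + u^2 v
        v_t = d_v * lap(v) + b - u^2 v
    on a square grid (dx = 1) with zero-flux boundaries."""
    rng = np.random.default_rng(1)
    u = (a + b) + 0.01 * rng.standard_normal((n, n))
    v = b / (a + b) ** 2 + 0.01 * rng.standard_normal((n, n))

    def lap(z):   # 5-point Laplacian with zero-flux boundaries (edge padding)
        zp = np.pad(z, 1, mode="edge")
        return zp[:-2, 1:-1] + zp[2:, 1:-1] + zp[1:-1, :-2] + zp[1:-1, 2:] - 4 * z

    for _ in range(steps):
        uvv = u * u * v
        u, v = (u + dt * (d_u * lap(u) + a - u + uvv),
                v + dt * (d_v * lap(v) + b - uvv))
    return u

pattern = schnakenberg()
print(pattern.min(), pattern.max())   # spatial contrast indicates a Turing pattern
```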

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova, T; Carlsen, T

    We present a study, based on simulations with SERDYCA, a spatially-explicit individual-based model of rodent dynamics, on the relation between population persistence and the presence of numerous isolated disturbances in the habitat. We are specifically interested in the effect of disturbances that do not fragment the environment on population persistence. Our results suggest that the presence of disturbances in the absence of fragmentation can actually increase the average time to extinction of the modeled population. The presence of disturbances decreases population density but can increase the chance for mating in monogamous species and consequently, the ratio of juveniles in the population. It thus provides a better chance for the population to restore itself after a severe period with critically low population density. We call this the ''disturbance-forced localization effect''.

  16. Cavity radiation model for solar central receivers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipps, F.W.

    1981-01-01

    The Energy Laboratory of the University of Houston has developed a computer simulation program called CREAM (i.e., Cavity Radiations Exchange Analysis Model) for application to the solar central receiver system. The zone generating capability of CREAM has been used in several solar re-powering studies. CREAM contains a geometric configuration factor generator based on Nusselt's method. A formulation of Nusselt's method provides support for the FORTRAN subroutine NUSSELT. Numerical results from NUSSELT are compared to analytic values and to values from Sparrow's method. Sparrow's method is based on a double contour integral and its reduction to a single integral, which is approximated by Gaussian methods. Nusselt's method is adequate for the intended engineering applications, but Sparrow's method is found to be an order of magnitude more efficient in many situations.

  17. Spread of epidemic disease on networks

    NASA Astrophysics Data System (ADS)

    Newman, M. E.

    2002-07-01

    The study of social networks, and in particular the spread of disease on networks, has attracted considerable recent attention in the physics community. In this paper, we show that a large class of standard epidemiological models, the so-called susceptible/infective/removed (SIR) models, can be solved exactly on a wide variety of networks. In addition to the standard but unrealistic case of fixed infectiveness time and fixed and uncorrelated probability of transmission between all pairs of individuals, we solve cases in which times and probabilities are nonuniform and correlated. We also consider one simple case of an epidemic in a structured population, that of a sexually transmitted disease in a population divided into men and women. We confirm the correctness of our exact solutions with numerical simulations of SIR epidemics on networks.
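
    As a rough illustration of SIR dynamics on a network, the sketch below runs a discrete-time Monte Carlo outbreak on an Erdős–Rényi random graph with a uniform transmission probability per edge; this is a simulation in the spirit of the bond-percolation picture behind the exact results, not the analytic solution of the paper, and the graph size and transmissibility are illustrative assumptions.

```python
import numpy as np

def sir_on_network(n=2000, p_edge=0.003, t_inf=0.3, seed=3):
    """Discrete-time SIR outbreak on an Erdos-Renyi random graph: each
    infective node transmits independently along each edge with probability
    t_inf, then recovers after one step."""
    rng = np.random.default_rng(seed)
    adj = rng.random((n, n)) < p_edge
    adj = np.triu(adj, 1)
    adj = adj | adj.T                          # symmetric adjacency, no self-loops
    status = np.zeros(n, dtype=int)            # 0 = S, 1 = I, 2 = R
    status[rng.integers(n)] = 1
    while (status == 1).any():
        infective = np.where(status == 1)[0]
        for i in infective:
            contacts = np.where(adj[i] & (status == 0))[0]
            transmit = contacts[rng.random(contacts.size) < t_inf]
            status[transmit] = 1
        status[infective] = 2                  # recover after one step
    return (status == 2).sum()                 # final outbreak size

print(sir_on_network())
```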

  18. Mode locking and quasiperiodicity in a discrete-time Chialvo neuron model

    NASA Astrophysics Data System (ADS)

    Wang, Fengjuan; Cao, Hongjun

    2018-03-01

    The two-dimensional parameter spaces of a discrete-time Chialvo neuron model are investigated. Our studies demonstrate that for all our choices of the two parameters (i) the fixed point is destabilized via a Neimark-Sacker bifurcation; and (ii) there exist mode-locking structures, such as Arnold tongues and shrimps, with periods organized in a Farey tree sequence, embedded in the quasiperiodic/chaotic region. We determine analytically the location of the parameter sets where the Neimark-Sacker bifurcation occurs, and the location on this curve where Arnold tongues of arbitrary period are born. Properties of the transition from quasiperiodicity to chaos along the so-called two-torus are presented and supported by numerical simulations, such as bifurcation diagrams and largest-Lyapunov-exponent diagrams computed in MATLAB and C++.
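
    The sketch below iterates the commonly cited two-dimensional form of the Chialvo map and counts the number of distinct values visited after a transient, which is one simple way to classify an orbit as mode-locked (periodic) or quasiperiodic/chaotic when scanning a parameter plane; the parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

def chialvo_orbit(a=0.89, b=0.6, c=0.28, k=0.03, n=5000, n_skip=1000):
    """Iterates the commonly cited form of the Chialvo neuron map,
        x_{n+1} = x_n^2 * exp(y_n - x_n) + k
        y_{n+1} = a*y_n - b*x_n + c,
    discarding a transient of n_skip steps."""
    x, y = 0.5, 0.5
    xs = []
    for i in range(n):
        x, y = x * x * np.exp(y - x) + k, a * y - b * x + c
        if i >= n_skip:
            xs.append(x)
    return np.array(xs)

orbit = chialvo_orbit()
# a small count of distinct values suggests a mode-locked (periodic) orbit
print(len(np.unique(orbit.round(6))))
```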

  19. Model reduction by weighted Component Cost Analysis

    NASA Technical Reports Server (NTRS)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful to compute the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.
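
    A hedged sketch of one common formulation of component costs for a white-noise-driven linear system is given below: the state covariance is obtained from a Lyapunov equation and the diagonal of Q X is read off as the per-component contribution to the quadratic cost. This generic version, and the example system, are assumptions for illustration and do not include the weighting or modal-cost extensions of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def component_costs(A, D, Q, W=None):
    """For a stable linear system x' = A x + D w driven by white noise w with
    intensity W, solve A X + X A' + D W D' = 0 for the state covariance X,
    take V = trace(Q X) as the total quadratic cost, and attribute the
    diagonal entries of Q X to the individual state components. States with
    the smallest costs are candidates for deletion in a reduced model."""
    if W is None:
        W = np.eye(D.shape[1])
    X = solve_continuous_lyapunov(A, -D @ W @ D.T)
    return np.diag(Q @ X)

# lightly damped two-mode example: the first mode carries most of the cost
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.diag([-1.0, -25.0]), np.diag([-0.02, -0.1])]])
D = np.array([[0.0], [0.0], [1.0], [1.0]])
Q = np.eye(4)
print(component_costs(A, D, Q))
```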

  20. Fates of the most massive primordial stars

    NASA Astrophysics Data System (ADS)

    Chen, Ke-Jung; Heger, Alexander; Almgren, Ann; Woosley, Stan

    2012-09-01

    We present results of numerical simulations of the most massive primordial stars. Extremely massive non-rotating Pop III stars over 300 Msolar would simply die as black holes, but Pop III stars with initial masses of 140-260 Msolar may have died as gigantic explosions called pair-instability supernovae (PSNe). We use a new radiation-hydrodynamics code, CASTRO, to study the evolution of PSNe. Our models follow the entire explosive burning and the explosion until the shock breaks out from the stellar surface. In our simulations, we find that fluid instabilities occur during the explosion, driven by both nuclear burning and hydrodynamic instability. In the red supergiant models, fluid instabilities can lead to significant mixing of the supernova ejecta and alter the observational signature.

  1. Mechanical characterization and modeling of sponge-reinforced hydrogel composites under compression.

    PubMed

    Wu, Lei; Mao, Guoyong; Nian, Guodong; Xiang, Yuhai; Qian, Jin; Qu, Shaoxing

    2018-05-30

    Load-bearing applications of hydrogels call for materials with excellent mechanical properties. Despite the considerable progress in developing tough hydrogels, there is still a requirement to prepare high-performance hydrogels using simple strategies. In this paper, a sponge-reinforced hydrogel composite is synthesized by combining poly(acrylamide) (PAAm) hydrogel and polyurethane (PU) sponge. Uniaxial compressive testing of the hydrogel composites reveals that both the compressive modulus and the strength of the hydrogel composites are much higher than those of the PAAm hydrogel or sponge. In order to predict the compressive modulus of the hydrogel composite, we develop a theoretical model that is validated by experiments and numerical simulations. The present work may guide the design and manufacture of hydrogel-based composite materials, especially for biomaterial scaffolds and soft transducers.

  2. Numerical Modeling of Medium Term Morphological Changes at Manavgat River Mouth Due to Combined Action of Waves and River Discharges

    NASA Astrophysics Data System (ADS)

    Demirci, E.; Baykal, C.; Guler, I.

    2016-12-01

    In this study, hydrodynamic conditions due to river discharge, wave action and sea level fluctuations over a seven-month period, and the morphological response of the Manavgat river mouth, are modeled with XBeach, a two-dimensional depth-averaged (2DH) numerical model developed to compute the natural coastal response during time-varying storm and hurricane conditions (Roelvink et al., 2010). The study area shows an active nearshore morphology; thus, two jetties were constructed at the river mouth between 1996 and 2000. Recently, Demirci et al. (2016) studied the impacts of an excess river discharge and concurrent wave action and tidal fluctuations on the Manavgat river mouth morphology over a duration of 12 days (December 4th to 15th, 1998), while the construction of the jetties was being carried out, and concluded that XBeach reproduced the final morphology fairly well with the calibrated set of input parameters. Here, the river mouth is modeled at an earlier date, before the construction of the jetties, with a similar set of input parameters (August 1st, 1995 to March 8th, 1996), to reveal the drastic morphologic change near the mouth due to high river discharge and severe storms occurring over a longer period of time. The wave climate effect is determined with the wave hindcasting model W61, developed by Middle East Technical University-OERC, using NCEP-CFSR wind data as well as sea level data. River discharge, wave and sea level data are introduced as input parameters in the XBeach numerical model, and the final computed morphological change is compared with the final bed level measurements. References: Demirci, E., Baykal, C., Guler, I., Ergin, A., & Sogut, E. (postponed). Numerical Modelling on Hydrodynamic Flow Conditions and Morphological Changes Using XBeach Near Manavgat River Mouth. Accepted as oral presentation at the 35th Int. Conf. on Coastal Eng., Istanbul, Turkey. Guler, I., Ergin, A., Yalçıner, A. C. (2003). Monitoring Sediment Transport Processes at Manavgat River Mouth, Antalya, Turkey. COPEDEC VI, 2003, Colombo, Sri Lanka. Roelvink, D., Reniers, A., van Dongeren, A., van Thiel de Vries, J., Lescinski, J. and McCall, R. (2010). XBeach Model Description and Manual. Unesco-IHE Institute for Water Education, Deltares and Delft Univ. of Technology. Report, June 21, 2010, version 6.

  3. Numerical emulation of Thru-Reflection-Line calibration for the de-embedding of Surface Acoustic Wave devices.

    PubMed

    Mencarelli, D; Djafari-Rouhani, B; Pennec, Y; Pitanti, A; Zanotto, S; Stocchi, M; Pierantoni, L

    2018-06-18

    In this contribution, a rigorous numerical calibration is proposed to characterize the excitation of propagating mechanical waves by interdigitated transducers (IDTs). The transition from IDT terminals to phonon waveguides is modeled by means of a general circuit representation that makes use of the Scattering Matrix (SM) formalism. In particular, the three-step calibration approach called Thru-Reflection-Line (TRL), which is a well-established technique in microwave engineering, has been successfully applied to emulate typical experimental conditions. The proposed procedure is suitable for the synthesis and optimization of surface-acoustic-wave (SAW) based devices: the TRL calibration makes it possible to extract/de-embed the acoustic component, namely a resonator or filter, from the outer IDT structure, regardless of the complexity and size of the latter. We report, as a result, the hybrid scattering parameters of the IDT transition to a mechanical waveguide formed by a phononic crystal patterned on a piezoelectric AlN membrane, where the effect of a discontinuity from a periodic to a uniform mechanical waveguide is also characterized. In addition, to ensure the correctness of our numerical calculations, the proposed method has been validated by independent calculations.

  4. Free Vibration Analysis of DWCNTs Using CDM and Rayleigh-Schmidt Based on Nonlocal Euler-Bernoulli Beam Theory

    PubMed Central

    2014-01-01

    The free vibration response of double-walled carbon nanotubes (DWCNTs) is investigated. The DWCNTs are modelled as two beams, interacting between them through the van der Waals forces, and the nonlocal Euler-Bernoulli beam theory is used. The governing equations of motion are derived using a variational approach and the free frequencies of vibrations are obtained employing two different approaches. In the first method, the two double-walled carbon nanotubes are discretized by means of the so-called “cell discretization method” (CDM) in which each nanotube is reduced to a set of rigid bars linked together by elastic cells. The resulting discrete system takes into account nonlocal effects, constraint elasticities, and the van der Waals forces. The second proposed approach, belonging to the semianalytical methods, is an optimized version of the classical Rayleigh quotient, as proposed originally by Schmidt. The resulting conditions are solved numerically. Numerical examples end the paper, in which the two approaches give lower-upper bounds to the true values, and some comparisons with existing results are offered. Comparisons of the present numerical results with those from the open literature show an excellent agreement. PMID:24715807

  5. Effects of symbol type and numerical distance on the human event-related potential.

    PubMed

    Jiang, Ting; Qiao, Sibing; Li, Jin; Cao, Zhongyu; Gao, Xuefei; Song, Yan; Xue, Gui; Dong, Qi; Chen, Chuansheng

    2010-01-01

    This study investigated the influence of the symbol type and numerical distance of numbers on the amplitudes and peak latencies of event-related potentials (ERPs). Our aim was to (1) determine the point in time of magnitude information access in visual number processing; and (2) identify at what stage the advantage of Arabic digits over Chinese verbal numbers occur. ERPs were recorded from 64 scalp sites while subjects (n=26) performed a classification task. Results showed that larger ERP amplitudes were elicited by numbers with distance-close condition in comparison to distance-far condition in the VPP component over centro-frontal sites. Furthermore, the VPP latency varied as a function of the symbol type, but the N170 did not. Such results demonstrate that magnitude information access takes place as early as 150 ms after onset of visual number stimuli and the advantage of Arabic digits over verbal numbers should be localized to the VPP component. We establish the VPP component as a critical ERP component to report in studies of numerical cognition and our results call into question the N170/VPP association hypothesis and the serial-stage model of visual number comparison processing.

  6. Lagrangian motion, coherent structures, and lines of persistent material strain.

    PubMed

    Samelson, R M

    2013-01-01

    Lagrangian motion in geophysical fluids may be strongly influenced by coherent structures that support distinct regimes in a given flow. The problems of identifying and demarcating Lagrangian regime boundaries associated with dynamical coherent structures in a given velocity field can be studied using approaches originally developed in the context of the abstract geometric theory of ordinary differential equations. An essential insight is that when coherent structures exist in a flow, Lagrangian regime boundaries may often be indicated as material curves on which the Lagrangian-mean principal-axis strain is large. This insight is the foundation of many numerical techniques for identifying such features in complex observed or numerically simulated ocean flows. The basic theoretical ideas are illustrated with a simple, kinematic traveling-wave model. The corresponding numerical algorithms for identifying candidate Lagrangian regime boundaries and lines of principal Lagrangian strain (also called Lagrangian coherent structures) are divided into parcel and bundle schemes; the latter include the finite-time and finite-size Lyapunov exponent/Lagrangian strain (FTLE/FTLS and FSLE/FSLS) metrics. Some aspects and results of oceanographic studies based on these approaches are reviewed, and the results are discussed in the context of oceanographic observations of dynamical coherent structures.
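
    The sketch below computes a finite-time Lyapunov exponent (FTLE) field for the standard kinematic double-gyre flow, used here as a stand-in for the paper's traveling-wave model; ridges of the field mark candidate Lagrangian coherent structures. The flow, integration time and grid are illustrative assumptions.

```python
import numpy as np

def velocity(x, y, t, A=0.1, eps=0.25, om=2 * np.pi / 10):
    """Kinematic double-gyre velocity field -- a standard test flow."""
    a = eps * np.sin(om * t)
    b = 1 - 2 * eps * np.sin(om * t)
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def ftle(nx=201, ny=101, T=15.0, dt=0.05):
    """FTLE field: advect a grid of tracers, form the flow-map gradient by
    finite differences, and take the largest Cauchy-Green eigenvalue."""
    x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
    xt, yt = x.copy(), y.copy()
    for k in range(int(T / dt)):                   # forward Euler advection
        u, v = velocity(xt, yt, k * dt)
        xt, yt = xt + dt * u, yt + dt * v
    dxdx = np.gradient(xt, axis=1) / np.gradient(x, axis=1)
    dxdy = np.gradient(xt, axis=0) / np.gradient(y, axis=0)
    dydx = np.gradient(yt, axis=1) / np.gradient(x, axis=1)
    dydy = np.gradient(yt, axis=0) / np.gradient(y, axis=0)
    # largest eigenvalue of the Cauchy-Green tensor C = F^T F
    c11 = dxdx**2 + dydx**2
    c12 = dxdx * dxdy + dydx * dydy
    c22 = dxdy**2 + dydy**2
    lam = 0.5 * (c11 + c22 + np.sqrt((c11 - c22) ** 2 + 4 * c12**2))
    return np.log(np.sqrt(lam)) / T

print(ftle().max())   # peak stretching rate along candidate LCS ridges
```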

  7. Multimodal visualization interface for data management, self-learning and data presentation.

    PubMed

    Van Sint Jan, S; Demondion, X; Clapworthy, G; Louryan, S; Rooze, M; Cotten, A; Viceconti, M

    2006-10-01

    A multimodal visualization software, called the Data Manager (DM), has been developed to increase interdisciplinary communication around the topic of visualization and modeling of various aspects of the human anatomy. Numerous tools used in Radiology are integrated in the interface that runs on standard personal computers. The available tools, combined to hierarchical data management and custom layouts, allow analyzing of medical imaging data using advanced features outside radiological premises (for example, for patient review, conference presentation or tutorial preparation). The system is free, and based on an open-source software development architecture, and therefore updates of the system for custom applications are possible.

  8. Piezothermal effect in a spinning gas

    DOE PAGES

    Geyko, V. I.; Fisch, N. J.

    2016-10-13

    A spinning gas, heated adiabatically through axial compression, is known to exhibit a rotation-dependent heat capacity. However, as equilibrium is approached, an effect is identified here wherein the temperature does not grow homogeneously in the radial direction, but develops a temperature differential with the hottest region on axis, at the maximum of the centrifugal potential energy. This phenomenon, which we call a piezothermal effect, is shown to grow bilinearly with the compression rate and the amplitude of the potential. Numerical simulations confirm a simple model of this effect, which can be generalized to other forms of potential energy and methods of heating.

  9. On the tumbling toast problem

    NASA Astrophysics Data System (ADS)

    Borghi, Riccardo

    2012-09-01

    A didactical revisitation of the so-called tumbling toast problem is presented here. The numerical solution of the related Newton's equations has been found in the space domain, without resorting to the complete time-based law of motion, with a considerable reduction of the mathematical complexity of the problem. This could allow the effect of the different physical mechanisms ruling the overall dynamics to be appreciated in a more transparent way, even by undergraduates. Moreover, the availability from the literature of experimental investigations carried out on tumbling toast allows us to propose different theoretical models of growing complexity in order to show the corresponding improvement of the agreement between theory and observation.

  10. Analysis of pressure-flow data in terms of computer-derived urethral resistance parameters.

    PubMed

    van Mastrigt, R; Kranse, M

    1995-01-01

    The simultaneous measurement of detrusor pressure and flow rate during voiding is at present the only way to measure or grade infravesical obstruction objectively. Numerous methods have been introduced to analyze the resulting data. These methods differ in aim (measurement of urethral resistance and/or diagnosis of obstruction), method (manual versus computerized data processing), theory or model used, and resolution (continuously variable parameters or a limited number of classes, the so-called nomogram). In this paper, some aspects of these fundamental differences are discussed and illustrated. Subsequently, the properties and clinical performance of two computer-based methods for deriving continuous urethral resistance parameters are treated.

  11. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

    The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and the cost of a typical simulation is split roughly 50% in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation, the idea being to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
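
    The sketch below illustrates the emulation workflow on a toy problem: a fast statistical model is trained offline to reproduce the output of a (here, artificial) expensive "physics" function and then replaces it at prediction time. The stand-in function, the network architecture and the sample sizes are illustrative assumptions, not the Met Office's actual scheme.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_physics(X):
    """Toy stand-in for an expensive sub-grid parameterization: a nonlinear
    function of a few column inputs. Illustrative only."""
    t, q, p = X.T                                  # temperature, humidity, pressure proxies
    return np.sin(t) * q + 0.1 * np.sqrt(p) * q**2

X_train = rng.uniform(0, 1, size=(5000, 3))
y_train = expensive_physics(X_train)

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
emulator.fit(X_train, y_train)                     # offline training phase

X_test = rng.uniform(0, 1, size=(1000, 3))
err = np.abs(emulator.predict(X_test) - expensive_physics(X_test)).mean()
print(f"mean emulation error: {err:.4f}")          # fast surrogate replaces the slow call
```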

  12. SIM_EXPLORE: Software for Directed Exploration of Complex Systems

    NASA Technical Reports Server (NTRS)

    Burl, Michael; Wang, Esther; Enke, Brian; Merline, William J.

    2013-01-01

    Physics-based numerical simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. While such codes may provide the highest- fidelity representation of system behavior, they are often so slow to run that insight into the system is limited. Trying to understand the effects of inputs on outputs by conducting an exhaustive grid-based sweep over the input parameter space is simply too time-consuming. An alternative approach called "directed exploration" has been developed to harvest information from numerical simulators more efficiently. The basic idea is to employ active learning and supervised machine learning to choose cleverly at each step which simulation trials to run next based on the results of previous trials. SIM_EXPLORE is a new computer program that uses directed exploration to explore efficiently complex systems represented by numerical simulations. The software sequentially identifies and runs simulation trials that it believes will be most informative given the results of previous trials. The results of new trials are incorporated into the software's model of the system behavior. The updated model is then used to pick the next round of new trials. This process, implemented as a closed-loop system wrapped around existing simulation code, provides a means to improve the speed and efficiency with which a set of simulations can yield scientifically useful results. The software focuses on the case in which the feedback from the simulation trials is binary-valued, i.e., the learner is only informed of the success or failure of the simulation trial to produce a desired output. The software offers a number of choices for the supervised learning algorithm (the method used to model the system behavior given the results so far) and a number of choices for the active learning strategy (the method used to choose which new simulation trials to run given the current behavior model). The software also makes use of the LEGION distributed computing framework to leverage the power of a set of compute nodes. The approach has been demonstrated on a planetary science application in which numerical simulations are used to study the formation of asteroid families.
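
    A minimal sketch of a directed-exploration loop with binary feedback is given below: a classifier is refitted after each result and the next trial is chosen where the predicted success probability is closest to 0.5 (uncertainty sampling). The toy simulator, the choice of classifier and the acquisition rule are illustrative assumptions, not the SIM_EXPLORE implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulation_trial(x):
    """Stand-in for a slow simulator with a binary outcome (success/failure).
    The true success region is unknown to the learner."""
    return int(x[0] ** 2 + x[1] ** 2 < 0.5)

# initial design: two seeded points (one success, one failure) plus random trials
X = np.vstack([np.array([[0.0, 0.0], [0.9, 0.9]]), rng.uniform(-1, 1, size=(8, 2))])
y = np.array([simulation_trial(x) for x in X])

for step in range(40):
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    candidates = rng.uniform(-1, 1, size=(500, 2))
    p = model.predict_proba(candidates)[:, 1]
    x_next = candidates[np.argmin(np.abs(p - 0.5))]     # most informative trial
    X = np.vstack([X, x_next])
    y = np.append(y, simulation_trial(x_next))

print(f"{y.sum()} successes out of {len(y)} trials")
```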

  13. Cyberdyn supercomputer - a tool for imaging geodinamic processes

    NASA Astrophysics Data System (ADS)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes developing within the deep interior of our planet, but with significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and for gaining deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department of the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible was jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted about three years, ending in October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, employing only a fraction (20%) of the computing power. After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the tectonic and geodynamic evolution of the East Carpathians, including the Neogene magmatic activity and the intriguing intermediate-depth seismicity within the so-called Vrancea zone. The CFD code used for numerical modelling is CitcomS, a widely employed open source package specifically developed for earth sciences. Several preliminary 3D geodynamic models simulating an assumed subduction or the effect of a mantle plume will be presented and discussed.

  14. Large Black Holes in the Randall-Sundrum II Model

    NASA Astrophysics Data System (ADS)

    Yaghoobpour Tari, Shima

    The Einstein equation with a negative cosmological constant Λ in five dimensions for the Randall-Sundrum II model, which includes a black hole, has been solved numerically. We have constructed an AdS5-CFT4 solution numerically, using a spectral method to minimize the integral of the square of the error of the Einstein equation, with 210 parameters to be determined by optimization. This metric is conformal to the Schwarzschild metric at an AdS5 boundary with an infinite scale factor, so we consider this solution to be an infinite-mass black hole solution. We have rewritten the infinite-mass black hole in the Fefferman-Graham form and obtained the numerical components of the CFT energy-momentum tensor. Using them, we have perturbed the metric to relocate the brane from infinity and obtained a large static black hole solution for the Randall-Sundrum II model. The changes of mass, entropy, temperature and horizon area of the large black hole relative to the Schwarzschild metric are studied to first order in the perturbation parameter 1/(-Λ5 M^2). The Hawking temperature and entropy of our large black hole have the same values as for the Schwarzschild metric with the same mass, but the horizon area is increased by about 4.7/(-Λ5). Figueras, Lucietti, and Wiseman found an AdS5-CFT4 solution using an independent method, different from ours, called the Ricci-DeTurck-flow method. Figueras and Wiseman then perturbed this solution in the same way as we did and obtained the solution for the large black hole in the Randall-Sundrum II model. These two numerical solutions are the first mathematical proofs of the existence of a large black hole in the Randall-Sundrum II model. We have compared their results with ours for the CFT energy-momentum tensor components and the perturbed metric, and have shown that the results are closely in agreement, which can be considered evidence that the solution for the large black hole in the Randall-Sundrum II model exists.

  15. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.

  16. Mesoscale Computational Investigation of Shocked Heterogeneous Materials with Application to Large Impact Craters

    NASA Technical Reports Server (NTRS)

    Crawford, D. A.; Barnouin-Jha, O. S.; Cintala, M. J.

    2003-01-01

    The propagation of shock waves through target materials is strongly influenced by the presence of small-scale structure, fractures, and physical and chemical heterogeneities. Pre-existing fractures often create craters that appear square in outline (e.g., Meteor Crater). Reverberations behind the shock caused by physical heterogeneity have been proposed as a mechanism for transient weakening of target materials. Pre-existing fractures can also affect melt generation. In this study, we are attempting to bridge the gap in numerical modeling between the micro-scale and the continuum, the so-called meso-scale. To accomplish this, we are developing a methodology for the shock physics hydrocode CTH that uses Monte-Carlo-type methods to investigate the shock properties of heterogeneous materials. By comparing the results of numerical experiments at the micro-scale with experimental results, and by using statistical techniques to evaluate the performance of simple constitutive models, we hope to embed the effect of physical heterogeneity into the field variables (pressure, stress, density, velocity), allowing us to imprint the effects of micro-scale heterogeneity directly at the continuum level without incurring high computational cost.

  17. Experimental and numerical research on the aerodynamics of unsteady moving aircraft

    NASA Astrophysics Data System (ADS)

    Bergmann, Andreas; Huebner, Andreas; Loeser, Thomas

    2008-02-01

    For the experimental determination of dynamic wind tunnel data, a new combined motion test capability was developed at the German-Dutch Wind Tunnels DNW for the 3 m Low Speed Wind Tunnel NWB in Braunschweig, Germany, using a unique six-degree-of-freedom test rig called the 'Model Positioning Mechanism' (MPM), an improved successor to the older systems. With this device, several transport aircraft configurations, including a blended wing body configuration, were tested in different modes of oscillatory motion (roll, pitch and yaw), as were delta-wing geometries such as the X-31 equipped with remote-controlled rudders and flaps, in order to simulate realistic flight maneuvers such as the Dutch roll. This paper describes the motivation behind these tests and the test setup, and gives a short introduction to time-accurate maneuver-testing capabilities incorporating models with remote-controlled control surfaces. Furthermore, the adaptation of numerical methods for the prediction of dynamic derivatives is described and some examples with the DLR-F12 configuration are given. The calculations are based on RANS solutions using a parallel finite-volume solution algorithm with an unstructured discretization concept (the DLR TAU code).

  18. Numerical prediction of flow induced fibers orientation in injection molded polymer composites

    NASA Astrophysics Data System (ADS)

    Oumer, A. N.; Hamidi, N. M.; Mat Sahat, I.

    2015-12-01

    Since the filling stage of the injection molding process has an important effect on the orientation state of the fibers, accurate analysis of the flow field during mold filling is a necessity. The aim of this paper is to characterize the flow-induced orientation state of short fibers in injection molding cavities. A dog-bone shaped model is considered for the simulation and experiment. The numerical model for determining the fiber orientation during the mold-filling stage of the injection molding process was solved using the Computational Fluid Dynamics (CFD) software MoldFlow. Both the simulation and experimental results showed that two different regions (or three layers of orientation structures) can be found across the thickness of the specimen: a shell region near the mold cavity wall, and a core region at the middle of the cross section. The simulation results support the experimental observation that for thin plates the probability of fiber alignment with the flow direction is high near the mold cavity walls but low in the core region. The results of this study could assist decisions regarding short fiber reinforced polymer composites.

  19. Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Köcher, S. S.; Institute of Energy and Climate Research; Heydenreich, T.

    Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.

  20. Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.

    PubMed

    Ottino-Löffler, Bertrand; Strogatz, Steven H

    2016-06-01

    We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005), doi:10.1103/PhysRevE.72.046211]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics.
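
    As a rough illustration of the quantity being studied, the finite-N locking threshold can also be estimated by direct simulation: integrate the mean-field Kuramoto equations from an in-phase initial condition and bisect on the half-width of the frequency interval. The sketch below is a hedged numerical check only (K = 1, N = 50, the midpoint-rule spacing and all tolerances are illustrative choices), not the paper's asymptotic analysis.

    ```python
    # Numerical check of phase locking in the finite-N Kuramoto model with
    # evenly spaced natural frequencies (hedged sketch; parameters illustrative).
    import numpy as np
    from scipy.integrate import solve_ivp

    K, N = 1.0, 50

    def is_locked(gamma, t_end=400.0):
        """True if all oscillators settle to a common frequency for half-width gamma."""
        omega = gamma * (2 * np.arange(N) + 1 - N) / N     # midpoint-rule frequencies on [-gamma, gamma]
        rhs = lambda t, th: omega + (K / N) * np.sum(np.sin(th[None, :] - th[:, None]), axis=1)
        sol = solve_ivp(rhs, (0.0, t_end), np.zeros(N),
                        t_eval=[t_end - 50.0, t_end], rtol=1e-6)
        freqs = (sol.y[:, 1] - sol.y[:, 0]) / 50.0          # mean frequency over the last 50 time units
        return freqs.max() - freqs.min() < 1e-3

    # Bisect on the half-width gamma to estimate the finite-N locking threshold.
    lo, hi = 0.1, 1.5
    for _ in range(15):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if is_locked(mid) else (lo, mid)
    print(f"estimated locking half-width for N={N}: {0.5 * (lo + hi):.4f}")
    ```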

  1. Fuzzy Adaptive Cubature Kalman Filter for Integrated Navigation Systems.

    PubMed

    Tseng, Chien-Hao; Lin, Sheng-Fuu; Jwo, Dah-Jing

    2016-07-26

    This paper presents a sensor fusion method based on the combination of the cubature Kalman filter (CKF) and a fuzzy logic adaptive system (FLAS) for integrated navigation systems, such as GPS/INS (Global Positioning System/inertial navigation system) integration. The third-degree spherical-radial cubature rule applied in the CKF is employed to avoid numerical instability in the system model. In navigation integration, the performance of nonlinear-filter-based estimation of the position and velocity states may severely degrade owing to modeling errors caused by dynamic uncertainties of the vehicle. In order to overcome the shortcoming of selecting the process noise covariance through personal experience or numerical simulation, a scheme called the fuzzy adaptive cubature Kalman filter (FACKF) is presented in which the FLAS adjusts the weighting factor of the process noise covariance matrix. The FLAS is incorporated into the CKF framework as a mechanism for timely tuning of the process noise covariance matrix based on the degree of divergence (DOD) parameter. The proposed FACKF algorithm shows promising accuracy improvement compared to the extended Kalman filter (EKF), unscented Kalman filter (UKF), and CKF approaches.
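
    The third-degree spherical-radial cubature rule mentioned above uses 2n equally weighted points placed at ±√n along the axes of the square-root factor of the covariance. A minimal sketch follows, with an invented nonlinear propagation function f; in a full CKF the process noise covariance would be added to the predicted covariance.

    ```python
    # Minimal sketch of the third-degree spherical-radial cubature rule used by
    # the CKF: 2n equally weighted sigma points at +/- sqrt(n) along each axis
    # of the (square-root factorized) covariance.
    import numpy as np

    def cubature_points(mean, cov):
        n = mean.size
        S = np.linalg.cholesky(cov)                                       # square-root factor of the covariance
        xi = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)], axis=0) # 2n unit directions
        return mean + xi @ S.T                                            # shape (2n, n), weights all 1/(2n)

    def propagate(points, f):
        """Predicted mean and covariance after pushing cubature points through f (no process noise added)."""
        Y = np.array([f(p) for p in points])
        mean = Y.mean(axis=0)                                             # equal weights 1/(2n)
        cov = (Y - mean).T @ (Y - mean) / Y.shape[0]
        return mean, cov

    # Usage: propagate a Gaussian state through a mildly nonlinear model (invented for illustration).
    m0, P0 = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
    f = lambda x: np.array([x[0] + 0.1 * np.sin(x[1]), 0.9 * x[1]])
    print(propagate(cubature_points(m0, P0), f))
    ```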

  2. Fuzzy Adaptive Cubature Kalman Filter for Integrated Navigation Systems

    PubMed Central

    Tseng, Chien-Hao; Lin, Sheng-Fuu; Jwo, Dah-Jing

    2016-01-01

    This paper presents a sensor fusion method based on the combination of the cubature Kalman filter (CKF) and a fuzzy logic adaptive system (FLAS) for integrated navigation systems, such as GPS/INS (Global Positioning System/inertial navigation system) integration. The third-degree spherical-radial cubature rule applied in the CKF is employed to avoid numerical instability in the system model. In navigation integration, the performance of nonlinear-filter-based estimation of the position and velocity states may severely degrade owing to modeling errors caused by dynamic uncertainties of the vehicle. In order to overcome the shortcoming of selecting the process noise covariance through personal experience or numerical simulation, a scheme called the fuzzy adaptive cubature Kalman filter (FACKF) is presented in which the FLAS adjusts the weighting factor of the process noise covariance matrix. The FLAS is incorporated into the CKF framework as a mechanism for timely tuning of the process noise covariance matrix based on the degree of divergence (DOD) parameter. The proposed FACKF algorithm shows promising accuracy improvement compared to the extended Kalman filter (EKF), unscented Kalman filter (UKF), and CKF approaches. PMID:27472336

  3. Border-Crossing Model for the Diffusive Coarsening of Wet Foams

    NASA Astrophysics Data System (ADS)

    Durian, Douglas; Schimming, Cody

    For dry foams, the transport of gas from small high-pressure bubbles to large low-pressure bubbles is dominated by diffusion across the thin soap films separating neighboring bubbles. For wetter foams, the film areas become smaller as the Plateau borders and vertices inflate with liquid. So-called "border-blocking" models can explain some features of wet-foam coarsening based on the presumption that the inflated borders totally block the gas flux; however, this approximation dramatically fails in the wet/unjamming limit where the bubbles become close-packed spheres. Here, we account for the ever-present border-crossing flux by a new length scale defined by the average gradient of gas concentration inside the borders. We argue that it is proportional to the geometric average of film and border thicknesses, and we verify this scaling and the numerical prefactor by numerical solution of the diffusion equation. Then we show how the von Neumann law dA/dt = K0(n - 6) is modified by the appearance of terms that depend on bubble size and shape as well as the concentration gradient length scale. Finally, we use the modified von Neumann law to compute the growth rate of the average bubble, which is not constant.

  4. Comment on "Heat transfer and fluid flow in microchannels and nanochannels at high Knudsen number using thermal lattice-Boltzmann method".

    PubMed

    Luo, Li-Shi

    2011-10-01

    In this Comment we reveal the falsehood of the claim that the lattice Bhatnagar-Gross-Krook (BGK) model "is capable of modeling shear-driven, pressure-driven, and mixed shear-pressure-driven rarified [sic] flows and heat transfer up to Kn=1 in the transitional regime" made in a recent paper [Ghazanfarian and Abbassi, Phys. Rev. E 82, 026307 (2010)]. In particular, we demonstrate that the so-called "Knudsen effects" described are merely numerical artifacts of the lattice BGK model and are unphysical. Specifically, we show that the erroneous results for pressure-driven flow in a microchannel imply the false and unphysical condition 6σKn < -1, where Kn is the Knudsen number, σ = (2 - σ_v)/σ_v, and σ_v ∈ (0,1] is the tangential momentum accommodation coefficient. We also show explicitly that the defects of the lattice BGK model can be completely removed by using the multiple-relaxation-time collision model.

  5. Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases

    PubMed Central

    Dinu, Valentin; Nadkarni, Prakash

    2007-01-01

    Purpose To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
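
    A minimal sketch of the EAV pattern described above, using Python's built-in sqlite3; the table and column names are illustrative assumptions rather than those of any particular production system. Sparse clinical facts become one row per (entity, attribute, value), with a separate metadata table describing the attributes.

    ```python
    # Minimal Entity-Attribute-Value (EAV) sketch with Python's built-in sqlite3:
    # sparse facts are stored as one row per (entity, attribute, value) instead
    # of one wide column per attribute.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT, datatype TEXT);  -- metadata sub-schema
        CREATE TABLE eav (entity_id INTEGER, attribute_id INTEGER, value TEXT,      -- data sub-schema
                          PRIMARY KEY (entity_id, attribute_id));
    """)
    con.executemany("INSERT INTO attribute VALUES (?, ?, ?)",
                    [(1, "serum_glucose_mg_dl", "numeric"), (2, "allergy", "text")])
    con.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                    [(1001, 1, "95"), (1002, 2, "penicillin")])   # only applicable attributes are stored

    # Query one patient's sparse record by joining the data against the metadata.
    for row in con.execute("""SELECT a.name, e.value FROM eav e
                              JOIN attribute a ON a.id = e.attribute_id
                              WHERE e.entity_id = ?""", (1001,)):
        print(row)
    ```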

  6. A Neurophysiologically Plausible Population Code Model for Feature Integration Explains Visual Crowding

    PubMed Central

    van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.

    2010-01-01

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499

  7. Analytical and numerical construction of equivalent cables.

    PubMed

    Lindsay, K A; Rosenberg, J R; Tucker, G

    2003-08-01

    The mathematical complexity experienced when applying cable theory to arbitrarily branched dendrites has led to the development of a simple representation of any branched dendrite called the equivalent cable. The equivalent cable is an unbranched model of a dendrite together with a one-to-one mapping of potentials and currents on the branched model to those on the unbranched model, and vice versa. The piecewise uniform cable, with a symmetrised tri-diagonal system matrix, is shown to represent the canonical form of an equivalent cable. Through a novel application of the Laplace transform it is demonstrated that an arbitrary branched model of a dendrite can be transformed to the canonical form of an equivalent cable. The characteristic properties of the equivalent cable are extracted from the matrix of the transformed branched model. The one-to-one mapping follows automatically from the construction of the equivalent cable. The equivalent cable is used to provide a new procedure for characterising the location of synaptic contacts on spinal interneurons.

  8. Characterisation of the n-colour printing process using the spot colour overprint model.

    PubMed

    Deshpande, Kiran; Green, Phil; Pointer, Michael R

    2014-12-29

    This paper is aimed at reproducing solid spot colours using n-colour separation. A simplified numerical method, called the spot colour overprint (SCOP) model, was used to characterise the n-colour printing process. This model was originally developed for estimating spot colour overprints; it was extended to serve as a generic forward characterisation model for the n-colour printing process. An inverse printer model based on a look-up table was implemented to obtain the colour separation for the n-colour printing process. Finally, real-world spot colours were reproduced using a 7-colour separation on a lithographic offset printing process. The colours printed with 7 inks were compared against the original spot colours to evaluate the accuracy. The results show good accuracy, with a mean CIEDE2000 value of 2.06 between the target colours and the printed colours. The proposed method can be used successfully to reproduce spot colours, which can potentially save significant time and cost in the printing and packaging industry.

  9. Neural computing for numeric-to-symbolic conversion in control systems

    NASA Technical Reports Server (NTRS)

    Passino, Kevin M.; Sartori, Michael A.; Antsaklis, Panos J.

    1989-01-01

    A type of neural network, the multilayer perceptron, is used to classify numeric data and assign appropriate symbols to various classes. This numeric-to-symbolic conversion results in a type of information extraction, which is similar to what is called data reduction in pattern recognition. The use of the neural network as a numeric-to-symbolic converter is introduced, its application in autonomous control is discussed, and several applications are studied. The perceptron is used as a numeric-to-symbolic converter for a discrete-event system controller supervising a continuous variable dynamic system. It is also shown how the perceptron can implement fault trees, which provide useful information (alarms) in a biological system and information for failure diagnosis and control purposes in an aircraft example.
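
    A minimal sketch of the numeric-to-symbolic idea is given below: a small multilayer perceptron maps numeric readings onto discrete symbols that a discrete-event supervisor could consume. It uses scikit-learn, and the data and class labels are invented for illustration.

    ```python
    # Minimal numeric-to-symbolic sketch: a small multilayer perceptron maps
    # numeric readings to discrete symbols ("nominal" / "alarm").  Data are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    nominal = rng.normal(loc=[1.0, 0.0], scale=0.2, size=(200, 2))
    faulty  = rng.normal(loc=[2.0, 1.5], scale=0.3, size=(200, 2))
    X = np.vstack([nominal, faulty])
    y = np.array(["nominal"] * 200 + ["alarm"] * 200)   # symbols assigned to the numeric classes

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
    print(clf.predict([[1.1, 0.1], [2.2, 1.4]]))        # expected: ['nominal' 'alarm']
    ```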

  10. Analysis of forced convective modified Burgers liquid flow considering Cattaneo-Christov double diffusion

    NASA Astrophysics Data System (ADS)

    Waqas, M.; Hayat, T.; Shehzad, S. A.; Alsaedi, A.

    2018-03-01

    A mathematical model is formulated to characterize the non-Fourier and Fick's double diffusive models of heat and mass in moving flow of modified Burger's liquid. Temperature-dependent conductivity of liquid is taken into account. The concept of stratification is utilized to govern the equations of energy and mass species. The idea of boundary layer theory is employed to obtain the mathematical model of considered physical problem. The obtained partial differential system is converted into ordinary ones with the help of relevant variables. The homotopic concept lead to the convergent solutions of governing expressions. Convergence is attained and acceptable values are certified by expressing the so called ℏ -curves and numerical benchmark. Several graphs are made for different values of physical constraints to explore the mechanism of heat and mass transportation. We explored that the liquid temperature and concentration are retard for the larger thermal/concentration relaxation time constraint.

  11. Universality in volume-law entanglement of scrambled pure quantum states.

    PubMed

    Nakagawa, Yuya O; Watanabe, Masataka; Fujita, Hiroyuki; Sugiura, Sho

    2018-04-24

    A pure quantum state can fully describe thermal equilibrium as long as one focuses on local observables. The thermodynamic entropy can also be recovered as the entanglement entropy of small subsystems. When the size of the subsystem increases, however, quantum correlations break the correspondence and mandate a correction to this simple volume law. The elucidation of the size dependence of the entanglement entropy is thus essentially important in linking quantum physics with thermodynamics. Here we derive an analytic formula of the entanglement entropy for a class of pure states called cTPQ states representing equilibrium. We numerically find that our formula applies universally to any sufficiently scrambled pure state representing thermal equilibrium, i.e., energy eigenstates of non-integrable models and states after quantum quenches. Our formula is exploited as diagnostics for chaotic systems; it can distinguish integrable models from non-integrable models and many-body localization phases from chaotic phases.

  12. Electrical characterization and modelization of CaCu3Ti4O12 polycrystalline ceramics

    NASA Astrophysics Data System (ADS)

    Cheballah, Chafe; Valdez-Nava, Zarel; Laudebat, Lionel; Guillemet-Fritsch, Sophie; Lebey, Thierry

    2015-06-01

    Since the observation almost 15 years ago of the so-called "colossal" dielectric permittivity behavior in CaCu3Ti4O12 (CCTO) ceramics, several works have been undertaken to understand its physical origin interfacial polarization being the most likelihood. In this paper, (C-V) measurements, commonly used on semiconducting materials have been used to characterize CCTO samples. Their results may be described by a head-to-tail double metal-insulating-semiconductor (MIS) structure. A comparison between experimental and numerical simulation results of such a structure shows a good agreement, whatever the frequency range. Furthermore, this model explains the non-symmetrical behavior of the electrical response of this material, a property still not taken into account by today's commonly known models. Contribution to the topical issue "Electrical Engineering Symposium (SGE 2014) - Elected submissions", edited by Adel Razek

  13. An Affect-Centered Model of the Psyche and its Consequences for a New Understanding of Nonlinear Psychodynamics

    NASA Astrophysics Data System (ADS)

    Ciompi, Luc

    At variance with a purely cognitivistic approach, an affect-centered model of mental functioning called `fractal affect-logic' is presented on the basis of current emotional-psychological and neurobiological research. Functionally integrated feeling-thinking-behaving programs generated by action appear in this model as the basic `building blocks' of the psyche. Affects are understood as the essential source of energy that mobilises and organises both linear and nonlinear affective-cognitive dynamics, under the influence of appropriate control parameters and order parameters. Global patterns of affective-cognitive functioning form dissipative structures in the sense of Prigogine, with affect-specific attractors and repulsors, bifurcations, high sensitivity for initial conditions and a fractal overall structure that may be represented in a complex potential landscape of variable configuration. This concept opens new possibilities of understanding normal and pathological psychodynamics and sociodynamics, with numerous practical and theoretical implications.

  14. MODFLOW-2000, The U.S. Geological Survey Modular Ground-Water Model - User Guide to Modularization Concepts and the Ground-Water Flow Process

    USGS Publications Warehouse

    Harbaugh, Arlen W.; Banta, Edward R.; Hill, Mary C.; McDonald, Michael G.

    2000-01-01

    MODFLOW is a computer program that numerically solves the three-dimensional ground-water flow equation for a porous medium by using a finite-difference method. Although MODFLOW was designed to be easily enhanced, the design was oriented toward additions to the ground-water flow equation. Frequently there is a need to solve additional equations; for example, transport equations and equations for estimating parameter values that produce the closest match between model-calculated heads and flows and measured values. This report documents a new version of MODFLOW, called MODFLOW-2000, which is designed to accommodate the solution of equations in addition to the ground-water flow equation. This report is a user's manual. It contains an overview of the old and added design concepts, documents one new package, and contains input instructions for using the model to solve the ground-water flow equation.
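
    The finite-difference idea underlying MODFLOW can be illustrated on a toy problem: steady-state confined flow with homogeneous transmissivity, fixed heads on two sides, no-flow on the others, and a single pumping well, relaxed with Jacobi iterations of the five-point stencil. The sketch below is illustrative only (grid, parameter values and names are assumptions) and is not MODFLOW input.

    ```python
    # Toy illustration of the finite-difference approach used by MODFLOW:
    # steady-state confined flow, homogeneous transmissivity T, Dirichlet heads
    # left/right, no-flow top/bottom, one pumping well, Jacobi relaxation.
    import numpy as np

    nrow, ncol, dx = 50, 50, 100.0                     # grid
    T, Qwell = 500.0, -2000.0                          # transmissivity [m2/d], pumping rate [m3/d]
    h = np.linspace(20.0, 10.0, ncol) * np.ones((nrow, 1))   # initial guess: linear head gradient
    src = np.zeros_like(h)
    src[25, 25] = Qwell / (dx * dx)                    # well as a point source term [m/d]

    for _ in range(5000):                              # Jacobi iteration of the five-point stencil
        h_new = h.copy()
        h_new[1:-1, 1:-1] = 0.25 * (h[2:, 1:-1] + h[:-2, 1:-1] + h[1:-1, 2:] + h[1:-1, :-2]
                                    + dx * dx * src[1:-1, 1:-1] / T)
        h_new[:, 0], h_new[:, -1] = 20.0, 10.0          # fixed-head boundaries
        h_new[0, :], h_new[-1, :] = h_new[1, :], h_new[-2, :]   # no-flow (reflection) boundaries
        h = h_new

    print("head at the well cell:", round(h[25, 25], 2), "m")
    ```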

  15. A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses

    PubMed Central

    Zhang, Chao; Li, Deyu; Yan, Yan

    2015-01-01

    In medical science, disease diagnosis is one of the difficult tasks for medical experts, who are confronted with challenges in dealing with a great deal of uncertain medical information. Moreover, different medical experts may express their own views on the medical knowledge base, which differ slightly from those of other experts. Thus, to solve the problems of uncertain data analysis and group decision making in disease diagnosis, we propose a new rough set model called the dual hesitant fuzzy multigranulation rough set over two universes by combining dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach which is applied to a decision making problem in disease diagnosis, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772

  16. Modelling proteins’ hidden conformations to predict antibiotic resistance

    PubMed Central

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-01-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM’s specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models’ prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design. PMID:27708258

  17. A bi-objective model for robust yard allocation scheduling for outbound containers

    NASA Astrophysics Data System (ADS)

    Liu, Changchun; Zhang, Canrong; Zheng, Li

    2017-01-01

    This article examines the yard allocation problem for outbound containers, with consideration of uncertainty factors, mainly including the arrival and operation time of calling vessels. Based on the time buffer inserting method, a bi-objective model is constructed to minimize the total operational cost and to maximize the robustness of fighting against the uncertainty. Due to the NP-hardness of the constructed model, a two-stage heuristic is developed to solve the problem. In the first stage, initial solutions are obtained by a greedy algorithm that looks n-steps ahead with the uncertainty factors set as their respective expected values; in the second stage, based on the solutions obtained in the first stage and with consideration of uncertainty factors, a neighbourhood search heuristic is employed to generate robust solutions that can fight better against the fluctuation of uncertainty factors. Finally, extensive numerical experiments are conducted to test the performance of the proposed method.

  18. Evolutionary fuzzy modeling human diagnostic decisions.

    PubMed

    Peña-Reyes, Carlos Andrés

    2004-05-01

    Fuzzy CoCo is a methodology, combining fuzzy logic and evolutionary computation, for constructing systems able to accurately predict the outcome of a human decision-making process, while providing an understandable explanation of the underlying reasoning. Fuzzy logic provides a formal framework for constructing systems exhibiting both good numeric performance (accuracy) and linguistic representation (interpretability). However, fuzzy modeling--meaning the construction of fuzzy systems--is an arduous task, demanding the identification of many parameters. To solve it, we use evolutionary computation techniques (specifically cooperative coevolution), which are widely used to search for adequate solutions in complex spaces. We have successfully applied the algorithm to model the decision processes involved in two breast cancer diagnostic problems, the WBCD problem and the Catalonia mammography interpretation problem, obtaining systems of both high performance and high interpretability. For the Catalonia problem, an evolved system was embedded within a Web-based tool, called COBRA, for aiding radiologists in mammography interpretation.

  19. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.

  20. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling, and through simulation to predict the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop a large-scale, detailed simulation for the analysis and design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling--the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at an appropriate level of fidelity--require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation for characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment in which programmers can easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed to evaluate this technology's use over the next two to three years.

  1. Evaluating the Impact of Aerosols on Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Freitas, Saulo; Silva, Arlindo; Benedetti, Angela; Grell, Georg; Members, Wgne; Zarzur, Mauricio

    2015-04-01

    The Working Group on Numerical Experimentation (WMO, http://www.wmo.int/pages/about/sec/rescrosscut/resdept_wgne.html) has organized an exercise to evaluate the impact of aerosols on NWP. This exercise will involve regional and global models currently used for weather forecasting by operational centers worldwide and aims at addressing the following questions: a) How important are aerosols for predicting the physical system (NWP, seasonal, climate) as distinct from predicting the aerosols themselves? b) How important is atmospheric model quality for air quality forecasting? c) What are the current capabilities of NWP models to simulate aerosol impacts on weather prediction? Toward this goal we have selected three strong or persistent events of aerosol pollution worldwide that could be fairly represented in current NWP models and that allowed an evaluation of the aerosol impact on weather prediction. The selected events include a strong dust storm that blew off the coast of Libya and over the Mediterranean, an extremely severe episode of air pollution in Beijing and surrounding areas, and an extreme case of biomass burning smoke in Brazil. The experimental design calls for simulations with and without explicitly accounting for aerosol feedbacks in the cloud and radiation parameterizations. In this presentation we will summarize the results of this study, focusing on the evaluation of model performance in terms of its ability to faithfully simulate aerosol optical depth, and on the assessment of the aerosol impact on predictions of near-surface wind, temperature, humidity, rainfall and the surface energy budget.

  2. The nondeterministic divide

    NASA Technical Reports Server (NTRS)

    Charlesworth, Arthur

    1990-01-01

    The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
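
    A minimal sketch of a diva-style computation in Python is given below: a recursive divide-and-conquer reduction whose split point is chosen at random (standing in for nondeterminism) before any computation, with data accessed only on unit-length slices. The result is well defined only for associative combine operations, in line with the association noted above; all names are illustrative.

    ```python
    # Minimal sketch of a diva-style algorithm: recursive divide-and-conquer over a
    # vector slice, with the division point chosen nondeterministically (here:
    # randomly) before any computation.  The result is independent of the split
    # points only when the combine operation is associative.
    import operator
    import random

    def diva(vec, lo, hi, combine):
        """Reduce vec[lo:hi] (non-empty) with an associative combine."""
        if hi - lo == 1:                       # unit-length slice: the only place data is accessed
            return vec[lo]
        mid = random.randint(lo + 1, hi - 1)   # nondeterministic, non-empty split
        return combine(diva(vec, lo, mid, combine), diva(vec, mid, hi, combine))

    v = [3, 1, 4, 1, 5, 9, 2, 6]
    print(diva(v, 0, len(v), operator.add))    # 31, regardless of the random split points
    print(diva(v, 0, len(v), max))             # 9
    # A non-associative combine (e.g. operator.sub) would give run-dependent results.
    ```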

  3. Dynamic option pricing with endogenous stochastic arbitrage

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Montalva, Rodrigo; Pellicer, Rely; Villena, Marcelo

    2010-09-01

    Only a few efforts have been made to relax one of the key assumptions of the Black-Scholes model: the no-arbitrage assumption. This is despite the fact that arbitrage processes usually exist in the real world, even though they tend to be short-lived. The purpose of this paper is to develop an option pricing model with endogenous stochastic arbitrage, capable of modelling in a general fashion any futures contract and underlying asset that deviates from its market equilibrium. The investigation empirically calibrates the arbitrage on S&P 500 index futures using transaction data from September 1997 to June 2009; from this, a specific type of arbitrage called an "arbitrage bubble", based on a t-step function, is identified and used in our model. The theoretical results obtained for binary and European call options under this kind of arbitrage show that an investment strategy taking advantage of the identified arbitrage possibility can be defined whenever it is possible to anticipate, in relative terms, the amplitude and timespan of the process. Finally, the new trajectory of the stock price is analytically estimated for a specific case of arbitrage and some numerical illustrations are developed. We find that the consequences of a finite and small endogenous arbitrage not only change the trajectory of the asset price during the period when it starts, but also after the arbitrage bubble has already gone. In this context, our model allows us to calibrate the B-S model to that new trajectory even when the arbitrage has already started.
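
    For reference, the sketch below gives the standard no-arbitrage Black-Scholes price of a European call, the baseline that the endogenous-arbitrage model above generalizes; it is the textbook formula, not the authors' modified dynamics.

    ```python
    # Reference point: the standard (no-arbitrage) Black-Scholes price of a
    # European call, which the endogenous-arbitrage model generalizes.
    from math import log, sqrt, exp
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        """European call under Black-Scholes: S spot, K strike, T maturity, r rate, sigma volatility."""
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

    print(round(bs_call(S=100.0, K=100.0, T=1.0, r=0.03, sigma=0.2), 4))
    ```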

  4. Atmospheric turbulence profiling with unknown power spectral density

    NASA Astrophysics Data System (ADS)

    Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny

    2018-04-01

    Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method that allows information about the atmosphere to be retrieved from telescope data is the so-called SLODAR, in which the atmospheric turbulence profile is estimated from correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the joint estimation problem of the turbulence profile above the ground together with the unknown power spectral density at the ground is ill-posed, and we propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Our methods can also accurately locate local perturbations in non-Kolmogorov power spectral densities.

  5. Final Technical Report [Scalable methods for electronic excitations and optical responses of nanostructures: mathematics to algorithms to observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, Yousef

    2014-03-19

    The master project under which this work was funded had as its main objective to develop computational methods for modeling electronic excited-state and optical properties of various nanostructures. The specific goals of the computer science group were primarily to develop effective numerical algorithms in Density Functional Theory (DFT) and Time-Dependent Density Functional Theory (TDDFT). There were essentially four distinct stated objectives. The first objective was to study and develop effective numerical algorithms for solving large eigenvalue problems such as those that arise in Density Functional Theory (DFT) methods. The second objective was to explore so-called linear scaling methods, or methods that avoid diagonalization. The third was to develop effective approaches for Time-Dependent DFT (TDDFT). Our fourth and final objective was to examine effective solution strategies for other problems in electronic excitations, such as the GW/Bethe-Salpeter method and quantum transport problems.

  6. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

    Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral allowed the implementation of higher-order solution techniques but had difficulties with singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
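
    Both the Zienkiewicz method and NDEM rest on a Prony-series representation of the material response. The sketch below evaluates a relaxation modulus built from a sum of decaying exponentials and uses it in a hereditary (convolution) stress update; the coefficients and the strain history are invented for illustration, and the quadrature is deliberately simple.

    ```python
    # Minimal Prony-series sketch: relaxation modulus E(t) = E_inf + sum_i E_i exp(-t/tau_i),
    # used in a hereditary (convolution) integral to get stress from a strain history.
    # Coefficients and loading are invented for illustration.
    import numpy as np

    E_inf, E_i, tau_i = 1.0, np.array([2.0, 0.5]), np.array([0.1, 10.0])

    def relaxation_modulus(t):
        return E_inf + np.sum(E_i * np.exp(-np.asarray(t)[..., None] / tau_i), axis=-1)

    def stress(t_grid, strain):
        """sigma(t) = integral_0^t E(t - s) d(strain)/ds ds, by simple rectangle quadrature."""
        d_eps = np.gradient(strain, t_grid)
        dt = t_grid[1] - t_grid[0]
        return np.array([np.sum(relaxation_modulus(t - t_grid[:k + 1]) * d_eps[:k + 1]) * dt
                         for k, t in enumerate(t_grid)])

    t = np.linspace(0.0, 5.0, 501)
    eps = 0.01 * np.minimum(t / 1.0, 1.0)          # ramp strain to 1% over 1 s, then hold
    print("stress at t = 5 s:", stress(t, eps)[-1])
    ```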

  7. Possible role of the W-Z-top-quark bags in baryogenesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flambaum, Victor V.; Shuryak, Edward; Department of Physics, State University of New York, Stony Brook, New York 11794

    2010-10-01

    The heaviest members of the standard model--the gauge bosons W, Z and the top quarks and antiquarks--may form collective baglike excitations of the Higgs vacuum provided their number is large enough, at both zero and finite temperatures. Since the Higgs vacuum expectation value is significantly modified inside them, they are called 'bags'. In this work we argue that the creation of such objects can explain certain numerical studies of cosmological baryogenesis. Using as an example a hybrid model combining inflationary preheating with a cold electroweak transition, we identify the 'spots of unbroken phase' found in numerical studies of this scenario with such W-Z bags. We argue that baryon number violation should happen predominantly inside these objects, and we show that the rates calculated in numerical simulations can be analytically explained using finite-size, pure gauge sphaleron solutions, developed previously in the QCD context by Carter, Ostrovsky, and Shuryak. Furthermore, we point out the significant presence of top quarks/antiquarks in these bags (which were not included in those numerical studies). Although the basic sphaleron exponent remains unchanged by the tops' presence, we find that tops help to stabilize the bags for a longer time. Another enhancement of the transition rate comes from the 'recycling' of the tops in the topological transition. Inclusion of the fermions (tops) enhances the sphaleron rate by up to 2 orders of magnitude. Finally, we discuss the magnitude of the CP violation needed to explain the observed baryonic asymmetry of the Universe and give arguments that a top-antitop population difference in the bag of the right magnitude can arise both from CP asymmetries in the top decays and in top propagation into the bags, due to the Farrar-Shaposhnikov effect.

  8. Numerical investigation of supercritical LNG convective heat transfer in a horizontal serpentine tube

    NASA Astrophysics Data System (ADS)

    Han, Chang-Liang; Ren, Jing-Jie; Dong, Wen-Ping; Bi, Ming-Shu

    2016-09-01

    The submerged combustion vaporizer (SCV) is indispensable equipment for liquefied natural gas (LNG) receiving terminals. In this paper, numerical simulation was conducted to gain insight into the flow and heat transfer characteristics of supercritical LNG on the tube side of an SCV. The SST model with an enhanced wall treatment method was utilized to handle the coupled wall-to-LNG heat transfer. The thermo-physical properties of LNG under supercritical pressure were used for this study. After validation of the model and method, the effects of mass flux, outer wall temperature and inlet pressure on the heat transfer behavior were discussed in detail. The non-uniform heat transfer mechanism of supercritical LNG and the effect of natural convection due to buoyancy change in the tube were then discussed on the basis of the numerical results. Moreover, the different flow and heat transfer characteristics inside the bend tube sections were also analyzed. The numerical results showed that the local surface heat transfer coefficient attains its peak value when the bulk LNG temperature approaches the so-called pseudo-critical temperature. Higher mass flux can eliminate heat transfer deterioration through the increase of turbulent diffusion. An increase of outer wall temperature significantly diminishes the heat transfer ability of the LNG. The maximum surface heat transfer coefficient depends strongly on the inlet pressure. Bend tube sections enhance the heat transfer owing to the secondary flow phenomenon. Furthermore, based on the current simulation results, a new dimensionless, semi-theoretical empirical correlation was developed for supercritical LNG convective heat transfer in a horizontal serpentine tube. The paper provides the heat transfer mechanism needed for the design of high-efficiency SCVs.

  9. Quasi-static finite element modeling of seismic attenuation and dispersion due to wave-induced fluid flow in poroelastic media

    NASA Astrophysics Data System (ADS)

    Quintal, Beatriz; Steeb, Holger; Frehner, Marcel; Schmalholz, Stefan M.

    2011-01-01

    The finite element method is used to solve Biot's equations of consolidation in the displacement-pressure (u - p) formulation. We compute one-dimensional (1-D) and two-dimensional (2-D) numerical quasi-static creep tests with poroelastic media exhibiting mesoscopic-scale heterogeneities to calculate the complex and frequency-dependent P wave moduli from the modeled stress-strain relations. The P wave modulus is used to calculate the frequency-dependent attenuation (i.e., inverse of quality factor) and phase velocity of the medium. Attenuation and velocity dispersion are due to fluid flow induced by pressure differences between regions of different compressibilities, e.g., regions (or patches) saturated with different fluids (i.e., so-called patchy saturation). Comparison of our numerical results with analytical solutions demonstrates the accuracy and stability of the algorithm for a wide range of frequencies (six orders of magnitude). The algorithm employs variable time stepping and an unstructured mesh which make it efficient and accurate for 2-D simulations in media with heterogeneities of arbitrary geometries (e.g., curved shapes). We further numerically calculate the quality factor and phase velocity for 1-D layered patchy saturated porous media exhibiting random distributions of patch sizes. We show that the numerical results for the random distributions can be approximated using a volume average of White's analytical solution and the proposed averaging method is, therefore, suitable for a fast and transparent prediction of both quality factor and phase velocity. Application of our results to frequency-dependent reflection coefficients of hydrocarbon reservoirs indicates that attenuation due to wave-induced flow can increase the reflection coefficient at low frequencies, as is observed at some reservoirs.
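
    The step from a complex, frequency-dependent P-wave modulus M(ω) to attenuation, 1/Q = Im(M)/Re(M), and phase velocity, v = 1/Re(√(ρ/M)), is compact enough to sketch. Below, a standard-linear-solid (Zener) modulus stands in for the numerically computed modulus, so all parameter values are illustrative.

    ```python
    # From a complex, frequency-dependent P-wave modulus M(omega) to attenuation
    # 1/Q = Im(M)/Re(M) and phase velocity v = 1/Re(sqrt(rho/M)).  A Zener
    # (standard linear solid) modulus stands in for the numerically computed one.
    import numpy as np

    rho = 2300.0                                            # bulk density [kg/m3]
    M_relaxed, M_unrelaxed, tau = 9.0e9, 1.1e10, 1.0e-2     # moduli [Pa], relaxation time [s]

    def zener_modulus(omega):
        """Complex P-wave modulus of a standard linear solid (illustrative stand-in)."""
        return M_relaxed * (1 + 1j * omega * tau * M_unrelaxed / M_relaxed) / (1 + 1j * omega * tau)

    omega = 2 * np.pi * np.logspace(-1, 4, 6)               # six decades of frequency
    M = zener_modulus(omega)
    inv_Q = M.imag / M.real                                 # attenuation (inverse quality factor)
    v_phase = 1.0 / np.real(np.sqrt(rho / M))               # phase velocity [m/s]
    for f, q, v in zip(omega / (2 * np.pi), inv_Q, v_phase):
        print(f"f = {f:9.2f} Hz   1/Q = {q:.4f}   v = {v:7.1f} m/s")
    ```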

  10. Nonhydrostatic icosahedral atmospheric model (NICAM) for global cloud resolving simulations

    NASA Astrophysics Data System (ADS)

    Satoh, M.; Matsuno, T.; Tomita, H.; Miura, H.; Nasuno, T.; Iga, S.

    2008-03-01

    A new type of ultra-high resolution atmospheric global circulation model is developed. The new model is designed to perform "cloud resolving simulations" by directly calculating deep convection and meso-scale circulations, which play key roles not only in tropical circulations but in the global circulation of the atmosphere. Since the cores of deep convection are a few km in horizontal size, they have not been directly resolved by existing atmospheric general circulation models (AGCMs). In order to drastically enhance horizontal resolution, a new framework for a global atmospheric model is required; we adopted nonhydrostatic governing equations and icosahedral grids for the new model, and call it the Nonhydrostatic ICosahedral Atmospheric Model (NICAM). In this article, we review the governing equations and numerical techniques employed, and present results from the unique 3.5-km mesh global experiments—with O(10^9) computational nodes—using realistic topography and land/ocean surface thermal forcing. The results show realistic behavior of multi-scale convective systems in the tropics, which has not been captured by AGCMs. We also discuss the future role of the new model in next-generation atmospheric sciences.

  11. Two modelling approaches to water-quality simulation in a flooded iron-ore mine (Saizerais, Lorraine, France): a semi-distributed chemical reactor model and a physically based distributed reactive transport pipe network model.

    PubMed

    Hamm, V; Collon-Drouaillet, P; Fabriol, R

    2008-02-19

    The flooding of abandoned mines in the Lorraine Iron Basin (LIB) over the past 25 years has degraded the quality of the groundwater tapped for drinking water. High concentrations of dissolved sulphate have made the water unsuitable for human consumption. This problematic issue has led to the development of numerical tools to support water-resource management in mining contexts. Here we examine two modelling approaches using different numerical tools that we tested on the Saizerais flooded iron-ore mine (Lorraine, France). A first approach considers the Saizerais Mine as a network of two chemical reactors (NCR). The second approach is based on a physically distributed pipe network model (PNM) built with EPANET 2 software. This approach considers the mine as a network of pipes defined by their geometric and chemical parameters. Each reactor in the NCR model includes a detailed chemical model built to simulate quality evolution in the flooded mine water. However, in order to obtain a robust PNM, we simplified the detailed chemical model into a specific sulphate dissolution-precipitation model that is included as a sulphate source/sink in both the NCR model and the pipe network model. Both the NCR model and the PNM, based on different numerical techniques, give good post-calibration agreement between the simulated and measured sulphate concentrations in the drinking-water well and the overflow drift. The NCR model incorporating the detailed chemical model is useful when detailed chemical behaviour at the overflow is needed. The PNM incorporating the simplified sulphate dissolution-precipitation model provides better information on the physics controlling the effect of flow and low-flow zones and on the time of solid sulphate removal, whereas the NCR model will underestimate the clean-up time due to the complete-mixing assumption. In conclusion, the detailed NCR model will give a first assessment of chemical processes at the overflow, and the PNM will subsequently provide more detailed information on flow and chemical behaviour (dissolved sulphate concentrations, remaining mass of solid sulphate) in the network. Nevertheless, both modelling methods require hydrological and chemical parameters (recharge flow rate, outflows, volume of mine voids, mass of solids, kinetic constants of the dissolution-precipitation reactions), which are commonly not available for a mine and therefore call for calibration data.
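
    A minimal sketch of the first approach, a network of completely mixed reactors flushed by recharge, is given below with dissolved sulphate treated as a dilution-only species; the volumes, flow rate and initial concentrations are invented, and the dissolution-precipitation kinetics of the actual NCR model are not reproduced.

    ```python
    # Minimal sketch of the "network of chemical reactors" idea: two completely
    # mixed reservoirs in series, flushed by recharge, with dissolved sulphate
    # treated as a conservative (dilution-only) species.  All values are invented.
    import numpy as np
    from scipy.integrate import solve_ivp

    V1, V2 = 2.0e6, 5.0e6          # reactor volumes [m3]
    Q = 4.0e3                      # recharge / through-flow [m3/day]
    c_in = 50.0                    # sulphate in recharge water [mg/L]

    def rhs(t, c):
        c1, c2 = c
        return [Q * (c_in - c1) / V1,          # reactor 1, fed by recharge
                Q * (c1 - c2) / V2]            # reactor 2, fed by reactor 1, drains to the well

    sol = solve_ivp(rhs, (0.0, 20.0 * 365.0), [1800.0, 1500.0],     # 20 years, initial concentrations [mg/L]
                    t_eval=np.linspace(0.0, 20.0 * 365.0, 5))
    for t, c2 in zip(sol.t, sol.y[1]):
        print(f"t = {t / 365.0:5.1f} yr   sulphate at outflow = {c2:7.1f} mg/L")
    ```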

  12. Applying super-droplets as a compact representation of warm-rain microphysics for aerosol-cloud-aerosol interactions

    NASA Astrophysics Data System (ADS)

    Arabas, S.; Jaruga, A.; Pawlowska, H.; Grabowski, W. W.

    2012-12-01

    Clouds may influence aerosol characteristics of their environment. The relevant processes include wet deposition (rainout or washout) and cloud condensation nuclei (CCN) recycling through evaporation of cloud droplets and drizzle drops. Recycled CCN physicochemical properties may be altered if the evaporated droplets go through collisional growth or irreversible chemical reactions (e.g. SO2 oxidation). The key challenge of representing these processes in a numerical cloud model stems from the need to track properties of activated CCN throughout the cloud lifecycle. Lack of such "memory" characterises the so-called bulk, multi-moment as well as bin representations of cloud microphysics. In this study we apply the particle-based scheme of Shima et al. 2009. Each modelled particle (aka super-droplet) is a numerical proxy for a multiplicity of real-world CCN, cloud, drizzle or rain particles of the same size, nucleus type, and position. Tracking cloud nucleus properties is an inherent feature of the particle-based frameworks, making them suitable for studying aerosol-cloud-aerosol interactions. The super-droplet scheme is furthermore characterized by linear scalability in the number of computational particles, and no numerical diffusion in the condensational and in the Monte-Carlo type collisional growth schemes. The presentation will focus on processing of aerosol by a drizzling stratocumulus deck. The simulations are carried out using a 2D kinematic framework and a VOCALS experiment inspired set-up (see http://www.rap.ucar.edu/~gthompsn/workshop2012/case1/).

  13. Analysis of groundwater flow and stream depletion in L-shaped fluvial aquifers

    NASA Astrophysics Data System (ADS)

    Lin, Chao-Chih; Chang, Ya-Chi; Yeh, Hund-Der

    2018-04-01

    Understanding the head distribution in aquifers is crucial for the evaluation of groundwater resources. This article develops a model describing flow induced by pumping in an L-shaped fluvial aquifer bounded by impermeable bedrock and two nearly fully penetrating streams. A similar scenario was studied numerically by Kihm et al. (2007). The water level of the streams is assumed to vary linearly with distance. The aquifer is divided into two subregions, and continuity conditions on hydraulic head and flux are imposed at their interface. The steady-state solution describing the head distribution without pumping is first developed by the method of separation of variables. The transient solution for the head distribution induced by pumping is then derived, using the steady-state solution as the initial condition, by means of the finite Fourier transform and the Laplace transform. The solution for the stream depletion rate (SDR) from each of the two streams is also developed from the head solution and Darcy's law. Both the head and SDR solutions in the time domain are obtained by a numerical inversion scheme, the Stehfest algorithm. The software MODFLOW is used for comparison with the proposed head solution for the L-shaped aquifer. The steady-state and transient head distributions within the L-shaped aquifer predicted by the present solution are compared with the numerical simulations and measurement data presented in Kihm et al. (2007).
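
    The Stehfest inversion mentioned above is straightforward to sketch. The version below assumes a generic Laplace-domain function F(s) and an even number of terms N; the head and SDR solutions themselves are not reproduced:

        import math

        def stehfest_weights(N: int):
            """Stehfest coefficients V_k for an even number of terms N."""
            assert N % 2 == 0
            half = N // 2
            V = []
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, half) + 1):
                    s += (j ** half * math.factorial(2 * j)
                          / (math.factorial(half - j) * math.factorial(j)
                             * math.factorial(j - 1) * math.factorial(k - j)
                             * math.factorial(2 * j - k)))
                V.append((-1) ** (k + half) * s)
            return V

        def stehfest_invert(F, t: float, N: int = 12) -> float:
            """Approximate f(t) from its Laplace transform F(s) (Stehfest, 1970)."""
            ln2_t = math.log(2.0) / t
            V = stehfest_weights(N)
            return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

        # quick check: F(s) = 1/(s+1) should invert to exp(-t); exp(-1) ~ 0.3679
        print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0))

    The algorithm only evaluates F(s) at real, positive s, which is why it is popular for well-hydraulics solutions that are smooth and non-oscillatory in time.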

  14. Co-existence and switching between fast and Ω-slow wind solutions in rapidly rotating massive stars

    NASA Astrophysics Data System (ADS)

    Araya, I.; Curé, M.; ud-Doula, A.; Santillán, A.; Cidale, L.

    2018-06-01

    Most radiation-driven winds of massive stars can be modelled with m-CAK theory, resulting in the so-called fast solution. However, the most rapidly rotating stars among them, especially when the rotational speed exceeds ~75 per cent of the critical rotational speed, can adopt a different solution, the so-called Ω-slow solution, characterized by a dense and slow wind. Here, we study the transition region where the fast solution changes to the Ω-slow solution. Using both time-steady and time-dependent numerical codes, we study this transition region for various equatorial models of B-type stars. In all cases, within a certain range of rotational speeds we find a region where the fast and the Ω-slow solutions can co-exist. We find that the type of solution obtained in this co-existence region depends strongly on the initial conditions of our models. We also test the stability of the solutions within the co-existence region by applying base-density perturbations to the wind. We find that under certain conditions the fast solution can switch to the Ω-slow solution, or vice versa. Such solution switching may contribute to the injection of material into the circumstellar environment of Be stars, without requiring rotational speeds near critical values.

  15. A dynamic magnetic tension force as the cause of failed solar eruptions

    DOE Data Explorer

    Myers, Clayton E. [Princeton Univ., NJ (United States). Dept. of Astrophysical Sciences; Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000345398406); Yamada, Masaaki [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000349961649); Ji, Hantao [Princeton Univ., NJ (United States). Dept. of Astrophysical Sciences; Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Laboratory for Space Environment and Physical Sciences, Harbin Institute of Technology, Harbin, Heilongjiang 150001, China] (ORCID:0000000196009963); Yoo, Jongsoo [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000338811995); Fox, William [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:000000016289858X); Jara-Almonte, Jonathan [Princeton Univ., NJ (United States). Dept. of Astrophysical Sciences; Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000307606198); Savcheva, Antonia [Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts 02138, USA] (ORCID:000000025598046X); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts 02138, USA] (ORCID:0000000174162895)

    2015-12-11

    Coronal mass ejections are solar eruptions driven by a sudden release of magnetic energy stored in the Sun’s corona. In many cases, this magnetic energy is stored in long-lived, arched structures called magnetic flux ropes. When a flux rope destabilizes, it can either erupt and produce a coronal mass ejection or fail and collapse back towards the Sun. The prevailing belief is that the outcome of a given event is determined by a magnetohydrodynamic force imbalance called the torus instability. This belief is challenged, however, by observations indicating that torus-unstable flux ropes sometimes fail to erupt. This contradiction has not yet been resolved because of a lack of coronal magnetic field measurements and the limitations of idealized numerical modelling. Here we report the results of a laboratory experiment that reveal a previously unknown eruption criterion below which torus-unstable flux ropes fail to erupt. We find that such ‘failed torus’ events occur when the guide magnetic field (that is, the ambient field that runs toroidally along the flux rope) is strong enough to prevent the flux rope from kinking. Under these conditions, the guide field interacts with electric currents in the flux rope to produce a dynamic toroidal field tension force that halts the eruption. This magnetic tension force is missing from existing eruption models, which is why such models cannot explain or predict failed torus events.

  16. Exact solutions of a hierarchy of mixing speeds models

    NASA Astrophysics Data System (ADS)

    Cornille, H.; Platkowski, T.

    1992-07-01

    This paper presents several new aspects of discrete kinetic theory (DKT). First, a hierarchy of d-dimensional (d=1,2,3) models is proposed with (2d+3) velocities and three speed moduli: 0, 2, and a third that can be arbitrary. It is assumed that the particles at rest have an internal energy which, in microscopic collisions, compensates for the loss of kinetic energy. More generally than usual, collisions that mix particles with different speeds are allowed. Second, for the (1+1)-dimensional restriction of the systems of PDEs for these models, which have two independent quadratic collision terms, we construct different exact solutions. The usual types of exact solutions are studied: periodic solutions and shock-wave solutions obtained from the standard linearization of the scalar Riccati equations, called Riccatian shock waves. Other types of solutions of the coupled Riccati equations are then found, called non-Riccatian shock waves, and they are compared with the previous ones. The main new result is that, between the upstream and downstream states, these new solutions are not necessarily monotonic. Further, for the shock problem, a two-dimensional dynamical system of ODEs is solved numerically with limit values corresponding to the upstream and downstream states. As a by-product of this study, two new linearizations for the coupled Riccati equations with two functions are proposed.
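
    For reference, the "standard linearization" invoked above is the classical substitution that turns a scalar Riccati equation into a linear second-order ODE. With generic coefficients q_0, q_1, q_2 (not the specific collision terms of the paper) it reads, in LaTeX notation:

        y' = q_0(x) + q_1(x)\,y + q_2(x)\,y^2, \qquad y = -\frac{u'}{q_2\,u}
        \;\Longrightarrow\; u'' - \left(q_1 + \frac{q_2'}{q_2}\right) u' + q_0\,q_2\,u = 0 .

    Travelling-wave ansatzes of this linearized form are what produce the "Riccatian" shock profiles; the non-Riccatian solutions discussed in the paper are those that do not reduce to this substitution.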

  17. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators, and set or lattice algebra to analyze them in the space domain. We provide a unified view of, and analytic tools for, morphological image processing based on ideas from differential calculus and dynamical systems, including the use of partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
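
    A minimal sketch of the min-sum recursion behind a discrete distance transform, of the kind the 2-D max/min-sum difference equations above generalize; this is a plain two-pass city-block variant assuming a binary NumPy mask as input, not the weighted transforms of the paper:

        import numpy as np

        def distance_transform(mask: np.ndarray) -> np.ndarray:
            """City-block distance to the nearest foreground pixel, via two min-sum sweeps."""
            INF = mask.size                     # larger than any achievable distance
            d = np.where(mask, 0, INF).astype(float)
            rows, cols = d.shape
            # forward pass: propagate distances from the top-left neighbours
            for i in range(rows):
                for j in range(cols):
                    if i > 0:
                        d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                    if j > 0:
                        d[i, j] = min(d[i, j], d[i, j - 1] + 1)
            # backward pass: propagate distances from the bottom-right neighbours
            for i in range(rows - 1, -1, -1):
                for j in range(cols - 1, -1, -1):
                    if i < rows - 1:
                        d[i, j] = min(d[i, j], d[i + 1, j] + 1)
                    if j < cols - 1:
                        d[i, j] = min(d[i, j], d[i, j + 1] + 1)
            return d

        mask = np.zeros((5, 7), dtype=bool)
        mask[2, 3] = True                       # single foreground pixel
        print(distance_transform(mask))         # city-block distances to it

    Each update is a min of sums of neighbouring values plus local weights, which is exactly the "min-sum" algebraic structure that slope transforms diagonalize in the same way Fourier transforms diagonalize linear convolutions.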

  18. CO2 migration in the vadose zone: experimental and numerical modelling of controlled gas injection

    NASA Astrophysics Data System (ADS)

    Gasparini, Andrea; Credoz, Anthony; Grandia, Fidel; Garcia, David Angel; Bruno, Jordi

    2014-05-01

    The mobility of CO2 in the vadose zone and its subsequent transfer to the atmosphere is a matter of concern in the risk assessment of the geological storage of CO2. In this study, experimental and modelling results of controlled CO2 injection are reported to better understand the physical processes affecting CO2 migration and transport in the vadose zone. CO2 was injected through 16 micro-injectors during 49 days of experiments in a 35 m3 experimental unit filled with sandy material at the PISCO2 facilities of the ES.CO2 centre in Ponferrada (northern Spain). Surface CO2 fluxes were monitored and mapped periodically to assess the evolution of CO2 migration through the soil and to the atmosphere. Numerical simulations were run to reproduce the experimental results using the TOUGH2 code with the EOS7CA research module, considering two phases (gas and liquid) and three components (H2O, CO2, air). Five numerical models were developed, following step by step the injection procedure used at PISCO2. The reference case (Model A) simulates injection into a homogeneous soil (homogeneous distribution of permeability and porosity in the near-surface area, 0.3 to 0.8 m below the surface). In a second model (Model B), four additional soil layers with specific permeabilities and porosities were included to predict the effect of differential soil compaction. To account for the effect of higher soil temperature, an isothermal simulation at elevated temperature (Model C) was also performed. Finally, Models D and E assessed the effect of rainfall (soil water saturation) on surface CO2 emission. The combined experimental and modelling approach shows that CO2 leaking into the vadose zone quickly reaches the surface through preferential migration pathways and hot spots, with fluxes at the ground/atmosphere interface ranging from 2.5 to 600 g·m-2·day-1. This gas channelling is mainly related to soil compaction and climatic perturbation, which has significant implications for designing adapted detection and monitoring strategies for early leakage in commercial CO2 storage. The presence of soils with different degrees of compaction near the surface influences CO2 dispersion: including soils with different permeability, porosity and liquid saturation results in preferential pathways. The formation of preferential pathways in the soil and of hot spots at the surface has commonly been observed in natural systems where deep CO2 fluxes interact with shallow aquifers. An increase in ambient temperature increases CO2 flux intensity, whereas rainfall decreases gas-phase CO2 emission and traps CO2 as aqueous species in the soil pore space. Good agreement between experimental and modelling results was obtained for the location and intensity of surface CO2 fluxes, taking into account the selected equation of state, the soil characteristics and the operational conditions. Compaction and preferential pathways located in the first few centimetres of the soil can explain the heterogeneity of CO2 fluxes over the 16 m2 surface area of the PISCO2 experimental platform.

  19. CONDIF - A modified central-difference scheme for convective flows

    NASA Technical Reports Server (NTRS)

    Runchal, Akshai K.

    1987-01-01

    The paper presents a method, called CONDIF, which modifies the CDS (central-difference scheme) by introducing a controlled amount of numerical diffusion based on the local gradients. The numerical diffusion can be adjusted to be negligibly low for most problems. CONDIF results are significantly more accurate than those obtained from the hybrid scheme when the Peclet number is very high and the flow is at large angles to the grid.
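
    As a point of reference only, the sketch below discretizes a 1-D steady convection-diffusion problem with the plain central-difference scheme and an adjustable extra diffusion coefficient; the gradient-based rule CONDIF actually uses to control that diffusion is not reproduced here, and all parameter values are illustrative:

        import numpy as np

        def solve_cds(n=41, peclet=200.0, extra_diffusion=0.0):
            """Central-difference solution of d(phi)/dx = (1/Pe) d2(phi)/dx2 on [0, 1],
            with phi(0)=0, phi(1)=1 and an optional added numerical diffusion."""
            dx = 1.0 / (n - 1)
            gamma = 1.0 / peclet + extra_diffusion      # effective diffusivity
            A = np.zeros((n, n))
            b = np.zeros(n)
            A[0, 0] = A[-1, -1] = 1.0                   # Dirichlet boundary rows
            b[-1] = 1.0
            for i in range(1, n - 1):
                A[i, i - 1] = -gamma / dx**2 - 1.0 / (2 * dx)   # west coefficient
                A[i, i]     = 2 * gamma / dx**2                  # centre coefficient
                A[i, i + 1] = -gamma / dx**2 + 1.0 / (2 * dx)   # east coefficient
            return np.linalg.solve(A, b)

        # pure CDS oscillates when the cell Peclet number exceeds 2;
        # a small added diffusion damps the wiggles at the cost of smearing
        print(solve_cds(extra_diffusion=0.0)[-5:])
        print(solve_cds(extra_diffusion=0.02)[-5:])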

  20. Riemann solvers and Alfven waves in black hole magnetospheres

    NASA Astrophysics Data System (ADS)

    Punsly, Brian; Balsara, Dinshaw; Kim, Jinho; Garain, Sudip

    2016-09-01

    In the magnetosphere of a rotating black hole, an inner Alfven critical surface (IACS) must be crossed by inflowing plasma. Inside the IACS, Alfven waves are directed inward toward the black hole. The majority of the proper volume of the active region of spacetime (the ergosphere) lies inside the IACS. The charge and the totally transverse momentum flux (the momentum flux transverse to both the wave normal and the unperturbed magnetic field) are both determined exclusively by the Alfven polarization. It is therefore important for numerical simulations of black hole magnetospheres to minimize the dissipation of Alfven waves. Elements of a dissipated wave emerge in adjacent cells regardless of the IACS; there is no mechanism to prevent Alfvenic information from crossing outward. Thus, numerical dissipation can affect how simulated magnetospheres attain the substantial Goldreich-Julian charge density associated with the rotating magnetic field. To help minimize the dissipation of Alfven waves in relativistic numerical simulations, we have formulated a one-dimensional Riemann solver, called HLLI, which incorporates the Alfven discontinuity and the contact discontinuity. We have also formulated a multidimensional Riemann solver, called MuSIC, that enables low-dissipation propagation of Alfven waves in multiple dimensions. The importance of higher-order schemes in lowering the numerical dissipation of Alfven waves is also catalogued.
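
    For context, a sketch of the single-state HLL flux that solvers in this family build upon; the resolution of the Alfven and contact discontinuities that distinguishes HLLI and MuSIC is not reproduced here, and the wave-speed estimates S_L and S_R are assumed to be supplied by the caller:

        import numpy as np

        def hll_flux(U_L, U_R, F_L, F_R, S_L, S_R):
            """Single intermediate-state HLL numerical flux at a cell face.

            U_L, U_R : conserved-variable states left/right of the face
            F_L, F_R : physical fluxes evaluated at those states
            S_L, S_R : estimates of the slowest/fastest signal speeds
            """
            if S_L >= 0.0:
                return F_L              # the whole wave fan moves to the right
            if S_R <= 0.0:
                return F_R              # the whole wave fan moves to the left
            # subsonic case: single averaged intermediate state between S_L and S_R
            return (S_R * F_L - S_L * F_R + S_L * S_R * (U_R - U_L)) / (S_R - S_L)

        # toy usage: scalar advection q_t + a q_x = 0 with a = 1, so F(q) = a*q
        a = 1.0
        qL, qR = np.array([2.0]), np.array([1.0])
        print(hll_flux(qL, qR, a * qL, a * qR, min(a, 0.0), max(a, 0.0)))

    Averaging all intermediate waves into one state is exactly what makes plain HLL dissipative for Alfven and contact modes, which motivates adding those discontinuities back into the Riemann fan as the abstract describes.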
