GIS-MODFLOW: A small open-source tool for linking GIS data to MODFLOW
NASA Astrophysics Data System (ADS)
Gossel, Wolfgang
2013-06-01
The numerical model MODFLOW (Harbaugh 2005) is an efficient and up-to-date tool for groundwater flow modelling. On the other hand, Geo-Information-Systems (GIS) provide useful tools for data preparation and visualization that can also be incorporated in numerical groundwater modelling. An interface between both would therefore be useful for many hydrogeological investigations. To date, several integrated stand-alone tools have been developed that rely on MODFLOW, MODPATH and transport modelling tools. Simultaneously, several open-source GIS codes were developed to improve functionality and ease of use. These GIS tools can be used as pre- and post-processors of the numerical model MODFLOW via a suitable interface. Here we present GIS-MODFLOW as an open-source tool that provides a new universal interface by using the ESRI ASCII GRID data format, which can be converted into MODFLOW input data. The tool can also process MODFLOW results. Such a combination of MODFLOW and open-source GIS opens new possibilities for making groundwater flow modelling and simulation results available to a wider circle of hydrogeologists.
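As a hedged illustration of the kind of conversion such an interface performs, the sketch below parses an ESRI ASCII GRID raster into an array and writes it in a plain row-wise layout that a MODFLOW array reader could ingest; the file names, the six-line header assumption and the output layout are illustrative, not the GIS-MODFLOW implementation.

```python
import numpy as np

def read_ascii_grid(path):
    """Parse an ESRI ASCII GRID file into (header dict, 2-D numpy array).
    Assumes the common six-line header (ncols, nrows, xllcorner, yllcorner,
    cellsize, NODATA_value) followed by the raster rows."""
    header = {}
    with open(path) as f:
        for _ in range(6):
            key, value = f.readline().split()
            header[key.lower()] = float(value)
        data = np.loadtxt(f)
    data[data == header["nodata_value"]] = np.nan
    return header, data

def write_modflow_array(path, data, fmt="%12.4e", per_line=10):
    """Write the array row-wise as free-format numbers, e.g. for an
    OPEN/CLOSE array in a MODFLOW package (illustrative layout only)."""
    with open(path, "w") as f:
        for row in data:
            for i in range(0, len(row), per_line):
                f.write(" ".join(fmt % v for v in row[i:i + per_line]) + "\n")

# Example: convert a GIS-exported top-elevation raster into a MODFLOW array file.
# hdr, top = read_ascii_grid("top_elevation.asc")
# write_modflow_array("top_layer1.txt", top)
```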
Numerical modelling of tool wear in turning with cemented carbide cutting tools
NASA Astrophysics Data System (ADS)
Franco, P.; Estrems, M.; Faura, F.
2007-04-01
A numerical model is proposed for analysing the flank and crater wear resulting from the loss of material on the cutting tool surface in turning processes due to the wear mechanisms of adhesion, abrasion and fracture. By means of this model, the material loss along the cutting tool surface can be analysed, and the worn surface shape during workpiece machining can be determined. The proposed model analyses the gradual degradation of the cutting tool during the turning operation, and tool wear can be estimated as a function of cutting time. Wear-land width (VB) and crater depth (KT) can be obtained to describe the material loss on the cutting tool surface, and the effects of the distinct wear mechanisms on surface shape can be studied. The parameters required for the tool wear model are obtained from the literature and from experimental observation for AISI 4340 steel turning with WC-Co cutting tools.
Numerical model updating technique for structures using firefly algorithm
NASA Astrophysics Data System (ADS)
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique used for updating existing experimental models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is to update the numerical models so that they closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the developed numerical model with the experimental results obtained. The variables for the updating can be material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be established between the experimental and numerical models.
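A minimal sketch of the firefly update rule used for such model updating is given below; the cantilever tip-deflection objective, parameter bounds and algorithm constants are assumed for illustration and are not the authors' implementation.

```python
import numpy as np

def firefly_update(x, obj, beta0=1.0, gamma=1.0, alpha=0.05, rng=None):
    """One iteration of the basic firefly algorithm: dimmer fireflies move
    toward brighter (lower-objective) ones with distance-decaying attraction."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    f = np.array([obj(xi) for xi in x])
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:
                r2 = np.sum((x[i] - x[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(d) - 0.5)
                f[i] = obj(x[i])
    return x, f

# Illustrative model-updating objective: search a normalized [0, 1] variable
# mapped to 50-300 GPa for the Young's modulus E that makes the cantilever tip
# deflection P*L^3/(3*E*I) match a "measured" value (all numbers assumed).
P, L, I, measured = 1.0e3, 2.0, 8.0e-6, 0.004
E_of = lambda u: (50.0 + 250.0 * u[0]) * 1e9
obj = lambda u: (P * L**3 / (3.0 * E_of(u) * I) - measured) ** 2
pop = np.random.default_rng(0).uniform(0.0, 1.0, size=(15, 1))
for _ in range(50):
    pop, fit = firefly_update(pop, obj)
print("updated E = %.3e Pa" % E_of(pop[np.argmin(fit)]))
```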
Which benefits in the use of a modeling platform : The VSoil example.
NASA Astrophysics Data System (ADS)
Lafolie, François; Cousin, Isabelle; Mollier, Alain; Pot, Valérie; Maron, Pierre-Alain; Moitrier, Nicolas; Nouguier, Cedric; Moitrier, Nathalie; Beudez, Nicolas
2015-04-01
In the environmental community the need to couple models and the associated knowledge has emerged recently. The development of a coupling tool or of a modeling platform is mainly driven by the necessity to create models accounting for multiple processes and to take into account the feedbacks between these processes. Models focusing on a restricted number of processes exist, and the coupling of these numerical tools therefore appears as an efficient and rapid means to fill the identified gaps. Several tools have been proposed: OMS3 (David et al. 2013); the CSDMS framework (Peckham et al. 2013); the OpenMI project developed within the frame of the European Community (OpenMI, 2011). However, what we should expect from a modeling platform could be more ambitious than only coupling existing numerical codes. We believe that we need to share easily not only our numerical representations but also the attached knowledge. We need to rapidly and easily develop complex models in order to have tools that can respond to current issues on soil functioning and soil evolution within the frame of global change. We also need to share in a common frame our visions of soil functioning at various scales, on the one hand to strengthen our collaborations and, on the other hand, to make them visible to the other communities working on environmental issues. The presentation will briefly introduce the VSoil platform. The platform is able to manipulate concepts and numerical representations of these processes. The tool helps in assembling modules to create a model and automatically generates an executable code and a GUI. Potentialities of the tool will be illustrated on a few selected cases.
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been and are being developed to cover the practical needs put forth by users across this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
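As an example of the analytical screening end of this spectrum, the sketch below evaluates the classical Ogata-Banks solution of the one-dimensional advection-dispersion equation for a continuous source; parameter values are purely illustrative.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks (1961) solution of the 1-D advection-dispersion equation
    for a continuous inlet concentration c0 (no decay, no retardation)."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

# Screening example: relative concentration profile after 100 days.
x = np.linspace(0.0, 50.0, 101)              # distance from the source, m
c = ogata_banks(x, t=100.0, v=0.2, D=0.5)    # v in m/d, D in m^2/d (assumed)
print(c[::20])
```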
Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation
Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee
2018-01-01
This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact as well as the state of the tool deformation. The input-output relationship of the model is represented by a radial basis function network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self- and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the soundness of the proposed model, while the user study showed that the force feedback of the proposed simulator is perceived as realistic. PMID:29342964
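A minimal sketch of fitting a Gaussian radial basis function network by linear least squares, the general mechanism behind such a data-driven force model, is given below; the six-dimensional inputs and the target values are synthetic placeholders, not the recorded tool-surface data.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF design matrix: phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 6))                # 6-D contact/deformation state
y = np.linalg.norm(X[:, :3], axis=1) ** 1.5          # placeholder "force magnitude"

centers = X[rng.choice(len(X), 40, replace=False)]   # pick 40 centers from the data
Phi = rbf_design(X, centers, sigma=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # weights by least squares

# Render a force estimate for a new state:
x_new = rng.uniform(-1, 1, size=(1, 6))
f_hat = rbf_design(x_new, centers, sigma=0.8) @ w
print(float(f_hat))
```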
Cloud-Based Orchestration of a Model-Based Power and Data Analysis Toolchain
NASA Technical Reports Server (NTRS)
Post, Ethan; Cole, Bjorn; Dinkel, Kevin; Kim, Hongman; Lee, Erich; Nairouz, Bassem
2016-01-01
The proposed Europa Mission concept contains many engineering and scientific instruments that consume varying amounts of power and produce varying amounts of data throughout the mission. System-level power and data usage must be well understood and analyzed to verify design requirements. Numerous cross-disciplinary tools and analysis models are used to simulate the system-level spacecraft power and data behavior. This paper addresses the problem of orchestrating a consistent set of models, tools, and data in a unified analysis toolchain when ownership is distributed among numerous domain experts. An analysis and simulation environment was developed as a way to manage the complexity of the power and data analysis toolchain and to reduce the simulation turnaround time. A system model data repository is used as the trusted store of high-level inputs and results, while other remote servers are used for archival of larger data sets and for analysis tool execution. Simulation data passes through numerous domain-specific analysis tools, and end-to-end simulation execution is enabled through a web-based tool. The use of a cloud-based service facilitates coordination among distributed developers, enables scalable computation and storage, and ensures a consistent execution environment. Configuration management is emphasized to maintain traceability between current and historical simulation runs and their corresponding versions of models, tools and data.
Numerical modeling tools for chemical vapor deposition
NASA Technical Reports Server (NTRS)
Jasinski, Thomas J.; Childs, Edward P.
1992-01-01
Development of general numerical simulation tools for chemical vapor deposition (CVD) was the objective of this study. Physical models of important CVD phenomena were developed and implemented into the commercial computational fluid dynamics software FLUENT. The resulting software can address general geometries as well as the most important phenomena occurring within CVD reactors: fluid flow patterns, temperature and chemical species distribution, gas phase and surface deposition. The available physical models are documented and examples of CVD simulation capabilities are provided.
ERIC Educational Resources Information Center
Carey, Cayelan C.; Gougis, Rebekka Darner
2017-01-01
Ecosystem modeling is a critically important tool for environmental scientists, yet is rarely taught in undergraduate and graduate classrooms. To address this gap, we developed a teaching module that exposes students to a suite of modeling skills and tools (including computer programming, numerical simulation modeling, and distributed computing)…
NASA Astrophysics Data System (ADS)
Cazzani, Antonio; Malagù, Marcello; Turco, Emilio
2016-03-01
We illustrate a numerical tool for analyzing plane arches such as those frequently used in historical masonry heritage. It is based on a refined elastic mechanical model derived from the isogeometric approach. In particular, geometry and displacements are modeled by means of non-uniform rational B-splines. After a brief introduction, outlining the basic assumptions of this approach and the corresponding modeling choices, several numerical applications to arches, which are typical of masonry structures, show the performance of this novel technique. These are discussed in detail to emphasize the advantage and potential developments of isogeometric analysis in the field of structural analysis of historical masonry buildings with complex geometries.
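To make the geometry side of the isogeometric approach concrete, the sketch below evaluates a quadratic NURBS curve with the Cox-de Boor recursion; the quarter-circle arc is a standard textbook construction, not geometry taken from the paper.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=2):
    """Evaluate a NURBS curve point as a rational combination of control points."""
    N = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    wN = weights * N
    return (wN[:, None] * ctrl).sum(axis=0) / wN.sum()

# Quadratic NURBS quarter circle: with the middle weight sqrt(2)/2 the curve
# is an exact circular arc of radius 1 (a standard construction).
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
weights = np.array([1.0, np.sqrt(2.0) / 2.0, 1.0])
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
for u in (0.0, 0.25, 0.5, 0.75):
    x, y = nurbs_point(u, ctrl, weights, knots)
    print(f"u = {u:.2f}: radius = {np.hypot(x, y):.6f}")
```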
Analysis of the thermo-mechanical deformations in a hot forging tool by numerical simulation
NASA Astrophysics Data System (ADS)
L-Cancelos, R.; Varas, F.; Martín, E.; Viéitez, I.
2016-03-01
Although programs have been developed for the design of hot forging tools, their design is still largely based on the experience of the tool maker. This makes it necessary to build test matrices and correct their errors in order to minimize distortions in the forged piece. This phase prior to mass production consumes time and material resources, which makes the final product more expensive. Forging tools are usually composed of various parts made of different grades of steel, which in turn have different mechanical properties and therefore suffer different degrees of strain. Furthermore, the tools used in hot forging are exposed to a thermal field that also induces strain or stress depending on the degree of confinement of the piece. Therefore, the mechanical behaviour of the assembly is determined by the contact between the different pieces. Numerical simulation allows different configurations to be analysed and possible defects to be anticipated before tool making, thus reducing the costs of this preliminary phase. In order to improve the dimensional quality of the manufactured parts, the work presented here focuses on the application of a numerical model to a hot forging manufacturing process in order to predict the areas of the forging die subjected to large deformations. The thermo-mechanical model, developed and implemented with free software (Code-Aster), includes strains of thermal origin, strains during forge impact and contact effects. The numerical results are validated against experimental measurements in a tooling set that produces forged crankshafts for the automotive industry. The numerical results show good agreement with the experimental tests. Thereby, a very useful tool for the design of tooling sets for hot forging is achieved.
Numerical modelling of orthogonal cutting: application to woodworking with a bench plane.
Nairn, John A
2016-06-06
A numerical model for orthogonal cutting using the material point method was applied to woodcutting using a bench plane. The cutting process was modelled by accounting for surface energy associated with wood fracture toughness for crack growth parallel to the grain. By using damping to deal with dynamic crack propagation and modelling all contact between wood and the plane, simulations could initiate chip formation and proceed into steady-state chip propagation including chip curling. Once steady-state conditions were achieved, the cutting forces became constant and could be determined as a function of various simulation variables. The modelling details included a cutting tool, the tool's rake and grinding angles, a chip breaker, a base plate and a mouth opening between the base plate and the tool. The wood was modelled as an anisotropic elastic-plastic material. The simulations were verified by comparison to an analytical model and then used to conduct virtual experiments on wood planing. The virtual experiments showed interactions between depth of cut, chip breaker location and mouth opening. Additional simulations investigated the role of tool grinding angle, tool sharpness and friction.
NASA Technical Reports Server (NTRS)
Shih, Hsin-Yi; Tien, James S.; Ferkul, Paul (Technical Monitor)
2001-01-01
The recently developed numerical model of concurrent-flow flame spread over thin solids has been used as a simulation tool to support the design of a space experiment. The two-dimensional and three-dimensional, steady forms of the compressible Navier-Stokes equations with chemical reactions are solved. With the coupled multi-dimensional solver for radiative heat transfer, the model is capable of answering a number of questions regarding the experiment concept and the hardware design. In this paper, the capabilities of the numerical model are demonstrated by providing guidance on several experimental design issues. The test matrix and operating conditions of the experiment are estimated through the modeling results. Three-dimensional calculations are made to simulate the flame-spreading experiment with a realistic hardware configuration. The computed detailed flame structures provide insight into the data collection. In addition, the heating load and the requirements for product exhaust cleanup for the flow tunnel are estimated with the model. We anticipate that using this simulation tool will enable a more efficient and successful space experiment to be conducted.
USDA-ARS?s Scientific Manuscript database
Although slowly abandoned in developed countries, furrow irrigation systems continue to be a dominant irrigation method in developing countries. Numerical models represent powerful tools to assess irrigation and fertigation efficiency. While several models have been proposed in the past, the develop...
NASA Astrophysics Data System (ADS)
Equeter, Lucas; Ducobu, François; Rivière-Lorphèvre, Edouard; Abouridouane, Mustapha; Klocke, Fritz; Dehombreux, Pierre
2018-05-01
Industrial concerns arise regarding the significant cost of cutting tools in the machining process. In particular, an improper replacement policy can lead either to scrap, or to early tool replacements that discard tools which are still serviceable. ISO 3685 provides the flank wear end-of-life criterion. Flank wear is also the nominal type of wear associated with the longest tool lifetimes under optimal cutting conditions. Its consequences include poor surface roughness and dimensional discrepancies. In order to aid the replacement decision process, several tool condition monitoring techniques have been suggested. Force signals have been shown in the literature to be strongly linked with tool flank wear. It can therefore be assumed that force signals are highly relevant for monitoring the condition of cutting tools and providing decision-aid information in the framework of their maintenance and replacement. The objective of this work is to correlate tool flank wear with numerically computed force signals. The present work uses a finite element model with a Coupled Eulerian-Lagrangian approach. The geometry of the tool is changed for different runs of the model, in order to obtain results that are specific to a certain level of wear. The model is assessed by comparison with experimental data gathered earlier on fresh tools. Using the model at constant cutting parameters, force signals under different tool wear states are computed for each studied tool geometry. These signals are qualitatively compared with relevant data from the literature. At this point, no quantitative comparison could be performed on worn tools because the reviewed literature failed to provide similar studies in this material, either numerical or experimental. Therefore, further development of this work should include experimental campaigns aimed at collecting cutting force signals and assessing the numerical results achieved through this work.
The efficiency of geophysical adjoint codes generated by automatic differentiation tools
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Köhl, A.; Stammer, D.
2016-02-01
The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
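For readers unfamiliar with the operator-overloading mechanism mentioned above, the toy below implements forward-mode differentiation with dual numbers; real adjoint (reverse-mode) tools such as TAF or Tapenade are far more involved, so this is only a conceptual illustration.

```python
class Dual:
    """Minimal dual number for forward-mode AD by operator overloading."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def model(p):
    # Stand-in for a model output depending on a single parameter p.
    return 3.0 * p * p + 2.0 * p + 1.0

p = Dual(2.0, 1.0)           # seed dp/dp = 1
out = model(p)
print(out.val, out.dot)      # value 17.0, sensitivity d(out)/dp = 14.0
```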
NASA Technical Reports Server (NTRS)
Davis, Brian; Turner, Travis L.; Seelecke, Stefan
2008-01-01
An experimental and numerical investigation into the static and dynamic responses of shape memory alloy hybrid composite (SMAHC) beams is performed to provide quantitative validation of a recently commercialized numerical analysis/design tool for SMAHC structures. The SMAHC beam specimens consist of a composite matrix with embedded pre-strained SMA actuators, which act against the mechanical boundaries of the structure when thermally activated to adaptively stiffen the structure. Numerical results are produced from the numerical model as implemented into the commercial finite element code ABAQUS. A rigorous experimental investigation is undertaken to acquire high fidelity measurements including infrared thermography and projection moire interferometry for full-field temperature and displacement measurements, respectively. High fidelity numerical results are also obtained from the numerical model and include measured parameters, such as geometric imperfection and thermal load. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.
On numerical modeling of one-dimensional geothermal histories
Haugerud, R.A.
1989-01-01
Numerical models of one-dimensional geothermal histories are one way of understanding the relations between tectonics and transient thermal structure in the crust. Such models can be powerful tools for interpreting geochronologic and thermobarometric data. A flexible program to calculate these models on a microcomputer is available and examples of its use are presented. Potential problems with this approach include the simplifying assumptions that are made, limitations of the numerical techniques, and the neglect of convective heat transfer. © 1989.
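A minimal sketch of the kind of one-dimensional conductive thermal-history calculation described is given below (explicit finite differences, constant properties); the grid, boundary values and burial scenario are assumed for illustration, and the explicit stability limit dt <= dz^2/(2*kappa) must be respected.

```python
import numpy as np

kappa = 1.0e-6                  # thermal diffusivity, m^2/s
dz, nz = 1000.0, 31             # 30 km crustal column, 1 km node spacing
dt = 0.25 * dz**2 / kappa       # well below the explicit stability limit

z = np.arange(nz) * dz
T = 10.0 + 0.025 * z            # initial geotherm: 25 C/km gradient

# Illustrative scenario: hold a hotter basal boundary and let the column relax.
T_base = 900.0
steps = int(5.0e6 * 3.15e7 / dt)            # about 5 Myr of model time
for _ in range(steps):
    T[0], T[-1] = 10.0, T_base
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print("temperature at 15 km depth after 5 Myr: %.0f C" % T[15])
```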
Preserving Symplecticity in the Numerical Integration of Linear Beam Optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Christopher K.
2017-07-01
Presented are mathematical tools and methods for the development of numerical integration techniques that preserve the symplectic condition inherent to mechanics. The intended audience is beam physicists with backgrounds in numerical modeling and simulation, with particular attention to beam optics applications. The paper focuses on Lie methods that are inherently symplectic regardless of the integration accuracy order. Section 2 provides the mathematical tools used in the sequel and necessary for the reader to extend the covered techniques. Section 3 places those tools in the context of charged-particle beam optics; in particular, linear beam optics is presented in terms of a Lie algebraic matrix representation. Section 4 presents numerical stepping techniques with particular emphasis on a third-order leapfrog method. Section 5 discusses the modeling of field imperfections with particular attention to the fringe fields of quadrupole focusing magnets. The direct computation of a third-order transfer matrix for a fringe field is shown.
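As a small companion to the stepping techniques discussed, the sketch below applies a kick-drift-kick leapfrog step to a linear (harmonic) force and checks numerically that the one-step map has unit Jacobian determinant, the one-degree-of-freedom symplectic condition; it is the second-order building block, not the third-order scheme or the Lie-algebraic machinery of the report.

```python
import numpy as np

def leapfrog_step(q, p, dt, k=1.0, m=1.0):
    """Kick-drift-kick leapfrog for H = p^2/(2m) + k q^2/2."""
    p = p - 0.5 * dt * k * q
    q = q + dt * p / m
    p = p - 0.5 * dt * k * q
    return q, p

dt = 0.1
# The one-step map of a linear system is itself linear: build its matrix
# column by column and check det = 1 (the 1-D symplectic condition).
e1 = leapfrog_step(1.0, 0.0, dt)
e2 = leapfrog_step(0.0, 1.0, dt)
M = np.array([[e1[0], e2[0]], [e1[1], e2[1]]])
print("det(M) =", np.linalg.det(M))   # 1.0 up to round-off

# A long integration stays bounded in energy, as expected of a symplectic method.
q, p = 1.0, 0.0
for _ in range(10000):
    q, p = leapfrog_step(q, p, dt)
print("energy drift:", 0.5 * p * p + 0.5 * q * q - 0.5)
```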
Data access and decision tools for coastal water resources management
US EPA has supported the development of numerous models and tools to support implementation of environmental regulations. However, transfer of knowledge and methods from detailed technical models to support practical problem solving by local communities and watershed or coastal ...
WEC3: Wave Energy Converter Code Comparison Project: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien
This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases. Phase I consists of a code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.
Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool
NASA Astrophysics Data System (ADS)
Torlapati, Jagadish; Prabhakar Clement, T.
2013-01-01
We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
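One of the simplest reaction systems such a benchmark tool must reproduce is batch Monod biodegradation; a hedged sketch with assumed rate parameters, independent of the authors' VBA implementation, is shown below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Batch Monod kinetics: substrate S consumed by biomass X (illustrative parameters).
mu_max, Ks, Y, kd = 2.0, 5.0, 0.4, 0.05   # 1/d, mg/L, -, 1/d

def rhs(t, y):
    S, X = y
    growth = mu_max * S / (Ks + S) * X
    return [-growth / Y, growth - kd * X]

sol = solve_ivp(rhs, (0.0, 10.0), [50.0, 1.0], dense_output=True)
for ti in np.linspace(0.0, 10.0, 6):
    S, X = sol.sol(ti)
    print(f"t = {ti:4.1f} d   S = {S:6.2f} mg/L   X = {X:6.2f} mg/L")
```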
Numerical model for healthy and injured ankle ligaments.
Forestiero, Antonella; Carniel, Emanuele Luigi; Fontanella, Chiara Giulia; Natali, Arturo Nicola
2017-06-01
The aim of this work is to provide a computational tool for the investigation of ankle mechanics under different loading conditions. Attention is focused on the biomechanical role of the ankle ligaments, which are fundamental for joint stability. A finite element model of the human foot is developed starting from Computed Tomography and Magnetic Resonance Imaging, with particular attention to the definition of the ankle ligaments. A refined fiber-reinforced visco-hyperelastic constitutive model is assumed to characterize the mechanical response of the ligaments. Numerical analyses that interpret the anterior drawer and talar tilt tests reported in the literature are performed. The numerical results are in agreement with the range of values obtained by experimental tests, confirming the accuracy of the adopted procedure. The increase of the ankle range of motion after the rupture of some ligaments is also evaluated, demonstrating the capability of the numerical model to interpret damage conditions. The developed computational model provides a tool for the investigation of foot and ankle functionality in terms of tissue stress-strain and ankle motion, considering different types of damage to the ankle ligaments.
NASA Astrophysics Data System (ADS)
Huang, Chien-Jung; White, Susan; Huang, Shao-Ching; Mallya, Sanjay; Eldredge, Jeff
2016-11-01
Obstructive sleep apnea (OSA) is a medical condition characterized by repetitive partial or complete occlusion of the airway during sleep. The soft tissues in the upper airway of OSA patients are prone to collapse under the low pressure loads incurred during breathing. The ultimate goal of this research is the development of a versatile numerical tool for simulation of air-tissue interactions in patient-specific upper airway geometry. This tool is expected to capture several phenomena, including flow-induced vibration (snoring) and large deformations during airway collapse of the complex airway geometry under respiratory flow conditions. Here, we present our ongoing progress toward this goal. To avoid mesh regeneration, a sharp-interface embedded boundary method on Cartesian grids is used in the flow model to resolve the fluid-structure interface, while a cut-cell finite element method is used in the structural model. Also, to properly resolve large displacements, a non-linear elasticity model is used. The fluid and structure solvers are connected by a strongly coupled iterative algorithm. Parallel computation is achieved with the numerical library PETSc. Two- and three-dimensional preliminary results are shown to demonstrate the ability of this tool.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.
2014-10-01
Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for the BSA technique is proposed. Three popular BSA techniques, namely the frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provided with a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under the curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
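The frequency ratio statistic at the core of the first technique is simple to compute outside the GIS as well; the sketch below evaluates it for one categorical factor map against a hazard inventory mask, with placeholder rasters rather than the Malaysian test data.

```python
import numpy as np

def frequency_ratio(factor_classes, hazard_mask):
    """FR per class = (% of hazard pixels in the class) / (% of all pixels in the class)."""
    fr = {}
    total_pix = factor_classes.size
    total_haz = hazard_mask.sum()
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_class = in_class.sum() / total_pix
        pct_hazard = hazard_mask[in_class].sum() / total_haz
        fr[int(c)] = pct_hazard / pct_class   # FR > 1: positive association
    return fr

# Placeholder rasters: a 3-class slope map and a synthetic landslide inventory mask.
rng = np.random.default_rng(0)
slope_class = rng.integers(1, 4, size=(200, 200))
landslides = rng.random((200, 200)) < np.where(slope_class == 3, 0.05, 0.01)
print(frequency_ratio(slope_class, landslides))
```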
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.
2015-03-01
Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, the bivariate statistical modeler (BSM), for the BSA technique is proposed. Three popular BSA techniques, namely the frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provided with a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. The area under the curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
NASA Astrophysics Data System (ADS)
Courageot, Estelle; Sayah, Rima; Huet, Christelle
2010-05-01
Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.
Towards a metadata scheme for the description of materials - the description of microstructures
NASA Astrophysics Data System (ADS)
Schmitz, Georg J.; Böttger, Bernd; Apel, Markus; Eiken, Janin; Laschet, Gottfried; Altenfeld, Ralph; Berger, Ralf; Boussinot, Guillaume; Viardin, Alexandre
2016-01-01
The property of any material is essentially determined by its microstructure. Numerical models are increasingly the focus of modern engineering as helpful tools for tailoring and optimization of custom-designed microstructures by suitable processing and alloy design. A huge variety of software tools is available to predict various microstructural aspects for different materials. In the general frame of an integrated computational materials engineering (ICME) approach, these microstructure models provide the link between models operating at the atomistic or electronic scales, and models operating on the macroscopic scale of the component and its processing. In view of an improved interoperability of all these different tools it is highly desirable to establish a standardized nomenclature and methodology for the exchange of microstructure data. The scope of this article is to provide a comprehensive system of metadata descriptors for the description of a 3D microstructure. The presented descriptors are limited to a mere geometric description of a static microstructure and have to be complemented by further descriptors, e.g. for properties, numerical representations, kinetic data, and others in the future. Further attributes to each descriptor, e.g. on data origin, data uncertainty, and data validity range are being defined in ongoing work. The proposed descriptors are intended to be independent of any specific numerical representation. The descriptors defined in this article may serve as a first basis for standardization and will simplify the data exchange between different numerical models, as well as promote the integration of experimental data into numerical models of microstructures. An HDF5 template data file for a simple, three phase Al-Cu microstructure being based on the defined descriptors complements this article.
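A hedged sketch of what an HDF5 file organized around such descriptors might look like, written with h5py, is given below; the group, dataset and attribute names are invented placeholders and not the descriptor names defined by the authors.

```python
import numpy as np
import h5py

# Placeholder voxel data for a three-phase microstructure (e.g. Al-Cu).
grid = np.random.default_rng(0).integers(0, 3, size=(64, 64, 64), dtype=np.uint8)

with h5py.File("microstructure_example.h5", "w") as f:
    g = f.create_group("microstructure")
    g.attrs["spacing_um"] = (0.5, 0.5, 0.5)        # voxel size (hypothetical descriptor)
    g.attrs["dimensions"] = grid.shape
    g.create_dataset("phase_id", data=grid, compression="gzip")
    phases = g.create_group("phases")
    for pid, name in enumerate(["fcc_Al", "theta_Al2Cu", "liquid"]):
        phases.create_group(str(pid)).attrs["name"] = name

with h5py.File("microstructure_example.h5") as f:
    print(dict(f["microstructure"].attrs))
```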
Modelling Student Misconceptions Using Nested Logit Item Response Models
ERIC Educational Resources Information Center
Yildiz, Mustafa
2017-01-01
Student misconceptions have been studied for decades from a curricular/instructional perspective and from the assessment/test level perspective. Numerous misconception assessment tools have been developed in order to measure students' misconceptions relative to the correct content. Often, these tools are used to make a variety of educational…
Numerical tool for tsunami risk assessment in the southern coast of Dominican Republic
NASA Astrophysics Data System (ADS)
Macias Sanchez, J.; Llorente Isidro, M.; Ortega, S.; Gonzalez Vida, J. M., Sr.; Castro, M. J.
2016-12-01
The southern coast of the Dominican Republic is a very populated region, with several important cities including Santo Domingo, its capital. Important activities are rooted in the southern coast, including tourism, industry, commercial ports, and energy facilities, among others. According to historical reports, it has been impacted by large earthquakes accompanied by tsunamis, as in Azua in 1751 and recently Pedernales in 2010, but their sources are not clearly identified. The aim of the present work is to develop a numerical tool to simulate the impact on the southern coast of the Dominican Republic of tsunamis generated in the Caribbean Sea. This tool, based on the Tsunami-HySEA model from the EDANYA group (University of Malaga, Spain), could be used in the framework of a Tsunami Early Warning System owing to the very short computing times when only propagation is computed, or it could be used to assess inundation impact, computing inundation at an initial 5-metre resolution. Numerical results corresponding to three theoretical sources are used to test the numerical tool.
Numerical tension adjustment of x-ray membrane to represent goat skin kompang
NASA Astrophysics Data System (ADS)
Siswanto, Waluyo Adi; Abdullah, Muhammad Syiddiq Bin
2017-04-01
This paper presents a numerical membrane model of the traditional musical instrument kompang that is used to find the membrane tension of an x-ray film membrane representing the classical goat-skin membrane of the kompang. In this study, an experiment on the kompang is first conducted in an acoustical anechoic enclosure and, in parallel, a mathematical model of the kompang membrane is developed to simulate the vibration of the membrane in polar coordinates by implementing the Fourier-Bessel wave function. The wave equation in the polar direction in mode (0,1) is applied to provide the corresponding natural frequencies of the circular membrane. The values of the initial and boundary conditions in the function are determined from the experiment to allow the correct development of the numerical equation. The numerical mathematical model is coded in SMath for accurate numerical analysis as well as plotting. Two kompang membrane cases with different membrane materials, i.e. goat skin and x-ray film membranes with a fixed radius of 0.1 m, are used in the experiment. An alternative kompang membrane made of x-ray film with the appropriate tension setting can be used to represent the sound of the traditional goat-skin kompang. The tension setting of the membrane that resembles the goat skin is 24 N. An effective numerical tool has been developed to help kompang makers set the tension of the x-ray membrane. In future applications, any traditional kompang of a different size can be replaced by another membrane material if the tension is set to the correct value. The developed numerical tool is useful and handy for calculating the tension of the alternative membrane material.
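The (0,1)-mode frequency underlying the tension fit follows from the first zero of the Bessel function J0; the short check below uses scipy, interprets the quoted tension as a force per unit edge length and assumes a placeholder areal density for x-ray film.

```python
import numpy as np
from scipy.special import jn_zeros

def f01_circular_membrane(T, sigma, a):
    """Fundamental (0,1) frequency of an ideal circular membrane:
    f01 = alpha01 / (2*pi*a) * sqrt(T/sigma), with T in N/m and sigma in kg/m^2."""
    alpha01 = jn_zeros(0, 1)[0]          # 2.4048..., first zero of J0
    return alpha01 / (2.0 * np.pi * a) * np.sqrt(T / sigma)

a = 0.1                                   # membrane radius, m (as in the paper)
sigma_xray = 0.18                         # assumed areal density of x-ray film, kg/m^2
print("f01 at T = 24 N/m: %.1f Hz" % f01_circular_membrane(24.0, sigma_xray, a))
```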
Numerical Tension Adjustment of X-Ray Membrane to Represent Goat Skin Kompang
NASA Astrophysics Data System (ADS)
Syiddiq, M.; Siswanto, W. A.
2017-01-01
This paper presents a numerical membrane model of the traditional musical instrument kompang that is used to find the membrane tension of an x-ray film membrane representing the classical goat-skin membrane of the kompang. In this study, an experiment on the kompang is first conducted in an acoustical anechoic enclosure and, in parallel, a mathematical model of the kompang membrane is developed to simulate the vibration of the membrane in polar coordinates by implementing the Fourier-Bessel wave function. The wave equation in the polar direction in mode (0,1) is applied to provide the corresponding natural frequencies of the circular membrane. The values of the initial and boundary conditions in the function are determined from the experiment to allow the correct development of the numerical equation. The numerical mathematical model is coded in SMath for accurate numerical analysis as well as plotting. Two kompang membrane cases with different membrane materials, i.e. goat skin and x-ray film membranes with a fixed radius of 0.1 m, are used in the experiment. An alternative kompang membrane made of x-ray film with the appropriate tension setting can be used to represent the sound of the traditional goat-skin kompang. The tension setting of the membrane that resembles the goat skin is 24 N. An effective numerical tool has been used to help kompang makers set the tension of the x-ray membrane. In future applications, any traditional kompang of a different size can be replaced by another membrane material if the tension is set to the correct value. The numerical tool used is useful and handy for calculating the tension of the alternative membrane material.
NASA Astrophysics Data System (ADS)
Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.
2013-12-01
This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire distribution, possibly depending on multiple compilers and on special instructions specific to the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open-source scientific software distribution.
NASA Astrophysics Data System (ADS)
Lafolie, François; Cousin, Isabelle; Mollier, Alain; Pot, Valérie; Moitrier, Nicolas; Balesdent, Jérome; bruckler, Laurent; Moitrier, Nathalie; Nouguier, Cédric; Richard, Guy
2014-05-01
Models describing soil functioning are valuable tools for addressing challenging issues related to agricultural production, soil protection or biogeochemical cycles. Coupling models that address different scientific fields is now required in order to develop numerical tools able to simulate the complex interactions and feedbacks occurring within a soil profile in interaction with climate and human activities. We present here a component-based modelling platform named "VSoil", which aims at designing, developing, implementing and coupling numerical representations of biogeochemical and physical processes in soil, from the aggregate to the profile scale. The platform consists of four pieces of software: i) Vsoil_Processes, dedicated to the conceptual description of processes and of their inputs and outputs; ii) Vsoil_Modules, devoted to the development of numerical representations of elementary processes as modules; iii) Vsoil_Models, which permits the coupling of modules to create models; iv) Vsoil_Player, for running the model and the primary analysis of results. The platform is designed to be a collaborative tool, helping scientists to share not only their models, but also the scientific knowledge on which the models are built. The platform is based on the idea that processes of any kind can be described and characterized by their inputs (state variables required) and their outputs. The links between the processes are automatically detected by the platform software. For any process, several numerical representations (modules) can be developed and made available to platform users. When developing modules, the platform takes care of many aspects of the development task so that the user can focus on numerical calculations. Fortran 2008 and C++ are the supported languages and existing codes can be easily incorporated into platform modules. Building a model from available modules simply requires selecting the processes to be accounted for and, for each process, a module. During this task, the platform displays available modules and checks the compatibility between the modules. The model (main program) is automatically created when compatible modules have been selected for all the processes. A GUI is automatically generated to help the user provide parameters and initial conditions. Numerical results can be immediately visualized, archived and exported. The platform also provides facilities to carry out sensitivity analyses. Parameter estimation and links with databases are being developed. The platform can be freely downloaded from the web site (http://www.inra.fr/sol_virtuel/) with a set of processes, variables, modules and models. However, it is designed so that any user can add their own components. These add-ons can be shared with co-workers by means of an e-mail-based export/import mechanism. The add-ons can also be made available to the whole community of platform users at the developers' request. A filtering tool is available to explore the content of the platform (processes, variables, modules, models).
Utilization of FEM model for steel microstructure determination
NASA Astrophysics Data System (ADS)
Kešner, A.; Chotěborský, R.; Linda, M.; Hromasová, M.
2018-02-01
Agricultural tools used in soil processing are worn by an abrasive wear mechanism caused by hard mineral particles in the soil. The wear rate is influenced by the mechanical properties of the tool material and also by the soil mineral particle content. The mechanical properties of steel can be controlled by the heat treatment technology, which leads to different microstructures. Purely experimental work on this is very expensive; thanks to numerical methods such as FEM, the microstructure can be estimated at low cost, but each numerical model has to be verified. The aim of this work is to show a procedure for predicting the microstructure of steel for agricultural tools. The material characterization of 51CrV4 grade steel was used for the numerical simulation, including the TTT diagram, heat capacity, heat conduction and other physical properties of the material. The relationship between the microstructure predicted by FEM and the real microstructure after heat treatment shows a good correlation.
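Phase fractions predicted by such coupled FEM/TTT calculations are commonly described by Johnson-Mehl-Avrami-Kolmogorov kinetics; the one-line illustration below uses assumed coefficients, not the 51CrV4 data of the paper.

```python
import numpy as np

def jmak_fraction(t, k, n):
    """Johnson-Mehl-Avrami-Kolmogorov transformed fraction X(t) = 1 - exp(-k t^n)."""
    return 1.0 - np.exp(-k * np.power(t, n))

t = np.array([1.0, 5.0, 10.0, 30.0, 60.0])     # isothermal hold time, s
print(jmak_fraction(t, k=1e-3, n=2.5))         # assumed isothermal coefficients
```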
Assessment of wear dependence parameters in complex model of cutting tool wear
NASA Astrophysics Data System (ADS)
Antsev, A. V.; Pasko, N. I.; Antseva, N. V.
2018-03-01
This paper addresses the wear dependence of the generalized efficient life period of cutting tools, treated as an aggregate of the tool wear rate distribution law and the dependence of this law's parameters on the cutting mode, factoring in randomness as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing the wear dependence parameters in a complex model of cutting tool wear is provided. The technique is supported by a numerical example.
NASA Astrophysics Data System (ADS)
Davis, Brian; Turner, Travis L.; Seelecke, Stefan
2005-05-01
Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons within the beam structures. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical analysis/design tool. The experimental investigation is refined by a more thorough test procedure and incorporation of higher fidelity measurements such as infrared thermography and projection moire interferometry. The numerical results are produced by a recently commercialized version of the constitutive model as implemented in ABAQUS and are refined by incorporation of additional measured parameters such as geometric imperfection. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. The results demonstrate the effectiveness of SMAHC structures in controlling static and dynamic responses by adaptive stiffening. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.
Energy evaluation of protection effectiveness of anti-vibration gloves.
Hermann, Tomasz; Dobry, Marian Witalis
2017-09-01
This article describes an energy method of assessing protection effectiveness of anti-vibration gloves on the human dynamic structure. The study uses dynamic models of the human and the glove specified in Standard No. ISO 10068:2012. The physical models of human-tool systems were developed by combining human physical models with a power tool model. The combined human-tool models were then transformed into mathematical models from which energy models were finally derived. Comparative energy analysis was conducted in the domain of rms powers. The energy models of the human-tool systems were solved using numerical simulation implemented in the MATLAB/Simulink environment. The simulation procedure demonstrated the effectiveness of the anti-vibration glove as a method of protecting human operators of hand-held power tools against vibration. The desirable effect is achieved by lowering the flow of energy in the human-tool system when the anti-vibration glove is employed.
A Pythonic Approach for Computational Geosciences and Geo-Data Processing
NASA Astrophysics Data System (ADS)
Morra, G.; Yuen, D. A.; Lee, S. M.
2016-12-01
Computational methods and data analysis play a constantly increasing role in the Earth Sciences; however, students and professionals need to climb a steep learning curve before reaching a level that allows them to run effective models. Furthermore, the recent arrival of powerful new machine learning tools such as Torch and TensorFlow has opened new possibilities but also created a new realm of complications related to the completely different technology employed. We present here a series of examples entirely written in Python, a language that combines the simplicity of Matlab with the power and speed of compiled languages such as C, and apply them to a wide range of geological processes such as porous media flow, multiphase fluid dynamics, creeping flow and many-fault interaction. We also explore ways in which machine learning can be employed in combination with numerical modelling, from immediately interpreting a large number of modeling results to optimizing a set of modeling parameters to obtain a desired simulation. We show that by using Python, undergraduate and graduate students can learn advanced numerical technologies with minimal dedicated effort, which in turn encourages them to develop more numerical tools and quickly progress in their computational abilities. We also show how Python allows combining modeling with machine learning like pieces of LEGO, thereby simplifying the transition towards a new kind of scientific geo-modelling. The conclusion is that Python is an ideal tool for creating an infrastructure for geosciences that allows users to quickly develop tools, reuse techniques and encourage collaborative efforts to interpret and integrate geo-data in profound new ways.
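As a flavour of the kind of compact Python example the abstract refers to (this sketch is not taken from the authors' material; the diffusivity, boundary conditions, and grid are assumed values), a one-dimensional single-phase porous-media pressure-diffusion problem can be solved with an explicit finite-difference scheme in a few lines of NumPy:

```python
import numpy as np

# Minimal sketch: explicit finite differences for 1D single-phase pressure
# diffusion in a porous medium, p_t = D * p_xx, one of the process classes
# mentioned above.  All parameter values are assumed.
nx, L = 101, 100.0            # grid points, domain length [m]
dx = L / (nx - 1)
D = 1.0e-2                    # hydraulic diffusivity [m^2/s] (assumed)
dt = 0.4 * dx**2 / D          # stable explicit time step
p = np.zeros(nx)
p[0] = 1.0e5                  # fixed pressure [Pa] at the left boundary

for _ in range(20000):
    p[1:-1] += D * dt / dx**2 * (p[2:] - 2 * p[1:-1] + p[:-2])
    p[-1] = p[-2]             # no-flow right boundary

print("pressure at mid-domain:", p[nx // 2])
```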
Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review
NASA Astrophysics Data System (ADS)
Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal
2017-08-01
Machining of steel material with hardness above 45 HRC (Hardness-Rockwell C) is referred to as hard turning. There are numerous models which should be scrutinized and implemented to obtain optimum performance in hard turning. Various models for hard turning with cubic boron nitride tools have been reviewed in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces due to oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear are depicted in this review paper. Effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that an appropriate model can be used according to user requirements in hard turning.
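For context, Usui's wear model mentioned above is commonly written in the rate form dW/dt = C1·σn·Vs·exp(−C2/T). The short Python sketch below evaluates this form over a range of interface temperatures; the constants C1 and C2 and the cutting conditions are placeholder values, not data from the reviewed papers:

```python
import numpy as np

# Illustrative sketch of Usui's wear-rate equation, one of the models reviewed
# above: dW/dt = C1 * sigma_n * V_s * exp(-C2 / T).  The constants and the
# cutting conditions below are placeholders, not values from the paper.
C1, C2 = 1.0e-8, 2.5e3             # empirical constants (assumed units)
sigma_n = 1.5e9                    # normal stress on the flank face [Pa]
V_s = 2.0                          # sliding velocity [m/s]
T = np.linspace(800.0, 1400.0, 7)  # interface temperature [K]

wear_rate = C1 * sigma_n * V_s * np.exp(-C2 / T)
for Ti, wi in zip(T, wear_rate):
    print(f"T = {Ti:6.0f} K  ->  dW/dt = {wi:.3e} (volume loss per unit area per s)")
```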
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online estimation method for cutting error based on the analysis of internal sensor readings. The internal sensors of the numerical control (NC) machine tool are used in order to avoid installation problems. A mathematical model for estimating the cutting error is proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. In order to verify the effectiveness of the proposed model, it was applied in simulations and experiments of the gear generating grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the workpiece during the machining process.
Investigation of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams
NASA Technical Reports Server (NTRS)
Davis, Brian A.
2005-01-01
Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical model. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. Excellent agreement is achieved between the predicted and measured results, thereby quantitatively validating the numerical tool.
Ahmad, Zulfiqar; Ashraf, Arshad; Fryar, Alan; Akhter, Gulraiz
2011-02-01
The integration of Geographic Information Systems (GIS) with groundwater modeling and satellite remote sensing capabilities has provided an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A 3-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of the Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques that partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model is developed to configure the groundwater equipotential surface, the hydraulic head gradient, and an estimate of the groundwater budget of the aquifer. GIS is used for spatial database development, integration with remote sensing, and numerical groundwater flow modeling capabilities. Thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The ArcView GIS software is used as a supporting tool to develop input data for numerical groundwater flow modeling and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated and used to simulate changes in piezometric heads for the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant response of the water table to external influencing factors. The developed model provides an effective tool for evaluating management options for monitoring future groundwater development in the study area.
Numerous features have been included to facilitate the modeling process, from model setup and data input, to presentation and analysis of results, to easy export of results to spreadsheet programs for additional analysis.
Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics
Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E
2017-01-01
This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052
An integrated modeling and design tool for advanced optical spacecraft
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
1992-01-01
Consideration is given to the design and status of the Integrated Modeling of Optical Systems (IMOS) tool and to critical design issues. A multidisciplinary spacecraft design and analysis tool with support for structural dynamics, controls, thermal analysis, and optics, IMOS provides rapid and accurate end-to-end performance analysis, simulations, and optimization of advanced space-based optical systems. The requirements for IMOS-supported numerical arrays, user defined data structures, and a hierarchical data base are outlined, and initial experience with the tool is summarized. A simulation of a flexible telescope illustrates the integrated nature of the tools.
Numerical models for fluid-grains interactions: opportunities and limitations
NASA Astrophysics Data System (ADS)
Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony
2017-06-01
In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro-scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso-scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both the micro- and meso-scale levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/soft-sphere method. These numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from the high-fidelity micro-scale simulations and to use that knowledge to improve the meso-scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.
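To illustrate the soft-sphere collision treatment mentioned above (a generic linear spring-dashpot normal contact law, not the authors' actual implementation; stiffness, damping, and particle size are assumed values), a minimal Python sketch is:

```python
import numpy as np

# Minimal sketch of a soft-sphere (linear spring-dashpot) normal contact force
# of the kind used in Discrete Element collision models; all properties are
# assumed values, not those of the study.
k_n = 1.0e4      # normal spring stiffness [N/m]
eta_n = 5.0      # normal damping coefficient [N s/m]
radius = 1.0e-3  # particle radius [m]

def normal_contact_force(x_i, x_j, v_i, v_j):
    """Return the contact force on particle i from particle j (zero if apart)."""
    d = x_i - x_j
    dist = np.linalg.norm(d)
    overlap = 2 * radius - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                                  # unit normal, j -> i
    v_rel_n = np.dot(v_i - v_j, n)                # normal relative velocity
    return (k_n * overlap - eta_n * v_rel_n) * n  # repulsion + dissipation

f = normal_contact_force(np.array([0.0, 0.0, 0.0]),
                         np.array([1.8e-3, 0.0, 0.0]),
                         np.array([-0.1, 0.0, 0.0]),
                         np.array([0.1, 0.0, 0.0]))
print("contact force on particle i:", f)
```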
Comparison of BrainTool to other UML modeling and model transformation tools
NASA Astrophysics Data System (ADS)
Nikiforova, Oksana; Gusarovs, Konstantins
2017-07-01
In the last 30 years, numerous model-driven software generation systems have been offered to address problems with development productivity and the resulting software quality. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) makes similar claims regarding Unified Modeling Language (UML) models at different levels of abstraction, and it is said that software development using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features, starting with a model editor and a model repository for the traditional ones, and ending, for the most advanced ones, with a code generator (possibly using a scripting or domain-specific (DSL) language), a transformation tool to produce new artifacts from manually created ones, and a transformation definition editor to define new transformations. The present paper contains the results of a comparison of CASE tools (mainly UML editors) against the level of automation they offer.
MI-Sim: A MATLAB package for the numerical analysis of microbial ecological interactions.
Wade, Matthew J; Oakley, Jordan; Harbisher, Sophie; Parker, Nicholas G; Dolfing, Jan
2017-01-01
Food-webs and other classes of ecological network motifs, are a means of describing feeding relationships between consumers and producers in an ecosystem. They have application across scales where they differ only in the underlying characteristics of the organisms and substrates describing the system. Mathematical modelling, using mechanistic approaches to describe the dynamic behaviour and properties of the system through sets of ordinary differential equations, has been used extensively in ecology. Models allow simulation of the dynamics of the various motifs and their numerical analysis provides a greater understanding of the interplay between the system components and their intrinsic properties. We have developed the MI-Sim software for use with MATLAB to allow a rigorous and rapid numerical analysis of several common ecological motifs. MI-Sim contains a series of the most commonly used motifs such as cooperation, competition and predation. It does not require detailed knowledge of mathematical analytical techniques and is offered as a single graphical user interface containing all input and output options. The tools available in the current version of MI-Sim include model simulation, steady-state existence and stability analysis, and basin of attraction analysis. The software includes seven ecological interaction motifs and seven growth function models. Unlike other system analysis tools, MI-Sim is designed as a simple and user-friendly tool specific to ecological population type models, allowing for rapid assessment of their dynamical and behavioural properties.
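As an illustration of one of the interaction motifs MI-Sim covers (predation), the sketch below integrates a plain Lotka-Volterra system in Python with SciPy rather than the package's MATLAB implementation; all rate constants and initial densities are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a predation motif written as a plain Lotka-Volterra system; the
# rate constants and initial densities are illustrative only.
a, b, c, d = 1.0, 0.1, 1.5, 0.075   # growth, predation, mortality, conversion

def predation(t, y):
    prey, predator = y
    return [a * prey - b * prey * predator,
            -c * predator + d * prey * predator]

sol = solve_ivp(predation, (0.0, 50.0), [40.0, 9.0], max_step=0.05)
print("final prey/predator densities:", sol.y[:, -1])
```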
ERIC Educational Resources Information Center
Shacham, Mordechai; Cutlip, Michael B.; Brauner, Neima
2009-01-01
A continuing challenge to the undergraduate chemical engineering curriculum is the time-effective incorporation and use of computer-based tools throughout the educational program. Computing skills in academia and industry require some proficiency in programming and effective use of software packages for solving 1) single-model, single-algorithm…
Holmquist-Johnson, C. L.
2009-01-01
River-spanning rock structures are being constructed for water delivery as well as to enable fish passage at barriers and to provide or improve aquatic habitat for endangered fish species. Current design methods are based upon anecdotal information applicable to a narrow range of channel conditions. The complex flow patterns and performance of rock weirs are not well understood. Without an accurate understanding of their hydraulics, designers cannot address the failure mechanisms of these structures. Flow characteristics such as jets, near-bed velocities, recirculation, eddies, and plunging flow govern scour pool development. These detailed flow patterns can be replicated using a 3D numerical model. Numerical studies inexpensively simulate a large number of cases, resulting in an increased range of applicability, in order to develop design tools and predictive capability for analysis and design. The analysis and results of the numerical modeling, laboratory modeling, and field data provide a process-based method for understanding how structure geometry affects flow characteristics, scour development, fish passage, water delivery, and overall structure stability. The results of the numerical modeling allow designers to determine the appropriate geometry for generating desirable flow parameters. The end product of this research will be tools and guidelines for more robust structure design or retrofits based upon predictable engineering and hydraulic performance criteria. © 2009 ASCE.
Using 3-D Numerical Weather Data in Piloted Simulations
NASA Technical Reports Server (NTRS)
Daniels, Taumi S.
2016-01-01
This report describes the process of acquiring and using 3-D numerical model weather data sets in NASA Langley's Research Flight Deck (RFD). A set of software tools implements the process and can be used for other purposes as well. Given time and location information for a weather phenomenon of interest, the user can download the associated numerical weather model data. These data are created by the National Oceanic and Atmospheric Administration (NOAA) High Resolution Rapid Refresh (HRRR) model, and are then processed using a set of Mathworks' Matlab scripts to create the usable 3-D weather data sets. Each data set includes radar reflectivity, water vapor, component winds, temperature, supercooled liquid water, turbulence, pressure, altitude, land elevation, relative humidity, and water phases. An open-source data processing program, wgrib2, is available from NOAA online and is used along with the Matlab scripts. These scripts are described in sufficient detail to allow future modifications. These software tools have been used to generate 3-D weather data for various RFD experiments.
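A hypothetical sketch of how one processing step might be wrapped from Python (the report's actual scripts are Matlab-based; the file names are placeholders, and the wgrib2 options shown are common usage that should be checked against the locally installed version):

```python
import subprocess
from pathlib import Path

# Hypothetical wrapper around the wgrib2 utility mentioned above: extract one
# variable from an HRRR GRIB2 file into netCDF for later 3-D assembly.  File
# names are placeholders; the -match/-netcdf options are common wgrib2 usage
# but should be verified against the installed wgrib2 version.
grib_file = Path("hrrr.t12z.wrfprsf00.grib2")   # placeholder input file
out_file = Path("hrrr_temperature.nc")

cmd = ["wgrib2", str(grib_file), "-match", ":TMP:", "-netcdf", str(out_file)]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    raise RuntimeError(f"wgrib2 failed:\n{result.stderr}")
print(f"wrote {out_file} with fields:\n{result.stdout}")
```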
Integrated Control Modeling for Propulsion Systems Using NPSS
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Felder, James L.; Lavelle, Thomas M.; Withrow, Colleen A.; Yu, Albert Y.; Lehmann, William V. A.
2004-01-01
The Numerical Propulsion System Simulation (NPSS), an advanced engineering simulation environment used to design and analyze aircraft engines, has been enhanced by integrating control development tools into it. One of these tools is a generic controller interface that allows NPSS to communicate with control development software environments such as MATLAB and EASY5. The other tool is a linear model generator (LMG) that gives NPSS the ability to generate linear, time-invariant state-space models. Integrating these tools into NPSS enables it to be used for control system development. This paper will discuss the development and integration of these tools into NPSS. In addition, it will show a comparison of transient model results of a generic, dual-spool, military-type engine model that has been implemented in NPSS and Simulink. It will also show the linear model generator's ability to approximate the dynamics of a nonlinear NPSS engine model.
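As a generic illustration of what a linear model generator does (this is not the NPSS LMG itself; the toy dynamics and trim point are assumed), the sketch below numerically linearizes a nonlinear state equation about an operating point to produce state-space A and B matrices:

```python
import numpy as np

# Generic sketch of linear model generation: numerically linearize a nonlinear
# state equation xdot = f(x, u) about a trim point to obtain A and B.
def f(x, u):
    """Toy two-state engine-like dynamics; purely illustrative."""
    return np.array([-0.5 * x[0] + 0.2 * x[1] + u[0],
                     0.1 * x[0] - 0.8 * x[1] + 0.5 * u[0] ** 2])

def linearize(f, x0, u0, eps=1.0e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):                      # central differences w.r.t. states
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):                      # central differences w.r.t. inputs
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, x0=np.array([1.0, 0.5]), u0=np.array([0.2]))
print("A =\n", A)
print("B =\n", B)
```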
Progress and Challenges in Coupled Hydrodynamic-Ecological Estuarine Modeling
Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational po...
Investigation of the mechanical behaviour of the foot skin.
Fontanella, C G; Carniel, E L; Forestiero, A; Natali, A N
2014-11-01
The aim of this work was to provide computational tools for the characterization of the actual mechanical behaviour of foot skin, accounting for results from experimental testing and histological investigation. Such results show the typical features of skin mechanics, such as anisotropic configuration, almost incompressible behaviour, and material and geometrical nonlinearity. The anisotropic behaviour is mainly determined by the distribution of collagen fibres along specific directions, usually identified as cleavage lines. To evaluate the biomechanical response of foot skin, a refined numerical model of the foot is developed. The overall mechanical behaviour of the skin is interpreted by a fibre-reinforced hyperelastic constitutive model, and the orientation of the cleavage lines is implemented by a specific procedure. Numerical analyses that interpret typical loading conditions of the foot are performed. The influence of fibre orientation and distribution on skin mechanics is also outlined by a comparison with results from an isotropic scheme. A specific constitutive formulation is provided to characterize the mechanical behaviour of foot skin. The formulation is applied within a numerical model of the foot to investigate skin functionality during typical foot movements. Numerical analyses developed accounting for the actual anisotropic configuration of the skin show lower maximum principal stress fields than results from isotropic analyses. The developed computational models provide reliable tools for the investigation of foot tissue functionality. Furthermore, the comparison between numerical results from anisotropic and isotropic models shows the optimal configuration of foot skin. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Optimization of droplets for UV-NIL using coarse-grain simulation of resist flow
NASA Astrophysics Data System (ADS)
Sirotkin, Vadim; Svintsov, Alexander; Zaitsev, Sergey
2009-03-01
A mathematical model and numerical method are described which make it possible to simulate the ultraviolet ("step and flash") nanoimprint lithography (UV-NIL) process adequately, even on standard personal computers. The model is derived from the 3D Navier-Stokes equations with the understanding that the resist motion is largely directed along the substrate surface and is characterized by ultra-low values of the Reynolds number. For the numerical approximation of the model, a special finite difference (coarse-grain) method is applied. A coarse-grain modeling tool for detailed analysis of resist spreading in UV-NIL at the structure-scale level is tested. The obtained results demonstrate the high ability of the tool to calculate the optimal dispensing for a given stamp design and process parameters. This dispensing provides uniformly filled areas and a homogeneous residual layer thickness in UV-NIL.
UQTk Version 3.0.3 User Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sargsyan, Khachik; Safta, Cosmin; Chowdhary, Kamaljit Singh
2017-05-01
The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0.3 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
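For orientation, the sketch below shows the simplest non-intrusive style of uncertainty propagation (plain Monte Carlo sampling in Python, not UQTk's own surrogate or polynomial chaos machinery); the toy model and the input distributions are assumed:

```python
import numpy as np

# Minimal non-intrusive uncertainty propagation: push samples of uncertain
# inputs through a computational model and summarize the output distribution.
rng = np.random.default_rng(0)

def model(k, q):
    """Toy computational model with two uncertain inputs."""
    return q / (1.0 + k) + np.sqrt(k) * q

n = 20000
k = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # assumed input distribution
q = rng.normal(loc=5.0, scale=0.5, size=n)       # assumed input distribution
y = model(k, q)

print(f"mean = {y.mean():.3f}, std = {y.std():.3f}")
print(f"95% interval = [{np.percentile(y, 2.5):.3f}, {np.percentile(y, 97.5):.3f}]")
```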
NASA Astrophysics Data System (ADS)
Shoev, G. V.; Bondar, Ye. A.; Oblapenko, G. P.; Kustova, E. V.
2016-03-01
Various issues of numerical simulation of supersonic gas flows with allowance for thermochemical nonequilibrium on the basis of fluid dynamic equations in the two-temperature approximation are discussed. The computational tool for modeling flows with thermochemical nonequilibrium is the commercial software package ANSYS Fluent with an additional user-defined open-code module. A comparative analysis of results obtained by various models of vibration-dissociation coupling in binary gas mixtures of nitrogen and oxygen is performed. Results of numerical simulations are compared with available experimental data.
NASA Astrophysics Data System (ADS)
Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.
2015-11-01
We offer a versatile workflow to convert geological models built with the Paradigm™ GOCAD© (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for usage in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.
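As a small illustration of the target VTU format only (not the GOCAD conversion workflow itself), the sketch below writes a one-element unstructured grid with a material id using the third-party meshio package; the file name and material id are placeholders:

```python
import numpy as np
import meshio  # third-party package; the paper's own workflow uses other tools

# Build a single-hexahedron unstructured grid with a material id and write it
# to the VTU format used by the numerical simulators.  Values are placeholders.
points = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0], [0.0, 1.0, 1.0],
])
cells = [("hexahedron", np.array([[0, 1, 2, 3, 4, 5, 6, 7]]))]

mesh = meshio.Mesh(points, cells,
                   cell_data={"MaterialIDs": [np.array([1])]})
meshio.write("layer_example.vtu", mesh)
print("wrote layer_example.vtu")
```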
Sandia National Laboratories analysis code data base
NASA Astrophysics Data System (ADS)
Peterson, C. W.
1994-11-01
Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.
Numerical modelling in biosciences using delay differential equations
NASA Astrophysics Data System (ADS)
Bocharov, Gennadii A.; Rihan, Fathalla A.
2000-12-01
Our principal purposes here are (i) to consider, from the perspective of applied mathematics, models of phenomena in the biosciences that are based on delay differential equations and for which numerical approaches are a major tool in understanding their dynamics, (ii) to review the application of numerical techniques to investigate these models. We show that there are prima facie reasons for using such models: (i) they have a richer mathematical framework (compared with ordinary differential equations) for the analysis of biosystem dynamics, (ii) they display better consistency with the nature of certain biological processes and predictive results. We analyze both the qualitative and quantitative role that delays play in basic time-lag models proposed in population dynamics, epidemiology, physiology, immunology, neural networks and cell kinetics. We then indicate suitable computational techniques for the numerical treatment of mathematical problems emerging in the biosciences, comparing them with those implemented by the bio-modellers.
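As a concrete example of the kind of delay model and numerical treatment discussed (this is Hutchinson's delayed logistic equation integrated with a basic fixed-step scheme; the parameter values and the constant history function are illustrative, not taken from the paper):

```python
import numpy as np

# Fixed-step integration of a delay differential equation:
# Hutchinson's delayed logistic model y'(t) = r*y(t)*(1 - y(t - tau)/K).
r, K, tau = 0.8, 1.0, 2.0      # growth rate, capacity, delay (assumed)
dt = 0.01
n_delay = int(round(tau / dt)) # number of steps spanning the delay
n_steps = 5000                 # integration steps after t = 0

y = np.empty(n_delay + n_steps + 1)
y[:n_delay + 1] = 0.1          # constant history on [-tau, 0]

for i in range(n_delay, n_delay + n_steps):
    y_lag = y[i - n_delay]                           # y(t - tau) from history
    y[i + 1] = y[i] + dt * r * y[i] * (1.0 - y_lag / K)

print(f"y at t = {n_steps * dt:.1f}:", y[-1])
```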
Algorithms for the Fractional Calculus: A Selection of Numerical Methods
NASA Technical Reports Server (NTRS)
Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.
2003-01-01
Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.
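One classical algorithm in this family is the Grünwald-Letnikov approximation, D^α f(t) ≈ h^(−α) Σ_k w_k f(t − kh). The sketch below applies it to f(t) = t², for which the exact fractional derivative is known; the step size and order are assumed values, and the code only illustrates the general approach rather than the specific algorithms collected in the paper:

```python
import numpy as np
from math import gamma

# Gruenwald-Letnikov estimate of the fractional derivative of order alpha,
# demonstrated on f(t) = t^2 where the exact result is known.
def gl_fractional_derivative(f_vals, alpha, h):
    """GL estimate of D^alpha f at the last grid point of f_vals."""
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                       # recurrence for (-1)^k C(alpha, k)
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return h ** (-alpha) * np.dot(w, f_vals[::-1])

alpha, h, t_end = 0.5, 1.0e-3, 1.0
t = np.arange(0.0, t_end + h, h)
approx = gl_fractional_derivative(t ** 2, alpha, h)
exact = gamma(3.0) / gamma(3.0 - alpha) * t_end ** (2.0 - alpha)
print(f"GL approximation: {approx:.6f}, exact: {exact:.6f}")
```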
A 3D Hydrodynamic Model for Heterogeneous Biofilms with Antimicrobial Persistence
2014-01-01
antimicrobial agents, providing a useful tool for analyzing the mechanism of biofilm persistence to antimicrobial agents in an aqueous environment. The numerical result also confirms that the periodic dosing... We model the biofilm together with its surrounding aqueous environment as a mixture of complex fluids. The biofilm consists of the biomass
AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)
Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...
Challenges to Applying a Metamodel for Groundwater Flow Beyond Underlying Numerical Model Boundaries
NASA Astrophysics Data System (ADS)
Reeves, H. W.; Fienen, M. N.; Feinstein, D.
2015-12-01
Metamodels of environmental behavior offer opportunities for decision support, adaptive management, and increased stakeholder engagement through participatory modeling and model exploration. Metamodels are derived from calibrated, computationally demanding, numerical models. They may potentially be applied to non-modeled areas to provide screening or preliminary analysis tools for areas that do not yet have the benefit of more comprehensive study. In this decision-support mode, they may be fulfilling a role often accomplished by application of analytical solutions. The major challenge to transferring a metamodel to a non-modeled area is how to quantify the spatial data in the new area of interest in such a way that it is consistent with the data used to derive the metamodel. Tests based on transferring a metamodel derived from a numerical groundwater-flow model of the Lake Michigan Basin to other glacial settings across the northern U.S. show that the spatial scale of the numerical model must be appropriately scaled to adequately represent different settings. Careful GIS analysis of the numerical model, metamodel, and new area of interest is required for successful transfer of results.
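A generic illustration of deriving and reusing a regression metamodel (synthetic data and ordinary least squares in Python; the study's actual metamodel was built from a calibrated groundwater flow model with different predictors and response):

```python
import numpy as np

# Fit a simple regression metamodel to synthetic "numerical model" output and
# apply it at a new site.  All data and predictors below are invented.
rng = np.random.default_rng(1)

# synthetic training set: predictors could stand in for recharge, hydraulic
# conductivity, and distance to surface water; the response for drawdown
X = rng.uniform(low=[0.1, 1.0, 50.0], high=[1.0, 50.0, 500.0], size=(200, 3))
y = 2.0 * X[:, 0] - 0.05 * X[:, 1] + 0.003 * X[:, 2] + rng.normal(0, 0.1, 200)

# ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", np.round(coef, 4))

# applying the metamodel to a "non-modeled" site only makes sense if its
# predictors are expressed at the same spatial scale as the training data
x_new = np.array([1.0, 0.4, 20.0, 300.0])
print("predicted response at new site:", float(x_new @ coef))
```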
The visualization of spatial uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, R.M.
1994-12-31
Geostatistical conditional simulation is gaining acceptance as a numerical modeling tool in the petroleum industry. Unfortunately, many of the new users of conditional simulation work with only one outcome or "realization" and ignore the many other outcomes that could be produced by their conditional simulation tools. 3-D visualization tools allow them to present very realistic images of this single outcome as reality. There are many methods currently available for presenting the uncertainty information from a family of possible outcomes; most of these, however, use static displays and many present uncertainty in a format that is not intuitive. This paper explores the visualization of uncertainty through dynamic displays that exploit the intuitive link between uncertainty and change by presenting the user with a constantly evolving model. The key technical challenge to such a dynamic presentation is the ability to create numerical models that honor the available well data and geophysical information and yet are incrementally different, so that successive frames can be viewed rapidly as an animated cartoon. An example of volumetric uncertainty from a Gulf Coast reservoir is used to demonstrate that such animation is possible and to show that such dynamic displays can be an effective tool in risk analysis for the petroleum industry.
Development of a Design Tool for Planning Aqueous Amendment Injection Systems
2012-08-01
Report excerpt (table-of-contents fragments only): chemical oxidation with permanganate (MnO4-), implementation issues, SS design tool development and evaluation, numerical modeling of permanganate distribution, and CDISCO development and evaluation.
Analyzing asteroid reflectance spectra with numerical tools based on scattering simulations
NASA Astrophysics Data System (ADS)
Penttilä, Antti; Väisänen, Timo; Markkanen, Johannes; Martikainen, Julia; Gritsevich, Maria; Muinonen, Karri
2017-04-01
We are developing a set of numerical tools that can be used in analyzing the reflectance spectra of granular materials such as the regolith surface of atmosphereless Solar System objects. Our goal is to be able to explain, with realistic numerical scattering models, the spectral features arising when materials are intimately mixed together. We include space-weathering-type effects in our simulations, i.e., locally mixing the host mineral with small inclusions of another material in small proportions. Our motivation for this study comes from the present lack of such tools. The current common practice is to apply a semi-physical approximate model such as some variation of the Hapke models [e.g., 1] or the Shkuratov model [2]. These models are expressed in closed form so that they are relatively fast to apply. They are based on simplifications of radiative transfer theory. The problem is that the validity of the model is not always guaranteed, and the derived physical properties related to particle scattering can be unrealistic [3]. We base our numerical tool on a chain of scattering simulations. Scattering properties of small inclusions inside an absorbing host matrix can be derived using exact methods solving the Maxwell equations of the system. The next step, scattering by a single regolith grain, is solved using a geometrical optics method accounting for surface reflections, internal absorption, and possibly internal diffuse scattering. The third step involves radiative transfer simulations of these regolith grains in a macroscopic planar element. The chain can then be continued with a shadowing simulation over the target surface elements, and finally by integrating the bidirectional reflectance distribution function over the object's shape. Most of the tools in the proposed chain already exist, and one practical task for us is to tie these together into an easy-to-use toolchain that can be publicly distributed. We plan to open the abovementioned toolchain as a web-based open service. Acknowledgments: The research is funded by the ERC Advanced Grant No. 320773 (SAEMPL). References: [1] B. Hapke, Icarus 195, 918-926, 2008. [2] Yu. Shkuratov et al., Icarus 137, 235-246, 1999. [3] Yu. Shkuratov et al., JQSRT 113, 2431-2456, 2012. [4] K. Muinonen et al., JQSRT 110, 1628-1639, 2009.
Economic Consequence Analysis of Disasters: The ECAT Software Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Adam; Prager, Fynn; Chen, Zhenhua
This study develops a methodology for rapidly obtaining approximate estimates of the economic consequences from numerous natural, man-made and technological threats. This software tool is intended for use by various decision makers and analysts to obtain estimates rapidly. It is programmed in Excel and Visual Basic for Applications (VBA) to facilitate its use. This tool is called E-CAT (Economic Consequence Analysis Tool) and accounts for the cumulative direct and indirect impacts (including resilience and behavioral factors that significantly affect base estimates) on the U.S. economy. E-CAT is intended to be a major step toward advancing the current state of economic consequence analysis (ECA) and also contributing to and developing interest in further research into complex but rapid turnaround approaches. The essence of the methodology involves running numerous simulations in a computable general equilibrium (CGE) model for each threat, yielding synthetic data for the estimation of a single regression equation based on the identification of key explanatory variables (threat characteristics and background conditions). This transforms the results of a complex model, which is beyond the reach of most users, into a "reduced form" model that is readily comprehensible. Functionality has been built into E-CAT so that its users can switch various consequence categories on and off in order to create customized profiles of economic consequences of numerous risk events. E-CAT incorporates uncertainty on both the input and output side in the course of the analysis.
Model-Driven Useware Engineering
NASA Astrophysics Data System (ADS)
Meixner, Gerrit; Seissler, Marc; Breiner, Kai
User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Wang, Taiping
This paper presents a modeling study conducted to evaluate tidal-stream energy extraction and its associated potential environmental impacts using a three-dimensional unstructured-grid coastal ocean model, which was coupled with a water-quality model and a tidal-turbine module.
Numerical simulations for active tectonic processes: increasing interoperability and performance
NASA Technical Reports Server (NTRS)
Donnellan, A.; Fox, G.; Rundle, J.; McLeod, D.; Tullis, T.; Grant, L.
2002-01-01
The objective of this project is to produce a system to fully model earthquake-related data. This task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling.
Comparison of software tools for kinetic evaluation of chemical degradation data.
Ranke, Johannes; Wöltjen, Janina; Meinecke, Stefan
2018-01-01
For evaluating the fate of xenobiotics in the environment, a variety of degradation or environmental metabolism experiments are routinely conducted. The data generated in such experiments are evaluated by optimizing the parameters of kinetic models in a way that the model simulation fits the data. No comparison of the main software tools currently in use has been published to date. This article shows a comparison of numerical results as well as an overall, somewhat subjective comparison based on a scoring system using a set of criteria. The scoring was separately performed for two types of uses. Uses of type I are routine evaluations involving standard kinetic models and up to three metabolites in a single compartment. Evaluations involving non-standard model components, more than three metabolites or more than a single compartment belong to use type II. For use type I, usability is most important, while the flexibility of the model definition is most important for use type II. Test datasets were assembled that can be used to compare the numerical results for different software tools. These datasets can also be used to ensure that no unintended or erroneous behaviour is introduced in newer versions. In the comparison of numerical results, good agreement between the parameter estimates was observed for datasets with up to three metabolites. For the now unmaintained reference software DegKinManager/ModelMaker, and for OpenModel which is still under development, user options were identified that should be taken care of in order to obtain results that are as reliable as possible. Based on the scoring system mentioned above, the software tools gmkin, KinGUII and CAKE received the best scores for use type I. Out of the 15 software packages compared with respect to use type II, again gmkin and KinGUII were the first two, followed by the script based tool mkin, which is the technical basis for gmkin, and by OpenModel. Based on the evaluation using the system of criteria mentioned above and the comparison of numerical results for the suite of test datasets, the software tools gmkin, KinGUII and CAKE are recommended for use type I, and gmkin and KinGUII for use type II. For users that prefer to work with scripts instead of graphical user interfaces, mkin is recommended. For future software evaluations, it is recommended to include a measure for the total time that a typical user needs for a kinetic evaluation into the scoring scheme. It is the hope of the authors that the publication of test data, source code and overall rankings foster the evolution of useful and reliable software in the field.
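For orientation, the simplest kinetic model these tools fit is single first-order (SFO) decline of the parent compound, C(t) = C0·exp(−kt). The sketch below fits it with SciPy to invented observations (not one of the article's test datasets):

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of the kind of fit these tools perform (single first-order
# kinetics for a parent compound only); the observations below are invented.
def sfo(t, c0, k):
    """Single first-order model: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

days = np.array([0.0, 3.0, 7.0, 14.0, 30.0, 60.0, 90.0])
residue = np.array([100.0, 81.0, 62.0, 40.0, 17.0, 3.5, 1.0])  # % of applied

popt, pcov = curve_fit(sfo, days, residue, p0=[100.0, 0.05])
c0_fit, k_fit = popt
print(f"C0 = {c0_fit:.2f} %, k = {k_fit:.4f} 1/day")
print(f"DT50 = {np.log(2.0) / k_fit:.1f} days")   # half-life from the rate constant
```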
Direct Numerical Simulations of Diffusive Staircases in the Arctic
2009-03-01
modeling is the simplest and most obvious tool for evaluating the mixing characteristics in the Arctic Ocean, and it will be extensively used in our... and Kinglear, in addition to Department of Defense (DoD) supercomputer clusters, Babbage, Davinci, and Midnight. Low resolution model runs were... Krishfield, R., Toole, J., Proshutinsky, A., & Timmermans, M.-L. (2008). Automated Ice Tethered Profilers for seawater observations under pack ice in
Efficient simulation of press hardening process through integrated structural and CFD analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palaniswamy, Hariharasudhan; Mondalek, Pamela; Wronski, Maciek
Press hardened steel parts are being increasingly used in automotive structures for their higher strength, to meet safety standards while reducing vehicle weight to improve fuel consumption. However, manufacturing of sheet metal parts by the press hardening process to achieve desired properties is extremely challenging, as it involves complex interaction of plastic deformation, metallurgical change, thermal distribution, and fluid flow. Numerical simulation is critical for successful design of the process and for understanding the interaction among the numerous process parameters in order to control the press hardening process and consistently achieve desired part properties. Until now there has been no integrated commercial software solution that can efficiently model the complete process, from forming of the blank, heat transfer between the blank and tool, and microstructure evolution in the blank, to heat loss from the tool to the fluid that flows through water channels in the tools. In this study, a numerical solution based on the Altair HyperWorks® product suite, involving RADIOSS®, a non-linear finite element based structural analysis solver, and AcuSolve®, an incompressible fluid flow solver based on the Galerkin Least Squares Finite Element Method, has been utilized to develop an efficient solution for complete press hardening process design and analysis. RADIOSS is used to handle the plastic deformation, heat transfer between the blank and tool, and microstructure evolution in the blank during cooling, while AcuSolve is used to efficiently model heat loss from the tool to the fluid that flows through water channels in the tools. The approach is demonstrated through some case studies.
New Tooling System for Forming Aluminum Beverage Can End Shell
NASA Astrophysics Data System (ADS)
Yamazaki, Koetsu; Otsuka, Takayasu; Han, Jing; Hasegawa, Takashi; Shirasawa, Taketo
2011-08-01
This paper proposes a new tooling system for forming shells of aluminum beverage can ends. First, the forming process of a conventional tooling system has been simulated using three-dimensional finite element models. Simulation results have been confirmed to be consistent with those of axisymmetric models, so simulations for further study have been performed using axisymmetric models to save computational time. A comparison shows that thinning of the shell formed by the proposed tooling system has been improved by about 3.6%. Influences of the tool uppermost surface profiles and tool initial positions in the new tooling system have been investigated, and a design optimization method based on the numerical simulations has been applied to search for optimum design points, in order to minimize thinning subject to constraints on the geometrical dimensions of the shell. Finally, the performance of the shell subjected to internal pressure has been confirmed to meet design requirements.
Comparison of Numerical Modeling Methods for Soil Vibration Cutting
NASA Astrophysics Data System (ADS)
Jiang, Jiandong; Zhang, Enguang
2018-01-01
In this paper, we studied the appropriate numerical simulation method for vibration soil cutting. Three numerical simulation methods commonly used for uniform-speed soil cutting - Lagrange, ALE and DEM - are analyzed. Three vibration soil cutting simulation models are established using LS-DYNA. The applicability of the three methods to this problem is analyzed in combination with the model mechanism and the simulation results. Both the Lagrange method and the DEM method can show the force oscillation of the tool and the large deformation of the soil in vibration cutting. The Lagrange method shows a better representation of soil debris breaking. Because of the poor stability of the ALE method, it is not well suited to the soil vibration cutting problem.
Inducer analysis/pump model development
NASA Astrophysics Data System (ADS)
Cheng, Gary C.
1994-03-01
Current design of high performance turbopumps for rocket engines requires effective and robust analytical tools to provide design information in a productive manner. The main goal of this study was to develop a robust and effective computational fluid dynamics (CFD) pump model for general turbopump design and analysis applications. A finite difference Navier-Stokes flow solver, FDNS, which includes an extended k-epsilon turbulence model and appropriate moving zonal interface boundary conditions, was developed to analyze turbulent flows in turbomachinery devices. In the present study, three key components of the turbopump, the inducer, impeller, and diffuser, were investigated by the proposed pump model, and the numerical results were benchmarked by the experimental data provided by Rocketdyne. For the numerical calculation of inducer flows with tip clearance, the turbulence model and grid spacing are very important. Meanwhile, the development of the cross-stream secondary flow, generated by curved blade passage and the flow through tip leakage, has a strong effect on the inducer flow. Hence, the prediction of the inducer performance critically depends on whether the numerical scheme of the pump model can simulate the secondary flow pattern accurately or not. The impeller and diffuser, however, are dominated by pressure-driven flows such that the effects of turbulence model and grid spacing (except near leading and trailing edges of blades) are less sensitive. The present CFD pump model has been proved to be an efficient and robust analytical tool for pump design due to its very compact numerical structure (requiring small memory), fast turnaround computing time, and versatility for different geometries.
Inducer analysis/pump model development
NASA Technical Reports Server (NTRS)
Cheng, Gary C.
1994-01-01
Current design of high performance turbopumps for rocket engines requires effective and robust analytical tools to provide design information in a productive manner. The main goal of this study was to develop a robust and effective computational fluid dynamics (CFD) pump model for general turbopump design and analysis applications. A finite difference Navier-Stokes flow solver, FDNS, which includes an extended k-epsilon turbulence model and appropriate moving zonal interface boundary conditions, was developed to analyze turbulent flows in turbomachinery devices. In the present study, three key components of the turbopump, the inducer, impeller, and diffuser, were investigated by the proposed pump model, and the numerical results were benchmarked by the experimental data provided by Rocketdyne. For the numerical calculation of inducer flows with tip clearance, the turbulence model and grid spacing are very important. Meanwhile, the development of the cross-stream secondary flow, generated by curved blade passage and the flow through tip leakage, has a strong effect on the inducer flow. Hence, the prediction of the inducer performance critically depends on whether the numerical scheme of the pump model can simulate the secondary flow pattern accurately or not. The impeller and diffuser, however, are dominated by pressure-driven flows such that the effects of turbulence model and grid spacing (except near leading and trailing edges of blades) are less sensitive. The present CFD pump model has been proved to be an efficient and robust analytical tool for pump design due to its very compact numerical structure (requiring small memory), fast turnaround computing time, and versatility for different geometries.
Blended near-optimal tools for flexible water resources decision making
NASA Astrophysics Data System (ADS)
Rosenberg, David
2015-04-01
State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the static modelled issues and managers often seek near-optimal alternatives that address un-modelled or changing objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally-different alternatives that addressed select un-modelled issues. This paper presents new stratified, Monte Carlo Markov Chain sampling and parallel coordinate plotting tools that generate and communicate the structure and full extent of the near-optimal region to an optimization problem. Plot controls allow users to interactively explore region features of most interest. Controls also streamline the process to elicit un-modelled issues and update the model formulation in response to elicited issues. Use for a single-objective water quality management problem at Echo Reservoir, Utah identifies numerous and flexible practices to reduce the phosphorus load to the reservoir and maintain close-to-optimal performance. Compared to MGA, the new blended tools generate more numerous alternatives faster, more fully show the near-optimal region, help elicit a larger set of un-modelled issues, and offer managers greater flexibility to cope in a changing world.
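As a toy illustration of the near-optimal idea only (plain rejection sampling on a two-variable problem, not the stratified Markov chain Monte Carlo sampler or the Echo Reservoir model described above; the cost function, constraint, and tolerance are all assumed):

```python
import numpy as np

# Keep decision vectors whose cost stays within a tolerance of the optimum:
# a crude stand-in for near-optimal region generation.
rng = np.random.default_rng(42)

def cost(x1, x2):
    """Hypothetical management cost to be minimized."""
    return 3.0 * x1 + 5.0 * x2

def feasible(x1, x2):
    return 2.0 * x1 + 4.0 * x2 >= 8.0          # required load reduction (assumed)

# brute-force "optimum" on a grid for this toy problem
g = np.linspace(0.0, 5.0, 501)
X1, X2 = np.meshgrid(g, g)
mask = feasible(X1, X2)
f_opt = cost(X1[mask], X2[mask]).min()

tol = 1.10                                     # accept costs within 10% of optimal
samples = rng.uniform(0.0, 5.0, size=(50000, 2))
keep = [s for s in samples
        if feasible(*s) and cost(*s) <= tol * f_opt]
print(f"optimal cost {f_opt:.2f}; {len(keep)} near-optimal alternatives retained")
```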
Modeling of the Global Water Cycle - Analytical Models
Yongqiang Liu; Roni Avissar
2005-01-01
Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...
Performance evaluation of Bragg coherent diffraction imaging
NASA Astrophysics Data System (ADS)
Öztürk, H.; Huang, X.; Yan, H.; Robinson, I. K.; Noyan, I. C.; Chu, Y. S.
2017-10-01
In this study, we present a numerical framework for modeling three-dimensional (3D) diffraction data in Bragg coherent diffraction imaging (Bragg CDI) experiments and evaluating the quality of obtained 3D complex-valued real-space images recovered by reconstruction algorithms under controlled conditions. The approach is used to systematically explore the performance and the detection limit of this phase-retrieval-based microscopy tool. The numerical investigation suggests that the superb performance of Bragg CDI is achieved with an oversampling ratio above 30 and a detection dynamic range above 6 orders. The observed performance degradation subject to the data binning processes is also studied. This numerical tool can be used to optimize experimental parameters and has the potential to significantly improve the throughput of Bragg CDI method.
Chemical Transport in a Fissured Rock: Verification of a Numerical Model
NASA Astrophysics Data System (ADS)
Rasmuson, A.; Narasimhan, T. N.; Neretnieks, I.
1982-10-01
Numerical models for simulating chemical transport in fissured rocks constitute powerful tools for evaluating the acceptability of geological nuclear waste repositories. Due to the very long-term, high toxicity of some nuclear waste products, the models are required to predict, in certain cases, the spatial and temporal distribution of chemical concentrations of less than 0.001% of the concentration released from the repository. Whether numerical models can provide such accuracies is a major question addressed in the present work. To this end we have verified a numerical model, TRUMP, which solves the advective diffusion equation in general three dimensions, with or without decay and source terms. The method is based on an integrated finite difference approach. The model was verified against the known analytic solution of the one-dimensional advection-diffusion problem, as well as the problem of advection-diffusion in a system of parallel fractures separated by spherical particles. The studies show that as long as the magnitude of advectance is equal to or less than that of conductance for the closed surface bounding any volume element in the region (that is, numerical Péclet number <2), the numerical method can indeed match the analytic solution within errors of ±10⁻³% or less. The realistic input parameters used in the sample calculations suggest that such a range of Péclet numbers is indeed likely to characterize deep groundwater systems in granitic and ancient argillaceous systems. Thus TRUMP in its present form does provide a viable tool for use in nuclear waste evaluation studies. A sensitivity analysis based on the analytic solution suggests that the errors in prediction introduced due to uncertainties in input parameters are likely to be larger than the computational inaccuracies introduced by the numerical model. Currently, a disadvantage of the TRUMP model is that the iterative method of solving the set of simultaneous equations is rather slow when time constants vary widely over the flow region. Although the iterative solution may be very desirable for large three-dimensional problems in order to minimize computer storage, it seems desirable to use a direct solver technique in conjunction with the mixed explicit-implicit approach whenever possible. Work in this direction is in progress.
NASA Astrophysics Data System (ADS)
Fernandes, R.; Leitão, P. C.; Braunschweig, F.; Lourenço, F.; Galvão, P.; Neves, R.
2012-04-01
Increasing ship traffic and maritime transport of dangerous substances make it more difficult to significantly reduce the environmental, economic and social risks posed by potential spills, even though safety rules are becoming more restrictive (double-hull ships, etc.) and surveillance systems more developed (VTS, AIS). In fact, the problems associated with spills are, and will remain, a major topic: spill events happen continuously, most of them unknown to the general public because of their small-scale impact, while a much smaller number become genuine media phenomena in this information era, owing to their large dimensions, their environmental and socio-economic impacts on ecosystems and local communities, and the spectacular or shocking pictures they generate. Hence, the adverse consequences of this type of accident increase the motivation to avoid them in the future or to minimize their impacts, not only by using surveillance and monitoring tools but also by increasing the capacity to predict the fate and behaviour of bodies, objects, or substances in the hours following an accident; numerical models can now play a leading role in operational oceanography applied to safety and pollution response in the ocean because of their predictive potential. Search and rescue operations and risk analysis for oil, inert (ship debris or floating containers) and HNS (hazardous and noxious substances) spills are the main areas where models can be used. Model applications have been widely used in emergency or planning issues associated with pollution risks and with contingency and mitigation measures. Before a spill, in the planning stage, model simulations are used in environmental impact studies or risk maps, using historical data, reference situations, and typical scenarios. After a spill, fast and simple modelling applications help in understanding the fate and behaviour of the spilt substances, in managing the crisis, in distributing response resources, and in prioritizing specific areas. They can also be used for the detection of pollution sources. However, the resources involved and the scientific and technological expertise needed to operate numerical models have both limited the interoperability between operational models, monitoring tools and decision-support software. The increasing capacity to predict metocean conditions and the fate and behaviour of pollutants spilt at sea or in coastal zones, together with monitoring tools such as vessel traffic control systems, can provide safer support for decision-making in emergency or planning issues associated with pollution risk management, especially if used in an integrated way. Following this approach, and taking advantage of an integrated framework developed in the ARCOPOL (www.arcopol.eu) and EASYCO (www.project-easy.info) projects, three innovative model-supported software tools were developed and applied in the Atlantic Area and/or on the Portuguese coast. Two of these tools are used for spill model simulations - a web-based interface (EASYCO web bidirectional tool) and an advanced desktop application (MOHID Desktop Spill Simulator) - both of which allow the end user to control the model simulations.
Parameters such as the date and time of the event, the location and the oil spill volume are provided by the users; these interactive tools also integrate the best available metocean forecasts (waves, meteorology, hydrodynamics) from different institutions in the Atlantic Area. Metocean data are continuously gathered from remote THREDDS data servers (using OPeNDAP) or FTP sites, and then automatically interpolated and pre-processed so that they are available to the simulators. These simulation tools can also import initial data and export results from/to remote servers using OGC WFS services. Simulations are delivered to the end user in a matter of seconds and can therefore be very useful in emergency situations. The backtracking modelling feature and the possibility of importing spill locations from remote servers with observed data (for example, from flight surveillance or remote sensing) allow potential application to the evaluation of possible contamination sources. The third tool developed is an innovative system that dynamically produces quantified risk levels in real time, integrating the best available information from numerical forecasts and existing monitoring tools. This system provides coastal pollution risk levels associated with potential (or real) oil spill incidents, taking into account regional statistical information on vessel accidents and coastal sensitivity indexes (determined in the EROCIPS project), real-time vessel information (position, cargo type, speed and vessel type) obtained from AIS, the best available metocean numerical forecasts (hydrodynamics, meteorology - including visibility - and wave conditions) and scenarios simulated by the oil spill fate and behaviour component of the MOHID Water Modelling System (www.mohid.com). Different spill fate and behaviour simulations are continuously generated and processed in the background (assuming hypothetical spills from vessels), based on varying vessel information and metocean conditions, and the results of these simulations are used to quantify the consequences of potential spills. The Dynamic Risk Tool was not designed to replace conventional mapping tools, but to complement that type of information with an innovative approach to risk mapping. Taking advantage of the interoperability between forecasting models, oil spill simulations, AIS monitoring systems, statistical data and coastal vulnerability, this software can provide end users with real-time risk levels, enabling an innovative approach to risk mapping, an improved decision-support model for decision-makers and intelligent risk-based traffic monitoring. For instance, the tool allows the prioritisation of individual ships and geographical areas, and facilitates strategic and dynamic tug positioning. As noted, risk levels are generated in real time and the historical results are kept in a database, allowing later risk analysis or compilations for specific seasons or regions in order to obtain typical risk maps. The integration of metocean modelling results (instead of typical static scenarios), together with continuous background oil spill modelling, provides a more realistic approach to the estimation of risk levels - metocean conditions and oil spill behaviour are always different and specific, and it is virtually impossible to define those conditions in advance, even if several thousand static scenarios were considered beforehand. This system was initially implemented in Portugal (ARCOPOL project) for oil spills.
The implementation in other regions of the Atlantic and the adaptation to chemical spills will be carried out within the scope of the ARCOPOL+ project. The numerical model used to compute the fate and behaviour of spilled substances in all the tools developed (the MOHID Lagrangian & oil spill model from the MOHID Water Modelling System) was also subject to several adaptations and updates in order to better serve these tools - horizontal velocity due to Stokes drift, vertical movement of oil substances, modelling of floating containers, backtracking modelling and a multi-solution approach (generating the computational grid on the fly and using the information from the multiple metocean forecasting solutions available) are some of the main features recently implemented. The main purpose of these software tools is to reduce the gap between decision-makers and scientific modellers - although the correct analysis of model results usually requires a specialist, an operational model user should not lose most of their time converting and interpolating metocean results, preparing input data files, running models and post-processing results, rather than analysing results and producing different scenarios. Harmonization and standardization in the dissemination of numerical model outputs is a strategic effort for the modelling community, because it facilitates the application of model results in decision-support tools like the ones presented here.
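As a rough illustration of the metocean pre-processing chain described above (remote THREDDS/OPeNDAP access, clipping and interpolation), the following sketch uses xarray; the server URL, variable names and bounding box are hypothetical placeholders, not the actual ARCOPOL/EASYCO endpoints:

```python
import xarray as xr

# Hypothetical OPeNDAP endpoint and variable names -- not the actual project servers
URL = "https://example-thredds-server/thredds/dodsC/atlantic/hydrodynamics_forecast.nc"

ds = xr.open_dataset(URL)  # lazy remote access via OPeNDAP; only the requested slices are downloaded

# Clip the currents to the area of interest and resample to an hourly time step
subset = ds[["u", "v"]].sel(longitude=slice(-10.5, -8.5), latitude=slice(38.0, 40.0))
hourly = subset.resample(time="1H").interpolate("linear")

hourly.to_netcdf("metocean_for_simulator.nc")  # pre-processed file handed to the spill simulator
```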
Model Hosting for continuous updating and transparent Water Resources Management
NASA Astrophysics Data System (ADS)
Jódar, Jorge; Almolda, Xavier; Batlle, Francisco; Carrera, Jesús
2013-04-01
Numerical models have become a standard tool for water resources management. They are required for water volume bookkeeping and help in decision making. Nevertheless, numerical models are complex and can be used only by highly qualified technicians, who are often far from the decision makers. Moreover, they need to be maintained. That is, they require updating of their state by assimilation of measurements, natural and anthropic actions (e.g., pumping and weather data), and model parameters. Worse, their very complexity means they are viewed as obscure and remote, which hinders transparency and governance. We propose internet model hosting as an alternative to overcome these limitations. The basic idea is to keep the model hosted in the cloud. The model is updated as new data (measurements and external forcing) become available, which ensures continuous maintenance with minimal human cost (only required to address modelling problems). Internet access facilitates model use not only by modellers, but also by the people responsible for data gathering and by water managers. As a result, the model becomes an institutional tool shared by water agencies, helping them not only in decision making for sustainable management of water resources, but also in generating a common discussion platform. By promoting intra-agency sharing, the model becomes the common official position of the agency, which facilitates commitment to the decisions adopted regarding water management. Moreover, by facilitating access for stakeholders and the general public, the state of the aquifer and the impacts of alternative decisions become transparent. We have developed a tool (GAC, Global Aquifer Control) to address the above requirements. The application has been developed using cloud computing technologies, which facilitate the above operations. That is, GAC automatically updates the numerical models with the newly available measurements, and then simulates as many management options as required. To this end, the application generates as many computing virtual machines as needed, customizing their size (CPU, memory…) to the particular requirements of each numerical model. Results are presented from a quantitative point of view (i.e. groundwater as a resource) and also from a qualitative perspective (i.e. the use of solute concentrations in groundwater as an environmental vector). In both cases, detailed mass balance time series are obtained, which can be used jointly with all the input and output model data to resolve water conflicts between the different actors using and/or affecting the groundwater of the aquifer.
An open-source Java-based Toolbox for environmental model evaluation: The MOUSE Software Application
USDA-ARS?s Scientific Manuscript database
A consequence of environmental model complexity is that the task of understanding how environmental models work and identifying their sensitivities/uncertainties, etc. becomes progressively more difficult. Comprehensive numerical and visual evaluation tools have been developed such as the Monte Carl...
Tempest: Tools for Addressing the Needs of Next-Generation Climate Models
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.; Pinheiro, M. C.; Fong, J.
2015-12-01
Tempest is a comprehensive simulation-to-science infrastructure that tackles the needs of next-generation, high-resolution, data intensive climate modeling activities. This project incorporates three key components: TempestDynamics, a global modeling framework for experimental numerical methods and high-performance computing; TempestRemap, a toolset for arbitrary-order conservative and consistent remapping between unstructured grids; and TempestExtremes, a suite of detection and characterization tools for identifying weather extremes in large climate datasets. In this presentation, the latest advances with the implementation of this framework will be discussed, and a number of projects now utilizing these tools will be featured.
A study on directional resistivity logging-while-drilling based on self-adaptive hp-FEM
NASA Astrophysics Data System (ADS)
Liu, Dejun; Li, Hui; Zhang, Yingying; Zhu, Gengxue; Ai, Qinghui
2014-12-01
Numerical simulation of resistivity logging-while-drilling (LWD) tool response provides guidance for designing novel logging instruments and interpreting real-time logging data. In this paper, based on a self-adaptive hp-finite element method (hp-FEM) algorithm, we analyze the LWD tool response against model parameters and briefly illustrate the geosteering capabilities of directional resistivity LWD. The numerical simulation results indicate that changing the source spacing has an obvious influence on the investigation depth and detection precision of the resistivity LWD tool, and that changing the frequency can improve the resolution of low-resistivity and high-resistivity formations. The simulation results also indicate that the self-adaptive hp-FEM algorithm offers good convergence speed and calculation accuracy for guiding geosteering, and that it is suitable for simulating the response of resistivity LWD tools.
Verification of a Multiphysics Toolkit against the Magnetized Target Fusion Concept
NASA Technical Reports Server (NTRS)
Thomas, Scott; Perrell, Eric; Liron, Caroline; Chiroux, Robert; Cassibry, Jason; Adams, Robert B.
2005-01-01
In the spring of 2004 the Advanced Concepts team at MSFC embarked on an ambitious project to develop a suite of modeling routines that would interact with one another. The tools would each numerically model a portion of any advanced propulsion system. The tools were divided by physics categories, hence the name multiphysics toolset. Currently most of the anticipated modeling tools have been created and integrated. Results are given in this paper for both a quarter nozzle with chemically reacting flow and the interaction of two plasma jets representative of a Magnetized Target Fusion device. The results have not yet been calibrated against real data, but this paper demonstrates the current capability of the multiphysics tool and planned future enhancements.
Finite-difference time-domain modelling of through-the-Earth radio signal propagation
NASA Astrophysics Data System (ADS)
Ralchenko, M.; Svilans, M.; Samson, C.; Roper, M.
2015-12-01
This research seeks to extend the knowledge of how a very low frequency (VLF) through-the-Earth (TTE) radio signal behaves as it propagates underground, by calculating and visualizing the strength of the electric and magnetic fields for an arbitrary geology through numeric modelling. To achieve this objective, a new software tool has been developed using the finite-difference time-domain method. This technique is particularly well suited to visualizing the distribution of electromagnetic fields in an arbitrary geology. The frequency range of TTE radio (400-9000 Hz) and the geometrical scales involved (1 m resolution for domains a few hundred metres in size) require processing a grid composed of millions of cells for thousands of time steps, which is computationally expensive. Graphics processing unit acceleration was used to reduce execution time from days and weeks to minutes and hours. Results from the new modelling tool were compared to three cases for which an analytic solution is known. Two more case studies were done featuring complex geologic environments relevant to TTE communications that cannot be solved analytically. There was good agreement between numeric and analytic results. Deviations were likely caused by numeric artifacts from the model boundaries; however, in a TTE application in field conditions, the uncertainty in the conductivity of the various geologic formations will greatly outweigh these small numeric errors.
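To illustrate the kind of update scheme the finite-difference time-domain method uses, here is a toy one-dimensional FDTD loop with a lossy half-space; it only demonstrates the leapfrog field updates and is far simpler than the 3-D, GPU-accelerated tool described above (grid size, conductivity and source are arbitrary assumptions, and no absorbing boundaries are included):

```python
import numpy as np

nx, nt, dx = 400, 600, 1.0                  # 1 m cells, as in the text; toy domain size
c0, eps0, mu0 = 3.0e8, 8.854e-12, 4.0e-7 * np.pi
dt = dx / (2.0 * c0)                        # satisfies the Courant condition

sigma = np.zeros(nx)
sigma[200:] = 1.0e-3                        # conductive half-space, 1 mS/m (assumed value)
eps = eps0 * np.ones(nx)

ca = (1 - sigma * dt / (2 * eps)) / (1 + sigma * dt / (2 * eps))
cb = (dt / (eps * dx)) / (1 + sigma * dt / (2 * eps))

Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)
for n in range(nt):
    Hy += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])                      # update H from curl E
    Ez[1:-1] = ca[1:-1] * Ez[1:-1] + cb[1:-1] * (Hy[1:] - Hy[:-1])  # update E with loss term
    Ez[50] += np.exp(-((n - 30.0) / 10.0) ** 2)                     # soft Gaussian pulse source

print("peak |Ez| inside the conductive region:", np.abs(Ez[200:]).max())
```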
Valdor, Paloma F; Gómez, Aina G; Velarde, Víctor; Puente, Araceli
2016-04-01
Oil spills are one of the most widespread problems in port areas (loading/unloading of bulk liquid, fuel supply). Specific environmental risk analysis procedures for diffuse oil sources that are based on the evolution of oil in the marine environment are needed. Diffuse sources such as oil spills usually present a lack of information, which makes the use of numerical models an arduous and occasionally impossible task. For that reason, a tool that can assess the risk of oil spills in near-shore areas by using Geographical Information System (GIS) is presented. The SPILL Tool provides immediate results by automating the process without miscalculation errors. The tool was developed using the Python and ArcGIS scripting library to build a non-ambiguous geoprocessing workflow. The SPILL Tool was implemented for oil facilities at Tarragona Harbor (NE Spain) and validated showing a satisfactory correspondence (around 0.60 RSR error index) with the results obtained using a 2D calibrated oil transport numerical model. Copyright © 2016 Elsevier Ltd. All rights reserved.
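The RSR index quoted above is commonly computed as the ratio of the RMSE to the standard deviation of the reference data (the definition assumed here; the paper may state it slightly differently). A minimal sketch with invented values:

```python
import numpy as np

def rsr(observed, simulated):
    """RMSE-observations standard deviation ratio: RMSE / std of the reference values."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return rmse / np.std(observed)

# Toy values only -- not data from the Tarragona Harbor validation
obs = [0.12, 0.30, 0.55, 0.48, 0.20]
sim = [0.10, 0.36, 0.50, 0.41, 0.27]
print(f"RSR = {rsr(obs, sim):.2f}")   # values around 0.6 or below are usually rated satisfactory
```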
Using Genetic Mouse Models to Gain Insight into Glaucoma: Past Results and Future Possibilities
Fernandes, Kimberly A.; Harder, Jeffrey M.; Williams, Pete A.; Rausch, Rebecca L.; Kiernan, Amy E.; Nair, K. Saidas; Anderson, Michael G.; John, Simon W.; Howell, Gareth R.; Libby, Richard T.
2015-01-01
While all forms of glaucoma are characterized by a specific pattern of retinal ganglion cell death, they are clinically divided into several distinct subclasses, including normal tension glaucoma, primary open angle glaucoma, congenital glaucoma, and secondary glaucoma. For each type of glaucoma there are likely numerous molecular pathways that control susceptibility to the disease. Given this complexity, a single animal model will never precisely model all aspects of all the different types of human glaucoma. Therefore, multiple animal models have been utilized to study glaucoma but more are needed. Because of the powerful genetic tools available to use in the laboratory mouse, it has proven to be a highly useful mammalian system for studying the pathophysiology of human disease. The similarity between human and mouse eyes coupled with the ability to use a combination of advanced cell biological and genetic tools in mice have led to a large increase in the number of studies using mice to model specific glaucoma phenotypes. Over the last decade, numerous new mouse models and genetic tools have emerged, providing important insight into the cell biology and genetics of glaucoma. In this review, we describe available mouse genetic models that can be used to study glaucoma-relevant disease/pathobiology. Furthermore, we discuss how these models have been used to gain insights into ocular hypertension (a major risk factor for glaucoma) and glaucomatous retinal ganglion cell death. Finally, the potential for developing new mouse models and using advanced genetic tools and resources for studying glaucoma are discussed. PMID:26116903
JAVA CLASSES FOR NONPROCEDURAL VARIOGRAM MONITORING
A set of Java classes was written for variogram modeling to support research for US EPA's Regional Vulnerability Assessment Program (ReVA). The modeling objectives of this research program are to use conceptual programming tools for numerical analysis for regional risk assessm...
Davis, Kyle W.; Putnam, Larry D.; LaBelle, Anneka R.
2015-01-01
The numerical model is a tool that could be used to better understand the flow system of the Ogallala and Arikaree aquifers, to approximate hydraulic heads in the aquifer, and to estimate discharge to rivers, springs, and seeps in the Pine Ridge Reservation area in Bennett, Jackson, and Shannon Counties. The model also is useful to help assess the response of the aquifer to additional stress, including potential increased well withdrawals and potential drought conditions.
A numerical tool for reproducing driver behaviour: experiments and predictive simulations.
Casucci, M; Marchitto, M; Cacciabue, P C
2010-03-01
This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a full-scale virtual reality simulator for validating the simulation. The predictive potential of the tool is then demonstrated by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions.
NASA Astrophysics Data System (ADS)
Cannata, Massimiliano; Neumann, Jakob; Cardoso, Mirko; Rossetto, Rudy; Foglia, Laura; Borsi, Iacopo
2017-04-01
In situ time-series are an important aspect of environmental modelling, especially with the advancement of numerical simulation techniques and increased model complexity. In order to make use of the increasing data available through the requirements of the EU Water Framework Directive, the FREEWAT GIS environment incorporates the newly developed Observation Analysis Tool for time-series analysis. The tool is used to import time-series data into QGIS from local CSV files, online sensors using the istSOS service, or MODFLOW model result files, and it enables visualisation, pre-processing of data for model development, and post-processing of model results. OAT can be used as a pre-processor for calibration observations, integrating the creation of observations for calibration directly from sensor time-series. The tool consists of an expandable Python library of processing methods and an interface integrated into the QGIS FREEWAT plug-in, which includes a large number of modelling capabilities, data management tools and calibration capacity.
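A minimal stand-in for the CSV import and pre-processing step described above, using pandas; the file layout, column names and daily resampling rule are assumptions for illustration, not OAT defaults:

```python
import pandas as pd

# Hypothetical sensor file with a "timestamp" column and a "head_m" column
series = pd.read_csv("well_A_head.csv", parse_dates=["timestamp"], index_col="timestamp")

daily = series["head_m"].resample("1D").mean()   # regularise to the model stress period
daily = daily.interpolate(limit=3)               # fill short gaps only
daily.to_csv("well_A_head_daily.csv")            # ready to use as calibration observations
```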
Modelling of peak temperature during friction stir processing of magnesium alloy AZ91
NASA Astrophysics Data System (ADS)
Vaira Vignesh, R.; Padmanaban, R.
2018-02-01
Friction stir processing (FSP) is a solid state processing technique with potential to modify the properties of the material through microstructural modification. The study of heat transfer in FSP aids in the identification of defects like flash, inadequate heat input, poor material flow and mixing etc. In this paper, transient temperature distribution during FSP of magnesium alloy AZ91 was simulated using finite element modelling. The numerical model results were validated using the experimental results from the published literature. The model was used to predict the peak temperature obtained during FSP for various process parameter combinations. The simulated peak temperature results were used to develop a statistical model. The effect of process parameters namely tool rotation speed, tool traverse speed and shoulder diameter of the tool on the peak temperature was investigated using the developed statistical model. It was found that peak temperature was directly proportional to tool rotation speed and shoulder diameter and inversely proportional to tool traverse speed.
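The reported proportionalities can be cast as a simple power-law regression, T_peak = k · N^a · d^b / v^c, fitted in log space; the sketch below uses invented data points, not the simulated temperatures from the paper:

```python
import numpy as np

N = np.array([800, 800, 1000, 1000, 1200, 1200], float)   # tool rotation speed [rpm]
d = np.array([15, 18, 15, 18, 15, 18], float)              # shoulder diameter [mm]
v = np.array([20, 40, 40, 20, 20, 40], float)              # traverse speed [mm/min]
T = np.array([385, 400, 398, 440, 452, 430], float)        # simulated peak temperature [deg C]

# Fit log T = log k + a*log N + b*log d - c*log v by least squares
X = np.column_stack([np.ones_like(N), np.log(N), np.log(d), -np.log(v)])
coef, *_ = np.linalg.lstsq(X, np.log(T), rcond=None)
k, a, b, c = np.exp(coef[0]), coef[1], coef[2], coef[3]
print(f"T_peak ~ {k:.2f} * N^{a:.2f} * d^{b:.2f} / v^{c:.2f}")
```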
NASA Astrophysics Data System (ADS)
Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves
2016-04-01
The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods and concepts (such as effective resolution), to assess the behaviour of numerical models in complex hydrodynamical regimes and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing the evaluation of kernel coherence. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design and the time stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their ability to focus on one (or a few) schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnostics, and lastly for their ability to simulate a key oceanic process in a controlled environment. Idealized test cases allow properties of numerical schemes to be verified (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) and bring to light numerical issues that remain undetected in realistic configurations (trajectory of a barotropic vortex, current-topography interaction). When the complexity of the simulated dynamics grows (internal wave, unstable baroclinic jet), the sharing of the same experimental designs by different existing models is useful to measure model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015). Such a benchmark suite is an interesting test bed for continued research on numerical approaches, as well as an efficient tool to maintain any oceanic code and assure users of a validated model over a certain range of hydrodynamical regimes. Thanks to a common netCDF format, this suite is complemented by a Python library that encompasses all the tools and metrics used to assess the efficiency of the numerical methods. References - Couvelard X., F. Dumas, V. Garnier, A.L. Ponte, C. Talandier, A.M. Treguier (2015). Mixed layer formation and restratification in presence of mesoscale and submesoscale turbulence. Ocean Modelling, Vol 96-2, p 243-253. doi:10.1016/j.ocemod.2015.10.004. - Soufflet Y., P. Marchesiello, F. Lemarié, J. Jouanno, X. Capet, L. Debreu, R. Benshila (2016). On effective resolution in ocean models. Ocean Modelling, in press. doi:10.1016/j.ocemod.2015.12.004
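The suite's Python metrics library is not reproduced here, but a typical diagnostic is an error norm of a model field against an analytic solution stored on the same grid. A minimal sketch (file and variable names are hypothetical):

```python
import numpy as np
from netCDF4 import Dataset

with Dataset("lock_exchange_model.nc") as nc:      # hypothetical benchmark output file
    tracer = nc.variables["tracer"][-1, :, :]      # last output time step

analytic = np.load("lock_exchange_analytic.npy")    # analytic solution on the same grid

rel_l2 = np.sqrt(np.sum((tracer - analytic) ** 2) / np.sum(analytic ** 2))
print(f"relative L2 error = {rel_l2:.3e}")
```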
Low Order Modeling Tools for Preliminary Pressure Gain Combustion Benefits Analyses
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2012-01-01
Pressure gain combustion (PGC) offers the promise of higher thermodynamic cycle efficiency and greater specific power in propulsion and power systems. This presentation describes a model, developed under a cooperative agreement between NASA and AFRL, for preliminarily assessing the performance enhancement and preliminary size requirements of PGC components either as stand-alone thrust producers or coupled with surrounding turbomachinery. The model is implemented in the Numerical Propulsion Simulation System (NPSS) environment allowing various configurations to be examined at numerous operating points. The validated model is simple, yet physics-based. It executes quickly in NPSS, yet produces realistic results.
NASA Astrophysics Data System (ADS)
Wright, D. J.; O'Dea, E.; Cushing, J. B.; Cuny, J. E.; Toomey, D. R.; Hackett, K.; Tikekar, R.
2001-12-01
The East Pacific Rise (EPR) from 9-10° N is currently our best-studied section of fast-spreading mid-ocean ridge. During several decades of investigation it has been explored by the full spectrum of ridge investigators, including chemists, biologists, geologists and geophysicists. These studies, and those that are ongoing, provide a wealth of observational data, results and data-driven theoretical (often numerical) studies that have not yet been fully utilized either by research scientists or by professional educators. While the situation is improving, a large amount of data, results, and related theoretical models still exist either in an inert, non-interactive form (e.g., journal publications) or as unlinked and currently incompatible computer data or algorithms. Infrastructure is needed not just for ready access to data, but also for linking disparate data sets (data to data), as well as data to models, in order to quantitatively evaluate hypotheses, refine numerical simulations, and explore new relations between observables. The prototype of a computational environment and toolset, called the Virtual Research Vessel (VRV), is being developed to provide scientists and educators with ready access to data, results and numerical models. While this effort is focused on the EPR 9N region, the resulting software tools and infrastructure should be helpful in establishing similar systems for other sections of the global mid-ocean ridge. Work in progress includes efforts to develop: (1) a virtual database to incorporate diverse data types with domain-specific metadata into a global schema that allows web queries across different marine geology data sets, and an analogous declarative (database-available) description of tools and models; (2) the ability to move data between GIS and the above DBMS, and tools to encourage data submission to archives; (3) tools for finding and viewing archives, and translating between formats; (4) support for "computational steering" (tool composition) and model coupling (e.g., the ability to run a tool composition locally but access input data from the web, APIs to support coupling such as invoking programs that are running remotely, and help in writing data wrappers to publish programs); (5) support of migration paths for prototyped model coupling; and (6) export of marine geological data and data analysis to the undergraduate classroom (VRV-ET, "Educational Tool"). See the main VRV web site at http://oregonstate.edu/dept/vrv and the VRV-ET web site at: http://www.cs.uoregon.edu/research/vrv-et.
This report presents a three-dimensional finite-element numerical model designed to simulate chemical transport in subsurface systems with temperature effect taken into account. The three-dimensional model is developed to provide (1) a tool of application, with which one is able...
Teaching Aggregate Demand and Supply Models
ERIC Educational Resources Information Center
Wells, Graeme
2010-01-01
The author analyzes the inflation-targeting model that underlies recent textbook expositions of the aggregate demand-aggregate supply approach used in introductory courses in macroeconomics. He shows how numerical simulations of a model with inflation inertia can be used as a tool to help students understand adjustments in response to demand and…
NASA Astrophysics Data System (ADS)
Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Lian, Yanping; Yu, Cheng; Liu, Zeliang; Yan, Jinhui; Wolff, Sarah; Wu, Hao; Ndip-Agbor, Ebot; Mozaffar, Mojtaba; Ehmann, Kornel; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam
2018-05-01
Additive manufacturing (AM) possesses appealing potential for manipulating material compositions, structures and properties in end-use products with arbitrary shapes without the need for specialized tooling. Since the physical process is difficult to experimentally measure, numerical modeling is a powerful tool to understand the underlying physical mechanisms. This paper presents our latest work in this regard based on comprehensive material modeling of process-structure-property relationships for AM materials. The numerous influencing factors that emerge from the AM process motivate the need for novel rapid design and optimization approaches. For this, we propose data-mining as an effective solution. Such methods—used in the process-structure, structure-properties and the design phase that connects them—would allow for a design loop for AM processing and materials. We hope this article will provide a road map to enable AM fundamental understanding for the monitoring and advanced diagnostics of AM processing.
NASA Astrophysics Data System (ADS)
Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Lian, Yanping; Yu, Cheng; Liu, Zeliang; Yan, Jinhui; Wolff, Sarah; Wu, Hao; Ndip-Agbor, Ebot; Mozaffar, Mojtaba; Ehmann, Kornel; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam
2018-01-01
Additive manufacturing (AM) possesses appealing potential for manipulating material compositions, structures and properties in end-use products with arbitrary shapes without the need for specialized tooling. Since the physical process is difficult to experimentally measure, numerical modeling is a powerful tool to understand the underlying physical mechanisms. This paper presents our latest work in this regard based on comprehensive material modeling of process-structure-property relationships for AM materials. The numerous influencing factors that emerge from the AM process motivate the need for novel rapid design and optimization approaches. For this, we propose data-mining as an effective solution. Such methods—used in the process-structure, structure-properties and the design phase that connects them—would allow for a design loop for AM processing and materials. We hope this article will provide a road map to enable AM fundamental understanding for the monitoring and advanced diagnostics of AM processing.
NASA Astrophysics Data System (ADS)
Scarella, Gilles; Clatz, Olivier; Lanteri, Stéphane; Beaume, Grégory; Oudot, Steve; Pons, Jean-Philippe; Piperno, Sergo; Joly, Patrick; Wiart, Joe
2006-06-01
The ever-rising diffusion of cellular phones has brought about an increased concern about the possible consequences of electromagnetic radiation on human health. Possible thermal effects have been investigated, via experimentation or simulation, by several research projects in the last decade. Concerning numerical modeling, the power absorption in a user's head is generally computed using discretized models built from clinical MRI data. The vast majority of such numerical studies have been conducted using Finite-Difference Time-Domain methods, although strong limitations of their accuracy are due to heterogeneity, poor definition of the detailed structures of head tissues (staircasing effects), etc. In order to propose numerical modeling using Finite Element or Discontinuous Galerkin Time Domain methods, reliable automated tools for the unstructured discretization of human heads are also needed. Results presented in this article aim at filling the gap between human head MRI images and the accurate numerical modeling of wave propagation in biological tissues and its thermal effects. To cite this article: G. Scarella et al., C. R. Physique 7 (2006).
Mathematical, Constitutive and Numerical Modelling of Catastrophic Landslides and Related Phenomena
NASA Astrophysics Data System (ADS)
Pastor, M.; Fernández Merodo, J. A.; Herreros, M. I.; Mira, P.; González, E.; Haddad, B.; Quecedo, M.; Tonni, L.; Drempetic, V.
2008-02-01
Mathematical and numerical models are a fundamental tool for predicting the behaviour of geostructures and their interaction with the environment. The term “mathematical model” refers to a mathematical description of the most relevant physical phenomena which take place in the problem being analyzed. It is indeed a wide area, including models ranging from very simple ones for which analytical solutions can be obtained to more complicated ones requiring the use of numerical approximations such as the finite element method. During the last decades, mathematical, constitutive and numerical models have been much improved and today their use is widespread both in industry and in research. One special case is that of fast catastrophic landslides, for which simplified methods are on many occasions not able to provide accurate solutions. Moreover, many finite element codes cannot be applied to the propagation of the mobilized mass. The purpose of this work is to present an overview of the different alternative mathematical and numerical models which can be applied to both the initiation and propagation mechanisms of fast catastrophic landslides and other related problems such as waves caused by landslides.
The numerical modelling of falling film thickness flow on horizontal tubes
NASA Astrophysics Data System (ADS)
Hassan, I. A.; Sadikin, A.; Isa, N. Mat
2017-04-01
This paper presents a computational model of a water falling film flowing over horizontal tubes. The objective of this study is to use numerical predictions to compare the film thickness along the circumferential direction of the tube in 2-D CFD models. The results are then validated against theoretical results from previous literature. A comprehensive set of 2-D models has been developed according to the real application and actual configuration of the falling film evaporator, as well as previous experimental parameters. The computational modelling of the water falling film is performed with the aid of Ansys Fluent software. The Volume of Fluid (VOF) technique is adopted in this analysis since its capability for determining the film thickness on the tube surface is highly reliable. The numerical analysis is carried out at ambient pressure and a temperature of 27 °C. Three CFD numerical models were analyzed in this simulation, with inter-tube spacings of 30 mm, 20 mm and 10 mm respectively. The use of a numerical simulation tool on the water falling film has allowed a detailed investigation of film thickness. Based on the simulated results, it is found that the average values of the water film thickness for the three models are 0.53 mm, 0.58 mm and 0.63 mm.
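For reference, the theoretical comparison for such simulations is often a Nusselt-type estimate of the laminar film thickness around a horizontal tube, δ(θ) = (3μΓ / (ρ²g sin θ))^(1/3); the sketch below uses this classical expression with assumed flow rate and fluid properties, which is not necessarily the exact correlation used in the cited literature:

```python
import numpy as np

rho, mu, g = 997.0, 0.89e-3, 9.81     # water near 27 deg C
gamma = 0.05                          # film flow rate per unit tube length, one side [kg/(m s)] (assumed)

theta = np.deg2rad(np.linspace(10, 170, 9))                   # angle measured from the top of the tube
delta = (3 * mu * gamma / (rho**2 * g * np.sin(theta))) ** (1.0 / 3.0)
for t, d in zip(np.rad2deg(theta), delta):
    print(f"theta = {t:5.1f} deg  ->  film thickness ~ {d * 1e3:.2f} mm")
```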
MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS*
CHAHINE, Georges L.; HSIAO, Chao-Tsung
2012-01-01
Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary and avoid deleterious effects requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to study the dynamics combining viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models. PMID:22833696
solveME: fast and reliable solution of nonlinear ME models.
Yang, Laurence; Ma, Ding; Ebrahim, Ali; Lloyd, Colton J; Saunders, Michael A; Palsson, Bernhard O
2016-09-22
Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45 % faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60× speedup) via solver warm-starts. Finally, we employ the tools developed to investigate growth-coupled succinate overproduction, accounting for proteome constraints. Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the wide-spread adoption of ME models for researchers in these fields.
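The binary search that the quad-precision solver is benchmarked against works by bisecting on the growth rate and testing feasibility of the model at each candidate value; a schematic sketch (the feasibility test below is a hypothetical placeholder for the real LP/NLP solve):

```python
# Bisect on the growth rate mu; `is_feasible` stands in for solving the ME model at fixed mu.
def max_growth_rate(is_feasible, lo=0.0, hi=2.0, digits=6):
    while hi - lo > 10 ** (-digits):
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):     # the model supports growth at rate mid
            lo = mid
        else:
            hi = mid
    return lo

# Toy feasibility rule standing in for the real constraint-based model
print(max_growth_rate(lambda mu: mu <= 0.873421))   # converges to ~0.873421 at six digits
```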
A Density Perturbation Method to Study the Eigenstructure of Two-Phase Flow Equation Systems
NASA Astrophysics Data System (ADS)
Cortes, J.; Debussche, A.; Toumi, I.
1998-12-01
Many interesting and challenging physical mechanisms are concerned with the mathematical notion of eigenstructure. In two-fluid models, complex phasic interactions yield a complex eigenstructure which may raise numerous problems in numerical simulations. In this paper, we develop a perturbation method to examine the eigenvalues and eigenvectors of two-fluid models. This original method, based on the stiffness of the density ratio, provides a convenient tool to study the relevance of pressure-momentum interactions and allows us to obtain precise approximations of the whole flow eigendecomposition with minimal requirements. A Roe scheme is successfully implemented and some numerical tests are presented.
NASA Technical Reports Server (NTRS)
Chin, Jeffrey C.; Csank, Jeffrey T.
2016-01-01
The Tool for Turbine Engine Closed-Loop Transient Analysis (TTECTrA ver2) is a control design tool that enables preliminary estimation of transient performance for models without requiring a full nonlinear controller to be designed. The program is compatible with subsonic engine models implemented in the MATLAB/Simulink (The Mathworks, Inc.) environment and Numerical Propulsion System Simulation (NPSS) framework. At a specified flight condition, TTECTrA will design a closed-loop controller meeting user-defined requirements in a semi or fully automated fashion. Multiple specifications may be provided, in which case TTECTrA will design one controller for each, producing a collection of controllers in a single run. Each resulting controller contains a setpoint map, a schedule of setpoint controller gains, and limiters; all contributing to transient characteristics. The goal of the program is to provide steady-state engine designers with more immediate feedback on the transient engine performance earlier in the design cycle.
Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale
NASA Astrophysics Data System (ADS)
Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel
2016-08-01
This study presents the simulation of hydrological processes and of nutrient transport and turnover processes using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical and semi-distributed numerical model; and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based and fully distributed numerical model. This work shows that both models satisfactorily reproduce water and nitrate export at the watershed scale on an annual and daily basis, with MOHID providing slightly better results. At the watershed scale, both SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, as the MOHID numerical model was the only one able to adequately reproduce the spatial variation of the soil hydrological conditions and the water table level fluctuation, it proved to be the only model capable of reproducing the spatial variation of the nutrient cycling processes that depend on the soil hydrological conditions, such as denitrification. This demonstrates the strength of fully distributed and physics-based models for simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.
Structural reliability assessment capability in NESSUS
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.
1992-01-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
Structural reliability assessment capability in NESSUS
NASA Astrophysics Data System (ADS)
Millwater, H.; Wu, Y.-T.
1992-07-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
COMMUNITY-SCALE MODELING FOR AIR TOXICS AND HOMELAND SECURITY
The purpose of this task is to develop and evaluate numerical and physical modeling tools for simulating ambient concentrations of airborne substances in urban settings at spatial scales ranging from <1-10 km. Research under this task will support client needs in human exposure ...
A new statistical model for subgrid dispersion in large eddy simulations of particle-laden flows
NASA Astrophysics Data System (ADS)
Muela, Jordi; Lehmkuhl, Oriol; Pérez-Segarra, Carles David; Oliva, Asensi
2016-09-01
Dispersed multiphase turbulent flows are present in many industrial and commercial applications such as internal combustion engines, turbofans, dispersion of contaminants, steam turbines, etc. Therefore, there is a clear interest in the development of models and numerical tools capable of performing detailed and reliable simulations of these kinds of flows. Large Eddy Simulation offers good accuracy and reliable results together with reasonable computational requirements, making it a very attractive method for developing numerical tools for particle-laden turbulent flows. Nonetheless, in dispersed multiphase flows additional difficulties arise in LES, since the effect of the unresolved scales of the continuous phase on the dispersed phase is lost due to the filtering procedure. In order to solve this issue, a model able to reconstruct the subgrid velocity seen by the particles is required. In this work, a new model for the reconstruction of the subgrid-scale effects on the dispersed phase is presented and assessed. This innovative methodology is based on the reconstruction of statistics via Probability Density Functions (PDFs).
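A much simpler cousin of the PDF-based reconstruction described above is to sample the subgrid velocity seen by each particle from a Gaussian whose variance is set by the subgrid kinetic energy; the sketch below only illustrates that idea and is not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def subgrid_velocity(k_sgs, n_particles):
    """Sample isotropic Gaussian subgrid velocity fluctuations from k_sgs (assumed model)."""
    sigma = np.sqrt(2.0 * k_sgs / 3.0)            # rms fluctuation per component
    return rng.normal(0.0, sigma, size=(n_particles, 3))

# Velocity "seen" by 5 particles: resolved (filtered) velocity plus sampled fluctuation, x-component
u_seen = 1.5 + subgrid_velocity(k_sgs=0.02, n_particles=5)[:, 0]
print(u_seen)
```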
High resolution modelling of extreme precipitation events in urban areas
NASA Astrophysics Data System (ADS)
Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave
2015-04-01
Present-day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding. This leads to high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high-resolution information about the sewer system and the terrain into account [1, 2]. The combination of high-resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extremely high resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with significant soil consolidation and the low-lying areas are prone to urban flooding. The simulation results are compared with measurements in the sewer network. References [1] Stelling, G.S., 2012. Quadtree flood simulations with subgrid digital elevation models. Water Management 165 (WM1):1329-1354. [2] Casulli, V. and Stelling, G.S., 2013. A semi-implicit numerical model for urban drainage systems. International Journal for Numerical Methods in Fluids, Vol. 73:600-614. DOI: 10.1002/fld.3817
Due to the computational cost of running regional-scale numerical air quality models, reduced form models (RFM) have been proposed as computationally efficient simulation tools for characterizing the pollutant response to many different types of emission reductions. The U.S. Envi...
Shooting Free Throws, Probability, and the Golden Ratio
ERIC Educational Resources Information Center
Goodman, Terry
2010-01-01
Part of the power of algebra is that it provides students with tools that they can use to model a variety of problems and applications. Such modeling requires them to understand patterns and choose from a variety of representations--numeric, graphical, symbolic--to construct a model that accurately reflects the relationships found in the original…
NASA Technical Reports Server (NTRS)
Kim, Jonnathan H.
1995-01-01
Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor and time intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).
Evaluation of Proteus as a Tool for the Rapid Development of Models of Hydrologic Systems
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Farthing, M. W.; Kees, C. E.; Miller, C. T.
2013-12-01
Models of modern hydrologic systems can be complex and involve a variety of operators with varying character. The goal is to implement approximations of such models that are both efficient for the developer and computationally efficient, which is a set of naturally competing objectives. Proteus is a Python-based toolbox that supports prototyping of model formulations as well as a wide variety of modern numerical methods and parallel computing. We used Proteus to develop numerical approximations for three models: Richards' equation, a brine flow model derived using the Thermodynamically Constrained Averaging Theory (TCAT), and a multiphase TCAT-based tumor growth model. For Richards' equation, we investigated discontinuous Galerkin solutions with higher order time integration based on the backward difference formulas. The TCAT brine flow model was implemented using Proteus and a variety of numerical methods were compared to hand coded solutions. Finally, an existing tumor growth model was implemented in Proteus to introduce more advanced numerics and allow the code to be run in parallel. From these three example models, Proteus was found to be an attractive open-source option for rapidly developing high quality code for solving existing and evolving computational science models.
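The "higher order time integration based on the backward difference formulas" mentioned above can be illustrated on a scalar stiff ODE; the sketch below applies BDF2 with a simple Newton solve, the same formula a Richards' equation solver would apply to the discretised unknowns (toy problem, not Proteus code):

```python
import numpy as np

def f(y):  return -50.0 * (y - np.cos(y))          # toy stiff right-hand side
def df(y): return -50.0 * (1 + np.sin(y))          # its derivative, for Newton's method

def bdf2(y0, dt, nsteps):
    y = [y0]
    yn = y0
    for _ in range(20):                             # start-up step with backward Euler (BDF1)
        yn -= (yn - y0 - dt * f(yn)) / (1 - dt * df(yn))
    y.append(yn)
    for _ in range(nsteps - 1):
        ynew = y[-1]
        for _ in range(20):                         # Newton solve of the BDF2 residual
            r = ynew - (4 * y[-1] - y[-2]) / 3 - (2 * dt / 3) * f(ynew)
            dr = 1 - (2 * dt / 3) * df(ynew)
            ynew -= r / dr
        y.append(ynew)
    return np.array(y)

print(bdf2(y0=0.0, dt=0.05, nsteps=10)[-1])        # approaches the fixed point y = cos(y) ~ 0.739
```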
Model reduction of the numerical analysis of Low Impact Developments techniques
NASA Astrophysics Data System (ADS)
Brunetti, Giuseppe; Šimůnek, Jirka; Wöhling, Thomas; Piro, Patrizia
2017-04-01
Mechanistic models have proven to be accurate and reliable tools for the numerical analysis of the hydrological behavior of Low Impact Development (LID) techniques. However, their widespread adoption is limited by their complexity and computational cost. Recent studies have tried to address this issue by investigating the application of new techniques, such as surrogate-based modeling. However, current results are still limited and fragmented. One such approach, the Model Order Reduction (MOR) technique, can represent a valuable tool for reducing the computational complexity of numerical problems by computing an approximation of the original model. While this technique has been extensively used in water-related problems, no studies have evaluated its use in LID modeling. Thus, the main aim of this study is to apply the MOR technique to the development of a reduced order model (ROM) for the numerical analysis of the hydrologic behavior of LIDs, in particular green roofs. The model should be able to correctly reproduce all the hydrological processes of a green roof while reducing the computational cost. The proposed model decouples the subsurface water dynamics of a green roof into (a) one-dimensional (1D) vertical flow through the green roof itself and (b) one-dimensional saturated lateral flow along the impervious rooftop. The green roof is horizontally discretized into N elements. Each element represents a vertical domain, which can have different properties or boundary conditions. The 1D Richards equation is used to simulate flow in the substrate and drainage layers. Simulated outflow from the vertical domain is used as a recharge term for the saturated lateral flow, which is described using the kinematic wave approximation of the Boussinesq equation. The proposed model has been compared with the mechanistic model HYDRUS-2D, which numerically solves the Richards equation for the whole domain. The HYDRUS-1D code has been used for the description of vertical flow, while a Finite Volume Scheme has been adopted for lateral flow. Two scenarios involving flat and steep green roofs were analyzed. Results confirmed the accuracy of the reduced order model, which was able to reproduce both the subsurface outflow and the moisture distribution in the green roof, while significantly reducing the computational cost.
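The decoupling described above can be sketched as a simple loop: each vertical column contributes a drainage flux (here a placeholder for the 1-D Richards solution), which is then routed laterally towards the drain with an explicit upwind, kinematic-wave type scheme; all numbers are assumed for illustration only:

```python
import numpy as np

n_cols, dx, dt = 20, 0.5, 1.0            # 10 m roof strip, 1 s time step (assumed values)
celerity = 0.05                          # kinematic-wave celerity [m/s] (assumed)
rain = 1.0e-5                            # rainfall rate [m/s], roughly 36 mm/h

def vertical_drainage(t):
    """Placeholder for the per-column drainage flux from the 1-D Richards solution [m/s]."""
    return 0.6 * rain * (1.0 - np.exp(-t / 1800.0))

h = np.zeros(n_cols)                     # water depth in the lateral drainage layer [m]
for step in range(3600):                 # one hour of simulation
    recharge = vertical_drainage((step + 1) * dt)
    h += recharge * dt                               # column outflow recharges the lateral flow
    flux = celerity * h                              # discharge per unit width [m^2/s]
    h[1:] -= dt / dx * (flux[1:] - flux[:-1])        # upwind transport towards the drain (last cell)
    h[0] -= dt / dx * flux[0]                        # closed upstream end

print(f"water depth at the drain edge after 1 h: {h[-1] * 1000:.2f} mm")
```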
NASA-Chinese Aeronautical Establishment (CAE) Symposium
NASA Technical Reports Server (NTRS)
1986-01-01
Several topics relative to combustion research are discussed. A numerical study of combustion processes in afterburners; the modeling of turbulent, reactive flow; gas turbine research; modeling of dilution jet flow fields; and chemical shock tubes as tools for studying high-temperature chemical kinetics are among the topics covered.
Marčan, Marija; Pavliha, Denis; Kos, Bor; Forjanič, Tadeja; Miklavčič, Damijan
2015-01-01
Treatments based on electroporation are a new and promising approach to treating tumors, especially non-resectable ones. The success of the treatment is, however, heavily dependent on coverage of the entire tumor volume with a sufficiently high electric field. Complete coverage in the case of deep-seated tumors is not trivial to achieve and can best be ensured by patient-specific treatment planning. The basis of the treatment planning process consists of two complex tasks: medical image segmentation, and numerical modeling and optimization. In addition to previously developed segmentation algorithms for several tissues (human liver, hepatic vessels, bone tissue and canine brain) and the algorithms for numerical modeling and optimization of treatment parameters, we developed a web-based tool to facilitate the translation of the algorithms and their application in the clinic. The developed web-based tool automatically builds a 3D model of the target tissue from the medical images uploaded by the user and then uses this 3D model to optimize treatment parameters. The tool enables the user to validate the results of the automatic segmentation and make corrections if necessary before delivering the final treatment plan. Evaluation of the tool was performed by five independent experts from four different institutions. During the evaluation, we gathered data concerning user experience and measured performance times for different components of the tool. Both user reports and performance times show a significant reduction in treatment-planning complexity and time consumption, from 1-2 days to a few hours. The presented web-based tool is intended to facilitate the treatment planning process and reduce the time needed for it. It is crucial for facilitating expansion of electroporation-based treatments in the clinic and ensuring reliable treatment for the patients. The additional value of the tool is the possibility of easy upgrade and integration of modules with new functionalities as they are developed.
2015-01-01
Background Treatments based on electroporation are a new and promising approach to treating tumors, especially non-resectable ones. The success of the treatment is, however, heavily dependent on coverage of the entire tumor volume with a sufficiently high electric field. Complete coverage in the case of deep-seated tumors is not trivial to achieve and can best be ensured by patient-specific treatment planning. The basis of the treatment planning process consists of two complex tasks: medical image segmentation, and numerical modeling and optimization. Methods In addition to previously developed segmentation algorithms for several tissues (human liver, hepatic vessels, bone tissue and canine brain) and the algorithms for numerical modeling and optimization of treatment parameters, we developed a web-based tool to facilitate the translation of the algorithms and their application in the clinic. The developed web-based tool automatically builds a 3D model of the target tissue from the medical images uploaded by the user and then uses this 3D model to optimize treatment parameters. The tool enables the user to validate the results of the automatic segmentation and make corrections if necessary before delivering the final treatment plan. Results Evaluation of the tool was performed by five independent experts from four different institutions. During the evaluation, we gathered data concerning user experience and measured performance times for different components of the tool. Both user reports and performance times show a significant reduction in treatment-planning complexity and time consumption, from 1-2 days to a few hours. Conclusions The presented web-based tool is intended to facilitate the treatment planning process and reduce the time needed for it. It is crucial for facilitating expansion of electroporation-based treatments in the clinic and ensuring reliable treatment for the patients. The additional value of the tool is the possibility of easy upgrade and integration of modules with new functionalities as they are developed. PMID:26356007
Chemical transport in a fissured rock: Verification of a numerical model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rasmuson, A.; Narasimhan, T. N.; Neretnieks, I.
1982-10-01
Numerical models for simulating chemical transport in fissured rocks constitute powerful tools for evaluating the acceptability of geological nuclear waste repositories. Due to the very long-term, high toxicity of some nuclear waste products, the models are required to predict, in certain cases, the spatial and temporal distribution of chemical concentrations of less than 0.001% of the concentration released from the repository. Whether numerical models can provide such accuracies is a major question addressed in the present work. To this end, we have verified a numerical model, TRUMP, which solves the advective diffusion equation in general three dimensions with or without decay and source terms. The method is based on an integrated finite-difference approach. The model was verified against the known analytic solution of the one-dimensional advection-diffusion problem as well as the problem of advection-diffusion in a system of parallel fractures separated by spherical particles. The studies show that as long as the magnitude of advectance is equal to or less than that of conductance for the closed surface bounding any volume element in the region (that is, numerical Peclet number <2), the numerical method can indeed match the analytic solution within errors of ±10⁻³ % or less. The realistic input parameters used in the sample calculations suggest that such a range of Peclet numbers is indeed likely to characterize deep groundwater systems in granitic and ancient argillaceous systems. Thus TRUMP in its present form does provide a viable tool for use in nuclear waste evaluation studies. A sensitivity analysis based on the analytic solution suggests that the errors in prediction introduced due to uncertainties in input parameters are likely to be larger than the computational inaccuracies introduced by the numerical model. Currently, a disadvantage in the TRUMP model is that the iterative method of solving the set of simultaneous equations is rather slow when time constants vary widely over the flow region. Although the iterative solution may be very desirable for large three-dimensional problems in order to minimize computer storage, it seems desirable to use a direct solver technique in conjunction with the mixed explicit-implicit approach whenever possible. Work in this direction is in progress.
Analysis of the electromagnetic wave resistivity tool in deviated well drilling
NASA Astrophysics Data System (ADS)
Zhang, Yumei; Xu, Lijun; Cao, Zhang
2014-04-01
Electromagnetic wave resistivity (EWR) tools are used to provide real-time measurements of resistivity in the formation around the tool in Logging While Drilling (LWD). In this paper, the acquired resistivity information in the formation is analyzed to extract more information, including the dipping angle and azimuth direction of the drill. A finite element (FE) model of an EWR tool working in layered earth formations is established. Numerical analysis and FE simulations are employed to analyze the amplitude ratio and phase difference between the voltages measured at the two receivers of the EWR tool in deviated well drilling.
Osuch, Tomasz; Markowski, Konrad; Jędrzejewski, Kazimierz
2015-06-10
A versatile numerical model for spectral transmission/reflection, group delay characteristic analysis, and design of tapered fiber Bragg gratings (TFBGs) is presented. This approach ensures flexibility in defining both the distribution of the refractive index change of the gratings (including apodization) and the shape of the taper profile. Additionally, the sensing and tunable dispersion properties of the TFBGs were fully examined, considering strain-induced effects. The presented numerical approach, together with Pareto optimization, was also used to design the best tanh apodization profiles of the TFBG in terms of maximizing its spectral width while simultaneously minimizing the group delay oscillations. Experimental verification of the model confirms its correctness. The combination of model versatility and the possibility of defining other objective functions for the Pareto optimization creates a universal tool for TFBG analysis and design.
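A hedged sketch of the Pareto-selection step mentioned above. The `evaluate` function below is an invented placeholder for a full TFBG transfer-matrix computation; only the non-dominated filtering over the two objectives (maximize spectral width, minimize group-delay ripple) reflects the described approach.

```python
# Pareto front over candidate tanh apodization designs; objectives are placeholders.
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.uniform([0.5, 1.0], [5.0, 10.0], size=(200, 2))  # (tanh steepness, index-change scale)

def evaluate(design):
    """Placeholder objectives: (spectral width to maximize, GD ripple to minimize)."""
    a, dn = design
    width = dn / (1.0 + 0.1 * a)          # illustrative only
    ripple = 1.0 / a + 0.05 * dn          # illustrative only
    return width, ripple

objs = np.array([evaluate(c) for c in candidates])

def pareto_front(objs):
    """Indices of non-dominated points (maximize column 0, minimize column 1)."""
    keep = []
    for i, (w_i, r_i) in enumerate(objs):
        dominated = any((w_j >= w_i) and (r_j <= r_i) and (j != i)
                        and ((w_j > w_i) or (r_j < r_i))
                        for j, (w_j, r_j) in enumerate(objs))
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(objs)
print(f"{len(front)} non-dominated designs out of {len(candidates)}")
```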
Numerical studies of the polymer melt flow in the extruder screw channel and the forming tool
NASA Astrophysics Data System (ADS)
Ershov, S. V.; Trufanova, N. M.
2017-06-01
To date, polymer compositions based on polyethylene or PVC are widely used as insulating materials. Processing these materials involves a number of problems when selecting rational extrusion regimes. To minimize time and cost in determining the technological regime, mathematical modeling techniques are used. The paper discusses heat and mass transfer processes in the extruder screw channel, the output adapter and the cable head. During the study, coefficients for three rheological models were determined from experimental viscosity vs. shear rate data. A comparative analysis of the applicability of these viscometric laws for studying polymer melt flow during processing on extrusion equipment was also carried out. As a result of the numerical study, the temperature, viscosity and shear rate fields in the extruder screw channel and forming tool were obtained.
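The coefficient-determination step can be illustrated with a hedged sketch: fitting one common viscometric law (a Cross-type model) to synthetic viscosity vs. shear rate data with scipy. The study itself used its own experimental data and compared three rheological models.

```python
# Hedged sketch: fit a Cross-type viscosity model to synthetic shear-thinning data.
import numpy as np
from scipy.optimize import curve_fit

def cross_model(gamma_dot, eta0, lam, n):
    """Cross-type viscosity model: eta = eta0 / (1 + (lam*gamma_dot)**(1-n))."""
    return eta0 / (1.0 + (lam * gamma_dot) ** (1.0 - n))

# synthetic "measurements" (Pa*s vs 1/s), stand-ins for capillary-rheometer data
gamma_dot = np.logspace(-1, 3, 20)
rng = np.random.default_rng(0)
eta_meas = cross_model(gamma_dot, 1.2e4, 2.0, 0.35) * (1 + 0.05 * rng.normal(size=20))

popt, _ = curve_fit(cross_model, gamma_dot, eta_meas, p0=[1e4, 1.0, 0.5])
print("fitted eta0, lambda, n:", popt)
```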
The Role of Wakes in Modelling Tidal Current Turbines
NASA Astrophysics Data System (ADS)
Conley, Daniel; Roc, Thomas; Greaves, Deborah
2010-05-01
The eventual proper development of arrays of Tidal Current Turbines (TCT) will require a balance which maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but do not represent a practical tool for application to realistic cases. Some form of 3-D numerical simulation will be required for such applications, and the current project is designed to develop a numerical decision-making tool for use in planning large scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System - ROMS) which is modified to enable the user to account for the effects of TCTs. In such a framework, where mixing processes are highly parametrized, the fidelity of the quantitative results is critically dependent on the parameter values utilized. In light of the early stage of TCT development and the lack of field scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation will discuss efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases and limitations will be presented.
Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek
2012-07-30
The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.
NASA Astrophysics Data System (ADS)
Juez, C.; Battisacco, E.; Schleiss, A. J.; Franca, M. J.
2016-06-01
The artificial replenishment of sediment is used as a method to re-establish sediment continuity downstream of a dam. However, the impact of this technique on the hydraulic conditions, and the resulting bed morphology, is yet to be understood. Several numerical tools have been developed during the last years for modeling sediment transport and morphology evolution which can be used for this application. These models range from 1D to 3D approaches: the first is overly simplistic for the simulation of such a complex geometry; the latter often requires a prohibitive computational effort. 2D models, however, are computationally efficient and in these cases may already provide sufficiently accurate predictions of the morphology evolution caused by the sediment replenishment in a river. Here, the 2D shallow water equations in combination with the Exner equation are solved by means of a weakly coupled strategy. The classical friction approach considered for reproducing the bed channel roughness has been modified to take into account the morphological effect of replenishment, which provokes a fining of the channel bed. Computational outcomes are compared with four sets of experimental data obtained from several replenishment configurations studied in the laboratory. The experiments differ in terms of placement volume and configuration. A set of analysis parameters is proposed for the experimental-numerical comparison, with particular attention to the spreading, covered surface and travel distance of the placed replenishment grains. The numerical tool is reliable in reproducing the overall tendency shown by the experimental data. The effect of fining roughness is better reproduced with the approach proposed herein. However, it is also highlighted that the sediment clusters found in the experiments are not well reproduced numerically in the regions of the channel with a limited number of sediment grains.
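A minimal 1D sketch of the weak coupling between hydrodynamics and the Exner equation, assuming a Grass-type bedload law and illustrative parameters; the actual model solves the full 2D shallow water equations.

```python
# Hedged 1D weak-coupling sketch: frozen hydrodynamic step, then an Exner bed update.
import numpy as np

nx, dx, dt = 200, 0.05, 0.1
porosity, A_grass = 0.4, 1.0e-4
z = 0.1 * np.exp(-((np.arange(nx) * dx - 5.0) ** 2))  # initial bed hump [m]
q_water, h0 = 0.5, 1.0                                # unit discharge, reference water level

for _ in range(2000):
    h = np.maximum(h0 - z, 0.05)        # frozen water surface (weak coupling step)
    u = q_water / h                     # depth-averaged velocity
    qs = A_grass * u ** 3               # Grass-type bedload transport
    dqs_dx = np.gradient(qs, dx)
    z -= dt / (1.0 - porosity) * dqs_dx # Exner equation: (1-p) dz/dt = -dqs/dx

print("final bed range [m]:", z.min(), z.max())
```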
A 3D staggered-grid finite difference scheme for poroelastic wave equation
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai
2014-10-01
Three-dimensional numerical modeling has been a viable tool for understanding wave propagation in real media. Poroelastic media can describe the phenomena of hydrocarbon reservoirs better than acoustic and elastic media. However, numerical modeling in 3D poroelastic media demands significantly more computational capacity, both in computational time and in memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. During the procedure, parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using the message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves the simulation efficiency and that 3D domain decomposition is the most efficient. We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a realistic description of wave propagation.
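A hedged 1D illustration of the staggered-grid idea underlying the scheme; the real method is 3D, poroelastic (with fluid pressure and relative fluid motion), and MPI-parallel, none of which is reproduced here.

```python
# 1D staggered-grid velocity-stress update: velocity and stress live half a cell apart
# and are advanced in a leap-frog fashion. Material values are illustrative.
import numpy as np

nx, dx = 400, 5.0                 # grid points, spacing [m]
rho, vp = 2200.0, 2500.0          # density [kg/m^3], P velocity [m/s]
kappa = rho * vp ** 2             # bulk modulus
dt = 0.8 * dx / vp                # CFL-limited time step

v = np.zeros(nx)                  # particle velocity at integer nodes
s = np.zeros(nx - 1)              # stress at half nodes (staggered)

for it in range(600):
    # velocity update from the stress gradient (interior nodes only)
    v[1:-1] += dt / rho * (s[1:] - s[:-1]) / dx
    v[nx // 2] += np.exp(-((it * dt - 0.15) / 0.03) ** 2)   # Gaussian source wavelet
    # stress update from the velocity gradient, half a step later
    s += dt * kappa * (v[1:] - v[:-1]) / dx

print("peak |v| after propagation:", np.abs(v).max())
```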
Benchmark Problems of the Geothermal Technologies Office Code Comparison Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.
Personal computer study of finite-difference methods for the transonic small disturbance equation
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1989-01-01
Calculation of unsteady flow phenomena requires careful attention to the numerical treatment of the governing partial differential equations. The personal computer provides a convenient and useful tool for the development of meshes, algorithms, and boundary conditions needed to provide time accurate solution of these equations. The one-dimensional equation considered provides a suitable model for the study of wave propagation in the equations of transonic small disturbance potential flow. Numerical results for effects of mesh size, extent, and stretching, time step size, and choice of far-field boundary conditions are presented. Analysis of the discretized model problem supports these numerical results. Guidelines for suitable mesh and time step choices are given.
Fire and Smoke Model Evaluation Experiment (FASMEE): Modeling gaps and data needs
Yongqiang Liu; Adam Kochanski; Kirk Baker; Ruddy Mell; Rodman Linn; Ronan Paugam; Jan Mandel; Aime Fournier; Mary Ann Jenkins; Scott Goodrick; Gary Achtemeier; Andrew Hudak; Matthew Dickson; Brian Potter; Craig Clements; Shawn Urbanski; Roger Ottmar; Narasimhan Larkin; Timothy Brown; Nancy French; Susan Prichard; Adam Watts; Derek McNamara
2017-01-01
Fire and smoke models are numerical tools for simulating fire behavior, smoke dynamics, and air quality impacts of wildland fires. Fire models are developed based on the fundamental chemistry and physics of combustion and fire spread or statistical analysis of experimental data (Sullivan 2009). They provide information on fire spread and fuel consumption for safe and...
Carswell, Dave; Hilton, Andy; Chan, Chris; McBride, Diane; Croft, Nick; Slone, Avril; Cross, Mark; Foster, Graham
2013-08-01
The objective of this study was to demonstrate the potential of Computational Fluid Dynamics (CFD) simulations in predicting the levels of haemolysis in ventricular assist devices (VADs). Three different prototypes of a radial flow VAD have been examined experimentally and computationally using CFD modelling to assess device haemolysis. The flow field was computed using a CFD model developed with the commercial software Ansys CFX 13 and a set of custom haemolysis analysis tools. Experimental values for the Normalised Index of Haemolysis (NIH) have been calculated as 0.020 g/100 L, 0.014 g/100 L and 0.0042 g/100 L for the three designs. Numerical analysis predicts an NIH of 0.021 g/100 L, 0.017 g/100 L and 0.0057 g/100 L, respectively. The actual differences between experimental and numerical results vary between 0.0012 and 0.003 g/100 L, with a variation of 5% for Pump 1 and slightly larger percentage differences for the other pumps. The work detailed herein demonstrates how CFD simulation and, more importantly, the numerical prediction of haemolysis may be used as an effective tool to help the designers of VADs manage the flow paths within pumps, resulting in a less haemolytic device. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2010-01-01
In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.
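A minimal sketch, with invented gas and particle properties, of the per-particle momentum and energy coupling the abstract mentions, using a Schiller-Naumann drag law and a Ranz-Marshall-type Nusselt correlation in place of the tool's full subsonic/supersonic models.

```python
# Hedged single-particle sketch: drag acceleration and convective heating in a hot gas stream.
import numpy as np

d_p, rho_p, cp_p = 20e-6, 2500.0, 900.0           # particle diameter [m], density, heat capacity
rho_g, mu_g, k_g, cp_g = 0.5, 3e-5, 0.05, 1100.0  # gas properties (illustrative)
u_g, T_g = 600.0, 1800.0                          # gas velocity [m/s], temperature [K]

u_p, T_p = 0.0, 300.0
m_p = rho_p * np.pi / 6 * d_p ** 3
A_frontal = np.pi * d_p ** 2 / 4
dt = 1e-6

for _ in range(20000):
    re = rho_g * abs(u_g - u_p) * d_p / mu_g
    cd = 24.0 / max(re, 1e-8) * (1.0 + 0.15 * re ** 0.687)       # Schiller-Naumann drag
    pr = mu_g * cp_g / k_g
    nu = 2.0 + 0.6 * np.sqrt(re) * pr ** (1.0 / 3.0)             # Ranz-Marshall heat transfer
    h = nu * k_g / d_p
    drag = 0.5 * rho_g * cd * A_frontal * (u_g - u_p) * abs(u_g - u_p)
    u_p += dt * drag / m_p
    T_p += dt * h * np.pi * d_p ** 2 * (T_g - T_p) / (m_p * cp_p)  # full sphere surface area

print(f"particle velocity {u_p:.1f} m/s, temperature {T_p:.0f} K after {20000*dt*1e3:.1f} ms")
```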
Control/structure interaction conceptual design tool
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
1990-01-01
The JPL Control/Structure Interaction Program is developing new analytical methods for designing micro-precision spacecraft with controlled structures. One of these, the Conceptual Design Tool, will illustrate innovative new approaches to the integration of multi-disciplinary analysis and design methods. The tool will be used to demonstrate homogeneity of presentation, uniform data representation across analytical methods, and integrated systems modeling. The tool differs from current 'integrated systems' that support design teams most notably in its support for the new CSI multi-disciplinary engineer. The design tool will utilize a three dimensional solid model of the spacecraft under design as the central data organization metaphor. Various analytical methods, such as finite element structural analysis, control system analysis, and mechanical configuration layout, will store and retrieve data from a hierarchical, object oriented data structure that supports assemblies of components with associated data and algorithms. In addition to managing numerical model data, the tool will assist the designer in organizing, stating, and tracking system requirements.
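A hedged sketch of the hierarchical, object-oriented data organization described above; class and field names are invented for illustration and are not the tool's actual API.

```python
# Hypothetical component/assembly hierarchy carrying data and analysis callables.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Component:
    name: str
    data: Dict[str, float] = field(default_factory=dict)          # e.g. mass, stiffness
    analyses: Dict[str, Callable[["Component"], float]] = field(default_factory=dict)

    def run(self, analysis: str) -> float:
        return self.analyses[analysis](self)

@dataclass
class Assembly(Component):
    children: List[Component] = field(default_factory=list)

    def total(self, key: str) -> float:
        """Roll a scalar property (e.g. mass) up the hierarchy."""
        return self.data.get(key, 0.0) + sum(c.data.get(key, 0.0) for c in self.children)

# usage: a toy spacecraft with a mass roll-up
bus = Component("bus", data={"mass": 120.0})
mirror = Component("mirror", data={"mass": 35.0})
spacecraft = Assembly("spacecraft", children=[bus, mirror])
print("total mass [kg]:", spacecraft.total("mass"))
```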
DOT National Transportation Integrated Search
2011-12-01
This study is concerned with developing new modeling tools for predicting the response of the new Kealakaha : Stream Bridge to static and dynamic loads, including seismic shaking. The bridge will span 220 meters, with the : deck structure being curve...
Bridging groundwater models and decision support with a Bayesian network
Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert
2013-01-01
Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor into the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system because of its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague Island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.
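A minimal sketch of the cross-validated forecast-skill idea, with a plain linear fit standing in for the Bayesian network and synthetic data standing in for the Assateague Island model output.

```python
# Hedged sketch: leave-one-out forecast skill of an emulator against a mean baseline.
import numpy as np

rng = np.random.default_rng(0)
n = 60
X = np.column_stack([rng.uniform(0, 3, n),      # e.g. land-surface elevation proxy
                     rng.uniform(0, 1, n)])     # e.g. distance-to-shore proxy
y = 0.6 * X[:, 0] - 0.8 * X[:, 1] + 0.1 * rng.normal(size=n)   # "water-table altitude"

def loo_skill(X, y):
    """Leave-one-out skill: 1 - MSE(emulator) / MSE(mean baseline)."""
    err_mod, err_ref = [], []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        A = np.column_stack([X[keep], np.ones(keep.sum())])
        coef, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
        pred = np.append(X[i], 1.0) @ coef
        err_mod.append((pred - y[i]) ** 2)
        err_ref.append((y[keep].mean() - y[i]) ** 2)
    return 1.0 - np.mean(err_mod) / np.mean(err_ref)

print("cross-validated skill:", round(loo_skill(X, y), 3))
```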
Using Virtualization to Integrate Weather, Climate, and Coastal Science Education
NASA Astrophysics Data System (ADS)
Davis, J. R.; Paramygin, V. A.; Figueiredo, R.; Sheng, Y.
2012-12-01
To better understand and communicate the important roles of weather and climate on the coastal environment, a unique publicly available tool is being developed to support research, education, and outreach activities. This tool uses virtualization technologies to facilitate an interactive, hands-on environment in which students, researchers, and the general public can perform their own numerical modeling experiments. While prior efforts have focused solely on the study of the coastal and estuary environments, this effort incorporates the community supported weather and climate model (WRF-ARW) into the Coastal Science Educational Virtual Appliance (CSEVA), an education tool used to assist in the learning of coastal transport processes; storm surge and inundation; and evacuation modeling. The Weather Research and Forecasting (WRF) Model is a next-generation, community developed and supported, mesoscale numerical weather prediction system designed to be used internationally for research, operations, and teaching. It includes two dynamical solvers (ARW - Advanced Research WRF and NMM - Nonhydrostatic Mesoscale Model) as well as a data assimilation system. WRF-ARW is the ARW dynamics solver combined with other components of the WRF system, which was developed primarily at NCAR, with community support provided by the Mesoscale and Microscale Meteorology (MMM) division of the National Center for Atmospheric Research (NCAR). Included with WRF is the WRF Pre-processing System (WPS), which is a set of programs to prepare input for real-data simulations. The CSEVA is based on the Grid Appliance (GA) framework and is built using virtual machine (VM) and virtual networking technologies. Virtualization supports integration of an operating system, libraries (e.g. Fortran, C, Perl, NetCDF, etc. necessary to build WRF), web server, numerical models/grids/inputs, pre-/post-processing tools (e.g. WPS / RIP4 or UPS), graphical user interfaces, "Cloud"-computing infrastructure and other tools into a single ready-to-use package. Thus, the previously onerous task of setting up and compiling these tools becomes obsolete, and the researcher, educator or student can focus on using the tools to study the interactions between weather, climate and the coastal environment. The incorporation of WRF into the CSEVA has been designed to be synergistic with the extensive online tutorials and biannual tutorials hosted by NCAR. Included are working examples of the idealized test simulations provided with WRF (2D sea breeze and squalls, a large eddy simulation, a Held and Suarez simulation, etc.). To demonstrate the integration of weather, climate and coastal science education, example applications are being developed to demonstrate how the system can be used to couple a coastal and estuarine circulation, transport and storm surge model with downscaled reanalysis weather and future climate predictions. Documentation, tutorials and the enhanced CSEVA itself will be found on the web at: http://cseva.coastal.ufl.edu.
This report presents a three-dimensional finite-element numerical model designed to simulate chemical transport in subsurface systems with temperature effect taken into account. The three-dimensional model is developed to provide (1) a tool of application, with which one is able ...
NASA Astrophysics Data System (ADS)
Bouchet, F.; Laurie, J.; Zaboronski, O.
2012-12-01
We describe transitions between attractors with either one, two or more zonal jets in models of turbulent atmosphere dynamics. Those transitions are extremely rare, and occur over time scales of centuries or millennia. They are extremely hard to observe in direct numerical simulations, because they require, on one hand, an extremely good resolution in order to simulate the turbulence accurately and, on the other hand, simulations performed over an extremely long time. Those conditions are usually not met together in any realistic model. However, many examples of transitions between turbulent attractors in geophysical flows are known to exist (paths of the Kuroshio, Earth's magnetic field reversal, atmospheric flows, and so on). Their study through numerical computations is inaccessible using conventional means. We present an alternative approach, based on instanton theory and large deviations. Instanton theory provides a way to compute (both numerically and theoretically) extremely rare transitions between turbulent attractors. This tool, developed in field theory, and justified in some cases through large deviation theory in mathematics, can be applied to models of turbulent atmosphere dynamics. It provides both new theoretical insights and new types of numerical algorithms. Those algorithms can predict transition histories and transition rates using numerical simulations run over only hundreds of typical model dynamical times, which is several orders of magnitude lower than the typical transition time. We illustrate the power of those tools in the framework of quasi-geostrophic models. We show regimes where two or more attractors coexist. Those attractors correspond to turbulent flows dominated by either one or more zonal jets similar to midlatitude atmosphere jets. Among the trajectories connecting two non-equilibrium attractors, we determine the most probable ones. Moreover, we also determine the transition rates, which correspond to transition times several orders of magnitude larger than a typical time determined from the jet structure. We discuss the medium-term generalization of those results to models with more complexity, like primitive equations or GCMs.
Physics-based subsurface visualization of human tissue.
Sharp, Richard; Adams, Jacob; Machiraju, Raghu; Lee, Robert; Crane, Robert
2007-01-01
In this paper, we present a framework for simulating light transport in three-dimensional tissue with inhomogeneous scattering properties. Our approach employs a computational model to simulate light scattering in tissue through the finite element solution of the diffusion equation. Although our model handles both visible and nonvisible wavelengths, we especially focus on the interaction of near infrared (NIR) light with tissue. Since most human tissue is permeable to NIR light, tools to noninvasively image tumors, blood vasculature, and monitor blood oxygenation levels are being constructed. We apply this model to a numerical phantom to visually reproduce the images generated by these real-world tools. Therefore, in addition to enabling inverse design of detector instruments, our computational tools produce physically-accurate visualizations of subsurface structures.
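A hedged 1D sketch of the underlying light-transport model: the steady diffusion approximation with D = 1/(3(mu_a + mu_s')), solved here by finite differences on a slab with an absorbing inclusion; the paper uses a finite element solution in 3D and its own phantom parameters.

```python
# Hedged sketch: steady diffusion approximation -d/dx(D dphi/dx) + mu_a*phi = S in 1D.
import numpy as np

nx, L = 201, 0.04                      # 4 cm slab
dx = L / (nx - 1)
mu_a = np.full(nx, 10.0)               # absorption [1/m], generic NIR-tissue-like value
mu_s_prime = np.full(nx, 1000.0)       # reduced scattering [1/m]
mu_a[90:110] = 40.0                    # an absorbing "inclusion" (e.g. vasculature)
D = 1.0 / (3.0 * (mu_a + mu_s_prime))

# assemble a tridiagonal system for interior nodes, phi = 0 on both boundaries
A = np.zeros((nx, nx)); b = np.zeros(nx)
b[nx // 4] = 1.0 / dx                  # isotropic source one quarter into the slab
A[0, 0] = A[-1, -1] = 1.0
for i in range(1, nx - 1):
    Dw = 0.5 * (D[i] + D[i - 1]); De = 0.5 * (D[i] + D[i + 1])
    A[i, i - 1] = -Dw / dx**2
    A[i, i + 1] = -De / dx**2
    A[i, i] = (Dw + De) / dx**2 + mu_a[i]

phi = np.linalg.solve(A, b)
print("fluence under the inclusion vs. background:", phi[100], phi[150])
```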
Virtual Power Electronics: Novel Software Tools for Design, Modeling and Education
NASA Astrophysics Data System (ADS)
Hamar, Janos; Nagy, István; Funato, Hirohito; Ogasawara, Satoshi; Dranga, Octavian; Nishida, Yasuyuki
The current paper is dedicated to presenting browser-based, multimedia-rich software tools and an e-learning curriculum to support the design and modeling process of power electronics circuits and to explain sometimes rather sophisticated phenomena. Two projects will be discussed. The so-called Inetele project is financed by the Leonardo da Vinci program of the European Union (EU). It is a collaborative project between numerous EU universities and institutes to develop a state-of-the-art curriculum in Electrical Engineering. Another cooperative project, with participation of Japanese, European and Australian institutes, focuses especially on developing e-learning curriculum, interactive design and modeling tools, and furthermore on the development of a virtual laboratory. Snapshots from these two projects will be presented.
BASIMO - Borehole Heat Exchanger Array Simulation and Optimization Tool
NASA Astrophysics Data System (ADS)
Schulte, Daniel O.; Bastian, Welsch; Wolfram, Rühaak; Kristian, Bär; Ingo, Sass
2017-04-01
Arrays of borehole heat exchangers are an increasingly popular source for renewable energy. Furthermore, they can serve as borehole thermal energy storage (BTES) systems for seasonally fluctuating heat sources like solar thermal energy or district heating grids. The high temperature level of these heat sources prohibits the use of the shallow subsurface for environmental reasons. Therefore, deeper reservoirs have to be accessed instead. The increased depth of the systems results in high investment costs and has hindered the implementation of this technology until now. Therefore, research of medium deep BTES systems relies on numerical simulation models. Current simulation tools cannot - or only to some extent - describe key features like partly insulated boreholes unless they run fully discretized models of the borehole heat exchangers. However, fully discretized models often come at a high computational cost, especially for large arrays of borehole heat exchangers. We give an update on the development of BASIMO: a tool, which uses one dimensional thermal resistance and capacity models for the borehole heat exchangers coupled with a numerical finite element model for the subsurface heat transport in a dual-continuum approach. An unstructured tetrahedral mesh bypasses the limitations of structured grids for borehole path geometries, while the thermal resistance and capacity model is improved to account for borehole heat exchanger properties changing with depth. Thereby, partly insulated boreholes can be considered in the model. Furthermore, BASIMO can be used to improve the design of BTES systems: the tool allows for automated parameter variations and is readily coupled to other code like mathematical optimization algorithms. Optimization can be used to determine the required minimum system size or to increase the system performance.
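A hedged sketch of the thermal resistance-capacity idea for a partly insulated borehole, with the subsurface reduced to a fixed far-field temperature and all values illustrative; BASIMO couples such 1D models to a finite element continuum instead.

```python
# Hedged resistance-capacity sketch of a partly insulated borehole during one day of charging.
import numpy as np

n_seg = 50                      # vertical segments along the borehole
dz, dt = 10.0, 60.0             # segment length [m], time step [s]
T_far = 12.0 + 0.03 * np.arange(n_seg) * dz     # geothermal gradient [degC]

R_fg = np.full(n_seg, 0.08)     # fluid-to-grout resistance [K m / W]
R_fg[:15] = 0.5                 # partly insulated upper section (larger resistance)
R_gs = 0.10                     # grout-to-soil resistance
C_g = 2.0e6 * 0.05              # grout capacity per metre [J/(K m)], illustrative

T_f = np.full(n_seg, 60.0)      # hot injection fluid (storage charging)
T_g = T_far.copy()

for _ in range(int(24 * 3600 / dt)):            # one day of charging
    q_fg = (T_f - T_g) / R_fg                   # W per metre of borehole
    q_gs = (T_g - T_far) / R_gs
    T_g += dt * (q_fg - q_gs) / C_g             # grout node energy balance
    # fluid temperature held fixed here; a full model advects it along the pipe

print("heat delivered to the ground [W/m], top vs bottom:", q_gs[0], q_gs[-1])
```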
ConvAn: a convergence analyzing tool for optimization of biochemical networks.
Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils
2012-01-01
Dynamic models of biochemical networks are usually described as a system of nonlinear differential equations. When models are optimized for the purpose of parameter estimation or the design of new properties, mainly numerical methods are used. That causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determination of a suitable optimization method and the necessary duration of optimization becomes critical in case of evaluation of a high number of combinations of adjustable parameters or in case of large dynamic models. This task is complex due to the variety of optimization methods, software tools and nonlinearity features of models in different parameter spaces. The software tool ConvAn is developed to analyze statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum, or values close to it, as well as the necessary computational time to reach them. It is possible to estimate the optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods which are not suitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available on www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Coupled Neutron Transport for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.
2009-01-01
Exposure estimates inside space vehicles, surface habitats, and high-altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS, FLUKA, and MCNPX, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light particle transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khaleel, Mohammad A.; Lin, Zijing; Singh, Prabhakar
2004-05-03
A 3D simulation tool for modeling solid oxide fuel cells is described. The tool combines the versatility and efficiency of a commercial finite element analysis code, MARC®, with an in-house developed robust and flexible electrochemical (EC) module. Based upon characteristic parameters obtained experimentally and assigned by the user, the EC module calculates the current density distribution, heat generation, and fuel and oxidant species concentration, taking the temperature profile provided by MARC® and operating conditions such as the fuel and oxidant flow rate and the total stack output voltage or current as the input. MARC® performs flow and thermal analyses based on the initial and boundary thermal and flow conditions and the heat generation calculated by the EC module. The main coupling between MARC® and EC is for MARC® to supply the temperature field to EC and for EC to give the heat generation profile to MARC®. The loosely coupled, iterative scheme is advantageous in terms of memory requirement, numerical stability and computational efficiency. The coupling is iterated to self-consistency for a steady-state solution. Sample results for steady states as well as the startup process for stacks with different flow designs are presented to illustrate the modeling capability and numerical performance characteristics of the simulation tool.
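A minimal sketch of the loosely coupled, iterated-to-self-consistency scheme: placeholder functions stand in for the MARC thermal solve and the EC module, and the outer loop exchanges temperature and heat-generation fields until they stop changing.

```python
# Hedged fixed-point coupling sketch; both physics functions are placeholders,
# not the MARC/EC implementations.
import numpy as np

n = 40                                  # cells along the flow channel
T = np.full(n, 1023.0)                  # initial temperature guess [K]

def thermal_solver(q_gen):
    """Placeholder: temperature rises along the channel with accumulated heat."""
    return 1023.0 + 0.002 * np.cumsum(q_gen)

def ec_module(T):
    """Placeholder: Arrhenius-like local current density and resulting heat generation."""
    i_loc = 4000.0 * np.exp(-4000.0 * (1.0 / T - 1.0 / 1073.0))   # A/m^2, illustrative
    q_gen = 0.25 * i_loc                                          # W/m^2, illustrative
    return i_loc, q_gen

for it in range(100):
    i_loc, q_gen = ec_module(T)
    T_new = thermal_solver(q_gen)
    if np.max(np.abs(T_new - T)) < 1e-3:        # self-consistency reached
        print(f"converged after {it + 1} outer iterations")
        break
    T = 0.5 * T + 0.5 * T_new                   # under-relaxation for stability

print("stack mean current density [A/m^2]:", i_loc.mean())
```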
NASA Astrophysics Data System (ADS)
Xu, Y.; Jones, A. D.; Rhoades, A.
2017-12-01
Precipitation is a key component in hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite the continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. But a numerical model cannot exactly replicate the true world. Therefore, the other type of model uncertainty is related to the errors in the model physics, such as the parameterization of sub-grid scale processes, i.e., given precise input conditions, how much error could be generated by the imprecise model. Here, we build two statistical models based on a neural network algorithm to predict the long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as the predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical models derived from true world information and modeled world information, we assess the errors lying in the physics of the numerical models. This work provides a unique insight for assessing the performance of numerical climate models, and can be used to guide improvement of precipitation prediction.
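A hedged sketch of the statistical-model idea: a small neural network maps invented climate-feature predictors to a precipitation index, and perturbing one predictor at a time gives a rough attribution of output uncertainty to each input.

```python
# Hedged sketch: neural-network emulator with perturbation-based sensitivity per predictor.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                     # e.g. [jet position, moisture flux, elevation]
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.2 * rng.normal(size=n)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0).fit(X, y)

def perturbation_importance(model, X, scale=0.5):
    """Std. deviation of the output change when one predictor is jittered at a time."""
    base = model.predict(X)
    out = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += scale * X[:, j].std() * rng.normal(size=len(X))
        out.append(np.std(model.predict(Xp) - base))
    return np.array(out)

print("per-predictor sensitivity:", perturbation_importance(model, X).round(3))
```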
NASA Astrophysics Data System (ADS)
Hobley, Daniel E. J.; Adams, Jordan M.; Nudurupati, Sai Siddhartha; Hutton, Eric W. H.; Gasparini, Nicole M.; Istanbulluoglu, Erkan; Tucker, Gregory E.
2017-01-01
The ability to model surface processes and to couple them to both subsurface and atmospheric regimes has proven invaluable to research in the Earth and planetary sciences. However, creating a new model typically demands a very large investment of time, and modifying an existing model to address a new problem typically means the new work is constrained to its detriment by model adaptations for a different problem. Landlab is an open-source software framework explicitly designed to accelerate the development of new process models by providing (1) a set of tools and existing grid structures - including both regular and irregular grids - to make it faster and easier to develop new process components, or numerical implementations of physical processes; (2) a suite of stable, modular, and interoperable process components that can be combined to create an integrated model; and (3) a set of tools for data input, output, manipulation, and visualization. A set of example models built with these components is also provided. Landlab's structure makes it ideal not only for fully developed modelling applications but also for model prototyping and classroom use. Because of its modular nature, it can also act as a platform for model intercomparison and epistemic uncertainty and sensitivity analyses. Landlab exposes a standardized model interoperability interface, and is able to couple to third-party models and software. Landlab also offers tools to allow the creation of cellular automata, and allows native coupling of such models to more traditional continuous differential equation-based modules. We illustrate the principles of component coupling in Landlab using a model of landform evolution, a cellular ecohydrologic model, and a flood-wave routing model.
Examining the Effect of the Die Angle on Tool Load and Wear in the Extrusion Process
NASA Astrophysics Data System (ADS)
Nowotyńska, Irena; Kut, Stanisław
2014-04-01
The tool durability is a crucial factor in each manufacturing process, and this also includes the extrusion process. Striving to achieve higher product quality should be accompanied by a long tool life and a reduction in production cost. This article presents comparative research on the load and wear of dies with various working cone angles during concurrent extrusion. The numerical calculations of the tool load during concurrent extrusion were performed with the MSC MARC software using the finite element method (FEM). The Archard model was used to determine and compare die wear. This model was implemented in the software using the FEM. The examined tool deformations and stress distributions were determined based on the performed analyses. The die wear depth at various working cone angles was determined. A properly shaped die has an effect on the extruded material properties, but also controls loads, elastic deformation, and the tool life.
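A minimal sketch of the Archard relation as it is typically used for die wear (wear depth rate proportional to contact pressure times sliding velocity over hardness); the contact pressures, velocities and wear coefficient below are placeholders for quantities the study extracted from its FEM solution.

```python
# Hedged Archard wear sketch: dh/dt = k * p * v / H, integrated over the process time.
import numpy as np

k_archard = 1.0e-5          # dimensionless wear coefficient (placeholder)
hardness = 6.0e9            # die hardness [Pa] (placeholder)
angles = np.radians([30, 45, 60])

def wear_depth(p_contact, v_slide, t_total, dt=1.0):
    """Integrate Archard wear depth h over the process duration."""
    h = 0.0
    for _ in range(int(t_total / dt)):
        h += k_archard * p_contact * v_slide * dt / hardness
    return h

for a in angles:
    p = 400e6 * (1.0 + 0.5 * np.sin(a))    # stand-in for FEM contact pressure on the cone
    v = 0.05 * np.cos(a)                   # stand-in for sliding velocity along the die face
    print(f"die angle {np.degrees(a):.0f} deg -> wear depth {wear_depth(p, v, 3600):.2e} m")
```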
Modeling languages for biochemical network simulation: reaction vs equation based approaches.
Wiechert, Wolfgang; Noack, Stephan; Elsheikh, Atya
2010-01-01
Biochemical network modeling and simulation is an essential task in any systems biology project. The systems biology markup language (SBML) was established as a standardized model exchange language for mechanistic models. A specific strength of SBML is that numerous tools for formulating, processing, simulation and analysis of models are freely available. Interestingly, in the field of multidisciplinary simulation, the problem of model exchange between different simulation tools occurred much earlier. Several general modeling languages like Modelica have been developed in the 1990s. Modelica enables an equation based modular specification of arbitrary hierarchical differential algebraic equation models. Moreover, libraries for special application domains can be rapidly developed. This contribution compares the reaction based approach of SBML with the equation based approach of Modelica and explains the specific strengths of both tools. Several biological examples illustrating essential SBML and Modelica concepts are given. The chosen criteria for tool comparison are flexibility for constraint specification, different modeling flavors, hierarchical, modular and multidisciplinary modeling. Additionally, support for spatially distributed systems, event handling and network analysis features is discussed. As a major result it is shown that the choice of the modeling tool has a strong impact on the expressivity of the specified models but also strongly depends on the requirements of the application context.
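A small illustration of the two modeling flavors for the same network: a reaction-based description (stoichiometry plus rate laws, the SBML style) and the equivalent equation-based ODE system (the Modelica style), here both expressed in plain Python with illustrative rate constants.

```python
# Hedged sketch: S + E <-> ES -> E + P written reaction-by-reaction, then as ODEs.
import numpy as np
from scipy.integrate import solve_ivp

# reaction-based view: stoichiometry matrix (rows: S, E, ES, P; columns: reactions) and rate laws
stoich = np.array([[-1,  1,  0],
                   [-1,  1,  1],
                   [ 1, -1, -1],
                   [ 0,  0,  1]])
k1, k1r, k2 = 1.0e5, 1.0e-2, 0.1   # illustrative mass-action rate constants

def rates(y):
    s, e, es, p = y
    return np.array([k1 * s * e, k1r * es, k2 * es])

# equation-based view: the ODE right-hand side is simply stoichiometry times rates
def rhs(t, y):
    return stoich @ rates(y)

sol = solve_ivp(rhs, (0.0, 200.0), [1e-5, 2e-6, 0.0, 0.0], method="LSODA")
print("final product concentration:", sol.y[3, -1])
```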
NASA Technical Reports Server (NTRS)
Berchem, J.; Raeder, J.; Walker, R. J.; Ashour-Abdalla, M.
1995-01-01
We report on the development of an interactive system for visualizing and analyzing numerical simulation results. This system is based on visualization modules which use the Application Visualization System (AVS) and the NCAR graphics packages. Examples from recent simulations are presented to illustrate how these modules can be used for displaying and manipulating simulation results to facilitate their comparison with phenomenological model results and observations.
Numerical framework for the modeling of electrokinetic flows
NASA Astrophysics Data System (ADS)
Deshpande, Manish; Ghaddar, Chahid; Gilbert, John R.; St. John, Pamela M.; Woudenberg, Timothy M.; Connell, Charles R.; Molho, Joshua; Herr, Amy; Mungal, Godfrey; Kenny, Thomas W.
1998-09-01
This paper presents a numerical framework for design-based analyses of electrokinetic flow in interconnects. Electrokinetic effects, which can be broadly divided into electrophoresis and electroosmosis, are of importance in providing a transport mechanism in microfluidic devices for both pumping and separation. Models for the electrokinetic effects can be derived and coupled to the fluid dynamic equations through appropriate source terms. In the design of practical microdevices, however, accurate coupling of the electrokinetic effects requires the knowledge of several material and physical parameters, such as the diffusivity and the mobility of the solute in the solvent. Additionally wall-based effects such as chemical binding sites might exist that affect the flow patterns. In this paper, we address some of these issues by describing a synergistic numerical/experimental process to extract the parameters required. Experiments were conducted to provide the numerical simulations with a mechanism to extract these parameters based on quantitative comparisons with each other. These parameters were then applied in predicting further experiments to validate the process. As part of this research, we have created NetFlow, a tool for micro-fluid analyses. The tool can be validated and applied in existing technologies by first creating test structures to extract representations of the physical phenomena in the device, and then applying them in the design analyses to predict correct behavior.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area mass and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.
Optimizing romanian maritime coastline using mathematical model Litpack
NASA Astrophysics Data System (ADS)
Anton, I. A.; Panaitescu, M.; Panaitescu, F. V.
2017-08-01
There are many methods and tools to study shoreline change in coastal engineering. LITPACK is a numerical model included in the MIKE software developed by DHI (Danish Hydraulic Institute). With this mathematical model we can simulate coastline evolution and the profile along the beach. Research and methodology: the paper covers the location of the study area, the current status of the Midia-Mangalia shoreline, the protection objectives, and the changes of the shoreline after protective structures are in place. In this paper, numerical and graphical results obtained with this model are presented for the study of the Romanian maritime coastline in the MIDIA-MANGALIA area: non-cohesive sediment transport, long-shore current and littoral drift, coastline evolution, cross-shore profile evolution, and the development of the coastline position in time.
Mestres, M; Sierra, J P; Mösso, C; Sánchez-Arcilla, A
2010-06-01
The proximity of commercial harbours to residential areas and the growing environmental awareness of society have led most port authorities to include environmental management within their administration plan. Regarding water quality, it is necessary to have the capacity and tools to deal with contamination episodes that may damage marine ecosystems and human health, but also affect the normal functioning of harbours. This paper presents a description of the main pollutant sources in Tarragona Harbour (Spain), and a numerical analysis of several pollution episodes based on the Port Authority's actual environmental concerns. The results show that pollution generated inside the harbour tends to remain confined within the port, whereas it is very likely that oil spills from a nearby monobuoy may affect the neighbouring beaches. The present combination of numerical models proves itself a useful tool to assess the environmental risk associated to harbour activities and potential pollution spills.
Mechanism-Based FE Simulation of Tool Wear in Diamond Drilling of SiCp/Al Composites.
Xiang, Junfeng; Pang, Siqin; Xie, Lijing; Gao, Feinong; Hu, Xin; Yi, Jie; Hu, Fang
2018-02-07
The aim of this work is to analyze the micro mechanisms underlying the wear of macroscale tools during diamond machining of SiCp/Al6063 composites and to develop a mechanism-based diamond wear model in relation to the dominant wear behaviors. During drilling of high volume fraction SiCp/Al6063 composites containing Cu, the dominant wear mechanisms of the diamond tool involve thermodynamically activated physicochemical wear due to diamond-graphite transformation catalyzed by Cu in an air atmosphere and mechanically driven abrasive wear due to high-frequency scraping of the hard SiC reinforcement on the tool surface. An analytical diamond wear model, coupling the Usui abrasive wear model and an Arrhenius extended graphitization wear model, was proposed and implemented through a user-defined subroutine for tool wear estimates. Tool wear estimation in diamond drilling of SiCp/Al6063 composites was achieved by incorporating the combined abrasive-chemical tool wear subroutine into a coupled thermomechanical FE model of 3D drilling. The developed drilling FE model for reproducing diamond tool wear was validated for feasibility and reliability by comparing numerically simulated tool wear morphology with experimentally observed results after drilling a hole using brazed polycrystalline diamond (PCD) and chemical vapor deposition (CVD) diamond coated tools. A fairly good agreement of experimental and simulated results in cutting forces, chip and tool wear morphologies demonstrates that the developed 3D drilling FE model, combined with a subroutine for diamond tool wear estimation, can provide a more accurate analysis not only of cutting forces and chip shape but also of tool wear behavior during drilling of SiCp/Al6063 composites. Once validated and calibrated, the developed diamond tool wear model, in conjunction with other machining FE models, can be easily extended to the investigation of tool wear evolution with various diamond tool geometries and other machining processes in cutting different workpiece materials.
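A hedged sketch of the combined wear-rate form described above: a Usui-type mechanically driven term plus an Arrhenius-type graphitization term. The constants are placeholders; in the paper the model is calibrated and evaluated inside the drilling FE model through a user subroutine.

```python
# Hedged combined abrasive-chemical wear-rate sketch; all constants are placeholders.
import numpy as np

A_usui, B_usui = 1.0e-15, 2500.0     # Usui-type constants (placeholders)
A_graph, E_graph = 5.0e-3, 9.0e4     # graphitization pre-factor and activation energy [J/mol]
R_gas = 8.314

def wear_rate(sigma_n, v_slide, T):
    """Combined abrasive-chemical wear rate [m/s] at tool-surface temperature T [K]."""
    abrasive = A_usui * sigma_n * v_slide * np.exp(-B_usui / T)   # Usui-type term
    chemical = A_graph * np.exp(-E_graph / (R_gas * T))           # Arrhenius graphitization term
    return abrasive + chemical

# illustrative sweep over cutting temperature at fixed contact conditions
for T in (600.0, 800.0, 1000.0):
    print(f"T = {T:.0f} K -> wear rate {wear_rate(8.0e8, 1.5, T):.3e} m/s")
```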
The Learning Journal Bridge: From Classroom Concepts to Leadership Practices
ERIC Educational Resources Information Center
Maellaro, Rosemary
2013-01-01
The value of reflective writing assignments as learning tools for business students has been well-established. While the management education literature includes numerous examples of such assignments that are based on Kolb's (1984) experiential learning model, many of them engage only the first two phases of the model. When students do not move…
Calibration of a γ-Reθ transition model and its application in low-speed flows
NASA Astrophysics Data System (ADS)
Wang, YunTao; Zhang, YuLun; Meng, DeHong; Wang, GunXue; Li, Song
2014-12-01
The prediction of laminar-turbulent transition in the boundary layer is very important for obtaining accurate aerodynamic characteristics with computational fluid dynamics (CFD) tools, because laminar-turbulent transition is directly related to complex flow phenomena in the boundary layer and separated flow in space. Unfortunately, the transition effect is not included in today's major CFD tools because of the non-local calculations involved in transition modeling. In this paper, Menter's γ-Reθ transition model is calibrated and incorporated into a Reynolds-Averaged Navier-Stokes (RANS) code — the Trisonic Platform (TRIP) developed at the China Aerodynamic Research and Development Center (CARDC). Based on flat-plate experimental data from the literature, the empirical correlations involved in the transition model are modified and calibrated numerically. Numerical simulation of the low-speed flow around the Trapezoidal Wing (Trap Wing) is performed and compared with the corresponding experimental data. It is indicated that the γ-Reθ transition model can accurately predict the location of separation-induced transition and natural transition in flow regions with moderate pressure gradient. The transition model effectively improves the simulation accuracy of the boundary layer and the aerodynamic characteristics.
A study of unstable rock failures using finite difference and discrete element methods
NASA Astrophysics Data System (ADS)
Garvey, Ryan J.
Case histories in mining have long described pillars or faces of rock failing violently with an accompanying rapid ejection of debris and broken material into the working areas of the mine. These unstable failures have resulted in large losses of life and collapses of entire mine panels. Modern mining operations take significant steps to reduce the likelihood of unstable failure; however, eliminating their occurrence is difficult in practice. Researchers over several decades have supplemented studies of unstable failures through the application of various numerical methods. The direction of the current research is to extend these methods and to develop improved numerical tools with which to study unstable failures in underground mining layouts. An extensive study is first conducted on the expression of unstable failure in discrete element and finite difference methods. Simulated uniaxial compressive strength tests are run on brittle rock specimens. Stable or unstable loading conditions are applied to the brittle specimens by a pair of elastic platens with a range of stiffnesses. Determinations of instability are established through stress and strain histories taken for the specimen and the system. Additional numerical tools are then developed for the finite difference method to analyze unstable failure in larger mine models. Instability identifiers are established for assessing the locations and relative magnitudes of unstable failure through measures of rapid dynamic motion. An energy balance is developed which calculates the excess energy released as a result of unstable equilibria in rock systems. These tools are validated through uniaxial and triaxial compressive strength tests and are extended to models of coal pillars and a simplified mining layout. The results of the finite difference simulations reveal that the instability identifiers and excess energy calculations provide a generalized methodology for assessing unstable failures within potentially complex mine models. These combined numerical tools may be applied in future studies to design primary and secondary supports in bump-prone conditions, evaluate retreat mining cut sequences, assess pillar de-stressing techniques, or perform back-analyses of unstable failures in select mining layouts.
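A minimal sketch of the stiffness-based instability concept behind the platen tests, assuming illustrative numbers: failure is unstable when the loading system is softer than the magnitude of the specimen's post-peak slope, and the excess energy is the difference between the energy the system releases and what the specimen can absorb.

```python
# Hedged sketch of the loading-system stiffness criterion and an excess-energy estimate.
k_platen = 50e6          # loading-system (platen) stiffness [N/m], illustrative
k_postpeak = -120e6      # specimen post-peak slope [N/m] (negative = softening), illustrative
f_peak = 2.0e6           # load at peak [N], illustrative

unstable = abs(k_postpeak) > k_platen      # system sheds load more gently than the specimen
print("unstable failure expected:", unstable)

if unstable:
    d_platen = f_peak / k_platen           # platen relaxation as the load drops to zero
    d_spec = f_peak / abs(k_postpeak)      # post-peak displacement capacity of the specimen
    e_released = 0.5 * f_peak * d_platen   # energy the soft loading system releases
    e_absorbed = 0.5 * f_peak * d_spec     # energy the specimen can dissipate post-peak
    print("excess (kinetic) energy estimate [J]:", e_released - e_absorbed)
```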
NASA Astrophysics Data System (ADS)
Becker, T. W.
2011-12-01
I present results from ongoing, NSF-CAREER funded educational and research efforts that center around making numerical tools in seismology and geodynamics more accessible to a broader audience. The goal is not only to train students in quantitative, interdisciplinary research, but also to make methods more easily accessible to practitioners across disciplines. I describe the two main efforts that were funded, the Solid Earth Research and Teaching Environment (SEATREE, geosys.usc.edu/projects/seatree/), and a new Numerical Methods class. SEATREE is a modular and user-friendly software framework to facilitate using solid Earth research tools in the undergraduate and graduate classroom and for interdisciplinary, scientific collaboration. We use only open-source software, and most programming is done in the Python computer language. We strive to make use of modern software design and development concepts while remaining compatible with traditional scientific coding and existing, legacy software. Our goals are to provide a fully contained, yet transparent package that lets users operate in an easy, graphically supported "black box" mode, while also allowing to look under the hood, for example to conduct numerous forward models to explore parameter space. SEATREE currently has several implemented modules, including on global mantle flow, 2D phase velocity tomography, and 2D mantle convection and was used at the University of Southern California, Los Angeles, and at a 2010 CIDER summer school tutorial. SEATREE was developed in collaboration with engineering and computer science undergraduate students, some of which have gone on to work in Earth Science projects. In the long run, we envision SEATREE to contribute to new ways of sharing scientific research, and making (numerical) experiments truly reproducible again. The other project is a set of lecture notes and Matlab exercises on Numerical Methods in solid Earth, focusing on finite difference and element methods. The class has been taught several times at USC to a broad audience of Earth science students with very diverse levels of exposure to math and physics. It is our goal to bring everyone up to speed and empower students, and we have seen structural geology students with very little exposure to math go on to construct their own numerical models of pTt-paths in a core-complex setting. This exemplifies the goal of teaching students to both be able to put together simple numerical models from scratch, and, perhaps more importantly, to truly understand the basic concepts, capabilities, and pitfalls of the more powerful community codes that are being increasingly used. SEATREE and the Numerical Methods class material are freely available at geodynamics.usc.edu/~becker.
A software tool for modeling and simulation of numerical P systems.
Buiu, Catalin; Arsene, Octavian; Cipu, Corina; Patrascu, Monica
2011-03-01
A P system represents a distributed and parallel bio-inspired computing model in which the basic data structures are multi-sets or strings. Numerical P systems have been introduced recently; they use numerical variables and local programs (or evolution rules), usually in a deterministic way. They may find interesting applications in areas such as computational biology, process control or robotics. The first simulator of numerical P systems (SNUPS) has been designed, implemented and made available to the scientific community by the authors of this paper. SNUPS allows a wide range of applications, from modeling and simulation of ordinary differential equations, to the use of membrane systems as computational blocks of cognitive architectures, and as controllers for autonomous mobile robots. This paper describes the functioning of a numerical P system and presents an overview of SNUPS capabilities together with an illustrative example. SNUPS is freely available to researchers as a standalone application and may be downloaded from a dedicated website, http://snups.ics.pub.ro/, which includes a user manual and sample membrane structures. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
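To make the notion of "local programs" more concrete, the sketch below shows one evolution step of a very small numerical P system with a single membrane. This is an illustrative toy, not the SNUPS implementation: the program layout, variable names and coefficients are hypothetical, and only the commonly described semantics (a production function whose value is redistributed proportionally to repartition coefficients, with consumed variables reset to zero) are assumed.

```python
# Minimal sketch of one evolution step of a numerical P system (single membrane).
# A program is modelled here as (input variables, production function F,
# repartition coefficients); F's value is distributed among target variables
# proportionally to the coefficients, and the consumed variables are reset.

def evolve_step(variables, program):
    """variables: dict name -> value; program: (inputs, F, repartition)."""
    inputs, F, repartition = program
    produced = F(*(variables[name] for name in inputs))
    for name in inputs:                      # consumed variables are reset to zero
        variables[name] = 0.0
    total = sum(repartition.values())
    for name, coeff in repartition.items():  # proportional repartition of the result
        variables[name] += produced * coeff / total
    return variables

# Hypothetical example: F(x1, x2) = 2*x1 + x2, distributed 1:3 between x1 and x3.
state = {"x1": 2.0, "x2": 4.0, "x3": 0.0}
program = (("x1", "x2"), lambda x1, x2: 2 * x1 + x2, {"x1": 1, "x3": 3})
print(evolve_step(state, program))  # {'x1': 2.0, 'x2': 0.0, 'x3': 6.0}
```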
Analysis of model output and science data in the Virtual Model Repository (VMR).
NASA Astrophysics Data System (ADS)
De Zeeuw, D.; Ridley, A. J.
2014-12-01
Big scientific data not only include large repositories of data from scientific platforms such as satellites and ground observations, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through associated metadata, but larger collections of runs can now also be studied and statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.
Ball Bearing Analysis with the ORBIS Tool
NASA Technical Reports Server (NTRS)
Halpin, Jacob D.
2016-01-01
Ball bearing design is critical to the success of aerospace mechanisms. Key bearing performance parameters, such as load capability, stiffness, torque, and life, all depend on accurate determination of the internal load distribution. Hence, a good analytical bearing tool that provides both comprehensive capabilities and reliable results becomes a significant asset to the engineer. This paper introduces the ORBIS bearing tool. A discussion of key modeling assumptions and a technical overview is provided. Numerous validation studies and case studies using the ORBIS tool are presented. All results suggest that the ORBIS code correlates closely with predictions of bearing internal load distributions, stiffness, deflection and stresses.
Analogue and numerical modelling in Volcanology: Development, evolution and future challenges
NASA Astrophysics Data System (ADS)
Kavanagh, Janine; Annen, Catherine
2015-04-01
Since the inception of volcanology as a science, analogue modelling has been an important methodology to study the formation and evolution of volcanic systems. With the development of computing capacities, numerical modelling has become a widely used tool to explore magmatic processes quantitatively and to predict eruptive behaviour. Processes of interest include the development and establishment of the volcanic plumbing system, the propagation of magma to the surface to feed eruptions, the construction of a volcanic edifice and the dynamics of eruptive processes. An important ultimate aim is to characterise and measure the experimental volcanic and magmatic phenomena in order to inform and improve eruption forecasting for hazard assessments. In nature, volcanic activity is often unpredictable and occurs in an environment that is highly changeable and forbidding. Volcanic or magmatic activity cannot be repeated at will and has many (often unconstrained) variables. The processes of interest are frequently hidden from view, for example occurring beneath the Earth's surface or within a pyroclastic flow or plume. The challenges of working in volcanic terrains and gathering 'real' volcano data mean that analogue and numerical models have gained significant importance as a method to study the geometries, kinematics, and dynamics of volcano growth and eruption. A huge variety of analogue materials have been used in volcanic modelling, often bringing out the more creative side of the scientific mind. As with all models, the choice of appropriate materials and boundary conditions is critical for assessing the relevance and usefulness of the experimental results. Numerical simulation has proved a useful tool to test the physical plausibility of conceptual models and presents the advantage of being applicable at different scales. It is limited, however, in its predictive power by the number of free parameters needed to describe geological systems. In this special symposium we will attempt to review the use and significance of analogue and numerical modelling in volcanological research over the past century to the present day. We introduce some of the new techniques being developed through a multidisciplinary approach, and offer some perspectives on how these might be used to help shape the direction of future research in volcanology.
Particle Interactions Mediated by Dynamical Networks: Assessment of Macroscopic Descriptions
NASA Astrophysics Data System (ADS)
Barré, J.; Carrillo, J. A.; Degond, P.; Peurichard, D.; Zatorska, E.
2018-02-01
We provide a numerical study of the macroscopic model of Barré et al. (Multiscale Model Simul, 2017, to appear) derived from an agent-based model for a system of particles interacting through a dynamical network of links. Assuming that the network remodeling process is very fast, the macroscopic model takes the form of a single aggregation-diffusion equation for the density of particles. The theoretical study of the macroscopic model gives precise criteria for the phase transitions of the steady states, and in the one-dimensional case, we show numerically that the stationary solutions of the microscopic model undergo the same phase transitions and bifurcation types as the macroscopic model. In the two-dimensional case, we show that the numerical simulations of the macroscopic model are in excellent agreement with the predicted theoretical values. This study provides a partial validation of the formal derivation of the macroscopic model from a microscopic formulation and shows that the former is a consistent approximation of an underlying particle dynamics, making it a powerful tool for the modeling of dynamical networks at a large scale.
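For readers unfamiliar with this class of equations, a generic aggregation-diffusion equation for a particle density has the schematic form below. This is only an illustration of the equation type; the precise coefficients, scaling and interaction potential derived by Barré et al. are given in the cited paper.

```latex
\partial_t \rho \;=\; \nabla \cdot \Big( \nabla \rho \;+\; \rho \, \nabla (W * \rho) \Big),
\qquad
(W * \rho)(x) \;=\; \int W(x - y)\, \rho(y)\, \mathrm{d}y,
```

where \(\rho(x,t)\) is the particle density and \(W\) an interaction potential encoding the link-mediated attraction.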
An approach to achieve progress in spacecraft shielding
NASA Astrophysics Data System (ADS)
Thoma, K.; Schäfer, F.; Hiermaier, S.; Schneider, E.
2004-01-01
Progress in shield design against space debris can be achieved only when a combined approach based on several tools is used. This approach depends on the combined application of advanced numerical methods, specific material models and experimental determination of input parameters for these models. Examples of experimental methods for material characterization are given, covering the range from quasi-static to very high strain rates for materials like Nextel and carbon fiber-reinforced materials. Mesh-free numerical methods have extraordinary capabilities in the simulation of extreme material behaviour, including complete failure with phase changes, combined with shock wave phenomena and the interaction with structural components. In this paper, the benefits of combining numerical methods, material modelling and detailed experimental studies for shield design are demonstrated. The following examples are given: (1) Development of a material model for Nextel and Kevlar-Epoxy to enable numerical simulation of hypervelocity impacts on complex heavy protection shields for the International Space Station. (2) The influence of projectile shape on the protection performance of Whipple Shields and how experimental problems in accelerating such shapes can be overcome by systematic numerical simulation. (3) The benefits of using metallic foams in "sandwich bumper shields" for spacecraft and how to approach systematic characterization of such materials.
NASA Astrophysics Data System (ADS)
Sateesh Kumar, Ch; Patel, Saroj Kumar; Das, Anshuman
2018-03-01
Temperature generation in cutting tools is one of the major causes of tool failure, especially during hard machining where machining forces are quite high, resulting in elevated temperatures. Thus, the present work investigates the temperature generation during hard machining of AISI 52100 steel (62 HRC hardness) with uncoated and PVD AlTiN coated Al2O3/TiCN mixed ceramic cutting tools. The experiments were performed on a heavy-duty lathe with both coated and uncoated cutting tools under a dry cutting environment. The temperature of the cutting zone was measured using an infrared thermometer, and a finite element model was adopted to predict the temperature distribution in the cutting tools during machining for comparative assessment against the measured temperature. The experimental and numerical results revealed a significant reduction of cutting zone temperature during machining with PVD AlTiN coated cutting tools when compared to uncoated cutting tools in each experimental run. The main reason for the decrease in temperature for AlTiN coated tools is the lower coefficient of friction offered by the coating material, which allows the free flow of the chips on the rake surface when compared with uncoated cutting tools. Further, the superior wear behaviour of the AlTiN coating resulted in a reduction of cutting temperature.
NASA Astrophysics Data System (ADS)
Assous, Franck; Chaskalovic, Joël
2011-06-01
We propose a new approach that consists in using data mining techniques for scientific computing. Indeed, data mining has proved to be efficient in other contexts that deal with huge data, such as biology, medicine, marketing, advertising and communications. Our aim here is to address the important problem of the exploitation of the results produced by any numerical method. More and more data are created today by numerical simulations, so it seems necessary to look for efficient tools to analyze them. In this work, we focus our presentation on a test case dedicated to an asymptotic paraxial approximation to model ultrarelativistic particles. Our method directly deals with numerical results of simulations and tries to understand what each order of the asymptotic expansion brings to the simulation results over what could be obtained by other lower-order or less accurate means. This new heuristic approach offers new potential applications to treat numerical solutions of mathematical models.
3PE: A Tool for Estimating Groundwater Flow Vectors
Evaluation of hydraulic gradients and the associated groundwater flow rates and directions is a fundamental aspect of hydrogeologic characterization. Many methods, ranging in complexity from simple three-point solution techniques to complex numerical models of groundwater flow, ...
NASA Astrophysics Data System (ADS)
Kern, Bastian; Jöckel, Patrick
2016-10-01
Numerical climate and weather models have advanced to finer scales, accompanied by large amounts of output data. The model systems hit the input and output (I/O) bottleneck of modern high-performance computing (HPC) systems. We aim to apply diagnostic methods online during the model simulation instead of applying them as a post-processing step to written output data, in order to reduce the amount of I/O. To include diagnostic tools in the model system, we implemented a standardised, easy-to-use interface based on the Modular Earth Submodel System (MESSy) into the ICOsahedral Non-hydrostatic (ICON) modelling framework. The integration of the diagnostic interface into the model system is briefly described. Furthermore, we present a prototype implementation of an advanced online diagnostic tool for the aggregation of model data onto a user-defined regular coarse grid. This diagnostic tool will be used to reduce the amount of model output in future simulations. Performance tests of the interface and of two different diagnostic tools show that the interface itself introduces no overhead in the form of additional runtime to the model system. The diagnostic tools, however, have a significant impact on the model system's runtime. This overhead strongly depends on the characteristics and implementation of the diagnostic tool. A diagnostic tool with high inter-process communication introduces large overhead, whereas the additional runtime of a diagnostic tool without inter-process communication is low. We briefly describe our efforts to reduce the additional runtime from the diagnostic tools, and present a brief analysis of memory consumption. Future work will focus on optimisation of the memory footprint and the I/O operations of the diagnostic interface.
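As a rough illustration of the kind of coarse-grid aggregation such an online diagnostic performs, the block-averaging sketch below maps a fine regular grid onto a user-defined coarser one. This is only a simplified stand-in: the actual MESSy/ICON implementation works on unstructured ICON grids, in parallel, and its function names and grid parameters are not assumed here.

```python
import numpy as np

def block_average(fine, factor):
    """Aggregate a 2D field onto a coarser regular grid by averaging
    factor x factor blocks (fine-grid dimensions must be divisible by factor)."""
    ny, nx = fine.shape
    return fine.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine_field = np.random.rand(180, 360)        # e.g. a 1-degree global field (hypothetical)
coarse_field = block_average(fine_field, 4)  # 4x4 aggregation -> 45 x 90 grid
print(coarse_field.shape)
```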
Numerical Model Metrics Tools in Support of Navy Operations
NASA Astrophysics Data System (ADS)
Dykes, J. D.; Fanguy, P.
2017-12-01
Increasing demands for accurate ocean forecasts that are relevant to Navy mission decision makers call for tools that quickly provide relevant numerical model metrics to the forecasters. Increasing modelling capabilities, with ever-higher resolution domains including coupled and ensemble systems, together with the increasing volume of observations and other data sources against which to compare the model output, require more tools that enable the forecaster to do more with less. These data can be handled appropriately in a geographic information system (GIS) and fused together to provide useful information and analyses, and ultimately a better understanding of how the pertinent model performs with respect to ground truth. Oceanographic measurements like surface elevation, profiles of temperature and salinity, and wave height can all be incorporated into a set of layers correlated to geographic information such as bathymetry and topography. In addition, an automated system that runs concurrently with the models on high performance machines matches routinely available observations to modelled values to form a database of matchups with which statistics can be calculated and displayed, to facilitate validation of forecast state and derived variables. ArcMAP, developed by Environmental Systems Research Institute, is a GIS application used by the Naval Research Laboratory (NRL) and naval operational meteorological and oceanographic centers to analyse the environment in support of a range of Navy missions. For example, acoustic propagation in the ocean is described with a three-dimensional analysis of sound speed that depends on profiles of temperature, pressure and salinity predicted by the Navy Coastal Ocean Model. The data and model output must include geo-referencing information suitable for accurately placing the data within the ArcMAP framework. NRL has developed tools that facilitate merging these geophysical data and their analyses, including intercomparisons between model predictions as well as comparison to validation data. This methodology produces new insights and facilitates identification of potential problems in ocean prediction.
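A minimal sketch of the kind of model-observation matchup statistics described here is shown below. It is illustrative only: the NRL matchup database and its actual schema are not described in the abstract, and the metric names, example values and variable names are assumptions.

```python
import numpy as np

def matchup_stats(model_values, observed_values):
    """Compute simple skill metrics from co-located model/observation pairs."""
    model = np.asarray(model_values, dtype=float)
    obs = np.asarray(observed_values, dtype=float)
    diff = model - obs
    return {
        "bias": diff.mean(),                       # mean error
        "rmse": np.sqrt((diff ** 2).mean()),       # root-mean-square error
        "corr": np.corrcoef(model, obs)[0, 1],     # linear correlation
        "n": diff.size,
    }

# Hypothetical sea-surface temperature matchups (deg C)
print(matchup_stats([15.2, 16.1, 14.8], [15.0, 16.4, 14.5]))
```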
Stream Lifetimes Against Planetary Encounters
NASA Technical Reports Server (NTRS)
Valsecchi, G. B.; Lega, E.; Froeschle, Cl.
2011-01-01
We study, both analytically and numerically, the perturbation induced by an encounter with a planet on a meteoroid stream. Our analytical tool is the extension of Öpik's theory of close encounters, which we apply to streams described by geocentric variables. The resulting formulae are used to compute the rate at which a stream is dispersed by planetary encounters into the sporadic background. We have verified the accuracy of the analytical model using a numerical test.
Improving the physiological realism of experimental models.
Vinnakota, Kalyan C; Cha, Chae Y; Rorsman, Patrik; Balaban, Robert S; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A; Jeneson, Jeroen A L
2016-04-06
The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these models rely on the analysis and integration of experimental data. As such, the success of VPH depends on the availability of physiologically realistic experimental models (E-Models) of human organ function that can be parametrized to test the numerical models. Here, the current state of suitable E-models, ranging from in vitro non-human cell organelles to in vivo human organ systems, is discussed. Specifically, challenges and recent progress in improving the physiological realism of E-models that may benefit the VPH project are highlighted and discussed using examples from the field of research on cardiovascular disease, musculoskeletal disorders, diabetes and Parkinson's disease.
On the bistable zone of milling processes
Dombovari, Zoltan; Stepan, Gabor
2015-01-01
A modal-based model of milling machine tools subjected to time-periodic nonlinear cutting forces is introduced. The model describes the phenomenon of bistability for certain cutting parameters. In engineering, these parameter domains are referred to as unsafe zones, where steady-state milling may switch to chatter for certain perturbations. In mathematical terms, these are the parameter domains where the periodic solution of the corresponding nonlinear, time-periodic delay differential equation is linearly stable, but its domain of attraction is limited due to the existence of an unstable quasi-periodic solution emerging from a secondary Hopf bifurcation. A semi-numerical method is presented to identify the borders of these bistable zones by tracking the motion of the milling tool edges as they might leave the surface of the workpiece during the cutting operation. This requires the tracking of unstable quasi-periodic solutions and the checking of their grazing to a time-periodic switching surface in the infinite-dimensional phase space. As the parameters of the linear structural behaviour of the tool/machine tool system can be obtained by means of standard modal testing, the developed numerical algorithm provides efficient support for the design of milling processes with quick estimates of those parameter domains where chatter can still appear in spite of setting the parameters into linearly stable domains. PMID:26303918
Pivel, María Alejandra Gómez; Dal Sasso Freitas, Carla Maria
2010-08-01
Numerical models that predict the fate of drilling discharges at sea constitute a valuable tool for both the oil industry and regulatory agencies. In order to provide reliable estimates, models must be validated through the comparison of predictions with field or laboratory observations. In this paper, we used the Offshore Operators Committee Model to simulate the discharges from two wells drilled at Campos Basin, offshore SE Brazil, and compared the results with field observations obtained 3 months after drilling. The comparison showed that the model provided reasonable predictions, considering that data about currents were reconstructed and theoretical data were used to characterize the classes of solids. The model proved to be a valuable tool to determine the degree of potential impact associated with drilling activities. However, since the accuracy of the model is directly dependent on the quality of the input data, different possible scenarios should be considered when it is used for forecast modeling.
Numerical Weather Prediction Models on Linux Boxes as tools in meteorological education in Hungary
NASA Astrophysics Data System (ADS)
Gyongyosi, A. Z.; Andre, K.; Salavec, P.; Horanyi, A.; Szepszo, G.; Mille, M.; Tasnadi, P.; Weidiger, T.
2012-04-01
Education of meteorologists in Hungary - according to the Bologna Process - has three stages: BSc, MSc and PhD, and students graduating at each stage receive the respective degree. The three-year BSc course in Meteorology can be chosen by undergraduate students in the fields of Geosciences, Environmental Sciences and Physics. Fundamentals in Mathematics (Calculus), (General and Theoretical) Physics and Informatics are emphasized during this elementary education. The two-year MSc course - to which about 15 to 25 students are admitted each year - can be studied only at the Eötvös Loránd University in our country. Our aim is to give a basic education in all fields of Meteorology: Climatology, Atmospheric Physics, Atmospheric Chemistry, Dynamic and Synoptic Meteorology, Numerical Weather Prediction, Modeling of Surface-atmosphere Interactions and Climate Change. Education is organized in two branches: Climate Researcher and Forecaster. Numerical modeling has become a common tool in the daily practice of weather forecasters due to i) increasing user demands for weather data, ii) the growth in computer resources, iii) numerical weather prediction systems available for integration on affordable, off-the-shelf computers and iv) available input data (from ECMWF or NCEP) for model integrations. Besides learning the theoretical basis, students working on their MSc or BSc theses or in Student's Research Projects have the opportunity to run numerical models and to analyze the outputs for different purposes, including wind energy estimation, simulation of the dynamics of a polar low and of subtropical cyclones, analysis of the isentropic potential vorticity field, examination of coupled atmospheric dispersion models, etc. A special course on the application of numerical modeling is being announced for the upcoming semester in order to improve the students' skills in this field. Several numerical model systems (NRIPR, ETA and WRF) have been adapted at the University and tested and used for the geographical region of the Carpathian Basin. Recently ALADIN/CHAPEAU, the academic version of the ARPEGE ALADIN cy33t1 meso-scale numerical weather prediction model system (which is the operational forecasting tool of our National Weather Service), has been installed at our Institute.
ALADIN is the operational forecasting model of the Hungarian Meteorological Service and is developed in the framework of the international ALADIN co-operation. Our main objectives are i) the analysis of different typical weather situations, ii) fine tuning of parameterization schemes and iii) the comparison of the ALADIN/CHAPEAU and WRF model outputs based on case studies. The necessary hardware and software innovations have been implemented. In the presentation, the computer resources needed for the integration of both the WRF and ALADIN/CHAPEAU models will be briefly described. The software developments performed for the evaluation and comparison of the different modeling systems will be demonstrated. The main objectives of the education program on practical numerical weather modeling will be introduced, as well as its detailed syllabus and the structure of the lab sessions.
Agent Based Modeling of Collaboration and Work Practices Onboard the International Space Station
NASA Technical Reports Server (NTRS)
Acquisti, Alessandro; Sierhuis, Maarten; Clancey, William J.; Bradshaw, Jeffrey M.; Shaffo, Mike (Technical Monitor)
2002-01-01
The International Space Station is one of the most complex projects ever undertaken, with numerous interdependent constraints affecting productivity and crew safety. This requires planning years before crew expeditions, and the use of sophisticated scheduling tools. Human work practices, however, are difficult to study and represent within traditional planning tools. We present an agent-based model and simulation of the activities and work practices of astronauts onboard the ISS. The model represents 'a day in the life' of the ISS crew and is developed in Brahms, an agent-oriented, activity-based language used to model knowledge in situated action and learning in human activities.
Khan, Niaz Bahadur; Ibrahim, Zainah; Nguyen, Linh Tuan The; Javed, Muhammad Faisal; Jameel, Mohammed
2017-01-01
This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using the Reynolds-averaged Navier-Stokes (RANS) equations with computational fluid dynamics (CFD) tools. CFD analysis is performed for a fixed-cylinder case at Reynolds number Re = 10⁴ and for a cylinder that is free to oscillate in the transverse direction, possesses a low mass-damping ratio, and is at Re = 10⁴. Previously, similar studies have been performed with 3-dimensional and comparatively expensive turbulence models. In the current study, the capability and accuracy of the RANS model are validated, and the results of this model are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The 2-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV.
Development of Numerical Tools for the Investigation of Plasma Detachment from Magnetic Nozzles
NASA Technical Reports Server (NTRS)
Sankaran, Kamesh; Polzin, Kurt A.
2007-01-01
A multidimensional numerical simulation framework aimed at investigating the process of plasma detachment from a magnetic nozzle is introduced. An existing numerical code based on a magnetohydrodynamic formulation of the plasma flow equations that accounts for various dispersive and dissipative processes in plasmas was significantly enhanced to allow for the modeling of axisymmetric domains containing three-dimensional momentum and magnetic flux vectors. A separate magnetostatic solver was used to simulate the applied magnetic field topologies found in various nozzle experiments. Numerical results from a magnetic diffusion test problem in which all three components of the magnetic field were present exhibit excellent quantitative agreement with the analytical solution, and the lack of numerical instabilities due to fluctuations in the value of ∇·B indicates that the conservative MHD framework with dissipative effects is well-suited for multi-dimensional analysis of magnetic nozzles. Further studies will focus on modeling literature experiments both for the purpose of code validation and to extract physical insight regarding the mechanisms driving detachment.
Neutron Transport Models and Methods for HZETRN and Coupling to Low Energy Light Ion Transport
NASA Technical Reports Server (NTRS)
Blattnig, S.R.; Slaba, T.C.; Heinbockel, J.H.
2008-01-01
Exposure estimates inside space vehicles, surface habitats, and high altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS and FLUKA, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light ion (A<4) transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
Modeling tidal hydrodynamics of San Diego Bay, California
Wang, P.-F.; Cheng, R.T.; Richter, K.; Gross, E.S.; Sutton, D.; Gartner, J.W.
1998-01-01
In 1983, current data were collected by the National Oceanic and Atmospheric Administration using mechanical current meters. During 1992 through 1996, acoustic Doppler current profilers as well as mechanical current meters and tide gauges were used. These measurements not only document tides and tidal currents in San Diego Bay, but also provide independent data sets for model calibration and verification. A high-resolution (100-m grid), depth-averaged, numerical hydrodynamic model has been implemented for San Diego Bay to describe essential tidal hydrodynamic processes in the bay. The model is calibrated using the 1983 data set and verified using the more recent 1992-1996 data. Discrepancies between model predictions and field data in both model calibration and verification are on the order of the magnitude of uncertainties in the field data. The calibrated and verified numerical model has been used to quantify residence time and the dilution and flushing of contaminant effluent into San Diego Bay. Furthermore, the numerical model has become an important research tool in ongoing hydrodynamic and water quality studies and in guiding future field data collection programs.
Ledzewicz, Urszula; Schättler, Heinz
2017-08-10
Metronomic chemotherapy refers to the frequent administration of chemotherapy at relatively low, minimally toxic doses without prolonged treatment interruptions. Different from conventional or maximum-tolerated-dose chemotherapy, which aims at an eradication of all malignant cells, in metronomic dosing the goal often lies in the long-term management of the disease when eradication proves elusive. Mathematical modeling and subsequent analysis (theoretical as well as numerical) have become an increasingly valuable tool (in silico) both for determining conditions under which specific treatment strategies should be preferred and for numerically optimizing treatment regimens. While elaborate, computationally driven patient-specific schemes that would optimize the timing and drug dose levels are still a part of the future, such procedures may become instrumental in making chemotherapy effective in situations where it currently fails. Ideally, mathematical modeling and analysis will develop into an additional decision-making tool in the complicated process that is the determination of efficient chemotherapy regimens. In this article, we review some of the results that have been obtained about metronomic chemotherapy from mathematical models and what they imply about the structure of optimal treatment regimens. Copyright © 2017 Elsevier B.V. All rights reserved.
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
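As a generic illustration of what "solving the process for its state probabilities" means (not the SHARPE decomposition technique itself, which is designed to avoid solving the full combined model), the transient state probabilities of a small continuous-time Markov chain can be obtained from the matrix exponential of its generator. The three-state reliability model, rates and mission time below are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state reliability model: 0 = up, 1 = degraded, 2 = failed.
# Q[i, j] is the transition rate from state i to state j; each row sums to zero.
Q = np.array([[-0.020, 0.015, 0.005],
              [ 0.000, -0.010, 0.010],
              [ 0.000,  0.000, 0.000]])

p0 = np.array([1.0, 0.0, 0.0])   # system starts in the "up" state
t = 100.0                        # mission time (hours)
p_t = p0 @ expm(Q * t)           # state probabilities at time t
print("P(failed by t) =", p_t[2])
```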
NASA Astrophysics Data System (ADS)
San Juan, M.; de la Iglesia, J. M.; Martín, O.; Santos, F. J.
2009-11-01
Despite the important progress achieved in the understanding of cutting processes, the study of certain aspects is still constrained by the limitations of the experimental means: temperature gradients, friction, contact, etc. Therefore, the development of numerical models is a valid tool as a first approach to the study of those problems. In the present work, a calculation model is developed with the Abaqus Explicit code to represent the orthogonal cutting of AISI 4140 steel. A two-dimensional simulation under plane-strain conditions, considered adiabatic due to the high speed of the material flow, is chosen. The chip separation is defined by means of a fracture law that allows complex simulations of tool penetration into the workpiece. The strong influence of friction on cutting is demonstrated: even with a very good definition of the material behaviour laws, an erroneous value of the friction coefficient can notably reduce the reliability of the results. Considering the difficulty of verifying the friction models used in the simulation with the tests habitually carried out, the most effective way to characterize friction would be to combine simulation models with cutting tests.
Prediction of blood pressure and blood flow in stenosed renal arteries using CFD
NASA Astrophysics Data System (ADS)
Jhunjhunwala, Pooja; Padole, P. M.; Thombre, S. B.; Sane, Atul
2018-04-01
In the present work an attempt is made to develop a diagnostic tool for renal artery stenosis (RAS) which is inexpensive and in-vitro. To analyse the effects of increasing stenosis severity on hypertension and blood flow, haemodynamic parameters are studied by performing numerical simulations. A total of 16 stenosed models with degrees of stenosis severity varying from 0 to 97.11% are assessed numerically. Blood is modelled as a shear-thinning, non-Newtonian fluid using the Carreau model. Computational Fluid Dynamics (CFD) analysis is carried out to compute the values of flow parameters, such as the maximum velocity and maximum pressure attained by blood due to stenosis, under pulsatile flow. These values are further used to compute the increase in blood pressure and the decrease in blood flow available to the kidney. The computed available blood flow and secondary hypertension for varying extents of stenosis are mapped by a curve-fitting technique using MATLAB and a mathematical model is developed. Based on these mathematical models, a quantification tool is developed for tentative prediction of the probable availability of blood flow to the kidney and the severity of stenosis if the secondary hypertension is known.
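The Carreau shear-thinning model used to represent blood is commonly written as in the sketch below. The default parameter values shown are typical literature values for blood and are assumptions for illustration, not necessarily the values used by the authors.

```python
import numpy as np

def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau model: mu = mu_inf + (mu0 - mu_inf) * [1 + (lambda*gamma_dot)^2]^((n-1)/2).
    Defaults are commonly quoted blood parameters (viscosities in Pa.s, lambda in s)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

shear_rates = np.array([0.1, 1.0, 10.0, 100.0])  # 1/s
print(carreau_viscosity(shear_rates))            # viscosity decreases with shear rate
```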
NASA Astrophysics Data System (ADS)
Jacques, Diederik
2017-04-01
As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). Mathematical and numerical complexity, both of the tool itself and of the specific conceptual model, can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues - excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different sub-problems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of this type of model for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.
What is the philosophy of modelling soil moisture movement?
NASA Astrophysics Data System (ADS)
Chen, J.; Wu, Y.
2009-12-01
In the laboratory, soil moisture movement in different soil textures has been analysed. From field investigations at a single spot, soil moisture movement in the root zone, vadose zone and shallow aquifer has been explored. In addition, on ground slopes, the interflow in the near-surface soil layers has been studied. Along the regions near river reaches, the expansion and shrinking of the saturated area due to rainfall occurrences have been observed. From those previous explorations of soil moisture movement, numerical models to represent this hydrologic process have been developed. However, due to the high heterogeneity and stratification of soil in a basin, modelling soil moisture movement is generally rather challenging. Normally, some empirical equations or artificial manipulation are employed to adjust the soil moisture movement in various numerical models. In this study, we inspect the soil moisture movement equations used in a watershed model, SWAT (Soil and Water Assessment Tool) (Neitsch et al., 2005), to examine the limitations of our knowledge of this hydrologic process. Then, we adopt the features of a topographic-information-based hydrologic model, TOPMODEL (Beven and Kirkby, 1979), to enhance the representation of soil moisture movement in SWAT. The results of the study reveal, to some extent, the philosophy of modelling soil moisture movement in numerical models, which will be presented in the conference. Beven, K.J. and Kirkby, M.J., 1979. A physically based variable contributing area model of basin hydrology. Hydrol. Science Bulletin, 24: 43-69. Neitsch, S.L., Arnold, J.G., Kiniry, J.R., Williams, J.R. and King, K.W., 2005. Soil and Water Assessment Tool Theoretical Documentation, Grassland, Soil and Water Research Laboratory, Temple, TX.
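One of the TOPMODEL features alluded to here is its topographic wetness index, ln(a / tanβ), where a is the upslope contributing area per unit contour length and β the local slope. A minimal sketch of computing it from per-cell values is shown below; the cell sizes, areas and slopes are hypothetical, and this is not the authors' SWAT coupling itself.

```python
import numpy as np

def topographic_wetness_index(upslope_area, slope_rad, cell_width):
    """TOPMODEL topographic wetness index ln(a / tan(beta)),
    with a the upslope contributing area per unit contour length."""
    a = upslope_area / cell_width                   # m^2 / m = m
    tan_beta = np.maximum(np.tan(slope_rad), 1e-6)  # avoid division by zero on flat cells
    return np.log(a / tan_beta)

# Hypothetical cells: contributing areas (m^2) and slopes (radians), 30 m grid
areas = np.array([900.0, 4500.0, 90000.0])
slopes = np.radians([10.0, 5.0, 1.0])
print(topographic_wetness_index(areas, slopes, cell_width=30.0))
```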
Spectral-element Method for 3D Marine Controlled-source EM Modeling
NASA Astrophysics Data System (ADS)
Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.
2017-12-01
As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of spectral methods. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As high-order complete orthogonal polynomials, the GLC polynomials exhibit exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results. With increasing SEM order, the modeling accuracy improves greatly. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
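For reference, the Gauss-Lobatto-Chebyshev nodes on [-1, 1] underlying such high-order bases are simply x_j = cos(jπ/N). The short sketch below computes them; it is illustrative only, and the curl-conforming vector basis construction used in the paper is considerably more involved.

```python
import numpy as np

def gauss_lobatto_chebyshev_nodes(order):
    """Gauss-Lobatto-Chebyshev points x_j = cos(j*pi/N), j = 0..N, on [-1, 1]."""
    j = np.arange(order + 1)
    return np.cos(j * np.pi / order)

print(gauss_lobatto_chebyshev_nodes(4))  # [ 1.  0.7071  0. -0.7071 -1. ]
```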
Alisa A. Wade; Kevin S. McKelvey; Michael K. Schwartz
2015-01-01
Resistance-surface-based connectivity modeling has become a widespread tool for conservation planning. The current ease with which connectivity models can be created, however, masks the numerous untested assumptions underlying both the rules that produce the resistance surface and the algorithms used to locate low-cost paths across the target landscape. Here we present...
ERIC Educational Resources Information Center
Johnson, Michael D.; Diwakaran, Ram Prasad
2011-01-01
Computer-aided design (CAD) is a ubiquitous tool that today's students will be expected to use proficiently for numerous engineering purposes. Taking full advantage of the features available in modern CAD programs requires that models are created in a manner that allows others to easily understand how they are organized and alter them in an…
compuGUT: An in silico platform for simulating intestinal fermentation
NASA Astrophysics Data System (ADS)
Moorthy, Arun S.; Eberl, Hermann J.
The microbiota inhabiting the colon and its effect on health is a topic of significant interest. In this paper, we describe the compuGUT - a simulation tool developed to assist in exploring interactions between intestinal microbiota and their environment. The primary numerical machinery is implemented in C, and the accessory scripts for loading and visualization are prepared in bash (LINUX) and R. SUNDIALS libraries are employed for numerical integration, and googleVis API for interactive visualization. Supplementary material includes a concise description of the underlying mathematical model, and detailed characterization of numerical errors and computing times associated with implementation parameters.
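The compuGUT core is written in C and integrates its equations with SUNDIALS; as a language-agnostic illustration of the type of stiff ODE system such a tool integrates, the toy substrate/biomass fermentation model below (hypothetical Monod kinetics, parameter values and state variables, not the compuGUT equations) is solved with a stiff integrator.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy fermentation model (hypothetical): substrate S is consumed with Monod
# kinetics, producing biomass X with yield Y.
mu_max, K_s, Y = 0.5, 2.0, 0.4      # 1/h, g/L, gX/gS

def rhs(t, y):
    S, X = y
    mu = mu_max * S / (K_s + S)      # Monod growth rate
    return [-mu * X / Y, mu * X]

sol = solve_ivp(rhs, (0.0, 24.0), [10.0, 0.1], method="BDF", rtol=1e-8)
print(sol.y[:, -1])                  # substrate and biomass after 24 h
```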
Probability and Statistics in Sensor Performance Modeling
2010-12-01
The associated software program is called Environmental Awareness for Sensor and Emitter Employment (EASEE), a decision-support tool (DST); the report addresses important numerical issues in its implementation and statistical analysis for measuring sensor performance, including cumulative distribution functions (cdf) and complementary cumulative distribution functions (ccdf).
Numerical Simulation of the Fluid-Structure Interaction of a Surface Effect Ship Bow Seal
NASA Astrophysics Data System (ADS)
Bloxom, Andrew L.
Numerical simulations of fluid-structure interaction (FSI) problems were performed in an effort to verify and validate a commercially available FSI tool. This tool uses an iterative partitioned coupling scheme between CD-adapco's STAR-CCM+ finite volume fluid solver and Simulia's Abaqus finite element structural solver to simulate the FSI response of a system. Preliminary verification and validation (V&V) work was carried out to understand the numerical behavior of the codes individually and together as an FSI tool. The verification and validation work that was completed included code order verification of the respective fluid and structural solvers with Couette-Poiseuille flow and Euler-Bernoulli beam theory. These results confirmed the second-order accuracy of the spatial discretizations used. Following that, a mixture of solution verifications and model calibrations was performed with the inclusion of the physics models implemented in the solution of the FSI problems. Solution verifications were completed for fluid and structural stand-alone models as well as for the coupled FSI solutions. These results re-confirmed the spatial order of accuracy, now for more complex flows and physics models, as well as the order of accuracy of the temporal discretizations. In lieu of a good material definition, model calibration was performed to reproduce the experimental results; this was done for both instances of hyperelastic materials presented in the literature as validation cases, because those materials had been defined only as linear elastic. Calibrated, three-dimensional models of the bow seal on the University of Michigan bow seal test platform showed the ability to reproduce the experimental results qualitatively through averaging of the forces and seal displacements. These simulations represent the only current 3D results for this case. One significant result of this study is the ability to visualize the flow around the seal and to directly measure the seal resistances at varying cushion pressures, seal immersions, forward speeds, and different seal materials. SES design analysis could greatly benefit from the inclusion of flexible seals in simulations, and this work is a positive step in that direction. In future work, the inclusion of more complex seal geometries and contact will further enhance the capability of this tool.
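The order-verification step mentioned here typically uses solutions computed on three systematically refined grids. A minimal sketch of computing the observed order of accuracy from such a triplet is shown below; the drag-force values and the refinement ratio are hypothetical, and this is not the thesis' actual V&V data.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three solutions on grids refined by a
    constant ratio r: p = ln(|f_coarse - f_medium| / |f_medium - f_fine|) / ln(r)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Hypothetical drag-force values on coarse, medium and fine grids (ratio 2)
print(observed_order(1.4800, 1.4560, 1.4501, r=2.0))  # close to 2 for a 2nd-order scheme
```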
On the Latent Regression Model of Item Response Theory. Research Report. ETS RR-07-12
ERIC Educational Resources Information Center
Antal, Tamás
2007-01-01
Full account of the latent regression model for the National Assessment of Educational Progress is given. The treatment includes derivation of the EM algorithm, Newton-Raphson method, and the asymptotic standard errors. The paper also features the use of the adaptive Gauss-Hermite numerical integration method as a basic tool to evaluate…
Linear modeling of human hand-arm dynamics relevant to right-angle torque tool interaction.
Ay, Haluk; Sommerich, Carolyn M; Luscher, Anthony F
2013-10-01
A new protocol was evaluated for identification of stiffness, mass, and damping parameters employing a linear model for human hand-arm dynamics relevant to right-angle torque tool use. Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase the accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing a risk of upper extremity musculoskeletal injury. A novel testing apparatus was developed that closely mimics biomechanical exposure in torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine model parameters and validate the protocol for physical capacity assessment. A second-order hand-arm model with parameters extracted in the time domain met the model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for the frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture. The protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles tool use, and the calculation procedures demonstrate better performance of parameter extraction using time-domain system identification methods versus the frequency domain. Potential future applications include parameter identification for in situ torque tool operation and equipment development for human hand-arm dynamics simulation under impulsive forces, which could be used for assessing torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).
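A minimal sketch of the kind of second-order hand-arm model referred to here, a single mass-spring-damper driven by the handle force, and of extracting m, c, k in the time domain by least squares, is given below. The force pulse, "measured" displacement, parameter values and bounds are all hypothetical; this is not the study's protocol or numerical procedure.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(params, t, force):
    """Second-order model m*x'' + c*x' + k*x = F(t); returns displacement x(t)."""
    m, c, k = params
    F = lambda tt: np.interp(tt, t, force)
    rhs = lambda tt, y: [y[1], (F(tt) - c * y[1] - k * y[0]) / m]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=t[1] - t[0])
    return sol.y[0]

# Hypothetical impulsive handle force and synthetic "measured" displacement
t = np.linspace(0.0, 0.5, 500)
force = 200.0 * np.exp(-t / 0.05)          # N, decaying pulse
true_params = (2.0, 60.0, 12000.0)          # kg, N.s/m, N/m
measured = simulate(true_params, t, force)

fit = least_squares(lambda p: simulate(p, t, force) - measured,
                    x0=[1.0, 30.0, 8000.0], bounds=([0.1, 1.0, 100.0], np.inf))
print(fit.x)   # recovered (m, c, k), ideally close to the true values
```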
Hydroforming Of Patchwork Blanks — Numerical Modeling And Experimental Validation
NASA Astrophysics Data System (ADS)
Lamprecht, Klaus; Merklein, Marion; Geiger, Manfred
2005-08-01
In comparison to the commonly applied technology of tailored blanks, the concept of patchwork blanks offers a number of additional advantages. Potential application areas for patchwork blanks in the automotive industry are e.g. local reinforcements of automotive closures, structural reinforcements of rails and pillars as well as shock towers. But even though there is a significant application potential for patchwork blanks in automobile production, industrial realization of this innovative technique is slowed by a lack of knowledge regarding the forming behavior and the numerical modeling of patchwork blanks. Especially for the numerical simulation of hydroforming processes, where one part of the forming tool is replaced by a fluid under pressure, advanced modeling techniques are required to ensure an accurate prediction of the blanks' forming behavior. The objective of this contribution is to provide an appropriate model for the numerical simulation of patchwork blank forming processes. Therefore, different finite element modeling techniques for patchwork blanks are presented. In addition to basic shell element models, a combined finite element model consisting of shell and solid elements is defined. Special emphasis is placed on the modeling of the weld seam. For this purpose, the local mechanical properties of the weld metal, which have been determined by means of Martens hardness measurements and uniaxial tensile tests, are integrated in the finite element models. The results obtained from the numerical simulations are compared to experimental data from a hydraulic bulge test. In this context the focus is laid on laser- and spot-welded patchwork blanks.
One-Dimensional Modelling of Internal Ballistics
NASA Astrophysics Data System (ADS)
Monreal-González, G.; Otón-Martínez, R. A.; Velasco, F. J. S.; García-Cascáles, J. R.; Ramírez-Fernández, F. J.
2017-10-01
A one-dimensional model is introduced in this paper for problems of internal ballistics involving solid propellant combustion. First, the work presents the physical approach and equations adopted. Closure relationships accounting for the physical phenomena taking place during combustion (interfacial friction, interfacial heat transfer, combustion) are discussed in depth. Secondly, the numerical method proposed is presented. Finally, numerical results provided by this code (UXGun) are compared with results of experimental tests and with the outcome from a well-known zero-dimensional code. The model provides successful results in firing tests of artillery guns, predicting with good accuracy the maximum pressure in the chamber and the muzzle velocity, which highlights its capabilities as a prediction/design tool for internal ballistics.
Center for Extended Magnetohydrodynamics Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos, Jesus
This researcher participated in the DOE-funded Center for Extended Magnetohydrodynamics Modeling (CEMM), a multi-institutional collaboration led by the Princeton Plasma Physics Laboratory with Dr. Stephen Jardin as the overall Principal Investigator. This project developed advanced simulation tools to study the non-linear macroscopic dynamics of magnetically confined plasmas. The collaborative effort focused on the development of two large numerical simulation codes, M3D-C1 and NIMROD, and their application to a wide variety of problems. Dr. Ramos was responsible for theoretical aspects of the project, deriving consistent sets of model equations applicable to weakly collisional plasmas and devising test problems for verification of the numerical codes. This activity was funded for twelve years.
Thermomechanical conditions and stresses on the friction stir welding tool
NASA Astrophysics Data System (ADS)
Atthipalli, Gowtam
Friction stir welding has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process in joining of hard alloys is still developing, primarily because of the lack of cost-effective, long-lasting tools. Here I have developed numerical models to understand the thermomechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque and traverse force on the tool during FSW. The computed values of torque and traverse force are validated using the experimental results for FSW of AA7075, AA2524, AA6061 and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and the maximum grip of the tool on the plasticized workpiece material. The estimation of the optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified with experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin to determine the load-bearing ability of the tool pin. The load-bearing ability calculations are used to explain the failure of an H13 steel tool during welding of AA7075 and of a commercially pure tungsten tool during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as functions of selected input parameters. These ANNs consider tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress and bending stress are considered as the outputs of the ANN models. These output parameters are selected since they define the thermomechanical conditions around the tool during FSW. The developed ANN models are used to understand the effect of various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel. The ANN models are also used to determine the tool safety factor for a wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along streamlines during FSW. The strain and strain rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as the material velocity field, torque and peak temperature. The material velocity fields are computed by adopting an analytical method for the flow of an incompressible fluid between two discs, one rotating and the other stationary. The peak temperature is estimated from a non-dimensional correlation with a dimensionless heat input, which is computed using known welding parameters and material properties. The torque is computed using an analytical function based on the shear strength of the workpiece material. These simplified models are shown to be able to predict these output parameters successfully.
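A minimal sketch of the kind of ANN surrogate described, mapping the six process and geometry inputs to an output such as total torque, is given below using a generic multilayer perceptron. The network architecture, parameter ranges and synthetic training target are placeholders, not the thesis' data or trained models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Columns: shoulder radius, pin radius, pin length (mm),
# welding velocity (mm/s), rotational speed (rpm), axial pressure (MPa)
rng = np.random.default_rng(0)
X = rng.uniform(low=[8.0, 2.5, 3.0, 1.0, 300.0, 10.0],
                high=[14.0, 5.0, 7.0, 5.0, 1500.0, 80.0], size=(200, 6))
torque = 5.0 + 0.4 * X[:, 0] - 0.002 * X[:, 4] + 0.05 * X[:, 5]  # synthetic target (N.m)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), torque)

# Predict torque for one hypothetical parameter combination
print(model.predict(scaler.transform([[10.0, 3.0, 5.0, 2.0, 800.0, 40.0]])))
```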
NASA Astrophysics Data System (ADS)
Picot-Colbeaux, Géraldine; Devau, Nicolas; Thiéry, Dominique; Pettenati, Marie; Surdyk, Nicolas; Parmentier, Marc; Amraoui, Nadia; Crastes de Paulet, François; André, Laurent
2016-04-01
The Chalk aquifer is the main water resource for domestic water supply in many parts of northern France. In some basins, groundwater is frequently affected by quality problems concerning nitrates. Often close to or above drinking water standards, the nitrate concentration in groundwater is mainly due to historical agricultural practices, combined with leakage and aquifer recharge through the vadose zone. The complexity of the processes occurring in such an environment requires combining knowledge of agronomy, geochemistry and hydrogeology in order to understand, model and predict the spatiotemporal evolution of nitrate content and to provide a decision-support tool for water producers and stakeholders. To meet this challenge, conceptual and numerical models that accurately represent the specific features of the Chalk aquifer need to be developed. A multidisciplinary approach is developed to simulate storage and transport from the ground surface to the groundwater. It involves a new agronomic module "NITRATE" (NItrogen TRansfer for Arable soil to groundwaTEr), a soil-crop model that calculates the nitrogen mass balance in arable soil, and the "PHREEQC" numerical code for geochemical calculations, both coupled with the 3D transient groundwater numerical code "MARTHE". In addition, new developments in the MARTHE code allow the dual-porosity and dual-permeability calculations needed in the fissured Chalk aquifer context. Integrating these existing multidisciplinary tools is a real challenge: the number of parameters is reduced by selecting the relevant equations and simplifying them without altering the signal. The robustness and validity of these numerical developments are tested step by step with several simulations constrained by climate forcing, land use and nitrogen inputs over several decades. First, simulations are performed in a 1D vertical unsaturated soil column to reproduce experimental vertical nitrate soil profiles (0-30 m depth measurements in the Somme region). Second, this approach is used to simulate a drinking-water catchment area with a 3D model in order to compare calculated nitrate-content time series with those measured in the domestic water pumping well since 1995 (field site in northern France - Avre Basin region). This numerical tool will support decision-making in all activities related to water use.
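To illustrate the kind of 1D vertical soil-column calculation mentioned for the first test case, here is a minimal advection-dispersion sketch for a conservative nitrate pulse. It is not the MARTHE/NITRATE/PHREEQC coupling, and all parameter values are assumed placeholders.

```python
import numpy as np

def simulate_nitrate_profile(depth_m=30.0, nz=300, years=20.0,
                             v=1.5, D=0.5, c_top=50.0):
    """v: downward pore-water velocity [m/yr], D: dispersion [m^2/yr],
    c_top: nitrate concentration applied at the soil surface [mg/L]."""
    dz = depth_m / nz
    dt = 0.4 * min(dz / v, dz**2 / (2 * D))          # explicit stability limit
    c = np.zeros(nz)                                 # initial profile: no nitrate
    t = 0.0
    while t < years:
        c_up = np.concatenate(([c_top], c[:-1]))     # upstream (surface) value
        c_dn = np.concatenate((c[1:], [c[-1]]))      # free-drainage bottom
        advection = -v * (c - c_up) / dz             # upwind differencing
        dispersion = D * (c_dn - 2 * c + c_up) / dz**2
        c += dt * (advection + dispersion)
        t += dt
    return np.linspace(0, depth_m, nz), c

z, c = simulate_nitrate_profile()
print(f"nitrate at 15 m depth after 20 yr: {c[len(c)//2]:.1f} mg/L")
```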
Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már
2014-08-01
A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
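For context, the classical Higuchi release profile that the transient model above generalizes can be evaluated directly. The sketch below uses hypothetical parameter values, not the fitted ibuprofen/diclofenac parameters from the study.

```python
import numpy as np

def higuchi_release(t, D, A, Cs):
    """Cumulative drug released per unit area from a planar matrix,
    valid while the loading A greatly exceeds the solubility Cs."""
    return np.sqrt(D * Cs * (2.0 * A - Cs) * t)

t = np.linspace(0, 48 * 3600, 200)                     # 48 h in seconds
Q = higuchi_release(t, D=1e-11, A=50.0, Cs=2.0)        # D [m^2/s], A, Cs [kg/m^3]
print(f"released after 24 h: {higuchi_release(24*3600, 1e-11, 50.0, 2.0):.3e} kg/m^2")
```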
Numerical modeling of friction welding of bi-metal joints for electrical applications
NASA Astrophysics Data System (ADS)
Velu, P. Shenbaga; Hynes, N. Rajesh Jesudoss
2018-05-01
In the manufacturing industries, and especially in electrical engineering applications, the use of non-ferrous materials plays a vital role. Today's engineering applications rely upon significant properties such as good corrosion resistance, good mechanical properties, good heat conductivity and high electrical conductivity. The copper-aluminum bi-metal joint is one such combination that meets the requirements of electrical applications. In this work, a numerical simulation of an AA 6061 T6 alloy/copper joint was carried out under joining conditions. Using the developed model, the temperature distribution along the length of the dissimilar joint is predicted and the time-temperature profile is generated. In addition, a finite element model has been developed using the numerical simulation tool ABAQUS. The developed FEM is helpful for predicting various output parameters during friction welding of this dissimilar joint combination.
An Evaluation Tool for CONUS-Scale Estimates of Components of the Water Balance
NASA Astrophysics Data System (ADS)
Saxe, S.; Hay, L.; Farmer, W. H.; Markstrom, S. L.; Kiang, J. E.
2016-12-01
Numerous research groups are independently developing data products to represent various components of the water balance (e.g. runoff, evapotranspiration, recharge, snow water equivalent, soil moisture, and climate) at the scale of the conterminous United States. These data products are derived from a range of sources, including direct measurement, remotely sensed measurement, and statistical and deterministic model simulations. An evaluation tool is needed to compare these data products and the components of the water balance they contain in order to identify gaps in the understanding and representation of continental-scale hydrologic processes. An ideal tool would be an objective, universally agreed-upon framework to address questions related to closing the water balance. This type of generic, model-agnostic evaluation tool would facilitate collaboration amongst different hydrologic research groups and improve modeling capabilities with respect to continental-scale water resources. By adopting a comprehensive framework to consider hydrologic modeling in the context of a complete water balance, it is possible to identify weaknesses in process modeling, data product representation and regional hydrologic variation. As part of its National Water Census initiative, the U.S. Geological Survey is facilitating this dialogue by developing prototype evaluation tools.
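A minimal sketch of the water-balance closure check such a tool would perform: for each region and year, compare precipitation against the sum of the other components taken from independent data products. The arrays below are random placeholders, not actual data products.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_years = 5, 10
P  = rng.uniform(600, 1200, (n_regions, n_years))   # precipitation [mm/yr]
ET = 0.6 * P + rng.normal(0, 30, P.shape)            # evapotranspiration
Q  = 0.3 * P + rng.normal(0, 30, P.shape)            # runoff
dS = rng.normal(0, 20, P.shape)                      # storage change (snow, soil)

residual = P - ET - Q - dS                            # closure error [mm/yr]
print("mean absolute closure error per region [mm/yr]:",
      np.abs(residual).mean(axis=1).round(1))
print("worst relative non-closure:", f"{np.abs(residual / P).max():.1%}")
```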
Giving students the run of sprinting models
NASA Astrophysics Data System (ADS)
Heck, André; Ellermeijer, Ton
2009-11-01
A biomechanical study of sprinting is an interesting task for students who have a background in mechanics and calculus. These students can work with real data and do practical investigations similar to the way sports scientists do research. Student research activities are viable when the students are familiar with tools to collect and work with data from sensors and video recordings and with modeling tools for comparing simulation and experimental results. This article describes a multipurpose system, named COACH, that offers a versatile integrated set of tools for learning, doing, and teaching mathematics and science in a computer-based inquiry approach. Automated tracking of reference points and correction of perspective distortion in videos, state-of-the-art algorithms for data smoothing and numerical differentiation, and graphical system dynamics based modeling are some of the built-in techniques that are suitable for motion analysis. Their implementation and their application in student activities involving models of running are discussed.
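A sketch of the kind of analysis students perform on digitized sprint video: smooth the position-time points and differentiate numerically to obtain velocity and acceleration. This is not the algorithm implemented inside COACH; the 100 m sprint data below are synthetic, with hypothetical parameter values.

```python
import numpy as np
from scipy.signal import savgol_filter

fps = 25.0                                   # video frame rate
t = np.arange(0, 10.0, 1.0 / fps)
v_max, tau = 11.5, 1.3                       # hypothetical sprinter parameters
x_true = v_max * (t - tau * (1 - np.exp(-t / tau)))       # exponential speed build-up
x_meas = x_true + np.random.default_rng(2).normal(0, 0.05, t.size)  # tracking noise

x_smooth = savgol_filter(x_meas, window_length=15, polyorder=3)     # data smoothing
v = np.gradient(x_smooth, t)                 # central-difference velocity
a = np.gradient(v, t)                        # acceleration

print(f"estimated top speed: {v.max():.2f} m/s, peak acceleration: {a.max():.2f} m/s^2")
```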
SuperLFV: An SLHA tool for lepton flavor violating observables in supersymmetric models
NASA Astrophysics Data System (ADS)
Murakami, Brandon
2014-02-01
We introduce SuperLFV, a numerical tool for calculating low-energy observables that exhibit charged lepton flavor violation (LFV) in the context of the minimal supersymmetric standard model (MSSM). As the Large Hadron Collider and MEG, a dedicated μ+→e+γ experiment, are presently acquiring data, there is need for tools that provide rapid discrimination of models that exhibit LFV. SuperLFV accepts a spectrum file compliant with the SUSY Les Houches Accord (SLHA), containing the MSSM couplings and masses with complex phases at the supersymmetry breaking scale. In this manner, SuperLFV is compatible with but divorced from existing SLHA spectrum calculators that provide the low energy spectrum. Hence, input spectra are not confined to the LFV sources provided by established SLHA spectrum calculators. Input spectra may be generated by personal code or by hand, allowing for arbitrary models not supported by existing spectrum calculators.
NASA Astrophysics Data System (ADS)
Robins, N. S.; Rutter, H. K.; Dumpleton, S.; Peach, D. W.
2005-01-01
Groundwater investigation has long depended on the process of developing a conceptual flow model as a precursor to developing a mathematical model, which in turn may lead in complex aquifers to the development of a numerical approximation model. The assumptions made in the development of the conceptual model depend heavily on the geological framework defining the aquifer, and if the conceptual model is inappropriate then subsequent modelling will also be incorrect. Paradoxically, the development of a robust conceptual model remains difficult, not least because this 3D paradigm is usually reduced to 2D plans and sections. 3D visualisation software is now available to facilitate the development of the conceptual model, to make the model more robust and defensible and to assist in demonstrating the hydraulics of the aquifer system. Case studies are presented to demonstrate the role and cost-effectiveness of the visualisation process.
NASA Astrophysics Data System (ADS)
Nobis, M.; Stücke, P.; Schmidt, M.; Riedel, M.
2013-04-01
Laser-optical investigation of the flow inside the lubricating gap of a journal bearing model is one important task in a larger overall project. The long-term objective is the development of an easy-to-use calculation tool that delivers information about the causes and consequences of cavitation processes in hydrodynamically lubricated journal bearings. This will make it possible to identify advantageous and disadvantageous geometrical shapes of the bushings, and such a calculation tool can ultimately provide important insights for the construction and design of future journal bearings. Current design programs are based on a two-dimensional approach to the lubricating gap: the first dimension is the breadth of the bearing and the second is its circumferential direction. The third dimension, the extent of the gap in the radial direction, is neglected, and instead of an exact resolution of the flow pattern inside the gap, turbulence models are used. Past numerical and experimental studies have shown that clearly organized and predominantly laminar flow structures occur inside the lubricating gap. Thus, for a detailed analysis of the causes and effects of cavitation bubbles, a three-dimensional resolution of the lubricating gap is indispensable. In addition to the qualitative evaluation of the flow with visualization experiments, it is possible to perform angle-resolved velocity measurements inside the gap with a triggered Laser-Doppler Velocimeter (LDV). The results of these measurements are used to validate three-dimensional CFD flow simulations and to optimize the numerical mesh structure and boundary conditions. This paper presents the experimental setup of the bearing model, some exemplary results of the visualization experiments and LDV measurements, as well as a comparison between experimental and numerical results.
Numerical Simulation of Transient Liquid Phase Bonding under Temperature Gradient
NASA Astrophysics Data System (ADS)
Ghobadi Bigvand, Arian
Transient Liquid Phase bonding under Temperature Gradient (TG-TLP bonding) is a relatively new process of TLP diffusion bonding family for joining difficult-to-weld aerospace materials. Earlier studies have suggested that in contrast to the conventional TLP bonding process, liquid state diffusion drives joint solidification in TG-TLP bonding process. In the present work, a mass conservative numerical model that considers asymmetry in joint solidification is developed using finite element method to properly study the TG-TLP bonding process. The numerical results, which are experimentally verified, show that unlike what has been previously reported, solid state diffusion plays a major role in controlling the solidification behavior during TG-TLP bonding process. The newly developed model provides a vital tool for further elucidation of the TG-TLP bonding process.
Numerical Modeling in Geodynamics: Success, Failure and Perspective
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.
2005-12-01
A real success in numerical modeling of the dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. Success in numerical modeling is based on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and then move to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true from erroneous solutions to the geodynamic problem, especially when your problem is complex enough. (iii) Test your model against analytical and asymptotic solutions and simple 2D and 3D model examples. Develop benchmark analyses of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code; therefore testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you may solve an improperly-posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as few tuning model variables as possible. Already two tuning variables give enough possibilities to constrain your model well with respect to observations. Data fitting is sometimes quite attractive and can take you far from the principal aim of your numerical modeling: to understand geophysical phenomena. (vi) If the number of tuning model variables is greater than two, test carefully the effect of each of the variables on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never make great accuracy an aim in itself: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should a numerical model be? A model which images every detail of reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real Earth dynamics, but we should try to model the dynamics in such a way as to simulate the basic geophysical processes and phenomena. Does a particular model have predictive power? Each numerical model has predictive power, otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in the belief that they describe the dynamic processes of the Earth. Hence a numerical model predicts the dynamics of the Earth only as well as the mathematical equations describe this dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics.
Inverse modeling makes it possible to test geodynamic models forward in time using initial conditions restored from present-day observations instead of unknown initial conditions.
FY15 Report on Thermomechanical Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Francis D.; Buchholz, Stuart
2015-08-01
Sandia is participating in the third phase of a United States (US)-German Joint Project that compares constitutive models and simulation procedures on the basis of model calculations of the thermomechanical behavior and healing of rock salt (Salzer et al. 2015). The first goal of the project is to evaluate the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Among the numerical modeling tools required to address this are constitutive models that are used in computer simulations for the description of the thermal, mechanical, and hydraulic behavior of the host rock under various influences and for the long-term prediction of this behavior. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure disposal of radioactive wastes in rock salt. Results of the Joint Project may ultimately be used to make various assertions regarding stability analysis of an underground repository in salt during the operating phase as well as long-term integrity of the geological barrier in the post-operating phase. A primary evaluation of constitutive model capabilities comes by way of predicting large-scale field tests. The Joint Project partners decided to model Waste Isolation Pilot Plant (WIPP) Rooms B and D, which are full-scale rooms having the same dimensions. Room D deformed under natural, ambient conditions while Room B was thermally driven by an array of waste-simulating heaters (Munson et al. 1988; 1990). Existing laboratory test data for WIPP salt were carefully scrutinized and the partners decided that additional testing would be needed to help evaluate advanced features of the constitutive models. The German partners performed over 140 laboratory tests on WIPP salt at no charge to the US Department of Energy (DOE).
The Oceanographic Multipurpose Software Environment (OMUSE v1.0)
NASA Astrophysics Data System (ADS)
Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk
2017-08-01
In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.
Uncertainty Modeling for Robustness Analysis of Control Upset Prevention and Recovery Systems
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Khong, Thuan H.; Shin, Jong-Yeob; Kwatny, Harry; Chang, Bor-Chin; Balas, Gary J.
2005-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems (developed for failure detection, identification, and reconfiguration, as well as upset recovery) need to be evaluated over broad regions of the flight envelope and under extreme flight conditions, and should include various sources of uncertainty. However, formulation of linear fractional transformation (LFT) models for representing system uncertainty can be very difficult for complex parameter-dependent systems. This paper describes a preliminary LFT modeling software tool which uses a matrix-based computational approach that can be directly applied to parametric uncertainty problems involving multivariate matrix polynomial dependencies. Several examples are presented (including an F-16 at an extreme flight condition, a missile model, and a generic example with numerous crossproduct terms), and comparisons are given with other LFT modeling tools that are currently available. The LFT modeling method and preliminary software tool presented in this paper are shown to compare favorably with these methods.
Free and Open Source GIS Tools: Role and Relevance in the Environmental Assessment Community
The presence of an explicit geographical context in most environmental decisions can complicate assessment and selection of management options. These decisions typically involve numerous data sources, complex environmental and ecological processes and their associated models, ris...
Improving the physiological realism of experimental models
Vinnakota, Kalyan C.; Cha, Chae Y.; Rorsman, Patrik; Balaban, Robert S.; La Gerche, Andre; Wade-Martins, Richard; Beard, Daniel A.
2016-01-01
The Virtual Physiological Human (VPH) project aims to develop integrative, explanatory and predictive computational models (C-Models) as numerical investigational tools to study disease, identify and design effective therapies and provide an in silico platform for drug screening. Ultimately, these models rely on the analysis and integration of experimental data. As such, the success of VPH depends on the availability of physiologically realistic experimental models (E-Models) of human organ function that can be parametrized to test the numerical models. Here, the current state of suitable E-models, ranging from in vitro non-human cell organelles to in vivo human organ systems, is discussed. Specifically, challenges and recent progress in improving the physiological realism of E-models that may benefit the VPH project are highlighted and discussed using examples from the field of research on cardiovascular disease, musculoskeletal disorders, diabetes and Parkinson's disease. PMID:27051507
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garnier, Ch.; Mailhe, P.; Sontheimer, F.
2007-07-01
Fuel performance is a key factor for minimizing operating costs in nuclear plants. One of the important aspects of fuel performance is fuel rod design, based upon reliable tools able to verify the safety of current fuel solutions, prevent potential issues in new core managements and guide the invention of tomorrow's fuels. AREVA is developing its future global fuel rod code COPERNIC3, which is able to calculate the thermal-mechanical behavior of advanced fuel rods in nuclear plants. Some of the best practices to achieve this goal are described, by reviewing the three pillars of a fuel rod code: the database, the modelling and the computer and numerical aspects. At first, the COPERNIC3 database content is described, accompanied by the tools developed to effectively exploit the data. Then is given an overview of the main modelling aspects, by emphasizing the thermal, fission gas release and mechanical sub-models. In the last part, numerical solutions are detailed in order to increase the computational performance of the code, with a presentation of software configuration management solutions. (authors)
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by larger grain/micrite ratio, results show that SEM scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular- porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.
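A minimal sketch of the first step of such a workflow: segment a grey-scale SEM image into pore and mineral phases by thresholding and estimate porosity from the pore fraction. The image here is a synthetic placeholder, and this single-threshold approach stands in for the three segmentation techniques compared in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.normal(0.65, 0.15, size=(512, 512))              # placeholder SEM image
image[100:180, 200:300] = rng.normal(0.2, 0.05, (80, 100))   # a synthetic "pore" patch

threshold = 0.4                        # grey value separating pore from grain
pore_mask = image < threshold
porosity = pore_mask.mean()
print(f"segmented porosity: {porosity:.3f}")

# The binary mask can then be mapped to elastic properties per cell
# (pore = fluid, grain = calcite/micrite) to build the wave-simulation model.
vp_grid = np.where(pore_mask, 1500.0, 6000.0)                # hypothetical P-wave speeds [m/s]
```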
Eiben, Bjoern; Hipwell, John H.; Williams, Norman R.; Keshtgar, Mo; Hawkes, David J.
2016-01-01
Surgical treatment for early-stage breast carcinoma primarily necessitates breast conserving therapy (BCT), where the tumour is removed while preserving the breast shape. To date, there have been very few attempts to develop accurate and efficient computational tools that could be used in the clinical environment for pre-operative planning and oncoplastic breast surgery assessment. Moreover, from the breast cancer research perspective, there has been very little effort to model complex mechano-biological processes involved in wound healing. We address this by providing an integrated numerical framework that can simulate the therapeutic effects of BCT over the extended period of treatment and recovery. A validated, three-dimensional, multiscale finite element procedure that simulates breast tissue deformations and physiological wound healing is presented. In the proposed methodology, a partitioned, continuum-based mathematical model for tissue recovery and angiogenesis, and breast tissue deformation is considered. The effectiveness and accuracy of the proposed numerical scheme is illustrated through patient-specific representative examples. Wound repair and contraction numerical analyses of real MRI-derived breast geometries are investigated, and the final predictions of the breast shape are validated against post-operative follow-up optical surface scans from four patients. Mean (standard deviation) breast surface distance errors in millimetres of 3.1 (±3.1), 3.2 (±2.4), 2.8 (±2.7) and 4.1 (±3.3) were obtained, demonstrating the ability of the surgical simulation tool to predict, pre-operatively, the outcome of BCT to clinically useful accuracy. PMID:27466815
Reiter, Michael A; Saintil, Max; Yang, Ziming; Pokrajac, Dragoljub
2009-08-01
Conceptual modeling is a useful tool for identifying pathways between drivers, stressors, Valued Ecosystem Components (VECs), and services that are central to understanding how an ecosystem operates. The St. Jones River watershed, DE is a complex ecosystem, and because management decisions must include ecological, social, political, and economic considerations, a conceptual model is a good tool for accommodating the full range of inputs. In 2002, a Four-Component, Level 1 conceptual model was formed for the key habitats of the St. Jones River watershed, but since the habitat level of resolution is too fine for some important watershed-scale issues we developed a functional watershed-scale model using the existing narrowed habitat-scale models. The narrowed habitat-scale conceptual models and associated matrices developed by Reiter et al. (2006) were combined with data from the 2002 land use/land cover (LULC) GIS-based maps of Kent County in Delaware to assemble a diagrammatic and numerical watershed-scale conceptual model incorporating the calculated weight of each habitat within the watershed. The numerical component of the assembled watershed model was subsequently subjected to the same Monte Carlo narrowing methodology used for the habitat versions to refine the diagrammatic component of the watershed-scale model. The narrowed numerical representation of the model was used to generate forecasts for changes in the parameters "Agriculture" and "Forest", showing that land use changes in these habitats propagated through the results of the model by the weighting factor. Also, the narrowed watershed-scale conceptual model identified some key parameters upon which to focus research attention and management decisions at the watershed scale. The forecast and simulation results seemed to indicate that the watershed-scale conceptual model does lead to different conclusions than the habitat-scale conceptual models for some issues at the larger watershed scale.
Rathnayaka, C M; Karunasena, H C P; Senadeera, W; Gu, Y T
2018-03-14
Numerical modelling has gained popularity in many science and engineering streams due to its economic feasibility and advanced analytical features compared to conventional experimental and theoretical models. Food drying is one of the areas where numerical modelling is increasingly applied to improve drying process performance and product quality. This investigation applies a three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) and Coarse-Grained (CG) numerical approach to predict the morphological changes of different categories of food-plant cells such as apple, grape, potato and carrot during drying. To validate the model predictions, experimental findings from in-house experimental procedures (for apple) and literature sources (for grape, potato and carrot) have been utilised. The subsequent comparison indicates that the model predictions demonstrate a reasonable agreement with the experimental findings, both qualitatively and quantitatively. In this numerical model, high computational accuracy has been maintained by limiting the consistency error to below 1% for all four cell types. The proposed meshfree-based approach is well equipped to predict the morphological changes of plant cellular structure over a wide range of moisture contents (10% to 100% dry basis). Compared to the previous 2-D meshfree-based models developed for plant cell drying, the proposed model can draw more useful insights on the morphological behaviour due to its 3-D nature. In addition, the proposed computational modelling approach has a high potential to be used as a comprehensive tool in many other tissue-morphology-related investigations.
Direct modeling for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Xu, Kun
2015-06-01
All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct construction of discrete numerical evolution equations, where the mesh size and time step will play dynamic roles in the modeling process. With the variation of the ratio between mesh size and local particle mean free path, the scheme will capture flow physics from the kinetic particle transport and collision to the hydrodynamic wave propagation. Based on the direct modeling, a continuous dynamics of flow motion will be captured in the unified gas-kinetic scheme. This scheme can be faithfully used to study the unexplored non-equilibrium flow physics in the transition regime.
NASA Astrophysics Data System (ADS)
Junker, Philipp; Hackl, Klaus
2016-09-01
Numerical simulations are a powerful tool to analyze the complex thermo-mechanically coupled material behavior of shape memory alloys during product engineering. The benefit of the simulations strongly depends on the quality of the underlying material model. In this contribution, we discuss a variational approach which is based solely on energetic considerations and demonstrate that unique calibration of such a model is sufficient to predict the material behavior at varying ambient temperature. In the beginning, we recall the necessary equations of the material model and explain the fundamental idea. Afterwards, we focus on the numerical implementation and provide all information that is needed for programing. Then, we show two different ways to calibrate the model and discuss the results. Furthermore, we show how this model is used during real-life industrial product engineering.
NASA Astrophysics Data System (ADS)
Wichmann, Volker
2017-09-01
The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.
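An illustrative sketch of one ingredient of GPP-type run-out models: follow the path of steepest descent on a DTM from an initiation cell and stop deposition where the geometric gradient to the start cell drops below an angle-of-reach limit. This is not the SAGA implementation; the DTM and parameter values below are placeholders.

```python
import numpy as np

def runout_path(dtm, start, reach_angle_deg=35.0, cellsize=10.0):
    tan_limit = np.tan(np.radians(reach_angle_deg))
    path = [start]
    r, c = start
    while True:
        window = dtm[max(r-1, 0):r+2, max(c-1, 0):c+2]       # 3x3 neighbourhood
        dr, dc = np.unravel_index(np.argmin(window), window.shape)
        nr, nc = max(r-1, 0) + dr, max(c-1, 0) + dc
        if dtm[nr, nc] >= dtm[r, c]:                          # local sink or flat: stop
            break
        r, c = nr, nc
        path.append((r, c))
        dist = cellsize * np.hypot(r - start[0], c - start[1])
        drop = dtm[start] - dtm[r, c]
        if dist > 0 and drop / dist < tan_limit:              # below angle of reach: deposit
            break
    return path

x = np.arange(100) * 10.0                                     # 1 km profile, 10 m cells
dtm = 2000.0 - 0.6 * x[None, :] + np.random.default_rng(4).normal(0, 0.5, (100, 100))
print("run-out length [cells]:", len(runout_path(dtm, (50, 0), reach_angle_deg=20.0)))
```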
Micromagnetic computer simulations of spin waves in nanometre-scale patterned magnetic elements
NASA Astrophysics Data System (ADS)
Kim, Sang-Koog
2010-07-01
Current needs for further advances in the nanotechnologies of information-storage and -processing devices have attracted a great deal of interest in spin (magnetization) dynamics in nanometre-scale patterned magnetic elements. For instance, the unique dynamic characteristics of non-uniform magnetic microstructures such as various types of domain walls, magnetic vortices and antivortices, as well as spin wave dynamics in laterally restricted thin-film geometries, have been at the centre of extensive and intensive researches. Understanding the fundamentals of their unique spin structure as well as their robust and novel dynamic properties allows us to implement new functionalities into existing or future devices. Although experimental tools and theoretical approaches are effective means of understanding the fundamentals of spin dynamics and of gaining new insights into them, the limitations of those same tools and approaches have left gaps of unresolved questions in the pertinent physics. As an alternative, however, micromagnetic modelling and numerical simulation has recently emerged as a powerful tool for the study of a variety of phenomena related to spin dynamics of nanometre-scale magnetic elements. In this review paper, I summarize the recent results of simulations of the excitation and propagation and other novel wave characteristics of spin waves, highlighting how the micromagnetic computer simulation approach contributes to an understanding of spin dynamics of nanomagnetism and considering some of the merits of numerical simulation studies. Many examples of micromagnetic modelling for numerical calculations, employing various dimensions and shapes of patterned magnetic elements, are given. The current limitations of continuum micromagnetic modelling and of simulations based on the Landau-Lifshitz-Gilbert equation of motion of magnetization are also discussed, along with further research directions for spin-wave studies.
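A minimal macrospin sketch of the Landau-Lifshitz-Gilbert (LLG) equation that underlies such micromagnetic simulations (single spin, no exchange or demagnetizing field; parameter values are illustrative only, and full simulations solve this per cell of a discretized element).

```python
import numpy as np

gamma = 1.76e11                      # gyromagnetic ratio [rad s^-1 T^-1]
alpha = 0.02                         # Gilbert damping
H = np.array([0.0, 0.0, 0.1])        # applied field [T] along z
m = np.array([1.0, 0.0, 0.0])        # initial magnetization direction (unit vector)

dt, steps = 1e-14, 200000
pref = gamma / (1.0 + alpha**2)
for _ in range(steps):
    mxH = np.cross(m, H)
    dmdt = -pref * (mxH + alpha * np.cross(m, mxH))   # LLG in explicit form
    m = m + dt * dmdt
    m /= np.linalg.norm(m)                            # renormalize |m| = 1
print("magnetization direction after 2 ns:", m.round(3))
```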
Itzï (version 17.1): an open-source, distributed GIS model for dynamic flood simulation
NASA Astrophysics Data System (ADS)
Guillaume Courty, Laurent; Pedrozo-Acuña, Adrián; Bates, Paul David
2017-05-01
Worldwide, floods are acknowledged as one of the most destructive hazards. In human-dominated environments, their negative impacts are ascribed not only to the increase in frequency and intensity of floods but also to a strong feedback between the hydrological cycle and anthropogenic development. In order to advance a more comprehensive understanding of this complex interaction, this paper presents the development of a new open-source tool named Itzï that enables the 2-D numerical modelling of rainfall-runoff processes and surface flows integrated with the open-source geographic information system (GIS) software known as GRASS. It therefore takes advantage of the ability of GIS environments to handle datasets with variations in both temporal and spatial resolutions. Furthermore, the presented numerical tool can handle datasets from different sources with varied spatial resolutions, facilitating the preparation and management of input and forcing data. This ability reduces the preprocessing time usually required by other models. Itzï uses a simplified form of the shallow water equations, the damped partial inertia equation, for the resolution of surface flows, and the Green-Ampt model for infiltration. The source code is publicly available online, along with complete documentation. The numerical model is verified against three different test cases: firstly, a comparison with an analytic solution of the shallow water equations is introduced; secondly, a hypothetical flooding event in an urban area is implemented, where results are compared to those from an established model using a similar approach; and lastly, the reproduction of a real inundation event that occurred in the city of Kingston upon Hull, UK, in June 2007, is presented. The numerical approach proved its ability to reproduce the analytic and synthetic test cases. Moreover, simulation results of the real flood event showed its suitability for identifying areas affected by flooding, which were verified against those recorded after the event by local authorities.
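A minimal sketch of the Green-Ampt infiltration concept that Itzï uses for losses (this is not Itzï's code; K, suction head and available porosity below are hypothetical placeholders).

```python
import numpy as np

def green_ampt_rate(F, K=10.0, psi=110.0, dtheta=0.3):
    """Potential infiltration rate [mm/h] for cumulative infiltration F [mm].
    K: saturated conductivity, psi: wetting-front suction head, dtheta: available porosity."""
    return K * (1.0 + psi * dtheta / max(F, 1e-6))

dt = 1.0 / 60.0                      # 1-minute steps [h]
rain = np.full(180, 30.0)            # 3 h of 30 mm/h rainfall (placeholder)
F, runoff = 0.0, 0.0
for r in rain:
    f = min(green_ampt_rate(F), r)   # infiltration limited by rainfall supply
    F += f * dt
    runoff += (r - f) * dt
print(f"infiltrated: {F:.1f} mm, surface runoff: {runoff:.1f} mm")
```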
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael S. Bruno
This report summarizes the research efforts on the DOE-supported research project Percussion Drilling (DE-FC26-03NT41999), which aims to significantly advance the fundamental understanding of the physical mechanisms involved in combined percussion and rotary drilling, and thereby facilitate more efficient and lower-cost drilling and exploration of hard-rock reservoirs. The project has been divided into multiple tasks: literature reviews, analytical and numerical modeling, full-scale laboratory testing and model validation, and final report delivery. The literature reviews document the history, pros and cons, and rock-failure physics of percussion drilling in the oil and gas industries. Based on the current understanding, a conceptual drilling model is proposed for the modeling efforts. Both analytical and numerical approaches are deployed to investigate drilling processes such as drillbit penetration with compression, rotation and percussion; rock response with stress propagation, damage accumulation and failure; and debris transportation inside the annulus after disintegration from the rock. For rock mechanics modeling, a dynamic numerical tool has been developed to describe rock damage and failure, including rock crushing by compressive bit load, rock fracturing by both shearing and tensile forces, and rock weakening by repetitive compression-tension loading. Besides multiple failure criteria, the tool also includes a damping algorithm to dissipate oscillation energy and a fatigue/damage algorithm to update rock properties during each impact. From the model, the Rate of Penetration (ROP) and rock failure history can be estimated. For cuttings transport in the annulus, a 3D numerical particle-flow model has been developed with the aid of analytical approaches. The tool can simulate cuttings movement at the particle scale under laminar or turbulent fluid flow conditions and evaluate the efficiency of cuttings removal. To calibrate the modeling efforts, a series of full-scale fluid hammer drilling tests, as well as single impact tests, have been designed and executed. Both Berea sandstone and Mancos shale samples are used. In the single impact tests, three impacts are sequentially loaded at the same rock location to investigate rock response to repetitive loadings. The crater depth and width are measured, as well as the displacement and force in the rod and the force in the rock. Various pressure differences across the rock-indentor interface (i.e. bore pressure minus pore pressure) are used to investigate the pressure effect on rock penetration. For the hammer drilling tests, an industrial fluid hammer is used to drill under both underbalanced and overbalanced conditions. Besides calibrating the modeling tool, the data and cuttings collected from the tests indicate several other important applications. For example, different rock penetrations during single impact tests may reveal why a fluid hammer behaves differently with diverse rock types and under various pressure conditions at the hole bottom. On the other hand, the shape of the cuttings from fluid hammer tests, compared to those from traditional rotary drilling methods, may help to identify the dominant failure mechanism that percussion drilling relies on. If so, encouraging such a failure mechanism may improve hammer performance. The project is summarized in this report.
Instead of compiling the information contained in the previous quarterly or other technical reports, this report focuses on the descriptions of tasks, findings, and conclusions, as well as the efforts on promoting percussion drilling technologies to industry, including site visits, presentations, and publications. As a part of the final deliverables, the 3D numerical model for rock mechanics is also attached.
Numerical models to evaluate the temperature increase induced by ex vivo microwave thermal ablation.
Cavagnaro, M; Pinto, R; Lopresto, V
2015-04-21
Microwave thermal ablation (MTA) therapies exploit the local absorption of an electromagnetic field at microwave (MW) frequencies to destroy unhealthy tissue, by way of a very high temperature increase (about 60 °C or higher). To develop reliable interventional protocols, numerical tools able to correctly foresee the temperature increase obtained in the tissue would be very useful. In this work, different numerical models of the dielectric and thermal property changes with temperature were investigated, looking at the simulated temperature increments and at the size of the achievable zone of ablation. To assess the numerical data, measurement of the temperature increases close to a MTA antenna were performed in correspondence with the antenna feed-point and the antenna cooling system, for increasing values of the radiated power. Results show that models not including the changes of the dielectric and thermal properties can be used only for very low values of the power radiated by the antenna, whereas a good agreement with the experimental values can be obtained up to 20 W if water vaporization is included in the numerical model. Finally, for higher power values, a simulation that dynamically includes the tissue's dielectric and thermal property changes with the temperature should be performed.
NASA Technical Reports Server (NTRS)
Masiulaniec, Konstanty C.
1988-01-01
The ability to predict the time-temperature history of electrothermal de-icer pads is important in the subsequent design of improved and more efficient versions. These de-icer pads are installed near the surface of aircraft components for the specific purpose of removing accreted ice. The proposed numerical model can incorporate the full 2-D geometry of a section through a region (i.e., a section of an airfoil), which current 1-D numerical codes are unable to do. Thus, the effects of irregular layers, curvature, etc., can now be accounted for in the thermal transients. Each layer in the actual geometry is mapped via a body-fitted coordinate transformation into a uniform, rectangular computational grid. The relevant heat transfer equations are transformed and discretized. To model the phase change that might occur in any accreted ice, the phase change equations of an enthalpy formulation are likewise transformed and discretized. The code developed was tested against numerous classical numerical solutions, as well as against experimental de-icing data on a UH1H rotor blade obtained from the NASA Lewis Research Center. The excellent comparisons obtained show that this code can be a useful tool in predicting the performance of current de-icer designs, as well as in the design of future ones.
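A minimal 1-D sketch of the enthalpy formulation for a melting ice layer, of the kind referred to above (not the actual de-icer code; material properties, heater flux and the use of the ice heat capacity for melted cells are simplifying placeholder assumptions).

```python
import numpy as np

rho, c_ice, L = 917.0, 2100.0, 3.34e5     # density, heat capacity, latent heat of ice
k = 2.2                                   # thermal conductivity [W/m K]
nx, dx, dt = 50, 1e-4, 1e-3               # 5 mm ice layer, explicit time step [s]
T = np.full(nx, -10.0)                    # initial temperature [deg C]
H = rho * c_ice * T                       # enthalpy per volume (H = 0 at 0 deg C, solid)

def temperature(H):
    T = np.where(H < 0, H / (rho * c_ice), 0.0)                       # solid / melting plateau
    return np.where(H > rho * L, (H - rho * L) / (rho * c_ice), T)    # fully melted

q_heater = 2.0e4                          # heater flux at the lower face [W/m^2]
for _ in range(20000):                    # 20 s of heating
    T = temperature(H)
    flux = -k * np.diff(T) / dx                          # interior conductive fluxes
    flux = np.concatenate(([q_heater], flux, [0.0]))     # heated base, insulated top
    H -= dt * np.diff(flux) / dx                         # enthalpy update
print("molten fraction of first cell:", float(np.clip(H[0] / (rho * L), 0, 1)))
```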
NASA Astrophysics Data System (ADS)
Bermudez, A.; Rivas, D.
2015-12-01
Phytoplankton bloom dynamics depends on the interaction of favorable physical, chemical, and biotic conditions, particularly on the available nutrients that enhance phytoplankton growth, such as nitrogen. Coastal and estuarine environments are heavily influenced by exogenous sources of nitrogen; the anthropogenic inputs include urban and rural wastewater and inputs from agricultural activities (i.e., fertilizers and animal waste). In response, new production is often enhanced, leading to eutrophication and phytoplankton blooms, including harmful taxa. These events have become more frequent, and with them the interest in evaluating their effects on marine ecosystems and their impact on human health. In the Gulf of California, harmful algal blooms (HABs) have affected aquaculture, fisheries, and even tourism, so it is important to generate information about the biological and physical factors that can influence their appearance. A numerical model is a tool that may provide key information about the origin and distribution of phytoplankton blooms. Herein the analysis is based on a three-dimensional hydrodynamical numerical model coupled to a Nitrogen-Phytoplankton-Zooplankton-Detritus (NPZD) model. Several numerical simulations using different forcings and scenarios are carried out in order to evaluate the processes that influence phytoplankton growth. These numerical results are compared to available observations. Thus, the main environmental factors triggering the generation of HABs can be identified.
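A sketch of a zero-dimensional NPZD box model of the kind coupled to such a hydrodynamic model. The functional forms and parameter values are generic textbook choices, not those of the actual Gulf of California configuration.

```python
import numpy as np

def npzd_rhs(state, mu=1.0, kN=0.5, g=0.4, kP=0.6,
             mP=0.05, mZ=0.05, remin=0.1, assim=0.7):
    N, P, Z, D = state
    uptake  = mu * N / (kN + N) * P             # nutrient-limited phytoplankton growth
    grazing = g * P / (kP + P) * Z              # zooplankton grazing on P
    dN = -uptake + remin * D
    dP =  uptake - grazing - mP * P
    dZ =  assim * grazing - mZ * Z
    dD = (1 - assim) * grazing + mP * P + mZ * Z - remin * D
    return np.array([dN, dP, dZ, dD])           # total nitrogen is conserved

state = np.array([8.0, 0.1, 0.05, 0.0])          # initial N, P, Z, D [mmol N m^-3]
dt, days = 0.05, 60
for _ in range(int(days / dt)):                  # simple forward-Euler integration
    state = state + dt * npzd_rhs(state)
print("N, P, Z, D after 60 days:", state.round(3))
```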
User's manual for LINEAR, a FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.
1987-01-01
This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
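The central idea, numerical linearization of a user-supplied nonlinear state function about a trim point, can be sketched in a few lines. The dynamics below are a toy pendulum rather than an aircraft model, and the simple central-difference scheme is only illustrative of what a tool like LINEAR automates.

```python
import numpy as np

def f(x, u):
    """Toy nonlinear dynamics: damped pendulum with torque input."""
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.1 * omega + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):                       # central differences w.r.t. states
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):                       # central differences w.r.t. controls
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, x0=np.array([0.0, 0.0]), u0=np.array([0.0]))
print("A =\n", A.round(3), "\nB =\n", B.round(3))
```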
Influences of system uncertainties on the numerical transfer path analysis of engine systems
NASA Astrophysics Data System (ADS)
Acri, A.; Nijman, E.; Acri, A.; Offner, G.
2017-10-01
Practical mechanical systems operate with some degree of uncertainty. In numerical models uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs or from rapidly changing forcing that can be best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular in this paper, Wishart random matrix theory is applied on a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool largely implemented during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and the assessment of the different engine vibration sources. Examples of the effects of different levels of uncertainties are illustrated by means of examples using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and statistical distribution of results. The derived statistical information can be used to advance the knowledge of the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
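A toy sketch of the Wishart idea described above: generate random realizations of a positive-definite system matrix whose mean equals the nominal matrix, then examine the spread of the resulting natural frequencies. The 3-degree-of-freedom system and the dispersion parameter are placeholders, not the powertrain model.

```python
import numpy as np
from scipy.stats import wishart

K0 = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  2.0]]) * 1e6      # nominal stiffness [N/m]
M = np.diag([1.0, 1.0, 1.0])                   # mass matrix [kg]

df = 30                                         # higher df -> lower dispersion
rng = np.random.default_rng(5)
freqs = []
for _ in range(500):
    K = wishart(df=df, scale=K0 / df).rvs(random_state=rng)   # E[K] = K0
    w2 = np.linalg.eigvalsh(np.linalg.solve(M, K))            # squared circular frequencies
    freqs.append(np.sqrt(np.sort(w2)) / (2 * np.pi))
freqs = np.array(freqs)
print("mean natural frequencies [Hz]:", freqs.mean(axis=0).round(1))
print("coefficient of variation:", (freqs.std(axis=0) / freqs.mean(axis=0)).round(3))
```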
NASA Astrophysics Data System (ADS)
Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil
2018-03-01
The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.
Walters, William J; Christensen, Villy
2018-01-01
Ecotracer is a tool in the Ecopath with Ecosim (EwE) software package used to simulate and analyze the transport of contaminants such as methylmercury or radiocesium through aquatic food webs. Ecotracer solves the contaminant dynamic equations simultaneously with the biomass dynamic equations in Ecosim/Ecospace. In this paper, we give a detailed description of the Ecotracer module and analyze its performance on two problems of differing complexity. Ecotracer was modified from previous versions to more accurately model contaminant excretion, and new numerical integration algorithms were implemented to increase accuracy and robustness. To test the mathematical robustness of the computational algorithm, Ecotracer was tested on a simple problem for which an analytical solution is known. These results demonstrated the effectiveness of the program numerics. A much more complex model, the release of the cesium radionuclide 137Cs from the Fukushima Dai-ichi nuclear accident, was also modeled and analyzed. A comparison of the Ecotracer results to sampled 137Cs measurements in the coastal ocean area around Fukushima shows the promise of the tool but also highlights some important limitations. Copyright © 2017 Elsevier Ltd. All rights reserved.
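A toy sketch of the kind of coupled biomass/contaminant bookkeeping such a tracer module performs: a two-compartment food chain in which contaminant is taken up from water, transferred with predation, and lost by excretion and radioactive decay. All rates are arbitrary placeholders, not EwE parameter values, and biomasses are held constant for brevity.

```python
import numpy as np

dt, years = 0.01, 10.0
B = np.array([100.0, 10.0])          # prey and predator biomass [t/km^2], held constant
C = np.array([0.0, 0.0])             # contaminant burden in each pool [Bq/km^2]
c_water = 1.0                        # ambient water concentration (placeholder)
uptake = np.array([0.5, 0.05])       # direct uptake rates from water [1/yr per unit biomass]
Q = 0.4                              # predator consumption of prey biomass [1/yr]
excretion = np.array([0.2, 0.1])     # excretion rates [1/yr]
decay = 0.023                        # 137Cs-like physical decay [1/yr]

for _ in range(int(years / dt)):
    flow = Q * B[1]                                  # prey biomass eaten per year
    dC = np.empty(2)
    dC[0] = uptake[0] * c_water * B[0] - (C[0] / B[0]) * flow - (excretion[0] + decay) * C[0]
    dC[1] = uptake[1] * c_water * B[1] + (C[0] / B[0]) * flow - (excretion[1] + decay) * C[1]
    C += dt * dC
print("contaminant concentration [Bq/t]: prey %.2f, predator %.2f" % tuple(C / B))
```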
Modeling Kelvin Wave Cascades in Superfluid Helium
NASA Astrophysics Data System (ADS)
Boffetta, G.; Celani, A.; Dezzani, D.; Laurie, J.; Nazarenko, S.
2009-09-01
We study two different types of simplified models for Kelvin wave turbulence on quantized vortex lines in superfluids near zero temperature. Our first model is obtained from a truncated expansion of the Local Induction Approximation (Truncated-LIA) and it is shown to possess the same scalings and the essential behaviour as the full Biot-Savart model, being much simpler than the latter and, therefore, more amenable to theoretical and numerical investigations. The Truncated-LIA model supports six-wave interactions and dual cascades, which are clearly demonstrated via the direct numerical simulation of this model in the present paper. In particular, our simulations confirm the presence of the weak turbulence regime and the theoretically predicted spectra for the direct energy cascade and the inverse wave action cascade. The second type of model we study, the Differential Approximation Model (DAM), makes a further drastic simplification by assuming locality of interactions in k-space via a differential closure that preserves the main scalings of the Kelvin wave dynamics. DAMs are even more amenable to study, and they form a useful tool by providing simple analytical solutions in cases where extra physical effects are present, e.g. forcing by reconnections, friction dissipation and phonon radiation. We study these models numerically and test their theoretical predictions, in particular the formation of the stationary spectra, and the closeness of the numerics for the higher-order DAM to the analytical predictions for the lower-order DAM.
Using Interactive Visualization to Analyze Solid Earth Data and Geodynamics Models
NASA Astrophysics Data System (ADS)
Kellogg, L. H.; Kreylos, O.; Billen, M. I.; Hamann, B.; Jadamec, M. A.; Rundle, J. B.; van Aalsburg, J.; Yikilmaz, M. B.
2008-12-01
The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. Major projects such as EarthScope and GeoEarthScope are producing the data needed to characterize the structure and kinematics of Earth's surface and interior at unprecedented resolution. At the same time, high-performance computing enables high-precision and fine-detail simulation of geodynamics processes, complementing the observational data. To facilitate interpretation and analysis of these datasets, to evaluate models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. VR has traditionally been used primarily as a presentation tool allowing active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for accelerated scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. Our approach to VR takes advantage of the specialized skills of geoscientists who are trained to interpret geological and geophysical data generated from field observations. Interactive tools allow the scientist to explore and interpret geodynamic models, tomographic models, and topographic observations, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulations or field observations. The use of VR technology enables us to improve our interpretation of crust and mantle structure and of geodynamical processes. Mapping tools based on computer visualization allow virtual "field studies" in inaccessible regions, and an interactive tool allows us to construct digital fault models for use in numerical models. Using the interactive tools on a high-end platform such as an immersive virtual reality room known as a Cave Automatic Virtual Environment (CAVE) enables the scientist to stand within a three-dimensional dataset while taking measurements. The CAVE involves three or more projection surfaces arranged as walls in a room. Stereo projectors combined with a motion tracking system provide immersion and recreate the experience of carrying out research in the field. This high-end system provides significant advantages for scientists working with complex volumetric data. The interactive tools also work on low-cost platforms that provide stereo views and the potential for interactivity, such as a Geowall or a 3D-enabled TV. The Geowall is also a well-established tool for education and, in combination with the tools we have developed, enables the rapid transfer of research data and new knowledge to the classroom. The interactive visualization tools can also be used on a desktop or laptop with or without stereo capability. Further information about the Virtual Reality User Interface (VRUI), the 3DVisualizer, the Virtual mapping tools, and the LIDAR viewer can be found on the KeckCAVES website, www.keckcaves.org.
Isenmann, Gilles; Dufresne, Matthieu; Vazquez, José; Mose, Robert
2017-10-01
The purpose of this study is to develop and validate a numerical tool for evaluating the performance of a settling basin regarding the trapping of suspended matter. The Euler-Lagrange approach was chosen to model the flow and sediment transport. The numerical model developed relies on the open source library OpenFOAM®, enhanced with new particle/wall interaction conditions to limit sediment deposition in zones with favourable hydrodynamic conditions (shear stress, turbulent kinetic energy). In particular, a new relation is proposed for calculating the turbulent kinetic energy threshold as a function of the properties of each particle (diameter and density). The numerical model is compared to three experimental datasets taken from the literature and collected for scale models of basins. The comparison of the numerical and experimental results permits concluding on the model's capacity to predict the trapping of particles in a settling basin with an absolute error in the region of 5% when the sediment depositions occur over the entire bed. In the case of sediment depositions localised in preferential zones, their distribution is reproduced well by the model and trapping efficiency is evaluated with an absolute error in the region of 10% (excluding cases of particles with very low density).
Simulation-Based Evaluation of Learning Sequences for Instructional Technologies
ERIC Educational Resources Information Center
McEneaney, John E.
2016-01-01
Instructional technologies critically depend on systematic design, and learning hierarchies are a commonly advocated tool for designing instructional sequences. But hierarchies routinely allow numerous sequences and choosing an optimal sequence remains an unsolved problem. This study explores a simulation-based approach to modeling learning…
Vulnerability assessment of medieval civic towers as a tool for retrofitting design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casciati, Sara; Faravelli, Lucia
2008-07-08
The seismic vulnerability of an ancient civic bell-tower is studied. Rather than seeing it as an intermediate stage toward a risk analysis, the assessment of vulnerability is here pursued for the purpose of optimizing the retrofit design. The vulnerability curves are drawn by carrying out a single time history analysis of a model calibrated on the basis of experimental data. From the results of this analysis, the medians of three selected performance parameters are estimated, and they are used to compute, for each of them, the probability of exceeding or attaining the three corresponding levels of light, moderate and severe damage. The same numerical model is then used to incorporate the effects of several retrofitting solutions and to re-estimate the associated vulnerability curves. The ultimate goal is to provide a numerical tool able to drive the optimization process of a retrofit design by the comparison of the vulnerability estimates associated with the different retrofitting solutions.
Numerically stable finite difference simulation for ultrasonic NDE in anisotropic composites
NASA Astrophysics Data System (ADS)
Leckey, Cara A. C.; Quintanilla, Francisco Hernando; Cole, Christina M.
2018-04-01
Simulation tools can enable optimized inspection of advanced materials and complex geometry structures. Recent work at NASA Langley is focused on the development of custom simulation tools for modeling ultrasonic wave behavior in composite materials. Prior work focused on the use of a standard staggered grid finite difference type of mathematical approach, by implementing a three-dimensional (3D) anisotropic Elastodynamic Finite Integration Technique (EFIT) code. However, observations showed that the anisotropic EFIT method displays numerically unstable behavior at the locations of stress-free boundaries for some cases of anisotropic materials. This paper gives examples of the numerical instabilities observed for EFIT and discusses the source of instability. As an alternative to EFIT, the 3D Lebedev Finite Difference (LFD) method has been implemented. The paper briefly describes the LFD approach and shows examples of stable behavior in the presence of stress-free boundaries for a monoclinic anisotropy case. The LFD results are also compared to experimental results and dispersion curves.
NASA Astrophysics Data System (ADS)
Kraus, E. I.; Shabalin, I. I.; Shabalin, T. I.
2018-04-01
The main points in the development of numerical tools for simulating the deformation and failure of complex technical objects under nonstationary conditions of extreme loading are presented. The possibility of extending the dynamic method for constructing difference grids to the 3D case is shown. A 3D realization of the discrete-continuum approach to the deformation and failure of complex technical objects is carried out. The efficiency of the existing software package for 3D modelling is shown.
Airport Viz - a 3D Tool to Enhance Security Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, Daniel B
2006-01-01
In the summer of 2000, the National Safe Skies Alliance (NSSA) awarded a project to the Applied Visualization Center (AVC) at the University of Tennessee, Knoxville (UTK) to develop a 3D computer tool to assist the Federal Aviation Administration security group, now the Transportation Security Administration (TSA), in evaluating new equipment and procedures to improve airport checkpoint security. A preliminary tool was demonstrated at the 2001 International Aviation Security Technology Symposium. Since then, the AVC went on to construct numerous detection equipment models as well as models of several airports. Airport Viz has been distributed by the NSSA to a number of airports around the country which are able to incorporate their own CAD models into the software due to its unique open architecture. It provides a checkpoint design and passenger flow simulation function, a layout design and simulation tool for checked baggage and cargo screening, and a means to assist in the vulnerability assessment of airport access points for pedestrians and vehicles.
Lane, J.W.; Buursink, M.L.; Haeni, F.P.; Versteeg, R.J.
2000-01-01
The suitability of common-offset ground-penetrating radar (GPR) to detect free-phase hydrocarbons in bedrock fractures was evaluated using numerical modeling and physical experiments. The results of one- and two-dimensional numerical modeling at 100 megahertz indicate that GPR reflection amplitudes are relatively insensitive to fracture apertures ranging from 1 to 4 mm. The numerical modeling and physical experiments indicate that differences in the fluids that fill fractures significantly affect the amplitude and the polarity of electromagnetic waves reflected by subhorizontal fractures. Air-filled and hydrocarbon-filled fractures generate low-amplitude reflections that are in-phase with the transmitted pulse. Water-filled fractures create reflections with greater amplitude and opposite polarity than those reflections created by air-filled or hydrocarbon-filled fractures. The results from the numerical modeling and physical experiments demonstrate it is possible to distinguish water-filled fracture reflections from air- or hydrocarbon-filled fracture reflections; nevertheless, subsurface heterogeneity, antenna coupling changes, and other sources of noise will likely make it difficult to observe these changes in GPR field data. This indicates that the routine application of common-offset GPR reflection methods for detection of hydrocarbon-filled fractures will be problematic. Ideal cases will require appropriately processed, high-quality GPR data, ground-truth information, and detailed knowledge of subsurface physical properties. Conversely, the sensitivity of GPR methods to changes in subsurface physical properties as demonstrated by the numerical and experimental results suggests the potential of using GPR methods as a monitoring tool. GPR methods may be suited for monitoring pumping and tracer tests, changes in site hydrologic conditions, and remediation activities.
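The polarity argument above follows from the normal-incidence reflection coefficient between the host rock and the fracture-filling fluid. The minimal sketch below is not from the cited study; the permittivity values are typical assumptions rather than site measurements, and for millimetre-scale apertures the measured amplitude also depends on the aperture-to-wavelength ratio (thin-bed effect), which this half-space calculation ignores.

    # Sketch (illustrative only): normal-incidence reflection coefficient between
    # rock and a fracture-filling material. Permittivity values are assumptions.
    import math

    def reflection_coefficient(eps_host, eps_fill):
        """R for a wave in the host medium meeting a half-space of the filling material."""
        n1, n2 = math.sqrt(eps_host), math.sqrt(eps_fill)
        return (n1 - n2) / (n1 + n2)

    eps_rock = 6.0                                   # assumed crystalline rock permittivity
    fills = {"air": 1.0, "hydrocarbon": 2.2, "water": 81.0}
    for name, eps in fills.items():
        print(f"{name:12s} R = {reflection_coefficient(eps_rock, eps):+.2f}")
    # Air and hydrocarbon give positive R (in phase with the transmitted pulse);
    # water gives a negative R of larger magnitude (greater amplitude, reversed polarity).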
Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study.
Twycross, Jamie; Band, Leah R; Bennett, Malcolm J; King, John R; Krasnogor, Natalio
2010-03-26
Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
NASA Astrophysics Data System (ADS)
Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2004-06-01
Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming processes are studied: a recently proposed advanced constitutive model, based on the Teodosiu microstructural model and the Cazacu-Barlat yield criterion, is compared with a more classical one, based on the Swift law and the Hill 1948 yield criterion. These constitutive models are implemented into DD3IMP, a finite element home code specifically developed to simulate sheet metal forming processes, which is a 3-D elastoplastic finite element code with an updated Lagrangian formulation, a fully implicit time integration scheme, and large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts the final shape (medium height and ears profile) of the formed part more accurately, as can be concluded from the comparison with the experimental results.
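For reference, the classical forms mentioned above can be written as follows; the symbols are generic placeholders, not the fitted constants of this study.

    % Swift isotropic hardening law
    \sigma_Y = K\,(\varepsilon_0 + \bar{\varepsilon}^{\,p})^{n}
    % Hill (1948) quadratic yield criterion in the orthotropy axes
    F(\sigma_{yy}-\sigma_{zz})^2 + G(\sigma_{zz}-\sigma_{xx})^2 + H(\sigma_{xx}-\sigma_{yy})^2
      + 2L\,\sigma_{yz}^2 + 2M\,\sigma_{zx}^2 + 2N\,\sigma_{xy}^2 = 1

Here K, epsilon_0 and n are hardening constants, and F through N are anisotropy coefficients usually identified from r-values or directional yield stresses.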
NASA Technical Reports Server (NTRS)
daSilva, Arlinda
2012-01-01
A model-based Observing System Simulation Experiment (OSSE) is a framework for numerical experimentation in which observables are simulated from fields generated by an earth system model, including a parameterized description of observational error characteristics. Simulated observations can be used for sampling studies, for quantifying errors in analysis or retrieval algorithms, and ultimately as a planning tool for designing new observing missions. While this framework has traditionally been used to assess the impact of observations on numerical weather prediction, it has a much broader applicability, in particular to aerosols and chemical constituents. In this talk we will give a general overview of Observing System Simulation Experiment (OSSE) activities at NASA's Global Modeling and Assimilation Office, with focus on its emerging atmospheric composition component.
Mathematical modelling and numerical simulation of forces in milling process
NASA Astrophysics Data System (ADS)
Turai, Bhanu Murthy; Satish, Cherukuvada; Prakash Marimuthu, K.
2018-04-01
Machining of a material by milling induces forces that act on the workpiece and the cutting tool, and which in turn act on the machine tool. The forces involved in the milling process can be quantified, and mathematical models help to predict these forces. A lot of research has been carried out in this area in the past few decades. The current research aims at developing a mathematical model to predict the forces at different levels which arise during machining of Aluminium 6061 alloy. Finite element analysis was used to develop an FE model to predict the cutting forces. Simulations were done for varying cutting conditions. Different experiments were designed using the Taguchi method. An L9 orthogonal array was designed and the output was measured for the different experiments. The same was used to develop the mathematical model.
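As a hedged illustration of the kind of force prediction described, the sketch below evaluates a generic mechanistic milling-force model (instantaneous chip thickness times cutting-pressure coefficients plus edge terms). The coefficients and cutting conditions are placeholder values, not the FE or Taguchi results of the study.

    # Generic mechanistic milling-force sketch (not the study's model); placeholder values.
    import math

    Ktc, Krc = 800.0, 300.0    # assumed cutting coefficients, N/mm^2
    Kte, Kre = 20.0, 15.0      # assumed edge coefficients, N/mm
    ap, fz = 2.0, 0.1          # axial depth of cut (mm), feed per tooth (mm)

    def tooth_forces(phi):
        """Tangential/radial force on one tooth at immersion angle phi (rad), full-slot cut."""
        if not 0.0 < phi < math.pi:          # tooth out of cut
            return 0.0, 0.0
        h = fz * math.sin(phi)               # instantaneous chip thickness
        Ft = Ktc * ap * h + Kte * ap
        Fr = Krc * ap * h + Kre * ap
        return Ft, Fr

    for deg in range(0, 181, 30):
        Ft, Fr = tooth_forces(math.radians(deg))
        print(f"phi={deg:3d} deg  Ft={Ft:7.1f} N  Fr={Fr:6.1f} N")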
RF Models for Plasma-Surface Interactions in VSim
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Smithe, D. N.; Pankin, A. Y.; Roark, C. M.; Zhou, C. D.; Stoltz, P. H.; Kruger, S. E.
2014-10-01
An overview of ongoing enhancements to the Plasma Discharge (PD) module of Tech-X's VSim software tool is presented. A sub-grid kinetic sheath model, developed for the accurate computation of sheath potentials near metal and dielectric-coated walls, enables the physical effects of DC and RF sheath physics to be included in macroscopic-scale plasma simulations that need not explicitly resolve sheath scale lengths. Sheath potential evolution, together with particle behavior near the sheath, can thus be simulated in complex geometries. Generalizations of the model to include sputtering, secondary electron emission, and effects from multiple ion species and background magnetic fields are summarized; related numerical results are also presented. In addition, improved tools for plasma chemistry and IEDF/EEDF visualization and modeling are discussed, as well as our initial efforts toward the development of hybrid fluid/kinetic transition capabilities within VSim. Ultimately, we aim to establish VSimPD as a robust, efficient computational tool for modeling industrial plasma processes. Supported by US DoE SBIR-I/II Award DE-SC0009501.
Development and application of theoretical models for Rotating Detonation Engine flowfields
NASA Astrophysics Data System (ADS)
Fievisohn, Robert
As turbine and rocket engine technology matures, performance increases between successive generations of engine development are becoming smaller. One means of accomplishing significant gains in thermodynamic performance and power density is to use detonation-based heat release instead of deflagration. This work is focused on developing and applying theoretical models to aid in the design and understanding of Rotating Detonation Engines (RDEs). In an RDE, a detonation wave travels circumferentially along the bottom of an annular chamber where continuous injection of fresh reactants sustains the detonation wave. RDEs are currently being designed, tested, and studied as a viable option for developing a new generation of turbine and rocket engines that make use of detonation heat release. One of the main challenges in the development of RDEs is to understand the complex flowfield inside the annular chamber. While simplified models are desirable for obtaining timely performance estimates for design analysis, one-dimensional models may not be adequate as they do not provide flow structure information. In this work, a two-dimensional physics-based model is developed, which is capable of modeling the curved oblique shock wave, exit swirl, counter-flow, detonation inclination, and varying pressure along the inflow boundary. This is accomplished by using a combination of shock-expansion theory, Chapman-Jouguet detonation theory, the Method of Characteristics (MOC), and other compressible flow equations to create a shock-fitted numerical algorithm and generate an RDE flowfield. This novel approach provides a numerically efficient model that can provide performance estimates as well as details of the large-scale flow structures in seconds on a personal computer. Results from this model are validated against high-fidelity numerical simulations that may require a high-performance computing framework to provide similar performance estimates. This work provides a designer a new tool to conduct large-scale parametric studies to optimize a design space before conducting computationally-intensive, high-fidelity simulations that may be used to examine additional effects. The work presented in this thesis not only bridges the gap between simple one-dimensional models and high-fidelity full numerical simulations, but it also provides an effective tool for understanding and exploring RDE flow processes.
A moist Boussinesq shallow water equations set for testing atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerroukat, M., E-mail: mohamed.zerroukat@metoffice.gov.uk; Allen, T.
The shallow water equations have long been used as an initial test for numerical methods applied to atmospheric models with the test suite of Williamson et al. being used extensively for validating new schemes and assessing their accuracy. However the lack of physics forcing within this simplified framework often requires numerical techniques to be reworked when applied to fully three dimensional models. In this paper a novel two-dimensional shallow water equations system that retains moist processes is derived. This system is derived from three-dimensional Boussinesq approximation of the hydrostatic Euler equations where, unlike the classical shallow water set, we allow the density to vary slightly with temperature. This results in extra (or buoyancy) terms for the momentum equations, through which a two-way moist-physics dynamics feedback is achieved. The temperature and moisture variables are advected as separate tracers with sources that interact with the mean-flow through a simplified yet realistic bulk moist-thermodynamic phase-change model. This moist shallow water system provides a unique tool to assess the usually complex and highly non-linear dynamics–physics interactions in atmospheric models in a simple yet realistic way. The full non-linear shallow water equations are solved numerically on several case studies and the results suggest quite realistic interaction between the dynamics and physics and in particular the generation of cloud and rain. - Highlights: • Novel shallow water equations which retains moist processes are derived from the three-dimensional hydrostatic Boussinesq equations. • The new shallow water set can be seen as a more general one, where the classical equations are a special case of these equations. • This moist shallow water system naturally allows a feedback mechanism from the moist physics increments to the momentum via buoyancy. • Like full models, temperature and moistures are advected as tracers that interact through a simplified yet realistic phase-change model. • This model is a unique tool to test numerical methods for atmospheric models, and physics–dynamics coupling, in a very realistic and simple way.
Towards Automatic Processing of Virtual City Models for Simulations
NASA Astrophysics Data System (ADS)
Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.
2016-10-01
Especially in the field of numerical simulations, such as flow and acoustic simulations, the interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have been associated with an extremely high manual, and therefore uneconomical, effort for the processing of models. The different ways models are captured in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the world of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce unnecessary information for a numerical simulation.
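A bilinearly blended Coons patch, the construction referred to above, interpolates four boundary curves that share corner points. The sketch below shows the standard evaluation; the straight-line boundary curves are a toy example, not building geometry produced by the described tool.

    # Minimal bilinearly blended Coons patch from four boundary curves that share corners.
    import numpy as np

    def coons(u, v, c_bottom, c_top, c_left, c_right):
        p00, p10 = c_bottom(0.0), c_bottom(1.0)
        p01, p11 = c_top(0.0), c_top(1.0)
        ruled_u = (1 - v) * c_bottom(u) + v * c_top(u)          # blend bottom/top
        ruled_v = (1 - u) * c_left(v) + u * c_right(v)          # blend left/right
        bilinear = ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
                    + (1 - u) * v * p01 + u * v * p11)          # corner correction
        return ruled_u + ruled_v - bilinear

    # Toy boundaries of a quad with corners (0,0,0), (1,0,0), (0,1,0), (1,1,1)
    cb = lambda u: np.array([u, 0.0, 0.0])
    ct = lambda u: np.array([u, 1.0, u])
    cl = lambda v: np.array([0.0, v, 0.0])
    cr = lambda v: np.array([1.0, v, v])
    print(coons(0.5, 0.5, cb, ct, cl, cr))                      # mid-patch point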
Scenario Evaluator for Electrical Resistivity survey pre-modeling tool
Terry, Neil; Day-Lewis, Frederick D.; Robinson, Judith L.; Slater, Lee D.; Halford, Keith J.; Binley, Andrew; Lane, John W.; Werkema, Dale D.
2017-01-01
Geophysical tools have much to offer users in environmental, water resource, and geotechnical fields; however, techniques such as electrical resistivity imaging (ERI) are often oversold and/or overinterpreted due to a lack of understanding of the limitations of the techniques, such as the appropriate depth intervals or resolution of the methods. The relationship between ERI data and resistivity is nonlinear; therefore, these limitations depend on site conditions and survey design and are best assessed through forward and inverse modeling exercises prior to field investigations. In this approach, proposed field surveys are first numerically simulated given the expected electrical properties of the site, and the resulting hypothetical data are then analyzed using inverse models. Performing ERI forward/inverse modeling, however, requires substantial expertise and can take many hours to implement. We present a new spreadsheet-based tool, the Scenario Evaluator for Electrical Resistivity (SEER), which features a graphical user interface that allows users to manipulate a resistivity model and instantly view how that model would likely be interpreted by an ERI survey. The SEER tool is intended for use by those who wish to determine the value of including ERI to achieve project goals, and is designed to have broad utility in industry, teaching, and research.
Performance evaluation of Bragg coherent diffraction imaging
Ozturk, Hande; Huang, X.; Yan, H.; ...
2017-10-03
In this study, we present a numerical framework for modeling three-dimensional (3D) diffraction data in Bragg coherent diffraction imaging (Bragg CDI) experiments and evaluating the quality of obtained 3D complex-valued real-space images recovered by reconstruction algorithms under controlled conditions. The approach is used to systematically explore the performance and the detection limit of this phase-retrieval-based microscopy tool. The numerical investigation suggests that the superb performance of Bragg CDI is achieved with an oversampling ratio above 30 and a detection dynamic range above 6 orders. The observed performance degradation subject to the data binning processes is also studied. Furthermore, this numerical tool can be used to optimize experimental parameters and has the potential to significantly improve the throughput of Bragg CDI method.
Higher-order automatic differentiation of mathematical functions
NASA Astrophysics Data System (ADS)
Charpentier, Isabelle; Dal Cappello, Claude
2015-04-01
Functions of mathematical physics such as the Bessel functions, the Chebyshev polynomials, the Gauss hypergeometric function and so forth have practical applications in many scientific domains. On the one hand, differentiation formulas provided in reference books apply to real or complex variables. These do not account for the chain rule. On the other hand, based on the chain rule, automatic differentiation has become a natural tool in numerical modeling. Nevertheless, automatic differentiation tools do not deal with these numerous mathematical functions. This paper describes formulas and provides codes for the higher-order automatic differentiation of mathematical functions. The first method is based on Faà di Bruno's formula, which generalizes the chain rule. The second one makes use of the second-order differential equations these functions satisfy. Both methods are exemplified with the aforementioned functions.
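A minimal sketch of the recurrence idea, assuming the familiar case y = exp(u): because y' = u'y, the Taylor coefficients of y follow a simple convolution recurrence, giving arbitrary-order derivatives without symbolic differentiation. The input u(x) = sin(x) at x0 = 0 is illustrative only and not taken from the paper.

    # Higher-order derivatives via Taylor-coefficient recurrences (ODE-based idea).
    # Coefficients are scaled derivatives u_k = u^(k)(x0)/k!.
    import math

    def exp_series(u):
        """Taylor coefficients of exp(u(x)) from the coefficients u[0..K]."""
        y = [math.exp(u[0])]
        for k in range(1, len(u)):
            # k*y_k = sum_{j=1..k} j*u_j*y_{k-j}, from y' = u'*y
            y.append(sum(j * u[j] * y[k - j] for j in range(1, k + 1)) / k)
        return y

    K = 6
    u = [0.0 if k % 2 == 0 else (-1) ** ((k - 1) // 2) / math.factorial(k)
         for k in range(K + 1)]                     # sin(x) at x0 = 0
    y = exp_series(u)                               # Taylor coefficients of exp(sin(x))
    derivs = [math.factorial(k) * c for k, c in enumerate(y)]
    print(derivs)                                   # approx. [1, 1, 1, 0, -3, -8, -3]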
Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cloutman, L.D.
2000-04-01
Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. Unfortunately, these present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.
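The abstract does not name the two subgrid-scale models used; as a generic illustration of what even a minimal closure supplies at the grid scale, the classical Smagorinsky eddy viscosity reads

    \nu_t = (C_s\,\Delta)^2\,\lvert\bar{S}\rvert, \qquad
    \lvert\bar{S}\rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
    \bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j\bar{u}_i + \partial_i\bar{u}_j\right)

with C_s typically in the range 0.1-0.2 and Delta the filter (grid) width; this is offered only as a representative example, not necessarily one of the models employed in the report.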
Consistent Chemical Mechanism from Collaborative Data Processing
Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...
2016-04-01
The numerical tool of the Process Informatics Model (PrIMe) is a mathematically rigorous and numerically efficient approach for the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for the evaluation of shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.
Fast analysis of radionuclide decay chain migration
NASA Astrophysics Data System (ADS)
Chen, J. S.; Liang, C. P.; Liu, C. W.; Li, L.
2014-12-01
A novel tool for rapidly predicting the long-term plume behavior of a radionuclide decay chain of arbitrary length is presented in this study. This fast tool is based on generalized analytical solutions, derived in compact form, for a set of two-dimensional advection-dispersion equations coupled with sequential first-order decay reactions in a groundwater system. The performance of the developed tool is evaluated against a numerical model using a Laplace transform finite difference scheme. The results of the performance evaluation indicate that the developed model is robust and accurate. The developed model is then used to rapidly examine the transport behavior of a four-member radionuclide decay chain. Results show that the plume extents and concentration levels of any target radionuclide are very sensitive to the longitudinal and transverse dispersion, the decay rate constant, and the retardation factor. The developed model is a useful tool for rapidly assessing the ecological and environmental impact of accidental radionuclide releases such as the Fukushima nuclear disaster, where multiple radionuclides leaked through the reactor and subsequently contaminated the local groundwater and ocean seawater in the vicinity of the nuclear plant.
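The sequential first-order decay ingredient of such solutions can be illustrated with the classic Bateman result for a well-mixed batch system. The sketch below uses placeholder decay constants and omits the 2D advection-dispersion and retardation that the cited solutions include.

    # Classic Bateman solution for a sequential decay chain (batch system only).
    import math

    def bateman(n, t, lam, N1_0=1.0):
        """Concentration of chain member n (1-based) at time t; requires distinct lam values."""
        prod = 1.0
        for i in range(n - 1):
            prod *= lam[i]
        total = 0.0
        for i in range(n):
            denom = 1.0
            for j in range(n):
                if j != i:
                    denom *= (lam[j] - lam[i])
            total += math.exp(-lam[i] * t) / denom
        return N1_0 * prod * total

    lam = [1e-2, 5e-3, 2e-3, 1e-3]      # hypothetical decay constants, 1/day
    for member in range(1, 5):
        print(member, bateman(member, t=200.0, lam=lam))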
NASA Astrophysics Data System (ADS)
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake and tsunami related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issue of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the tsunami and/or earthquake source and includes the possibility to solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve this problem and SVD-analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to create a «no frost» technology, realizing a steady stream of direct and inverse problems: solving the direct problem, visualizing and comparing with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is the creation of an operational emergency workstation tool that could be used by emergency duty personnel in real time.
Cilla, M; Pérez-Rey, I; Martínez, M A; Peña, Estefania; Martínez, Javier
2018-06-23
Motivated by the search for new strategies for fitting a material model, a new approach is explored in the present work. The use of numerical and complex algorithms based on machine learning techniques, such as support vector machines for regression, bagged decision trees and artificial neural networks, is proposed for solving the parameter identification of constitutive laws for soft biological tissues. First, the mathematical tools were trained with analytical uniaxial data (circumferential and longitudinal directions) as inputs, and the corresponding material parameters of the Gasser-Ogden-Holzapfel strain energy function (SEF) as outputs. The train and test errors show great efficiency during the training process in finding correlations between inputs and outputs; besides, the correlation coefficients were very close to 1. Second, the tool was validated with unseen observations of analytical circumferential and longitudinal uniaxial data. The results show an excellent agreement between the predicted material parameters of the SEF and the analytical curves. Finally, data from real circumferential and longitudinal uniaxial tests on different cardiovascular tissues were fitted, and the material model of these tissues was thus predicted. We found that the method was able to consistently identify model parameters, and we believe that the use of these numerical tools could lead to an improvement in the characterization of soft biological tissues. This article is protected by copyright. All rights reserved.
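A minimal sketch of the workflow, assuming a made-up two-parameter exponential-stiffening law in place of the Gasser-Ogden-Holzapfel SEF and a random-forest regressor standing in for the ensemble of methods tested: generate stress curves over a parameter range, train a regressor mapping curves to parameters, and validate on unseen curves.

    # Toy parameter-identification sketch; the material law and all values are placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    stretch = np.linspace(1.0, 1.3, 20)

    def toy_stress(mu, b):
        """Placeholder exponential-stiffening uniaxial response sigma(lambda)."""
        return mu * (stretch**2 - 1.0 / stretch) * np.exp(b * (stretch**2 - 1.0))

    params = rng.uniform([5.0, 0.5], [50.0, 3.0], size=(2000, 2))   # (mu in kPa, b)
    curves = np.array([toy_stress(mu, b) for mu, b in params])

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(curves[:1800], params[:1800])                         # train
    pred = model.predict(curves[1800:])                             # unseen curves
    print("mean abs error:", np.abs(pred - params[1800:]).mean(axis=0))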
Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida
By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...
NASA Astrophysics Data System (ADS)
Nagendra, K. N.; Bagnulo, Stefano; Centeno, Rebecca; Jesús Martínez González, María.
2015-08-01
Preface; 1. Solar and stellar surface magnetic fields; 2. Future directions in astrophysical polarimetry; 3. Physical processes; 4. Instrumentation for astronomical polarimetry; 5. Data analysis techniques for polarization observations; 6. Polarization diagnostics of atmospheres and circumstellar environments; 7. Polarimetry as a tool for discovery science; 8. Numerical modeling of polarized emission; Author index.
NASA Astrophysics Data System (ADS)
Kasprak, A.; Brasington, J.; Hafen, K.; Wheaton, J. M.
2015-12-01
Numerical models that predict channel evolution through time are an essential tool for investigating processes that occur over timescales which render field observation intractable. However, available morphodynamic models generally take one of two approaches to the complex problem of computing morphodynamics, resulting in oversimplification of the relevant physics (e.g. cellular models) or faithful, yet computationally intensive, representations of the hydraulic and sediment transport processes at play. The practical implication of these approaches is that river scientists must often choose between unrealistic results, in the case of the former, or computational demands that render modeling realistic spatiotemporal scales of channel evolution impossible. Here we present a new modeling framework that operates at the timescale of individual competent flows (e.g. floods), and uses a highly-simplified sediment transport routine that moves volumes of material according to morphologically-derived characteristic transport distances, or path lengths. Using this framework, we have constructed an open-source morphodynamic model, termed MoRPHED, which is here applied, and its validity investigated, at timescales ranging from a single event to a decade on two braided rivers in the UK and New Zealand. We do not purport that MoRPHED is the best, nor even an adequate, tool for modeling braided river dynamics at this range of timescales. Rather, our goal in this research is to explore the utility, feasibility, and sensitivity of an event-scale, path-length-based modeling framework for predicting braided river dynamics. To that end, we further explore (a) which processes are naturally emergent and which must be explicitly parameterized in the model, (b) the sensitivity of the model to the choice of particle travel distance, and (c) whether an event-scale model timestep is adequate for producing braided channel dynamics. The results of this research may inform techniques for future morphodynamic modeling that seeks to maximize computational resources while modeling fluvial dynamics at the timescales of change.
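A one-dimensional toy of the event-scale path-length idea described above (not MoRPHED's actual 2D routines; all numbers are placeholders): the volume eroded from each cell during a competent event is routed downstream by distances drawn from a characteristic path-length distribution and deposited there.

    # 1D path-length routing toy; exponential path-length distribution is an assumption.
    import numpy as np

    rng = np.random.default_rng(1)
    dx, ncells = 5.0, 100                          # cell size (m), number of cells
    z0 = -0.002 * dx * np.arange(ncells)           # gently sloping initial bed (m)
    z = z0.copy()

    def flood_event(z, erosion_depth=0.02, mean_path=40.0, parcels=50):
        """One competent event: erode each cell, route parcels downstream, deposit."""
        dz = np.zeros_like(z)
        for i in range(z.size):
            dz[i] -= erosion_depth                       # scour at the source cell
            parcel_vol = erosion_depth * dx / parcels    # volume per unit width per parcel
            hops = rng.exponential(mean_path, parcels)   # characteristic path lengths (m)
            for hop in hops:
                j = i + int(round(hop / dx))
                if j < z.size:                           # parcels leaving the reach are lost
                    dz[j] += parcel_vol / dx
        return z + dz

    for _ in range(5):                                   # five competent events
        z = flood_event(z)
    print("mean bed elevation change (m):", float((z - z0).mean()))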
2017-01-01
This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using Reynolds-averaged Navier–Stokes (RANS) equations with computational fluid dynamic (CFD) tools. CFD analysis is performed for a fixed-cylinder case with Reynolds number (Re) = 10^4 and for a cylinder that is free to oscillate in the transverse direction and possesses a low mass-damping ratio and Re = 10^4. Previously, similar studies have been performed with 3-dimensional and comparatively expensive turbulent models. In the current study, the capability and accuracy of the RANS model are validated, and the results of this model are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The 2-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV. PMID:28982172
Application of fire and evacuation models in evaluation of fire safety in railway tunnels
NASA Astrophysics Data System (ADS)
Cábová, Kamila; Apeltauer, Tomáš; Okřinová, Petra; Wald, František
2017-09-01
The paper describes an application of numerical simulation of fire dynamics and evacuation of people in a tunnel. The software tool Fire Dynamics Simulator is used to simulate the temperature distribution and the development of smoke in a railway tunnel. Compared to the temperature curves usually used at the design stage, the results show that the numerical model gives a lower temperature of the hot smoke layer. Outputs of the numerical simulation of fire also make it possible to improve models of the evacuation of people during fires in tunnels. In the presented study, the calculated height of the smoke layer in the tunnel is, 10 min after fire ignition, lower than the level of 2.2 m which is considered the maximum limit for safe evacuation. Simulation of the evacuation process at a larger scale together with fire dynamics can provide very valuable information about important safety conditions such as Available Safe Evacuation Time (ASET) versus Required Safe Evacuation Time (RSET). Using an example in the EXODUS software, the paper summarizes selected results of the evacuation model which a designer should bear in mind when preparing an evacuation plan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chao; Xu, Jun; Cao, Lei
The electrodes of lithium-ion batteries (LIB) are known to be brittle and to fail earlier than the separators during an external crush event. Thus, the understanding of mechanical failure mechanism for LIB electrodes (anode and cathode) is critical for the safety design of LIB cells. In this paper, we present experimental and numerical studies on the constitutive behavior and progression of failure in LIB electrodes. Mechanical tests were designed and conducted to evaluate the constitutive properties of porous electrodes. Constitutive models were developed to describe the stress-strain response of electrodes under uniaxial tensile and compressive loads. The failure criterion and a damage model were introduced to model their unique tensile and compressive failure behavior. The failure mechanism of LIB electrodes was studied using the blunt rod test on dry electrodes, and numerical models were built to simulate progressive failure. The different failure processes were examined and analyzed in detail numerically, and correlated with experimentally observed failure phenomena. Finally, the test results and models improve our understanding of failure behavior in LIB electrodes, and provide constructive insights on future development of physics-based safety design tools for battery structures under mechanical abuse.
Development of a Protocol and a Screening Tool for Selection of DNAPL Source Area Remediation
2012-02-01
the different remedial time frames used in the modeling case studies. • Matrix Diffusion: Modeling results demonstrated that in fractured rock ... being used for the ISCO, EISB and SEAR fractured rock numerical simulations at the field scale. Figure 2-4 presents the distribution of intrinsic ... sedimentary limestone, sandstone, and shale, igneous basalts and granites, and metamorphic rock. For the modeling sites, three general geologies are
Modeling Production Plant Forming Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhee, M; Becker, R; Couch, R
2004-09-22
Engineering has simulation tools and experience in modeling forming processes. Y-12 personnel have expressed interest in validating our tools and experience against their manufacturing process activities such as rolling, casting, and forging etc. We have demonstrated numerical capabilities in a collaborative DOE/OIT project with ALCOA that is nearing successful completion. The goal was to use ALE3D to model Alcoa's slab rolling process in order to demonstrate a computational tool that would allow Alcoa to define a rolling schedule that would minimize the probability of ingot fracture, thus reducing waste and energy consumption. It is intended to lead to long-term collaboration with Y-12 and perhaps involvement with other components of the weapons production complex. Using simulations to aid in design of forming processes can: decrease time to production; reduce forming trials and associated expenses; and guide development of products with greater uniformity and less scrap.
Verification and Validation Strategy for LWRS Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carl M. Stoots; Richard R. Schultz; Hans D. Gougar
2012-09-01
One intention of the Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program is to create advanced computational tools for safety assessment that enable more accurate representation of a nuclear power plant safety margin. These tools are to be used to study the unique issues posed by lifetime extension and relicensing of the existing operating fleet of nuclear power plants well beyond their first license extension period. The extent to which new computational models / codes such as RELAP-7 can be used for reactor licensing / relicensing activities depends mainly upon the thoroughness with which they have been verified and validated (V&V). This document outlines the LWRS program strategy by which RELAP-7 code V&V planning is to be accomplished. From the perspective of developing and applying thermal-hydraulic and reactivity-specific models to reactor systems, the US Nuclear Regulatory Commission (NRC) Regulatory Guide 1.203 gives key guidance to numeric model developers and those tasked with the validation of numeric models. By creating Regulatory Guide 1.203 the NRC defined a framework for development, assessment, and approval of transient and accident analysis methods. As a result, this methodology is very relevant and is recommended as the path forward for RELAP-7 V&V. However, the unique issues posed by lifetime extension will require considerations in addition to those addressed in Regulatory Guide 1.203. Some of these include prioritization of which plants / designs should be studied first, coupling modern supporting experiments to the stringent needs of new high fidelity models / codes, and scaling of aging effects.
An Object Model for a Rocket Engine Numerical Simulator
NASA Technical Reports Server (NTRS)
Mitra, D.; Bhalla, P. N.; Pratap, V.; Reddy, P.
1998-01-01
Rocket Engine Numerical Simulator (RENS) is a package of software which numerically simulates the behavior of a rocket engine. Different parameters of the components of an engine are the input to these programs. Depending on these given parameters, the programs output the behaviors of those components. These behavioral values are then used to guide the design of, or to diagnose, a model of a rocket engine "built" by a composition of these programs simulating different components of the engine system. In order to use this software package effectively one needs to have a flexible model of a rocket engine. The programs simulating different components should then be plugged into this modular representation. Our project is to develop an object-based model of such an engine system. We are following an iterative and incremental approach in developing the model, as is the standard practice in the area of object-oriented design and analysis of software. This process involves three stages: object modeling to represent the components and sub-components of a rocket engine, dynamic modeling to capture the temporal and behavioral aspects of the system, and functional modeling to represent the transformational aspects. This article reports on the first phase of our activity under a grant (RENS) from the NASA Lewis Research Center. We have utilized Rumbaugh's object modeling technique and the UML tool for this purpose. The classes of a rocket engine propulsion system are developed and some of them are presented in this report. The next step, developing a dynamic model for RENS, is also touched upon here. In this paper we also discuss the advantages of using object-based modeling for developing this type of integrated simulator over other tools like an expert system shell or a procedural language, e.g., FORTRAN. Attempts have been made in the past to use such techniques.
NASA Technical Reports Server (NTRS)
Payne, Fred R.
1992-01-01
Lumley's 1967 Moscow paper provided, for the first time, a completely rational definition of the physically useful term 'large eddy', popular for a half-century. The numerical procedures based upon his results are: (1) PODT (Proper Orthogonal Decomposition Theorem), which extracts the large-eddy structure of stochastic processes from physical or computer-simulation two-point covariances, and (2) LEIM (Large-Eddy Interaction Model), a predictive scheme for the dynamical large eddies based upon higher-order turbulence modeling. Lumley's earlier work (1964) forms the basis for the final member of the triad of numerical procedures: this predicts the global neutral modes of turbulence, which show surprising agreement with both structural eigenmodes and those obtained from the dynamical equations. The ultimate goal of improved engineering design tools for turbulence may be near at hand, partly because the power and storage of 'supermicrocomputer' workstations are finally becoming adequate for the demanding numerics of these procedures.
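Numerically, PODT is usually carried out via the method of snapshots: the eigenmodes of the two-point covariance are obtained from an SVD of the mean-removed snapshot matrix. The sketch below uses synthetic data, not LEIM output or any flow field from the cited work.

    # Minimal POD via SVD of a synthetic snapshot ensemble; modes are the eigenmodes
    # of the two-point covariance referred to above.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 2.0 * np.pi, 128)
    # Synthetic "flow": two coherent structures plus noise, 400 snapshots
    snapshots = (np.outer(np.sin(x), rng.standard_normal(400))
                 + 0.3 * np.outer(np.sin(2 * x), rng.standard_normal(400))
                 + 0.05 * rng.standard_normal((128, 400)))

    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove the mean field
    modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = sing_vals**2 / np.sum(sing_vals**2)
    print("energy captured by first two modes:", energy[:2].sum())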
An elastic-plastic contact model for line contact structures
NASA Astrophysics Data System (ADS)
Zhu, Haibin; Zhao, Yingtao; He, Zhifeng; Zhang, Ruinan; Ma, Shaopeng
2018-06-01
Although numerical simulation tools are now very powerful, the development of analytical models remains important for predicting the mechanical behaviour of line contact structures, both for a deeper understanding of contact problems and for engineering applications. For the line contact structures widely used in the engineering field, few analytical models are available for predicting the mechanical behaviour once the structures deform plastically, as classical Hertz theory is then no longer valid. Thus, the present study proposes an elastic-plastic model for line contact structures based on an understanding of the yield mechanism. A mathematical expression describing the global relationship between load history and contact width evolution of line contact structures was obtained. The proposed model was verified through an actual line contact test and a corresponding numerical simulation. The results confirmed that this model can be used to accurately predict the elastic-plastic mechanical behaviour of a line contact structure.
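Before yield, such a model must reduce to the classical elastic Hertz line-contact relations for the contact half-width and peak pressure. The sketch below evaluates those relations with placeholder load and geometry values, not the paper's test case.

    # Elastic (Hertz) line-contact relations; placeholder numbers.
    import math

    def hertz_line_contact(P_per_len, R, E1, nu1, E2, nu2):
        """Half-width b and peak pressure p0 for a cylinder on a flat, load per unit length P."""
        E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
        b = math.sqrt(4.0 * P_per_len * R / (math.pi * E_star))
        p0 = 2.0 * P_per_len / (math.pi * b)
        return b, p0

    b, p0 = hertz_line_contact(P_per_len=1.0e5,      # N/m
                               R=0.01,               # cylinder radius, m
                               E1=210e9, nu1=0.3,    # steel cylinder
                               E2=210e9, nu2=0.3)    # steel flat
    print(f"half-width = {b*1e6:.1f} um, peak pressure = {p0/1e6:.0f} MPa")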
Numerical modeling of the near-field hydraulics of water wells.
Houben, Georg J; Hauschild, Sarah
2011-01-01
Numerical flow models can be a useful tool for dimensioning water wells and for investigating the hydraulics in their near field. Fully laminar flow can be assumed for all models calculated up to the screen. Therefore, models can be used to predict, at least qualitatively (neglecting turbulent losses inside the well), the spatial distribution of inflow into the well and the overall hydraulic performance of different combinations of aquifer parameters and technical installations. Models for both horizontal (plan view) and vertical flow (cross section) to wells were calculated for a variety of setups. For the latter, this included variations of the hydraulic conductivity of the screen, the pump position, and aquifer heterogeneity. Models of suction flow control devices showed that they can indeed homogenize inflow, albeit at the cost of elevated entrance losses. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Naiman, Cynthia G.
1999-01-01
The NASA Lewis Research Center is developing an environment for analyzing and designing aircraft engines: the Numerical Propulsion System Simulation (NPSS). NPSS will integrate multiple disciplines, such as aerodynamics, structures, and heat transfer, and will make use of numerical "zooming" on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS uses the latest computing and communication technologies to capture complex physical processes in a timely, cost-effective manner. The vision of NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Through the NASA/Industry Cooperative Effort agreement, NASA Lewis and industry partners are developing a new engine simulation called the National Cycle Program (NCP). NCP, which is the first step toward NPSS and is its initial framework, supports the aerothermodynamic system simulation process for the full life cycle of an engine. U.S. aircraft and airframe companies recognize NCP as the future industry-standard common analysis tool for aeropropulsion system modeling. The estimated potential payoff for NCP is a $50 million/yr savings to industry through improved engineering productivity.
Advanced Concepts Theory Annual Report 1983.
1984-05-18
variety of theoretical models, tools, and computational strategies to understand, guide, and predict the behavior of high brightness, laboratory x-ray ... theoretical models must treat hard and soft x-ray emission from different electron configurations with K, L, and M shells, and they must include ... theoretical effort has been devoted to elucidating the effects of opacity on the numerical results ... basis for comprehending the trends which appear in the
Numerical Analyses for Low Reynolds Flow in a Ventricular Assist Device.
Lopes, Guilherme; Bock, Eduardo; Gómez, Luben
2017-06-01
Scientific and technological advances in blood pump development have been driven by their importance in cardiac patient treatment and in improving the quality of life of assisted people. To improve and optimize design and development, numerical tools were incorporated into the analysis of these mechanisms and have become indispensable to their advance. This study analyzes the flow behavior at low impeller Reynolds numbers, for which there is no consensus on the full development of turbulence in ventricular assist devices (VAD). As supporting analyses, computational numerical simulations were carried out for different scenarios with the same rotation speed. Two modeling approaches were applied: laminar flow, and turbulent flow with the standard, RNG and realizable κ-ε models, the standard and SST κ-ω models, and the Spalart-Allmaras model. The results agree with the literature for VAD and with the range for transitional flows in stirred tanks, with an impeller Reynolds number around 2800 for the tested scenarios. The turbulence models were compared and, based on the expected physical behavior, the use of the RNG κ-ε, standard and SST κ-ω, and Spalart-Allmaras models is suggested for numerical analyses at low impeller Reynolds numbers for the tested flow scenarios. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
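For readers unfamiliar with numerical STM propagation, the sketch below illustrates the general idea under simple two-body dynamics: the variational equations are integrated alongside the state with an eighth-order Dormand-Prince embedded integrator (SciPy's DOP853). The dynamics, gravitational parameter, and initial state are illustrative assumptions, not the authors' power hardware or full ephemeris models.

```python
# Hedged sketch: propagating a state transition matrix (STM) numerically by
# integrating the variational equations alongside the state, here for simple
# two-body dynamics. SciPy's DOP853 is an eighth-order Dormand-Prince embedded
# integrator, used only to illustrate the idea; it is not the authors' tool.
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.4418  # km^3/s^2, Earth GM (illustrative value)

def eom_with_stm(t, y):
    r, v = y[:3], y[3:6]
    phi = y[6:].reshape(6, 6)           # current STM, flattened into y
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3                 # two-body acceleration
    # Jacobian of the dynamics, d(rdot, vdot)/d(r, v)
    G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
    A = np.block([[np.zeros((3, 3)), np.eye(3)],
                  [G, np.zeros((3, 3))]])
    dphi = A @ phi                      # variational equation: Phi' = A Phi
    return np.concatenate([v, a, dphi.ravel()])

y0 = np.concatenate([[7000.0, 0.0, 0.0], [0.0, 7.5, 1.0], np.eye(6).ravel()])
sol = solve_ivp(eom_with_stm, (0.0, 3600.0), y0, method="DOP853",
                rtol=1e-10, atol=1e-12)
phi_final = sol.y[6:, -1].reshape(6, 6)  # sensitivity of the final state to the initial state
```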
Assessment of the National Combustion Code
NASA Technical Reports Server (NTRS)
Liu, nan-Suey; Iannetti, Anthony; Shih, Tsan-Hsing
2007-01-01
The advancements made during the last decade in the areas of combustion modeling, numerical simulation, and computing platforms have greatly facilitated the use of CFD-based tools in the development of combustion technology. Further development of verification, validation, and uncertainty quantification will have a profound impact on the reliability and utility of these CFD-based tools. The objectives of the present effort are to establish a baseline for the National Combustion Code (NCC) against experimental data, as well as to document current capabilities and identify gaps for further improvement.
Batch mode grid generation: An endangered species
NASA Technical Reports Server (NTRS)
Schuster, David M.
1992-01-01
Non-interactive grid generation schemes should thrive as emphasis shifts from development of numerical analysis and design methods to application of these tools to real engineering problems. A strong case is presented for the continued development and application of non-interactive geometry modeling methods. Guidelines, strategies, and techniques for developing and implementing these tools are presented using current non-interactive grid generation methods as examples. These schemes play an important role in the development of multidisciplinary analysis methods and some of these applications are also discussed.
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Naiman, Cynthia
2003-01-01
The objective of the GRC CNIS/IE work is to build a plug-and-play infrastructure that provides the Grand Challenge Applications with a suite of tools for coupling codes together, numerically zooming between codes of different fidelity, and deploying these simulations onto the Information Power Grid. The GRC CNIS/IE work will streamline and improve this process by providing tighter integration of the various tools through the use of object-oriented design of component models and data objects and through the use of CORBA (Common Object Request Broker Architecture).
Tavčar, Gregor; Katrašnik, Tomaž
2014-01-01
The parallel straight channel PEM fuel cell model presented in this paper extends the innovative hybrid 3D analytic-numerical (HAN) approach previously published by the authors with capabilities to address ternary diffusion systems and counter-flow configurations. The model's core principle is modelling species transport by obtaining a 2D analytic solution for the species concentration distribution in the plane perpendicular to the channel gas flow and coupling consecutive 2D solutions by means of a 1D numerical pipe-flow model. Electrochemical and other nonlinear phenomena are coupled to the species transport by a routine that uses derivative approximation with prediction-iteration. The latter is also the core of the counter-flow computation algorithm. A HAN model of a laboratory test fuel cell is presented and evaluated against a professional 3D CFD simulation tool, showing very good agreement between the results of the presented model and those of the CFD simulation. Furthermore, high-accuracy results are achieved at moderate computational times, which is owed to the semi-analytic nature and to the efficient computational coupling of electrochemical kinetics and species transport.
A Review of Numerical Simulation and Analytical Modeling for Medical Devices Safety in MRI
Kabil, J.; Belguerras, L.; Trattnig, S.; Pasquier, C.; Missoffe, A.
2016-01-01
Objectives: To review past and present challenges and ongoing trends in numerical simulation for MRI (Magnetic Resonance Imaging) safety evaluation of medical devices. Methods: A wide literature review on numerical and analytical simulation of simple or complex medical devices in MRI electromagnetic fields shows the evolution through time and a growing concern for MRI safety over the years. Major issues and achievements are described, as well as current trends and perspectives in this research field. Results: Numerical simulation of medical devices is constantly evolving, supported by calculation methods that are now well established. Implants with simple geometry can often be simulated in a computational human model, but one issue remaining today is the experimental validation of these human models. A great concern is to assess RF heating in implants too complex to be traditionally simulated, like pacemaker leads. Thus, ongoing research focuses on alternative hybrid methods, both numerical and experimental, with, for example, a transfer function method. For the static and gradient fields, analytical models can be used for dimensioning simple implant shapes, but they are limited for complex geometries that cannot be studied with simplifying assumptions. Conclusions: Numerical simulation is an essential tool for MRI safety testing of medical devices. The main issues remain the accuracy of simulations compared to real life and the study of complex devices; but as the research field is constantly evolving, some promising ideas are now under investigation to take up these challenges. PMID:27830244
Minimal time spiking in various ChR2-controlled neuron models.
Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel
2018-02-01
We use conductance-based neuron models and the mathematical modeling of optogenetics to define controlled neuron models, and we address the minimal-time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to theoretically investigate the existence of singular optimal controls, we numerically observe the optimal bang-bang controls.
Some issues in data model mapping
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Alsabbagh, Jamal R.
1985-01-01
Numerous data models have been reported in the literature since the early 1970's. They have been used as database interfaces and as conceptual design tools. The mapping between schemas expressed according to the same data model or according to different models is interesting for theoretical and practical purposes. This paper addresses some of the issues involved in such a mapping. Of special interest are the identification of the mapping parameters and some current approaches for handling the various situations that require a mapping.
An Innovative Learning Model for Computation in First Year Mathematics
ERIC Educational Resources Information Center
Tonkes, E. J.; Loch, B. I.; Stace, A. W.
2005-01-01
MATLAB is a sophisticated software tool for numerical analysis and visualization. The University of Queensland has adopted MATLAB as its official teaching package across large first year mathematics courses. In the past, the package has met severe resistance from students who have not appreciated their computational experience. Several main…
Implementation of the Automated Numerical Model Performance Metrics System
2011-09-26
question. As of this writing, the DSRC IBM AIX machines DaVinci and Pascal, and the Cray XT Einstein, all use the PBS batch queuing system for... (3.3). Appendix A (General Automation System): This system provides general purpose tools and a general way to automatically run
By integrating a Genetic Algorithm and MODFLOW2005, an optimization tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system are fitted automatically by using the numerical model c...
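As a rough illustration of the approach described in this (truncated) abstract, the sketch below shows a generic genetic-algorithm calibration loop in which candidate hydraulic conductivities are scored against observed heads. The run_modflow stand-in, parameter bounds, and GA settings are hypothetical placeholders; the actual tool couples to MODFLOW2005.

```python
# Hedged sketch of a genetic-algorithm history-matching loop: candidate aquifer
# parameters are scored against observed heads. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_POP, N_GEN, N_PAR = 40, 50, 6          # population size, generations, parameters
LOW, HIGH = 1e-6, 1e-2                   # hydraulic conductivity bounds (m/s), illustrative

def run_modflow(params):
    # Placeholder for the forward run: in practice this would write MODFLOW2005
    # inputs for the candidate parameters, execute the model, and read simulated
    # heads at the observation wells. A simple synthetic response stands in here.
    return 100.0 + 5.0 * np.log10(params)

def fitness(params, obs_heads):
    return -np.sum((run_modflow(params) - obs_heads) ** 2)   # higher is better

def calibrate(obs_heads):
    pop = rng.uniform(LOW, HIGH, size=(N_POP, N_PAR))
    for _ in range(N_GEN):
        scores = np.array([fitness(p, obs_heads) for p in pop])
        parents = pop[np.argsort(scores)[-N_POP // 2:]]               # selection
        children = []
        while len(parents) + len(children) < N_POP:
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(N_PAR) < 0.5, a, b)           # uniform crossover
            child = child * rng.lognormal(0.0, 0.05, N_PAR)           # multiplicative mutation
            children.append(np.clip(child, LOW, HIGH))
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p, obs_heads) for p in pop])]

true_k = rng.uniform(LOW, HIGH, N_PAR)            # synthetic "truth" for the demo
best = calibrate(run_modflow(true_k))
```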
NASA Astrophysics Data System (ADS)
Granados, Xavier; Sánchez, Àlvar; López-López, Josep
2012-10-01
The development of superconducting applications and superconducting engineering requires the support of consistent tools which can provide models for obtaining a good understanding of the behaviour of the systems and predict novel features. These models aim to compute the behaviour of the superconducting systems, design superconducting devices and systems, and understand and test the behaviour of the superconducting parts. Fifty years ago, in 1962, Charles Bean provided the superconducting community with a model efficient enough to allow the computation of the response of a superconductor to external magnetic fields and currents flowing through it in an understandable way: the so-called critical-state model. Since then, in addition to the pioneering critical-state approach, other tools have been devised for designing operative superconducting systems, allowing integration of the superconducting design in nearly standard electromagnetic computer-aided design systems by modelling the superconducting parts with consideration of time-dependent processes. In April 2012, Barcelona hosted the 3rd International Workshop on Numerical Modelling of High Temperature Superconductors (HTS), the third in a series of workshops started in Lausanne in 2010 and followed by Cambridge in 2011. The workshop reflected the state of the art and the new initiatives of HTS modelling, considering mathematical, physical and technological aspects within a wide and interdisciplinary scope. Superconductor Science and Technology is now publishing a selection of papers from the workshop, chosen for their high quality. The selection comprises seven papers covering mathematical, physical and technological topics which contribute to an improvement in the development of procedures, understanding of phenomena and development of applications. We hope that they provide a perspective on the relevance and growth that HTS modelling has achieved in the past 25 years.
Numerical Study of Magnetic Damping During Unidirectional Solidification
NASA Technical Reports Server (NTRS)
Li, Ben Q.
1997-01-01
A fully 3-D numerical model is developed to represent magnetic damping of the complex fluid flow, heat transfer and electromagnetic field distributions in a melt cavity. The model is developed based on our in-house finite element code for fluid flow, heat transfer and electromagnetic field calculations. The computer code has been tested against benchmark test problems that are solved by other commercial codes, as well as against analytical solutions whenever available. The numerical model is tested against numerical and experimental results for water reported in the literature. With the model so tested, various numerical simulations are carried out for the Sn-35.5% Pb melt convection and temperature distribution in a cylindrical cavity with and without the presence of a transverse magnetic field. Numerical results show that magnetic damping can be effectively applied to reduce turbulence and flow levels in the melt undergoing solidification, and that, over a certain threshold value, a higher magnetic field results in a higher velocity reduction. It is also found that, for a fully 3-D representation of the magnetic damping effects, the electric field induced in the melt by the applied DC magnetic field does not vanish, as some researchers suggested, and must be included even for molten metals and semiconductors. Also, for the study of melt flow instability, a sufficiently long simulation time has to be used to capture the final fluid flow recirculation pattern. Moreover, our numerical results suggest that there seems to exist a threshold value of the applied magnetic field, above which magnetic damping becomes possible and below which the convection in the melt is actually enhanced. Because of the limited financial resources allocated for the project, we were unable to carry out an extensive study of this effect, which warrants further theoretical and experimental study. In that endeavor, the developed numerical model should be very useful, serving as a tool for exploring the design parameters needed for planning magnetic damping experiments and interpreting the experimental results.
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years, with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models: pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
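The following sketch illustrates, in generic form, the kind of Tikhonov-regularized objective that pilot-point calibration minimizes: a measurement misfit term plus a penalty on departures from a preferred condition (here, homogeneity between neighbouring pilot points). It is a conceptual illustration only, not PEST's internal formulation; the forward model, difference operator, and weights are placeholders.

```python
# Hedged sketch of a Tikhonov-regularized pilot-point objective: data misfit
# traded against departure from a preferred condition D p ~ 0.
import numpy as np

def regularized_objective(p, simulate, obs, D, mu, w_obs=1.0, w_reg=1.0):
    """p: pilot-point parameters (e.g. log-K values at pilot points)
    simulate: forward model returning simulated equivalents of obs (placeholder)
    D: difference operator expressing the preferred condition D p ~ 0
    mu: regularization weight balancing misfit against the preferred condition"""
    misfit = w_obs * (simulate(p) - obs)
    reg = w_reg * (D @ p)
    return float(misfit @ misfit + mu * (reg @ reg))

# Tiny demo with 5 pilot points and a linear stand-in "model"; D penalizes
# differences between neighbouring pilot points (preferred homogeneity).
n = 5
D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)
G = np.random.default_rng(1).random((3, n))          # stand-in sensitivity matrix
obs = G @ np.full(n, -4.0)
phi = regularized_objective(np.full(n, -3.5), lambda p: G @ p, obs, D, mu=0.1)
```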
Porru, Marcella; Özkan, Leyla
2017-05-24
This paper develops a new simulation model for crystal size distribution dynamics in industrial batch crystallization. The work is motivated by the necessity of accurate prediction models for online monitoring purposes. The proposed numerical scheme is able to handle growth, nucleation, and agglomeration kinetics by means of the population balance equation and the method of characteristics. The former offers a detailed description of the solid phase evolution, while the latter provides an accurate and efficient numerical solution. In particular, the accuracy of the prediction of the agglomeration kinetics, which cannot be ignored in industrial crystallization, has been assessed by comparing it with solutions in the literature. The efficiency of the solution has been tested on a simulation of a seeded flash cooling batch process. Since the proposed numerical scheme can accurately simulate the system behavior more than a hundred times faster than the batch duration, it is suitable for online applications such as process monitoring tools based on state estimators.
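To make the method of characteristics concrete, the sketch below advances a growth-only population balance: each characteristic carries a fixed number density while its crystal size grows along dL/dt = G(t). Nucleation and agglomeration, which the paper also handles, are deliberately omitted, and the growth rate, seed sizes, and densities are illustrative.

```python
# Hedged sketch of the method of characteristics for a growth-only population
# balance: each characteristic keeps its number density while its crystal size
# advances with the (assumed size-independent) growth rate G(t).
import numpy as np

def advance_characteristics(sizes, densities, growth_rate, t_grid):
    """sizes: initial crystal sizes on each characteristic [m]
    densities: number density carried by each characteristic (constant here)
    growth_rate: callable G(t) in m/s; t_grid: output times [s]"""
    L = np.array(sizes, dtype=float)
    history = [L.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        L = L + growth_rate(0.5 * (t0 + t1)) * (t1 - t0)   # midpoint update along dL/dt = G
        history.append(L.copy())
    return np.array(history), np.asarray(densities)

t = np.linspace(0.0, 600.0, 61)
hist, n = advance_characteristics([1e-5, 2e-5, 5e-5], [1e12, 8e11, 2e11],
                                  lambda t: 1e-8, t)
```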
Validation Database Based Thermal Analysis of an Advanced RPS Concept
NASA Technical Reports Server (NTRS)
Balint, Tibor S.; Emis, Nickolas D.
2006-01-01
Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.
Fatigue Analysis of Rotating Parts. A Case Study for a Belt Driven Pulley
NASA Astrophysics Data System (ADS)
Sandu, Ionela; Tabacu, Stefan; Ducu, Catalin
2017-10-01
The present study is focused on the life estimation of a rotating part, a component of an engine assembly, namely the pulley of the coolant pump. The goal of the paper is to develop a model, supported by numerical analysis, capable of predicting the lifetime of the part. Starting from the functional drawing, CAD model and technical specifications of the part, a numerical model was developed. MATLAB code was used to develop a tool to apply the load over the selected area. The numerical analysis was performed in two steps. The first simulation concerned the inertia relief due to rotational motion about the shaft (of the pump). Results from this simulation were saved, and the stress-strain state was used as the initial condition for the analysis with the load applied. The lifetime of a good part was estimated. A defect was then created in order to investigate its influence on the working requirements. It was found that there is little influence with respect to the prescribed lifetime.
Toolpack mathematical software development environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osterweil, L.
1982-07-21
The purpose of this research project was to produce a well integrated set of tools for the support of numerical computation. The project entailed the specification, design and implementation of both a diversity of tools and an innovative tool integration mechanism. This large configuration of tightly integrated tools comprises an environment for numerical software development, and has been named Toolpack/IST (Integrated System of Tools). Following the creation of this environment in prototype form, the environment software was readied for widespread distribution by transitioning it to a development organization for systematization, documentation and distribution. It is expected that public release of Toolpack/IST will begin imminently and will provide a basis for evaluation of the innovative software approaches taken as well as a uniform set of development tools for the numerical software community.
Understanding complex host-microbe interactions in Hydra
Bosch, Thomas C.G.
2012-01-01
Any multicellular organism may be considered a metaorganism or holobiont—comprised of the macroscopic host and synergistic interdependence with bacteria, archaea, fungi, viruses, and numerous other microbial and eukaryotic species including algal symbionts. Defining the individual microbe-host conversations in these consortia is a challenging but necessary step on the path to understanding the function of the associations as a whole. Dissecting the fundamental principles that underlie all host-microbe interactions requires simple animal models with only a few specific bacterial species. Here I present Hydra as such a model with one of the simplest epithelia in the animal kingdom, with the availability of a fully sequenced genome and numerous genomic tools, and with few associated bacterial species. PMID:22688725
NASA Astrophysics Data System (ADS)
Zirari, M.; Abdellah El-Hadj, A.; Bacha, N.
2010-03-01
A finite element method is used to simulate the deposition in the thermal spray coating process. A set of governing equations is solved by a volume of fluid method. For the solidification phenomenon, we use the specific heat method (SHM). We begin by comparing the present model with experimental and numerical models available in the literature. In this study, the impact of a completely molten or semi-molten aluminum particle on an H13 tool steel substrate is considered. Next, we investigate the effect of the inclination of impact of a partially molten particle on a flat substrate. It was found that the melting state of the particle has a great effect on the morphology of the splat.
NASA Astrophysics Data System (ADS)
Junker, Philipp; Hempel, Philipp
2017-12-01
It is well known that plastic deformations in shape memory alloys stabilize the martensitic phase. Furthermore, knowledge of the plastic state is crucial for a reliable sustainability analysis of construction parts. Numerical simulations serve as a tool for the realistic investigation of the complex interactions between phase transformations and plastic deformations. To account also for irreversible deformations, we expand an energy-based material model by including a non-linear isotropic hardening plasticity model. An implementation of this material model into commercial finite element programs, e.g., Abaqus, offers the opportunity to analyze entire structural components at low cost and with fast computation times. Along with the theoretical derivation and expansion of the model, several simulation results for various boundary value problems are presented and interpreted for improved construction design.
NASA Astrophysics Data System (ADS)
Isobe, Masaharu
Hard sphere/disk systems are among the simplest models and have been used to address numerous fundamental problems in the field of statistical physics. The pioneering numerical works on the solid-fluid phase transition based on Monte Carlo (MC) and molecular dynamics (MD) methods published in 1957 represent historical milestones, which have had a significant influence on the development of computer algorithms and novel tools to obtain physical insights. This chapter addresses Alder's breakthrough works on hard sphere/disk simulation: (i) event-driven molecular dynamics, (ii) the long-time tail, (iii) the molasses tail, and (iv) two-dimensional melting/crystallization. From a numerical viewpoint, there are serious issues that must be overcome for further breakthroughs. Here, we present a brief review of recent progress in this area.
Bergues Pupo, Ana E; Reyes, Juan Bory; Bergues Cabrales, Luis E; Bergues Cabrales, Jesús M
2011-09-24
Electrotherapy is a relatively well established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D model) generated by means of electrode arrays with shapes of different conic sections (ellipse, parabola and hyperbola). Analytical calculations of the potential and electric field distributions based on 2D models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by means of the finite element method in two dimensions. Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays with circular shapes and those with the shapes of different conic sections (elliptic, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different conic-section shapes by means of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode arrays with different conic sections.
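A minimal numerical counterpart of the calculation described above is sketched below: the Laplace equation is solved on a regular grid with two electrodes held at fixed potential, and the electric field follows from the gradient of the potential. The paper uses the finite element method; the Jacobi finite-difference iteration, electrode positions, and grounded outer boundary here are illustrative assumptions.

```python
# Hedged sketch: 2D Laplace solver by Jacobi iteration on a regular grid, with
# electrodes represented as nodes held at fixed potential. Finite differences
# are used here only to illustrate how the potential (and hence the field)
# between electrodes can be computed; the paper itself uses finite elements.
import numpy as np

def solve_laplace(n=101, n_iter=5000):
    phi = np.zeros((n, n))
    fixed = np.zeros((n, n), dtype=bool)
    # Illustrative electrode pair (positions and potentials are arbitrary)
    phi[30, 40:60], fixed[30, 40:60] = +1.0, True
    phi[70, 40:60], fixed[70, 40:60] = -1.0, True
    for _ in range(n_iter):
        new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        new[fixed] = phi[fixed]                                  # keep electrode potentials
        new[0, :] = new[-1, :] = new[:, 0] = new[:, -1] = 0.0    # grounded outer boundary
        phi = new
    Ey, Ex = np.gradient(-phi)          # electric field components from E = -grad(phi)
    return phi, Ex, Ey

phi, Ex, Ey = solve_laplace()
```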
Improving the seismic small-scale modelling by comparison with numerical methods
NASA Astrophysics Data System (ADS)
Pageot, Damien; Leparoux, Donatienne; Le Feuvre, Mathieu; Durand, Olivier; Côte, Philippe; Capdeville, Yann
2017-10-01
The potential of experimental seismic modelling at reduced scale provides an intermediate step between numerical tests and geophysical campaigns on field sites. Recent technologies such as laser interferometers offer the opportunity to acquire data without any coupling effects. This kind of device is used in the Mesures Ultrasonores Sans Contact (MUSC) measurement bench, for which an automated support system makes it possible to generate multisource and multireceiver seismic data at laboratory scale. Experimental seismic modelling would become a great tool providing a value-added stage in the imaging process validation if (1) the experimental measurement chain is perfectly mastered, and thus the experimental data are perfectly reproducible with a numerical tool, and if (2) the effective source is reproducible along the measurement setup. These aspects of a quantitative validation, for devices with piezoelectric sources and a laser interferometer, have not yet been quantitatively studied in published work. Thus, as a new stage for the experimental modelling approach, these two key issues are tackled in the proposed paper in order to precisely define the quality of the experimental small-scale data provided by the MUSC bench, which are available to the scientific community. These two steps of quantitative validation are treated apart from any imaging technique, in order to offer geophysicists who want to use such data (delivered as free data) the opportunity to know their quality precisely before testing any imaging technique. First, in order to overcome the 2-D-3-D correction usually done in seismic processing when comparing 2-D numerical data with 3-D experimental measurements, we quantitatively refined the comparison between numerical and experimental data by generating accurate experimental line sources, avoiding the need for a geometrical spreading correction of 3-D point-source data. The comparison with 2-D and 3-D numerical modelling is based on the Spectral Element Method. The approach shows the relevance of building a line source by sampling several source points, except for boundary effects at later arrival times. Indeed, the experimental results highlight the amplitude feature and the delay equal to π/4 provided by a line source, in the same manner as the numerical data. In contrast, the 2-D corrections applied to 3-D data showed discrepancies which are higher for experimental data than for numerical ones, due to the source wavelet shape and interferences between different arrivals. The experimental results from the approach proposed here show that these discrepancies are avoided, especially for the reflected echoes. Concerning the second point, aimed at assessing the experimental reproducibility of the source, correlation coefficients of recordings from a repeated source impact on a homogeneous model are calculated. The quality of the results, that is, correlation coefficients higher than 0.98, allows a mean source wavelet to be calculated by inversion of a mean data set. Results obtained on a more realistic model simulating clays on limestones confirmed the reproducibility of the source impact.
NASA Astrophysics Data System (ADS)
Pickett, Leon, Jr.
Past research has conclusively shown that long fiber structural composites possess superior specific energy absorption characteristics compared to steel and aluminum structures. However, destructive physical testing of composites is very costly and time consuming. As a result, numerical solutions are desirable as an alternative to experimental testing. Up until this point, very little numerical work has been successful in predicting the energy absorption of composite crush structures. This research investigates the ability to use commercially available numerical modeling tools to approximate the energy absorption capability of long-fiber composite crush tubes. This study is significant because it provides a preliminary analysis of the suitability of LS-DYNA to numerically characterize the crushing behavior during a dynamic axial impact event. Composite crushing theory suggests that there are several crushing mechanisms occurring during a composite crush event. This research evaluates the capability and suitability of employing LS-DYNA to simulate the dynamic crush event of an E-glass/epoxy cylindrical tube. The model employed is the composite "progressive failure model", a much more limited failure model when compared to the experimental failure events which naturally occur. This numerical model employs (1) matrix cracking, (2) compression, and (3) fiber breakage failure modes only. The motivation for the work comes from the need to reduce the significant cost associated with experimental trials. This research chronicles some preliminary efforts to better understand the mechanics essential in pursuit of this goal. The immediate goal is to begin to provide a deeper understanding of a composite crush event and ultimately create a viable alternative to destructive testing of composite crush tubes.
Ruiz-Vargas, A; Mohd Rosli, R; Ivorra, A; Arkwright, J W
2018-01-08
Intraluminal electrical impedance is a well-known diagnostic tool used to study bolus movement in the human esophagus. However, its use in the human colon is hindered by the fact that the content cannot be controlled and may include liquid, gas, solid, or a mixture of these at any one time. This article investigates the use of complex impedance spectroscopy to study different luminal contents (liquid and gas). An excised section of guinea pig proximal colon was placed in an organ bath with Krebs solution at 37°C, and a custom-built bioimpedance catheter was placed in the lumen. Liquid (Krebs) and gas (air) content was pumped through the lumen, and the intraluminal impedance was measured at five different frequencies (1, 5.6, 31.6, 177.18 kHz and 1 MHz) at 10 samples per second. A numerical model was created to model the passage of a bolus with different content and compared to the experimental data. Differences in mean impedance magnitude and phase angle were found (from 1 to 177.18 kHz) for different contents. The numerical results qualitatively agreed with those of the experimental study. The conductivity of the bolus had an effect on detecting its passage. Complex impedance spectroscopy can distinguish between different luminal contents within a range of measuring frequencies. The numerical model showed the importance of bolus conductivities for bolus transit studies in which the bolus is controlled. © 2018 John Wiley & Sons Ltd.
Sustainability of transport structures - some aspects of the nonlinear reliability assessment
NASA Astrophysics Data System (ADS)
Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír
2017-09-01
Efficient techniques for both the nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined in order to offer an advanced tool for the realistic assessment of the behaviour, failure and safety of transport structures. The utilized approach is based on randomization of the non-linear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. The results can serve as a rational basis for performance and sustainability assessment based on advanced nonlinear computer analysis of structures of the transport infrastructure, such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis using artificial neural networks and a virtual stochastic simulation approach is applied to determine the fracture mechanical parameters of the structural material and its numerical model. Structural response, reliability and sustainability have been investigated for different types of transport structures made from various materials using the above-mentioned methodology and tools.
Numerical modeling of the 2017 active seismic infrasound balloon experiment
NASA Astrophysics Data System (ADS)
Brissaud, Q.; Komjathy, A.; Garcia, R.; Cutts, J. A.; Pauken, M.; Krishnamoorthy, S.; Mimoun, D.; Jackson, J. M.; Lai, V. H.; Kedar, S.; Levillain, E.
2017-12-01
We have developed a numerical tool to propagate acoustic and gravity waves in a coupled solid-fluid medium with topography. It is a hybrid method between a continuous Galerkin and a discontinuous Galerkin method that accounts for non-linear atmospheric waves, visco-elastic waves and topography. We apply this method to a recent experiment that took place in the Nevada desert to study acoustic waves from seismic events. This experiment, developed by JPL and its partners, aims to demonstrate the viability of a new approach to probe seismic-induced acoustic waves from a balloon platform. To the best of our knowledge, this could be the only way, for planetary missions, to perform tomography when one faces challenging surface conditions with high pressure and temperature (e.g. Venus), and thus when it is impossible to use conventional electronics routinely employed on Earth. To fully demonstrate the effectiveness of such a technique, one should also be able to reconstruct the observed signals from numerical modeling. To model the seismic hammer experiment and the subsequent acoustic wave propagation, we rely on a subsurface seismic model constructed from the seismometer measurements during the 2017 Nevada experiment and an atmospheric model built from meteorological data. The source is considered as a Gaussian point source located at the surface. Comparison between the numerical modeling and the experimental data could help future mission designs and provide great insights into the planet's interior structure.
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design for ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate µH and the concentration of heterotrophic biomass XBH.
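The core idea, accumulating the information matrix recursively over a simulated calibration experiment, can be sketched as follows; the forward model, noise level, and finite-difference sensitivities are hypothetical placeholders rather than the authors' exact procedure.

```python
# Hedged sketch of the identifiability idea: accumulate the Fisher information
# matrix from output sensitivities at each sample time of a simulated
# calibration experiment, then inspect its conditioning.
import numpy as np

def information_matrix(simulate, theta, t_grid, sigma=0.05, eps=1e-4):
    """simulate(theta, t): scalar model output (placeholder forward model)
    theta: nominal parameter vector; sigma: assumed measurement noise std."""
    n_p = len(theta)
    F = np.zeros((n_p, n_p))
    for t in t_grid:
        s = np.empty(n_p)
        for i in range(n_p):
            dp = np.zeros(n_p); dp[i] = eps * max(abs(theta[i]), 1.0)
            # sensitivity of the output to parameter i by central differences
            s[i] = (simulate(theta + dp, t) - simulate(theta - dp, t)) / (2 * dp[i])
        F += np.outer(s, s) / sigma**2      # recursive accumulation at each sample time
    return F

# Eigenvalues (or the condition number) of F indicate which parameter
# combinations are practically identifiable from the designed experiment.
```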
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Z.; Carlson, T. J.; Ploskey, G. R.
2005-11-01
Bio-indexing of hydro turbines has been identified as an important means to optimize passage conditions for fish by identifying operations for existing and new design turbines that minimize the probability of injury. Cost-effective implementation of bio-indexing requires the use of tools such as numerical and physical turbine models to generate hypotheses for turbine operations that can be tested at prototype scales using live fish. Blade strike has been proposed as an index variable for the biological performance of turbines. This report reviews an evaluation of the use of numerical blade-strike models as a means with which to predict the probability of blade strike and injury of juvenile salmon smolt passing through large Kaplan turbines on the mainstem Columbia River.
A fully-implicit high-order system thermal-hydraulics model for advanced non-LWR safety analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
An advanced system analysis tool is being developed for advanced reactor safety analysis. This paper describes the underlying physics and numerical models used in the code, including the governing equations, the stabilization schemes, the high-order spatial and temporal discretization schemes, and the Jacobian-Free Newton Krylov solution method. The effects of the spatial and temporal discretization schemes are investigated. Additionally, a series of verification test problems are presented to confirm the high-order schemes. Furthermore, it is demonstrated that the developed system thermal-hydraulics model can be strictly verified against the theoretical convergence rates, and that it performs very well for a wide range of flow problems with high accuracy, efficiency, and minimal numerical diffusion.
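The Jacobian-Free Newton Krylov idea mentioned above can be illustrated with a small stand-alone example: the Krylov solver only requires Jacobian-vector products, which are approximated from residual evaluations, so no Jacobian matrix is ever assembled. SciPy's newton_krylov and the steady nonlinear diffusion problem below are illustrative choices, not the safety-analysis code itself.

```python
# Hedged sketch of Jacobian-Free Newton Krylov on a small nonlinear system:
# a steady 1D diffusion equation with a nonlinear source, discretized by
# central differences and solved without forming a Jacobian.
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    n = len(u)
    r = np.empty(n)
    r[0], r[-1] = u[0] - 1.0, u[-1]                     # Dirichlet boundary conditions
    # interior residual: u'' + 0.1*exp(u) = 0 on a uniform grid of spacing 1/(n-1)
    r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] + 0.1 * np.exp(u[1:-1]) / (n - 1) ** 2
    return r

u0 = np.linspace(1.0, 0.0, 51)                          # initial guess
u = newton_krylov(residual, u0, f_tol=1e-10)            # inner Krylov solver defaults to LGMRES
```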
Numerical Models of Alaskan Tectonics: A Review and Looking Ahead to a New Era of Research
NASA Astrophysics Data System (ADS)
Jadamec, M. A.; Freymueller, J. T.
2015-12-01
The Pacific-North American plate boundary in Alaska is in the current scientific spotlight, as a highlighted tectonic region for integrated scientific investigation. It is timely, therefore, to step back and examine the previous numerical modeling studies of Alaska. Reviewing the numerical models is valuable, as geodynamic modeling can be a predictive tool that can guide and target field studies, both geologic and geophysical. This review presents a comparison of the previous numerical modeling studies of the Alaska-Aleutian subduction zone, including the mainland and extending into northwestern Canada. By distinguishing between the model set-up, governing equations, and underlying assumptions, non-modelers can more easily understand under what context the modeling predictions can be interpreted. Several key features in the Alaska tectonic setting appear in all the models to have a first order effect on the resulting deformation, such as the plate margin geometry and Denali fault. In addition, there are aspects of the tectonic setting that lead to very different results depending how they are implemented into the models. For example, models which fix the slab velocity to surface plate motions predict lower mantle flow rates than models that allow the slab to steepen. Despite the previous modeling studies, many unanswered questions remain, including the formation of the Wrangell volcanics, the driver for motion in western and interior Alaska, and the timing and nature of slab deformation. A synthesis of this kind will be of value to geologists, geodeticists, seismologists, volcanologists, sedimentologists, geochemists, as well as geodynamicists.
NASA Astrophysics Data System (ADS)
Wu, Yanling
2018-05-01
In this paper, extreme waves were generated using the open source computational fluid dynamics (CFD) tools OpenFOAM and Waves2FOAM, with linear and nonlinear NewWave input. They were used to conduct numerical simulations of the wave impact process. Numerical tools based on first-order (with and without stretching) and second-order NewWave input are investigated. A simulation to predict the force loading on an offshore platform under extreme weather conditions is implemented and the results are compared.
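A first-order (linear) NewWave input of the kind referred to above can be sketched as the spectrum-weighted sum of wave components focused at a chosen point and time; the placeholder spectrum, crest amplitude, and deep-water dispersion below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of a first-order NewWave free-surface elevation: the expected
# shape of an extreme crest is the spectrum-weighted sum of components focused
# at (x0, t0), normalized by the spectral variance.
import numpy as np

def newwave_eta(t, x, omega, S, d_omega, A, t0=0.0, x0=0.0, g=9.81):
    k = omega**2 / g                          # deep-water dispersion (assumption)
    m0 = np.sum(S * d_omega)                  # spectral variance
    phase = k * (x - x0) - omega * (t - t0)
    return (A / m0) * np.sum(S * d_omega * np.cos(phase))

omega = np.linspace(0.3, 2.5, 200)            # angular frequencies [rad/s]
S = np.exp(-((omega - 0.8) / 0.25) ** 2)      # placeholder spectral shape
eta_crest = newwave_eta(0.0, 0.0, omega, S, omega[1] - omega[0], A=10.0)  # equals A at focus
```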
The MATH--Open Source Application for Easier Learning of Numerical Mathematics
ERIC Educational Resources Information Center
Glaser-Opitz, Henrich; Budajová, Kristina
2016-01-01
The article introduces a software application (MATH) supporting the education of Applied Mathematics, with a focus on Numerical Mathematics. MATH is an easy-to-use tool supporting various numerical method calculations with a graphical user interface and an integrated plotting tool for graphical representation, written in Qt with extensive use of Qwt…
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond the classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazard early warning systems, global warming and questions on climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of information geometry and statistical techniques, new versions of Kalman filters and state-of-the-art numerical analysis tools.
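One simple form of the Kalman-filter post-processing alluded to above treats the systematic forecast bias as a slowly varying state updated from successive forecast-minus-observation pairs; the scalar random-walk formulation and noise variances below are illustrative assumptions, not the new filter versions developed in the work.

```python
# Hedged sketch of Kalman-filter bias correction of a forecast series: the
# systematic model bias is estimated recursively and removed from each new forecast.
import numpy as np

def kalman_bias_correction(forecasts, observations, q=0.01, r=1.0):
    """q: assumed bias random-walk variance; r: assumed observation-error variance."""
    bias, p = 0.0, 1.0                     # initial bias estimate and its variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)         # correct the new forecast with the current bias
        p += q                             # prediction step (random-walk bias model)
        k = p / (p + r)                    # Kalman gain
        bias += k * ((f - o) - bias)       # update with the latest observed error
        p *= (1.0 - k)
    return np.array(corrected)
```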
Numerical simulation of thermal stress distributions in Czochralski-grown silicon crystals
NASA Astrophysics Data System (ADS)
Kumar, M. Avinash; Srinivasan, M.; Ramasamy, P.
2018-04-01
Numerical simulation is one of the important tools in the investigation and optimization of single-crystal silicon grown by the Czochralski (Cz) method. A 2D steady global heat transfer model was used to investigate the temperature distribution and the thermal stress distributions at a particular crystal position during the Cz growth process. The computation determines thermal stresses such as the von Mises stress and the maximum shear stress distribution along the grown crystal, and indicates a possible reason for dislocation formation in the Cz-grown single-crystal silicon.
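The stress measures named in the abstract follow directly from the computed stress tensor; a minimal post-processing sketch using the standard definitions (not the authors' code) is given below.

```python
# Hedged sketch of the post-processing step: von Mises stress and maximum shear
# stress computed from a symmetric Cauchy stress tensor, e.g. one per element.
import numpy as np

def von_mises_and_max_shear(sigma):
    """sigma: 3x3 symmetric Cauchy stress tensor [Pa]."""
    s_dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)        # deviatoric part
    vm = np.sqrt(1.5 * np.sum(s_dev * s_dev))                 # von Mises stress
    principal = np.sort(np.linalg.eigvalsh(sigma))
    max_shear = 0.5 * (principal[-1] - principal[0])          # maximum shear (Tresca-type)
    return vm, max_shear

sigma = np.array([[40e6, 5e6, 0.0], [5e6, 10e6, 0.0], [0.0, 0.0, -5e6]])  # illustrative
vm, tau_max = von_mises_and_max_shear(sigma)
```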
Multidisciplinary optimization of an HSCT wing using a response surface methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giunta, A.A.; Grossman, B.; Mason, W.H.
1994-12-31
Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
A numerical model on thermodynamic analysis of free piston Stirling engines
NASA Astrophysics Data System (ADS)
Mou, Jian; Hong, Guotong
2017-02-01
In this paper, a new numerical thermodynamic model based on the energy conservation law has been used to analyze a free piston Stirling engine. In the model, all data were taken from a real free piston Stirling engine built in our laboratory. The energy conservation equations have been applied to the expansion space and compression space of the engine. The equations include internal energy, input power, output power, enthalpy and the heat losses. The heat losses include the regenerative heat conduction loss, shuttle heat loss, seal leakage loss and cavity wall heat conduction loss. The numerical results show that the temperature of the expansion space and the temperature of the compression space vary with time. The higher the regeneration effectiveness, the higher the efficiency and the larger the output work. It is also found that, under different initial pressures, the heat source temperature, phase angle and engine working frequency have different effects on the engine's efficiency and power. As a result, the model is expected to be a useful tool for the simulation, design and optimization of Stirling engines.
Analysis and numerical modelling of eddy current damper for vibration problems
NASA Astrophysics Data System (ADS)
Irazu, L.; Elejabarrieta, M. J.
2018-07-01
This work discusses a contactless eddy current damper, which is used to attenuate structural vibration. Eddy currents can remove energy from dynamic systems without any contact and, thus, without adding mass or modifying the rigidity of the structure. An experimental modal analysis of a cantilever beam in the absence of and under a partial magnetic field is conducted in the bandwidth of 0-1 kHz. The results show that the eddy current phenomenon can attenuate the vibration of the entire structure without modifying the natural frequencies or the mode shapes of the structure itself. In this study, a new inverse method to numerically determine the dynamic properties of the contactless eddy current damper is proposed. The proposed inverse method and the eddy current model based on a linear viscous force are validated by a practical application. The numerically obtained transfer function correlates with the experimental one, showing good agreement in the entire bandwidth of 0-1 kHz. The proposed method provides an easy and quick tool to model and predict the dynamic behaviour of the contactless eddy current damper, thereby avoiding the use of complex analytical models.
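A minimal way to picture the modelling assumption of a linear viscous eddy current force is a single-mode receptance with an added damping coefficient: the natural frequency stays unchanged while the resonance peak drops. The modal mass, stiffness, and damping values below are illustrative, and fitting the added coefficient to a measured FRF is only a simplified stand-in for the inverse method proposed in the paper.

```python
# Hedged sketch: single-degree-of-freedom receptance with the eddy current
# damper represented as an added linear viscous damping coefficient c_eddy.
import numpy as np

def receptance(freq_hz, m, k, c_struct, c_eddy):
    w = 2.0 * np.pi * np.asarray(freq_hz)
    return 1.0 / (k - m * w**2 + 1j * (c_struct + c_eddy) * w)   # X/F for one mode

f = np.linspace(1.0, 1000.0, 2000)
H_bare = receptance(f, m=0.05, k=2.0e4, c_struct=0.05, c_eddy=0.0)
H_damp = receptance(f, m=0.05, k=2.0e4, c_struct=0.05, c_eddy=0.40)
# The natural frequency is unchanged; only the resonance amplitude drops,
# consistent with the experimental observation reported above.
```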
Verification of the Icarus Material Response Tool
NASA Technical Reports Server (NTRS)
Schroeder, Olivia; Palmer, Grant; Stern, Eric; Schulz, Joseph; Muppidi, Suman; Martin, Alexandre
2017-01-01
Due to the complex physics encountered during reentry, material response solvers are used for two main purposes: to improve the understanding of the physical phenomena, and to design and size thermal protection systems (TPS). Icarus is a three-dimensional, unstructured material response tool that is intended to be used for design while maintaining the flexibility to easily implement physical models as needed. Because TPS selection and sizing is critical, it is of the utmost importance that the design tools be extensively verified and validated before their use. Verification tests aim at ensuring that the numerical schemes and equations are implemented correctly by comparison to analytical solutions and grid convergence tests.
Animal models: an important tool in mycology.
Capilla, Javier; Clemons, Karl V; Stevens, David A
2007-12-01
Animal models of fungal infections are, and will remain, a key tool in the advancement of medical mycology. Many different types of animal models of fungal infection have been developed, with murine models the most frequently used, for studies of pathogenesis, virulence, immunology, diagnosis, and therapy. The ability to control numerous variables in performing the model allows us to mimic human disease states and quantitatively monitor the course of the disease. However, no single model can answer all questions, and different animal species or different routes of infection can show somewhat different results. Thus, the choice of which animal model to use must be made carefully, addressing the type of human disease to mimic, the parameters to follow, and the collection of the appropriate data to answer the questions being asked. This review addresses a variety of uses for animal models in medical mycology. It focuses on the most clinically important diseases affecting humans and cites various examples of the different types of studies that have been performed. Overall, animal models of fungal infection will continue to be valuable tools in addressing questions concerning fungal infections and will contribute to our deeper understanding of how these infections occur, progress, and can be controlled and eliminated.
XBeach-G: a tool for predicting gravel barrier response to extreme storm conditions
NASA Astrophysics Data System (ADS)
Masselink, Gerd; Poate, Tim; McCall, Robert; Roelvink, Dano; Russell, Paul; Davidson, Mark
2014-05-01
Gravel beaches protect low-lying back-barrier regions from flooding during storm events and their importance to society is widely acknowledged. Unfortunately, breaching and extensive storm damage have occurred at many gravel sites, and this is likely to increase as a result of sea-level rise and enhanced storminess due to climate change. Limited scientific guidance is currently available to provide beach managers with operational management tools to predict the response of gravel beaches to storms. The New Understanding and Prediction of Storm Impacts on Gravel beaches (NUPSIG) project aims to improve our understanding of storm impacts on gravel coastal environments and to develop a predictive capability by modelling these impacts. The NUPSIG project uses a five-pronged approach to address its aim: (1) analyse hydrodynamic data collected during a prototype laboratory experiment on a gravel beach; (2) collect hydrodynamic field data on a gravel beach under a range of conditions, including storm waves with wave heights up to 3 m; (3) measure swash dynamics and beach response on 10 gravel beaches during extreme wave conditions with wave heights in excess of 3 m; (4) use the data collected under (1)-(3) to develop and validate a numerical model of the hydrodynamics and morphological response of gravel beaches under storm conditions; and (5) develop a tool for end-users, based on the model formulated under (4), for predicting the storm response of gravel beaches and barriers. The aim of this presentation is to present the key results of the NUPSIG project and introduce the end-user tool for predicting storm response on gravel beaches. The model is based on the numerical model XBeach, and different forcing scenarios (waves and tides), barrier configurations (dimensions) and sediment characteristics are easily uploaded for model simulations using a graphical user interface (GUI). The model can be used to determine the vulnerability of gravel barriers to storm events, but can also be used to help optimise design criteria for gravel barriers to reduce their vulnerability and enhance their coastal protection ability.
NASA Astrophysics Data System (ADS)
Cotterman, K. A.; Follum, M. L.; Pradhan, N. R.; Niemann, J. D.
2017-12-01
Flooding impacts numerous aspects of society, from localized flash floods to continental-scale flood events. Many numerical flood models focus solely on riverine flooding, with some capable of capturing both localized and continental-scale flood events. However, these models neglect flooding away from channels that are related to excessive ponding, typically found in areas with flat terrain and poorly draining soils. In order to obtain a holistic view of flooding, we combine flood results from the Streamflow Prediction Tool (SPT), a riverine flood model, with soil moisture downscaling techniques to determine if a better representation of flooding is obtained. This allows for a more holistic understanding of potential flood prone areas, increasing the opportunity for more accurate warnings and evacuations during flooding conditions. Thirty-five years of near-global historical streamflow is reconstructed with continental-scale flow routing of runoff from global land surface models. Elevation data was also obtained worldwide, to establish a relationship between topographic attributes and soil moisture patterns. Derived soil moisture data is validated against observed soil moisture, increasing confidence in the ability to accurately capture soil moisture patterns. Potential flooding situations can be examined worldwide, with this study focusing on the United States, Central America, and the Philippines.
Numerical modeling of higher order magnetic moments in UXO discrimination
Sanchez, V.; Yaoguo, L.; Nabighian, M.N.; Wright, D.L.
2008-01-01
The surface magnetic anomaly observed in unexploded ordnance (UXO) clearance is mainly dipolar, and consequently, the dipole is the only magnetic moment regularly recovered in UXO discrimination. The dipole moment contains information about the intensity of magnetization but lacks information about the shape of the target. In contrast, higher order moments, such as quadrupole and octupole, encode asymmetry properties of the magnetization distribution within the buried targets. In order to improve our understanding of magnetization distribution within UXO and non-UXO objects and to show its potential utility in UXO clearance, we present a numerical modeling study of UXO and related metallic objects. The tool for the modeling is a nonlinear integral equation describing magnetization within isolated compact objects of high susceptibility. A solution for magnetization distribution then allows us to compute the magnetic multipole moments of the object, analyze their relationships, and provide a depiction of the anomaly produced by different moments within the object. Our modeling results show the presence of significant higher order moments for more asymmetric objects, and the fields of these higher order moments are well above the noise level of magnetic gradient data. The contribution from higher order moments may provide a practical tool for improved UXO discrimination. © 2008 IEEE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, Christopher A.
In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos in the data. These tools include phase portraits and Poincare sections, the correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate that the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
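One of the diagnostics listed above, the correlation dimension, is commonly estimated from the slope of the Grassberger-Procaccia correlation sum of a delay-embedded signal; the sketch below shows that construction with illustrative embedding parameters (it is not the dissertation's implementation).

```python
# Hedged sketch of the Grassberger-Procaccia correlation sum C(r) for a
# delay-embedded scalar time series. The slope of log C(r) versus log r over a
# scaling range estimates the correlation dimension; a low, well-defined slope
# would suggest low-dimensional dynamics.
import numpy as np

def correlation_sum(x, dim=5, tau=10, radii=None):
    # Delay embedding of the scalar series x into dim-dimensional vectors
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                  # pairwise distances, i < j
    if radii is None:
        radii = np.logspace(np.log10(d.min() + 1e-12), np.log10(d.max()), 20)
    return radii, np.array([np.mean(d < r) for r in radii])

# Example: a noisy sine is effectively low dimensional, unlike pure noise.
t = np.linspace(0.0, 100.0, 2000)
r, C = correlation_sum(np.sin(t) + 0.01 * np.random.default_rng(0).normal(size=t.size))
```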
Development of a Three-Dimensional, Unstructured Material Response Design Tool
NASA Technical Reports Server (NTRS)
Schulz, Joseph C.; Stern, Eric C.; Muppidi, Suman; Palmer, Grant E.; Schroeder, Olivia
2017-01-01
A preliminary verification and validation of a new material response model is presented. This model, Icarus, is intended to serve as a design tool for the thermal protection systems of re-entry vehicles. Currently, the capability of the model is limited to simulating the pyrolysis of a material as a result of the radiative and convective surface heating imposed on the material from the surrounding high enthalpy gas. Since the major focus behind the development of Icarus has been model extensibility, the hope is that additional physics can be quickly added. This extensibility is critical since thermal protection systems are becoming increasingly complex, e.g. woven carbon polymers. Additionally, as a three-dimensional, unstructured, finite-volume model, Icarus is capable of modeling complex geometries. In this paper, the mathematical and numerical formulation is presented, followed by a discussion of the software architecture and some preliminary verification and validation studies.
NASA Astrophysics Data System (ADS)
Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.
2014-12-01
Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphics user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university level. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.
NASA Astrophysics Data System (ADS)
Podgorney, Robert; Coleman, Justin; Wilkins, Andrew; Huang, Hai; Veeraraghavan, Swetha; Xia, Yidong; Permann, Cody
2017-04-01
Numerical modeling has played an important role in understanding the behavior of coupled subsurface thermal-hydro-mechanical (THM) processes associated with a number of energy and environmental applications since as early as the 1970s. While the ability to rigorously describe all key tightly coupled controlling physics still remains a challenge, there have been significant advances in recent decades. These advances are related primarily to the exponential growth of computational power, the development of more accurate equations of state, improvements in the ability to represent heterogeneity and reservoir geometry, and more robust nonlinear solution schemes. The work described in this paper documents the development and linkage of several fully-coupled and fully-implicit modeling tools. These tools simulate: (1) the dynamics of fluid flow, heat transport, and quasi-static rock mechanics; (2) seismic wave propagation from the sources of energy release through heterogeneous material; and (3) the soil-structural damage resulting from ground acceleration. These tools are developed in Idaho National Laboratory's parallel Multiphysics Object Oriented Simulation Environment, and are integrated together using a global implicit approach. The governing equations are presented, the numerical approach for simultaneously solving and coupling the three physics tools is discussed, and the data input and output methodology is outlined. An example is presented to demonstrate the capabilities of the coupled multiphysics approach. The example involves simulating a system conceptually similar to the geothermal development in Basel, Switzerland, and the resultant induced seismicity, ground motion and structural damage are predicted.
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
A material based approach to creating wear resistant surfaces for hot forging
NASA Astrophysics Data System (ADS)
Babu, Sailesh
Tools and dies used in metal forming are characterized by extremely high temperatures at the interface, high local pressures and large metal-to-metal sliding. These harsh conditions result in accelerated wear of tooling. Lubrication of tools, done to improve metal flow, drastically quenches the surface layers of the tools and compounds the tool failure problem. This phenomenon becomes a serious issue when forged parts are complex and are expected to meet tight tolerances. Unpredictable and hence uncontrolled wear and degradation of tooling result in poor part quality and premature tool failure, which in turn cause high scrap, shop downtime, poor efficiency and high cost. The objective of this dissertation is to develop a computer-based methodology for analyzing the requirements of hot forging tooling to resist wear and plastic deformation and for predicting the life cycle of forge tooling. Development of such a system is complicated by the fact that wear and degradation of tooling are influenced not only by the die material used but also by numerous process variables such as lubricant, dilution ratio, forging temperature, equipment used, and tool geometries, among others. Phenomenological models available in the literature give a good rule of thumb for selecting materials but do not provide a way to evaluate their performance in the field. Once a material is chosen, there are no proven approaches to create surfaces out of these materials. Coating approaches like PVD and CVD cannot generate the thick coatings necessary to withstand the conditions of hot forging. Welding cannot generate complex surfaces without several secondary operations such as heat treating and machining. If careful procedures are not followed, welds crack and seldom survive forging loads. There is a strong need for an approach to selectively, reliably and precisely deposit a material of choice on an existing surface such that it exhibits not only good tribological properties but also good adhesion to the substrate. This dissertation outlines the development of a new cyclic contact test designed to recreate the intermittent tempering seen in hot forging. This test has been used to validate the use of tempering parameters in modeling the in-service softening of tool steel surfaces. The dissertation also describes an industrial case study, conducted at a forging company, to validate the wear model. It further outlines efforts at Ohio State University to deposit nickel aluminide on an AISI H13 substrate using Laser Engineered Net Shaping (LENS). Results are reported from an array of experiments conducted on a LENS 750 machine at various power levels, table speeds and hatch spacings. Results pertaining to bond quality, surface finish, compositional gradients and hardness are provided. A thermal-based finite element numerical model that was used to simulate the LENS process is also presented, along with some demonstration results.
Monitoring Object Library Usage and Changes
NASA Technical Reports Server (NTRS)
Owen, R. K.; Craw, James M. (Technical Monitor)
1995-01-01
The NASA Ames Numerical Aerodynamic Simulation program Aeronautics Consolidated Supercomputing Facility (NAS/ACSF) supercomputing center services over 1600 users, and has numerous analysts with root access. Several tools have been developed to monitor object library usage and changes. Some of the tools do "noninvasive" monitoring and other tools implement run-time logging even for object-only libraries. The run-time logging identifies who, when, and what is being used. The benefits are that real usage can be measured, unused libraries can be discontinued, and training and optimization efforts can be focused on those numerical methods that are actually used. An overview of the tools will be given and the results will be discussed.
NASA Astrophysics Data System (ADS)
Itzá Balam, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.
2018-03-01
Two main stages of seismic modeling are geological model building and numerical computation of seismic response for the model. The quality of the computed seismic response is partly related to the type of model that is built. Therefore, the model building approaches become as important as seismic forward numerical methods. For this purpose, three petrophysical facies (sands, shales and limestones) are extracted from reflection seismic data and some seismic attributes via the clustering method called Self-Organizing Maps (SOM), which, in this context, serves as a geological model building tool. This model with all its properties is the input to the Optimal Implicit Staggered Finite Difference (OISFD) algorithm to create synthetic seismograms for poroelastic, poroacoustic and elastic media. The results show a good agreement between observed and 2-D synthetic seismograms. This demonstrates that the SOM classification method enables us to extract facies from seismic data and allows us to integrate the lithology at the borehole scale with the 2-D seismic data.
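As a rough illustration of the SOM clustering step described above (not the authors' implementation), the following minimal sketch trains a small self-organizing map on a synthetic attribute matrix; the map size, training schedule, and attributes are assumed.

```python
# Minimal self-organizing map (SOM) sketch for clustering seismic attributes into
# facies classes, illustrating the general technique named in the abstract.
import numpy as np

def train_som(data, map_shape=(6, 6), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    weights = rng.random((map_shape[0], map_shape[1], n_features))
    # Grid coordinates of every map node, used for the neighborhood function
    grid = np.stack(np.meshgrid(np.arange(map_shape[0]), np.arange(map_shape[1]),
                                indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit: node whose weight vector is closest to the sample
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), map_shape)
        # Decaying learning rate and neighborhood radius
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

# Example: cluster three synthetic attributes (e.g., amplitude, frequency, impedance)
attributes = np.random.rand(500, 3)
som_weights = train_som(attributes)
```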
NASA Astrophysics Data System (ADS)
Vlase, A.; Blăjină, O.; Iacob, M.; Darie, V.
2015-11-01
Two issues addressed in research on cutting machinability, establishing the optimum cutting conditions and the optimum cutting regime, still lack sufficient data for their solution. For this reason, this paper proposes optimizing the tool life and the cutting speed for the drilling of a certain stainless steel in terms of maximum productivity. For this purpose, a nonlinear programming mathematical model to maximize the productivity of drilling the steel is developed. The optimum cutting tool life and the associated cutting speed are obtained by solving the mathematical model numerically. Using the proposed model increases the accuracy of productivity predictions for the drilling of a given stainless steel and yields the optimum tool life and cutting speed for maximum productivity. The results presented in this paper can be used in production activities in order to increase the productivity of stainless steel machining. New research directions for specialists in this field may also follow from this paper.
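The paper's nonlinear programming model is not reproduced in the abstract; the sketch below only illustrates the general idea with the classical Taylor tool-life relation and assumed constants, maximizing productivity by minimizing the total time per part over cutting speed.

```python
# Illustrative sketch only: uses the classical Taylor tool-life relation v * T**n = C
# and maximizes production rate numerically. The constants n, C and the handling,
# machining, and tool-change times are assumed placeholder values.
import numpy as np
from scipy.optimize import minimize_scalar

n, C = 0.25, 75.0          # Taylor exponent and constant (assumed)
t_handling = 1.5           # non-productive handling time per part, min (assumed)
t_change = 4.0             # tool change time, min (assumed)
k_machining = 500.0        # machining time per part = k / v, min (assumed)

def time_per_part(v):
    """Total time per part as a function of cutting speed v (m/min)."""
    t_m = k_machining / v                 # machining time per part
    tool_life = (C / v) ** (1.0 / n)      # Taylor tool life at speed v
    return t_handling + t_m + t_m / tool_life * t_change

res = minimize_scalar(time_per_part, bounds=(5.0, 70.0), method="bounded")
v_opt = res.x
T_opt = (C / v_opt) ** (1.0 / n)
print(f"optimum speed ~ {v_opt:.1f} m/min, tool life ~ {T_opt:.1f} min")
```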
Interactive visualization to advance earthquake simulation
Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.
2008-01-01
The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, to evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhaueser 2008.
Using Modern Design Tools for Digital Avionics Development
NASA Technical Reports Server (NTRS)
Hyde, David W.; Lakin, David R., II; Asquith, Thomas E.
2000-01-01
Shrinking development time and the increased complexity of new avionics force the designer to use modern tools and methods during hardware development. Engineers at the Marshall Space Flight Center have successfully upgraded their design flow and used it to develop a Mongoose V-based radiation-tolerant processor board for the International Space Station's Water Recovery System. The design flow, based on hardware description languages, simulation, synthesis, hardware models, and full functional software model libraries, allowed designers to fully simulate the processor board from reset through initialization, before any boards were built. The fidelity of a digital simulation is limited to the accuracy of the models used and how realistically the designer drives the circuit's inputs during simulation. By using the actual silicon during simulation, device modeling errors are reduced. Numerous design flaws were discovered early in the design phase when they could be easily fixed. The use of hardware models and actual MIPS software loaded into full functional memory models also provided checkout of the software development environment. This paper will describe the design flow used to develop the processor board and give examples of errors that were found using the tools. An overview of the processor board firmware will also be covered.
Khan, Farman U; Qamar, Shamsul
2017-05-01
A set of analytical solutions is presented for a model describing the transport of a solute in a fixed-bed reactor of cylindrical geometry subjected to the first (Dirichlet) and third (Danckwerts) type inlet boundary conditions. A linear sorption kinetic process and first-order decay are considered. Cylindrical geometry allows the use of large columns to investigate dispersion, adsorption/desorption and reaction kinetic mechanisms. The finite Hankel and Laplace transform techniques are adopted to solve the model equations. For further analysis, statistical temporal moments are derived from the Laplace-transformed solutions. The developed analytical solutions are compared with the numerical solutions of a high-resolution finite volume scheme. Different case studies are presented and discussed for a series of numerical values corresponding to a wide range of mass transfer and reaction kinetics. Good agreement was observed between the analytical and numerical concentration profiles and moments. The developed solutions are efficient tools for analyzing numerical algorithms, sensitivity analysis and simultaneous determination of the longitudinal and transverse dispersion coefficients from a laboratory-scale radial column experiment. © The Author 2017. Published by Oxford University Press. All rights reserved.
Temperature Measurement and Numerical Prediction in Machining Inconel 718.
Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar
2017-06-30
Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the determination of difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning.
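As background to the two-color (ratio) pyrometry principle, a minimal sketch under Wien's approximation and a graybody assumption is given below; the detection wavelengths and temperature are assumed values, not those of the actual sensor.

```python
# Hedged sketch of ratio (two-color) pyrometry: under Wien's approximation and a
# graybody assumption, the ratio of spectral radiances at two wavelengths yields
# temperature independent of emissivity. Wavelengths below are illustrative.
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(wavelength, T):
    """Spectral radiance (arbitrary scale) under Wien's approximation."""
    return wavelength ** -5 * np.exp(-C2 / (wavelength * T))

def two_color_temperature(ratio, lam1, lam2):
    """Invert the radiance ratio L(lam1)/L(lam2) for temperature (graybody)."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (np.log(ratio) - 5.0 * np.log(lam2 / lam1))

lam1, lam2 = 1.3e-6, 1.55e-6          # assumed detection wavelengths, m
T_true = 900.0 + 273.15               # a cutting-zone temperature in kelvin
ratio = wien_radiance(lam1, T_true) / wien_radiance(lam2, T_true)
print(two_color_temperature(ratio, lam1, lam2))   # recovers ~1173 K
```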
Validation of Groundwater Models: Meaningful or Meaningless?
NASA Astrophysics Data System (ADS)
Konikow, L. F.
2003-12-01
Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.
NASA Technical Reports Server (NTRS)
Likhanskii, Alexandre
2012-01-01
This report is the final report of an SBIR Phase I project. It is identical to the final report submitted, after some proprietary information of an administrative nature has been removed. The development of a numerical simulation tool for a dielectric barrier discharge (DBD) plasma actuator is reported. The objectives of the project were to analyze and predict DBD operation at a wide range of ambient gas pressures. It overcomes the limitations of traditional DBD codes, which are restricted to low-speed applications and have weak prediction capabilities. The software tool allows DBD actuator analysis and prediction for subsonic to hypersonic flow regimes. The simulation tool is based on the VORPAL code developed by Tech-X Corporation. VORPAL's capability of modeling a DBD plasma actuator at low pressures (0.1 to 10 torr) using a kinetic plasma modeling approach, and at moderate to atmospheric pressures (1 to 10 atm) using a hydrodynamic plasma modeling approach, was demonstrated. In addition, results of experiments with a pulsed+bias DBD configuration that were performed for validation purposes are reported.
A platform for dynamic simulation and control of movement based on OpenSim and MATLAB.
Mansouri, Misagh; Reinbolt, Jeffrey A
2012-05-11
Numerical simulations play an important role in solving complex engineering problems and have the potential to revolutionize medical decision making and treatment strategies. In this paper, we combine the rapid model-based design, control systems and powerful numerical method strengths of MATLAB/Simulink with the simulation and human movement dynamics strengths of OpenSim by developing a new interface between the two software tools. OpenSim is integrated with Simulink using the MATLAB S-function mechanism, and the interface is demonstrated using both open-loop and closed-loop control systems. While the open-loop system uses MATLAB/Simulink to separately reproduce the OpenSim Forward Dynamics Tool, the closed-loop system adds the unique feature of feedback control to OpenSim, which is necessary for most human movement simulations. An arm model example was successfully used in both open-loop and closed-loop cases. For the open-loop case, the simulation reproduced results from the OpenSim Forward Dynamics Tool with root mean square (RMS) differences of 0.03° for the shoulder elevation angle and 0.06° for the elbow flexion angle. MATLAB's variable step-size integrator reduced the time required to generate the forward dynamic simulation from 7.1s (OpenSim) to 2.9s (MATLAB). For the closed-loop case, a proportional-integral-derivative controller was used to successfully balance a pole on the model's hand despite random force disturbances on the pole. The new interface presented here not only integrates the OpenSim and MATLAB/Simulink software tools, but will also allow neuroscientists, physiologists, biomechanists, and physical therapists to adapt and generate new solutions as treatments for musculoskeletal conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.
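The closed-loop case relies on proportional-integral-derivative feedback; the following generic sketch (not the authors' Simulink implementation) shows the discrete PID structure on a toy first-order plant, with gains and the plant chosen purely for demonstration.

```python
# Generic discrete PID controller sketch illustrating the feedback structure used
# in the closed-loop interface. Gains and the toy plant are assumed values.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: drive a first-order plant x' = -x + u toward a setpoint of 1.0
dt, x = 0.01, 0.0
controller = PID(kp=8.0, ki=2.0, kd=0.1, dt=dt)
for _ in range(1000):
    u = controller.update(1.0, x)
    x += (-x + u) * dt
print(round(x, 3))   # settles near 1.0
```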
NASA Astrophysics Data System (ADS)
Sigurdson, J.; Tagerud, J.
1986-05-01
A UNIDO publication about machine tools with automatic control discusses the following: (1) numerical control (NC) machine tool perspectives, definition of NC, flexible manufacturing systems, robots and their industrial application, research and development, and sensors; (2) experience in developing a capability in NC machine tools; (3) policy issues; (4) procedures for retrieval of relevant documentation from databases. Diagrams, statistics, and a bibliography are included.
Interferometric correction system for a numerically controlled machine
Burleson, Robert R.
1978-01-01
An interferometric correction system for a numerically controlled machine is provided to improve the positioning accuracy of a machine tool, such as a high-precision numerically controlled machine. A laser interferometer feedback system is used to monitor the positioning of the machine tool, which is moved by command pulses to a positioning system to position the tool. The correction system compares the commanded position, as indicated by a command pulse train applied to the positioning system, with the actual position of the tool as monitored by the laser interferometer. If the tool position lags the commanded position by a preselected error, additional pulses are added to the pulse train applied to the positioning system to advance the tool closer to the commanded position, thereby reducing the lag error. If the actual tool position is leading in comparison to the commanded position, pulses are deleted from the pulse train when the advance error exceeds the preselected error magnitude, to correct the position error of the tool relative to the commanded position.
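A conceptual sketch of the described correction logic follows, with hypothetical pulse weight and error threshold; it only illustrates the add/delete-pulse decision, not the actual hardware implementation.

```python
# Conceptual sketch: the commanded position (accumulated command pulses) is compared
# with the interferometer reading, and pulses are added when the tool lags or
# withheld when it leads by more than a preselected error. Values are assumed.
def corrected_pulses(command_pulses, read_interferometer, pulse_weight=1.0, max_error=2.0):
    """Yield the pulse count actually sent to the positioning system at each step."""
    commanded = 0.0
    for pulse in command_pulses:
        commanded += pulse * pulse_weight
        error = commanded - read_interferometer()    # + : tool lags, - : tool leads
        out = pulse
        if error > max_error:
            out += int(error // pulse_weight)         # add pulses to close the lag
        elif error < -max_error:
            out = max(0, out - int(-error // pulse_weight))   # delete pulses
        yield out

# Example with a sluggish axis that realizes only 90% of each emitted pulse
position = [0.0]
for p in corrected_pulses([1] * 20, lambda: position[0]):
    position[0] += 0.9 * p    # the axis under-travels, so extra pulses get injected
```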
An intercomparison study of TSM, SEBS, and SEBAL using high-resolution imagery and lysimetric data
USDA-ARS?s Scientific Manuscript database
Over the past three decades, numerous remote sensing based ET mapping algorithms were developed. These algorithms provided a robust, economical, and efficient tool for ET estimations at field and regional scales. The Two Source Model (TSM), Surface Energy Balance System (SEBS), and Surface Energy Ba...
USDA-ARS?s Scientific Manuscript database
Numerical modeling is an economical and feasible approach for quantifying the effects of best management practices on phosphorus (P) loadings from agricultural fields. However, tools that simulate both surface and subsurface P pathways are limited and have not been robustly evaluated in tile-drained...
Best predictors for postfire mortality of ponderosa pine trees in the Intermountain West
Carolyn Hull Sieg; Joel D. McMillin; James F. Fowler; Kurt K. Allen; Jose F. Negron; Linda L. Wadleigh; John A. Anhold; Ken E. Gibson
2006-01-01
Numerous wildfires in recent years have highlighted managers' needs for reliable tools to predict postfire mortality of ponderosa pine (Pinus ponderosa Dougl. ex Laws.) trees. General applicability of existing mortality models is uncertain, as researchers have used different sets of variables. We quantified tree attributes, crown and bole fire...
Handbook of Research on Hybrid Learning Models: Advanced Tools, Technologies, and Applications
ERIC Educational Resources Information Center
Wang, Fu Lee, Ed.; Fong, Joseph, Ed.; Kwan, Reggie, Ed.
2010-01-01
Hybrid learning is now the single-greatest trend in education today due to the numerous educational advantages when both traditional classroom learning and e-learning are implemented collectively. This handbook collects emerging research and pedagogies related to the convergence of teaching and learning methods. This significant "Handbook of…
Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices
NASA Astrophysics Data System (ADS)
Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie
2016-09-01
Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.
Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices
Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie
2016-01-01
Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body. PMID:27670953
Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices.
Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie
2016-09-27
Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes' (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.
Choi, Woo June; Qin, Wan; Chen, Chieh-Li; Wang, Jingang; Zhang, Qinqin; Yang, Xiaoqi; Gao, Bruce Z; Wang, Ruikang K
2016-07-01
Optical microangiography (OMAG) is a powerful optical angiographic tool to visualize micro-vascular flow in vivo. Despite numerous demonstrations over the past several years of the qualitative relationship between OMAG and flow, no convincing quantitative relationship has been proven. In this paper, we attempt to quantitatively correlate the OMAG signal with flow. Specifically, we develop a simplified analytical model of the complex OMAG, suggesting that the OMAG signal is a product of the number of particles in an imaging voxel and the decorrelation of the OCT (optical coherence tomography) signal, determined by flow velocity, inter-frame time interval, and wavelength of the light source. Numerical simulation with the proposed model reveals that if the OCT amplitudes are correlated, the OMAG signal is related to the total number of particles across the imaging voxel cross-section per unit time (flux); otherwise it would be saturated but its strength is proportional to the number of particles in the imaging voxel (concentration). The relationship is validated using microfluidic flow phantoms with various preset flow metrics. This work suggests OMAG is a promising quantitative tool for the assessment of vascular flow.
Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.
2005-01-01
Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In contrast, FEMA Flood Insurance Rate Maps (FIRMs) based on the FAN model predict uniformly high flood risk across the study areas without regard for small-scale topography and surficial geology. © 2005 Geological Society of America.
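The hydraulic core of such a raster model is Manning's equation evaluated cell by cell; a minimal sketch with assumed roughness, slope, and depth values follows.

```python
# Minimal sketch of Manning's equation for flow velocity and unit discharge in a
# wide, shallow cell, the building block a raster-based model evaluates repeatedly.
# Roughness, slope, and depth values are assumed for illustration.
import numpy as np

def manning_velocity(depth, slope, n_roughness):
    """Manning's equation, SI units; hydraulic radius ~ depth for wide shallow flow."""
    return (1.0 / n_roughness) * depth ** (2.0 / 3.0) * np.sqrt(slope)

depth = np.array([0.05, 0.2, 0.5])      # flow depths, m
slope = 0.01                            # water-surface slope, m/m
n = 0.035                               # Manning roughness coefficient (assumed)
velocity = manning_velocity(depth, slope, n)
discharge_per_unit_width = velocity * depth
print(velocity, discharge_per_unit_width)
```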
A suite of benchmark and challenge problems for enhanced geothermal systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark; Fu, Pengcheng; McClure, Mark
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, covering stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.
Modeling interface roughness scattering in a layered seabed for normal-incident chirp sonar signals.
Tang, Dajun; Hefner, Brian T
2012-04-01
Downward-looking sonar, such as the chirp sonar, is widely used as a sediment survey tool in shallow water environments. Inversion of geo-acoustic parameters from such sonar data requires the availability of forward models. An exact numerical model is developed to initiate the simulation of the acoustic field produced by such a sonar in the presence of multiple rough interfaces. The sediment layers are assumed to be fluid layers with non-intersecting rough interfaces.
Numerical simulation of water injection into vapor-dominated reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruess, K.
1995-01-01
Water injection into vapor-dominated reservoirs is a means of condensate disposal, as well as a reservoir management tool for enhancing energy recovery and reservoir life. We review different approaches to modeling the complex fluid and heat flow processes during injection into vapor-dominated systems. Vapor pressure lowering, grid orientation effects, and physical dispersion of injection plumes from reservoir heterogeneity are important considerations for a realistic modeling of injection effects. An example of detailed three-dimensional modeling of injection experiments at The Geysers is given.
Computational Analysis of Material Flow During Friction Stir Welding of AA5059 Aluminum Alloys
NASA Astrophysics Data System (ADS)
Grujicic, M.; Arakere, G.; Pandurangan, B.; Ochterbeck, J. M.; Yen, C.-F.; Cheeseman, B. A.; Reynolds, A. P.; Sutton, M. A.
2012-09-01
Workpiece material flow and stirring/mixing during the friction stir welding (FSW) process are investigated computationally. Within the numerical model of the FSW process, the FSW tool is treated as a Lagrangian component while the workpiece material is treated as an Eulerian component. The employed coupled Eulerian/Lagrangian computational analysis of the welding process was of a two-way thermo-mechanical character (i.e., frictional-sliding/plastic-work dissipation is taken to act as a heat source in the thermal-energy balance equation) while temperature is allowed to affect mechanical aspects of the model through temperature-dependent material properties. The workpiece material (AA5059, solid-solution strengthened and strain-hardened aluminum alloy) is represented using a modified version of the classical Johnson-Cook model (within which the strain-hardening term is augmented to take into account the effect of dynamic recrystallization) while the FSW tool material (AISI H13 tool steel) is modeled as an isotropic linear-elastic material. Within the analysis, the effects of some of the FSW key process parameters are investigated (e.g., weld pitch, tool tilt-angle, and the tool pin-size). The results pertaining to the material flow during FSW are compared with their experimental counterparts. It is found that, for the most part, experimentally observed material-flow characteristics are reproduced within the current FSW-process model.
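For reference, the classical Johnson-Cook flow-stress law (without the dynamic-recrystallization augmentation mentioned above) can be written as a short function; all material constants below are placeholder values, not calibrated AA5059 data.

```python
# Sketch of the classical Johnson-Cook flow-stress law underlying the workpiece model.
# Constants are illustrative assumptions only.
import numpy as np

def johnson_cook_stress(strain, strain_rate, T,
                        A=300e6, B=400e6, n=0.3, C=0.015, m=1.0,
                        strain_rate_ref=1.0, T_room=293.0, T_melt=873.0):
    """Flow stress (Pa) = (A + B*eps^n) * (1 + C*ln(rate ratio)) * (1 - T*^m)."""
    hardening = A + B * strain ** n
    rate_term = 1.0 + C * np.log(np.maximum(strain_rate / strain_rate_ref, 1e-12))
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    softening = 1.0 - T_star ** m
    return hardening * rate_term * softening

print(johnson_cook_stress(strain=0.2, strain_rate=100.0, T=623.0) / 1e6, "MPa")
```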
Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing
NASA Technical Reports Server (NTRS)
Ordaz, Irian
2011-01-01
Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capability of VSP is demonstrated for component-based point definition geometries in a conceptual analysis and design framework.
The QuakeSim Project: Numerical Simulations for Active Tectonic Processes
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry
2004-01-01
In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.
Modal identification of spindle-tool unit in high-speed machining
NASA Astrophysics Data System (ADS)
Gagnol, Vincent; Le, Thien-Phu; Ray, Pascal
2011-10-01
The accurate knowledge of high-speed motorised spindle dynamic behaviour during machining is important in order to ensure the reliability of machine tools in service and the quality of machined parts. More specifically, the prediction of stable cutting regions, which is a critical requirement for high-speed milling operations, requires the accurate estimation of tool/holder/spindle set dynamic modal parameters. These estimations are generally obtained through Frequency Response Function (FRF) measurements of the non-rotating spindle. However, significant changes in modal parameters are expected to occur during operation, due to high-speed spindle rotation. The spindle's modal variations are highlighted through an integrated finite element model of the dynamic high-speed spindle-bearing system, taking into account rotor dynamics effects. The dependency of dynamic behaviour on speed range is then investigated and determined with accuracy. The objective of the proposed paper is to validate these numerical results through an experiment-based approach. Hence, an experimental setup is elaborated to measure rotating tool vibration during the machining operation in order to determine the spindle's modal frequency variation with respect to spindle speed in an industrial environment. The identification of natural frequencies of the spindle under rotating conditions is challenging, due to the low number of sensors and the presence of many harmonics in the measured signals. In order to overcome these issues and to extract the characteristics of the system, the spindle modes are determined through a 3-step procedure. First, spindle modes are highlighted using the Frequency Domain Decomposition (FDD) technique, with a new formulation at the considered rotating speed. These extracted modes are then analysed through the value of their respective damping ratios in order to separate the harmonics component from structural spindle natural frequencies. Finally, the stochastic properties of the modes are also investigated by considering the probability density of the retained modes. Results show a good correlation between numerical and experiment-based identified frequencies. The identified spindle-tool modal properties during machining allow the numerical model to be considered as representative of the real dynamic properties of the system.
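The FDD step can be sketched as an SVD of the cross-spectral density matrix at each frequency line; the example below uses synthetic two-channel signals and assumed sampling parameters, not the measured spindle data.

```python
# Hedged sketch of Frequency Domain Decomposition (FDD): build the cross-spectral
# density matrix of the measured channels, take its SVD at each frequency, and read
# candidate modes from peaks of the first singular value. Signals are synthetic.
import numpy as np
from scipy.signal import csd, find_peaks

fs, duration = 2048.0, 10.0
t = np.arange(0, duration, 1.0 / fs)
rng = np.random.default_rng(1)
# Two synthetic accelerometer channels sharing structural modes near 180 and 410 Hz
modes = np.sin(2 * np.pi * 180 * t) + 0.5 * np.sin(2 * np.pi * 410 * t)
signals = np.vstack([modes + 0.3 * rng.standard_normal(t.size),
                     0.8 * modes + 0.3 * rng.standard_normal(t.size)])

n_ch, nperseg = signals.shape[0], 1024
freqs, _ = csd(signals[0], signals[0], fs=fs, nperseg=nperseg)
G = np.zeros((freqs.size, n_ch, n_ch), dtype=complex)   # CSD matrix G[f, i, j]
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(signals[i], signals[j], fs=fs, nperseg=nperseg)

# First singular value of G at each frequency line; peaks indicate candidate modes
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(freqs.size)])
peaks, _ = find_peaks(s1, height=0.1 * s1.max())
print(freqs[peaks])
```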
Mars Exploration Rover Terminal Descent Mission Modeling and Simulation
NASA Technical Reports Server (NTRS)
Raiszadeh, Behzad; Queen, Eric M.
2004-01-01
Because of NASA's added reliance on simulation for successful interplanetary missions, the MER mission has developed a detailed EDL trajectory modeling and simulation. This paper summarizes how the MER EDL sequence of events is modeled, verification of the methods used, and the inputs. This simulation is built upon a multibody parachute trajectory simulation tool, developed in POST II, that accurately simulates the trajectory of multiple vehicles in flight with interacting forces. In this model the parachute and the suspended bodies are treated as 6 Degree-of-Freedom (6 DOF) bodies. The terminal descent phase of the mission consists of several Entry, Descent, Landing (EDL) events, such as parachute deployment, heatshield separation, deployment of the lander from the backshell, deployment of the airbags, RAD firings, TIRS firings, etc. For an accurate, reliable simulation these events need to be modeled seamlessly and robustly so that the simulations will remain numerically stable during Monte-Carlo simulations. This paper also summarizes how the events have been modeled, the numerical issues, and modeling challenges.
Kim, Cheol-Hee; Park, Jin-Ho; Park, Cheol-Jin; Na, Jin-Gyun
2004-03-01
The Chemical Accidents Response Information System (CARIS) was developed at the Center for Chemical Safety Management in South Korea in order to track and predict the dispersion of hazardous chemicals in the case of an accident or terrorist attack involving chemical companies. The main objective of CARIS is to facilitate an efficient emergency response to hazardous chemical accidents by rapidly providing key information in the decision-making process. In particular, the atmospheric modeling system implemented in CARIS, which is composed of a real-time numerical weather forecasting model and an air pollution dispersion model, can be used as a tool to forecast concentrations and to provide a wide range of assessments associated with various hazardous chemicals in real time. This article introduces the components of CARIS and describes its operational modeling system. Some examples of the operational modeling system and its use for emergency preparedness are presented and discussed. Finally, this article evaluates the current numerical weather prediction model for Korea.
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence
Kelly, David; Majda, Andrew J.; Tong, Xin T.
2015-01-01
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence.
Kelly, David; Majda, Andrew J; Tong, Xin T
2015-08-25
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature.
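To make the ensemble filtering machinery concrete, a minimal perturbed-observation EnKF analysis step is sketched below on a toy linear observation problem; the ensemble size, observation operator, and covariances are assumed values.

```python
# Minimal stochastic ensemble Kalman filter analysis step; the toy linear observation
# operator, covariances, and ensemble size are illustrative assumptions.
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng):
    """Update an (n_members, n_state) ensemble with a perturbed-observation EnKF step."""
    n_members = ensemble.shape[0]
    X_mean = ensemble.mean(axis=0)
    A = ensemble - X_mean                       # ensemble anomalies
    P = A.T @ A / (n_members - 1)               # sample forecast covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    # Perturbed observations give each member its own innovation
    Y = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=n_members)
    return ensemble + (Y - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ensemble = rng.normal(0.0, 1.0, size=(50, 3))   # 50 members, 3 state variables
H = np.array([[1.0, 0.0, 0.0]])                 # observe the first state variable
R = np.array([[0.1]])
updated = enkf_analysis(ensemble, np.array([1.5]), H, R, rng)
print(updated.mean(axis=0))
```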
Constitutive behavior and progressive mechanical failure of electrodes in lithium-ion batteries
NASA Astrophysics Data System (ADS)
Zhang, Chao; Xu, Jun; Cao, Lei; Wu, Zenan; Santhanagopalan, Shriram
2017-07-01
The electrodes of lithium-ion batteries (LIB) are known to be brittle and to fail earlier than the separators during an external crush event. Thus, the understanding of mechanical failure mechanism for LIB electrodes (anode and cathode) is critical for the safety design of LIB cells. In this paper, we present experimental and numerical studies on the constitutive behavior and progression of failure in LIB electrodes. Mechanical tests were designed and conducted to evaluate the constitutive properties of porous electrodes. Constitutive models were developed to describe the stress-strain response of electrodes under uniaxial tensile and compressive loads. The failure criterion and a damage model were introduced to model their unique tensile and compressive failure behavior. The failure mechanism of LIB electrodes was studied using the blunt rod test on dry electrodes, and numerical models were built to simulate progressive failure. The different failure processes were examined and analyzed in detail numerically, and correlated with experimentally observed failure phenomena. The test results and models improve our understanding of failure behavior in LIB electrodes, and provide constructive insights on future development of physics-based safety design tools for battery structures under mechanical abuse.
Constitutive behavior and progressive mechanical failure of electrodes in lithium-ion batteries
Zhang, Chao; Xu, Jun; Cao, Lei; ...
2017-05-05
The electrodes of lithium-ion batteries (LIB) are known to be brittle and to fail earlier than the separators during an external crush event. Thus, the understanding of mechanical failure mechanism for LIB electrodes (anode and cathode) is critical for the safety design of LIB cells. In this paper, we present experimental and numerical studies on the constitutive behavior and progression of failure in LIB electrodes. Mechanical tests were designed and conducted to evaluate the constitutive properties of porous electrodes. Constitutive models were developed to describe the stress-strain response of electrodes under uniaxial tensile and compressive loads. The failure criterion and a damage model were introduced to model their unique tensile and compressive failure behavior. The failure mechanism of LIB electrodes was studied using the blunt rod test on dry electrodes, and numerical models were built to simulate progressive failure. The different failure processes were examined and analyzed in detail numerically, and correlated with experimentally observed failure phenomena. Finally, the test results and models improve our understanding of failure behavior in LIB electrodes, and provide constructive insights on future development of physics-based safety design tools for battery structures under mechanical abuse.
NASA Astrophysics Data System (ADS)
Remillieux, Marcel C.; Pasareanu, Stephanie M.; Svensson, U. Peter
2013-12-01
Exterior propagation of impulsive sound and its transmission through three-dimensional, thin-walled elastic structures, into enclosed cavities, are investigated numerically in the framework of linear dynamics. A model was developed in the time domain by combining two numerical tools: (i) exterior sound propagation and induced structural loading are computed using the image-source method for the reflected field (specular reflections) combined with an extension of the Biot-Tolstoy-Medwin method for the diffracted field, (ii) the fully coupled vibro-acoustic response of the interior fluid-structure system is computed using a truncated modal-decomposition approach. In the model for exterior sound propagation, it is assumed that all surfaces are acoustically rigid. Since coupling between the structure and the exterior fluid is not enforced, the model is applicable to the case of a light exterior fluid and arbitrary interior fluid(s). The structural modes are computed with the finite-element method using shell elements. Acoustic modes are computed analytically assuming acoustically rigid boundaries and rectangular geometries of the enclosed cavities. This model is verified against finite-element solutions for the cases of rectangular structures containing one and two cavities, respectively.
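The specular (image-source) part of such a model can be illustrated for the simplest case of a rigid ground plane, where the reflected field is that of a mirror-image source; the geometry and sampling values below are assumed, and diffraction is omitted.

```python
# Simplified illustration of the image-source idea: for a rigid plane the reflected
# field equals that of a mirror-image source, so the impulse response has a direct
# and an image arrival with 1/r spreading. Geometry and sample rate are assumed.
import numpy as np

c, fs = 343.0, 48000.0                      # speed of sound (m/s), sample rate (Hz)
src = np.array([0.0, 0.0, 2.0])             # source position, rigid plane at z = 0
rec = np.array([10.0, 0.0, 1.5])            # receiver position
img = src * np.array([1.0, 1.0, -1.0])      # image of the source in the rigid plane

def arrival(p_src, p_rec):
    r = np.linalg.norm(p_rec - p_src)
    return r / c, 1.0 / r                   # delay (s) and spherical-spreading amplitude

ir = np.zeros(int(0.1 * fs))
for delay, amp in (arrival(src, rec), arrival(img, rec)):
    ir[int(round(delay * fs))] += amp       # rigid surface: reflection coefficient +1
print(np.nonzero(ir)[0] / fs)               # arrival times of direct and reflected paths
```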
Flexible Environmental Modeling with Python and Open GIS
NASA Astrophysics Data System (ADS)
Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann
2015-04-01
Numerical modeling now represents a prominent task in environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow an efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code reviewing and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models, associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command line programs. However, there is a need for a flexible graphical user interface allowing an efficient processing of the geospatial data that accompanies any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once the input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
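A minimal sketch of such a scripted workflow is given below, assuming the rasterio package for raster I/O; the file names, the model executable, and its command-line options are hypothetical placeholders, not tools named in the abstract.

```python
# Sketch of a Python-scripted pre-processing and model-call workflow. The rasterio
# package is assumed for raster I/O; `groundwater_model` and its options are
# hypothetical placeholders for whatever command-line model is being driven.
import subprocess
import numpy as np
import rasterio

# Pre-processing: read a hydraulic-conductivity raster prepared in the GIS
with rasterio.open("conductivity.tif") as src:
    k_field = src.read(1)
    profile = src.profile

# Example manipulation before the model call: fill no-data cells and cap outliers
k_field = np.clip(np.nan_to_num(k_field, nan=np.nanmedian(k_field)), 1e-6, 1e-2)

with rasterio.open("conductivity_clean.tif", "w", **profile) as dst:
    dst.write(k_field.astype(profile["dtype"]), 1)

# Run the (hypothetical) command-line model once per parameter set
for multiplier in [0.5, 1.0, 2.0]:
    subprocess.run(["groundwater_model", "--k-raster", "conductivity_clean.tif",
                    "--k-multiplier", str(multiplier)], check=True)
```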
Summary of FY15 results of benchmark modeling activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arguello, J. Guadalupe
2015-08-01
Sandia is a contributing partner in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt, during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere, in the post-operating phase.
NASA Astrophysics Data System (ADS)
Schomer, Laura; Liewald, Mathias; Riedmüller, Kim Rouven
2018-05-01
Metal-ceramic Interpenetrating Phase Composites (IPC) belong to a special subcategory of composite materials and reveal enhanced properties compared to conventional composite materials. Currently, IPC are produced by infiltration of a ceramic open-pore body with liquid metal, applying high pressure and/or high temperature to avoid residual porosity. However, these IPC are not able to reach their full potential because of structural damage and interface reactions occurring during the manufacturing process. In contrast, manufacturing IPC using semi-solid forming technology offers great prospects due to relatively low processing temperatures and reduced mechanical pressure. In this context, this paper focuses on numerical investigations conducted with the FLOW-3D software to gain a deeper understanding of the infiltration of open-pore bodies with semi-solid materials. For the flow simulation analysis, a geometric model and different porous media drag models have been used. They have been adjusted and compared to obtain a precise description of the infiltration process. Based on these fundamental numerical investigations, this paper also presents numerical investigations used for the basic design of a semi-solid forming tool. Here, the development of the flow front and of the pressure during infiltration forms the basis of the evaluation. The use of open and closed tool cavities combined with various geometries of the upper die shows different results with respect to these evaluation criteria. Furthermore, different overflows were designed and their effects on the pressure at the end of the infiltration process were investigated. Thus, this paper provides a general guideline for the design of a tool for manufacturing metal-ceramic IPC using semi-solid forming.
Finite element study of the thermo-chemo-mechanical behavior of monolithic ramming paste (Etude par éléments finis du comportement thermo-chimio-mécanique de la pâte monolithique)
NASA Astrophysics Data System (ADS)
Girard, Pierre-Luc
The aluminum industry is in fierce international competition, requiring constant improvement of electrolysis cell effectiveness and longevity. The selection of the cell's material components is therefore an important factor in increasing cell life. The ramming paste, used to seal the cathode lining, is compacted in the joints between the cathode and the side wall of the cell. It is a complex thermo-chemo-reactive material whose properties change with the evolution of its baking level. The objective of this project is therefore to propose a thermo-chemo-mechanical constitutive law for the ramming paste and implement it in the finite element software ANSYS. A constitutive model was first chosen from the available literature on the subject. It is a pressure-dependent model that uses hardening, softening and baking mechanisms in its definition to mimic the behavior of carbon-based materials. Subsequently, the numerical tool was validated using the finite element toolbox FESh++, which contains the most representative thermo-chemo-mechanical constitutive law for carbon-based materials at this time. Finally, a validation of the experimental setup BERTA (Banc d'essai de resistance thermomecanique ALCAN) was made in anticipation of a larger-scale experimental validation of the constitutive law in the near future. However, the analysis of the results shows that BERTA is not suited to adequately measuring the mechanical deformation of this kind of material. Following this project, the numerical tool will be used in numerical simulations to account for the various effects of the baking of the ramming paste during cell start-up. This new tool will help the industrial partner to better understand Hall-Héroult cell start-up and to optimize this critical step.
A review of laboratory and numerical modelling in volcanology
NASA Astrophysics Data System (ADS)
Kavanagh, Janine L.; Engwell, Samantha L.; Martin, Simon A.
2018-04-01
Modelling has been used in the study of volcanic systems for more than 100 years, building upon the approach first applied by Sir James Hall in 1815. Informed by observations of volcanological phenomena in nature, including eye-witness accounts of eruptions, geophysical or geodetic monitoring of active volcanoes, and geological analysis of ancient deposits, laboratory and numerical models have been used to describe and quantify volcanic and magmatic processes that span orders of magnitude in time and space. We review the use of laboratory and numerical modelling in volcanological research, focussing on sub-surface and eruptive processes including the accretion and evolution of magma chambers, the propagation of sheet intrusions, the development of volcanic flows (lava flows, pyroclastic density currents, and lahars), volcanic plume formation, and ash dispersal. When first introduced into volcanology, laboratory experiments and numerical simulations marked a transition in approach from broadly qualitative to increasingly quantitative research. These methods are now widely used in volcanology to describe the physical and chemical behaviours that govern volcanic and magmatic systems. Creating simplified models of highly dynamical systems enables volcanologists to simulate and potentially predict the nature and impact of future eruptions. These tools have provided significant insights into many aspects of the volcanic plumbing system and eruptive processes. The largest scientific advances in volcanology have come from a multidisciplinary approach, applying developments in diverse fields such as engineering and computer science to study magmatic and volcanic phenomena. A global effort in the integration of laboratory and numerical volcano modelling is now required to tackle key problems in volcanology, and points towards the importance of benchmarking exercises and the need for protocols to be developed so that models are routinely tested against real-world data.
Prescriptive and Descriptive Gauge Symmetry in Finite-Dimensional Dynamical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurfil, Pini
2007-02-07
Gauge theories in physics constitute a fundamental tool for modeling interactions among electromagnetic, weak and strong forces. They have been used in a myriad of fields, ranging from sub-atomic physics to cosmology. The basic mathematical tool generating the gauge theories is that of symmetry, i.e. a redundancy in the description of the system. Although symmetries have long been recognized as a fundamental tool for solving ordinary differential equations, they have not been formally categorized as gauge theories. In this paper, we show how simple systems described by ordinary differential equations are prone to exhibit gauge symmetry, and discuss a few practical applications of this approach. In particular, we utilize the notion of gauge symmetry to question some common engineering misconceptions of chaotic and stochastic phenomena, and show that seemingly 'disordered' (deterministic) or 'random' (stochastic) behaviors can be 'ordered'. This brings into play the notion of observation; we show that temporal observations may be misleading when used for chaos detection. From a practical standpoint, we use gauge symmetry to considerably mitigate the numerical truncation error of numerical integrations.
NASA Astrophysics Data System (ADS)
O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.
2012-02-01
Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using the Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both the OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map, but used an a priori calculated average image derived from an ensemble of normal cases.
Using Computational and Mechanical Models to Study Animal Locomotion
Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas
2012-01-01
Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026
Operational Ocean Modelling with the Harvard Ocean Prediction System
2008-11-01
TNO report TNO-DV2008 A417, November 2008. Authors: dr. F.P.A. Lam, dr. ir. M.W. Schouten, dr. L.A. te Raa. In the area of theory and implementation of numerical schemes and parameterizations, ocean models have grown from experimental tools to full-blown ocean [...]
Observation of plasmonic dipolar anti-bonding mode in silver nanoring structures.
Ye, Jian; Van Dorpe, Pol; Lagae, Liesbet; Maes, Guido; Borghs, Gustaaf
2009-11-18
We report on a clear experimental observation of the plasmonic dipolar anti-bonding resonance in silver nanorings. The data can be explained effectively by the plasmon hybridization model, which is confirmed by the numerical calculations of the electromagnetic field and surface charge distribution profiles. The experimental demonstration of the plasmon hybridization model indicates its usefulness as a valuable tool to understand, design and predict optical properties of metallic nanostructures.
Analysis of Electrowetting Dynamics with Level Set Method
NASA Astrophysics Data System (ADS)
Park, Jun Kwon; Hong, Jiwoo; Kang, Kwan Hyoung
2009-11-01
Electrowetting is a versatile tool for handling tiny droplets and forms a backbone of digital microfluidics. Numerical analysis is necessary to fully understand the dynamics of electrowetting, especially when designing electrowetting-based liquid lenses and reflective displays. We developed a numerical method to analyze general contact-line problems, incorporating dynamic contact angle models. The method was applied to the analysis of the spreading process of a sessile droplet for step input voltages in electrowetting. The result was compared with experimental data and with an analytical result based on the spectral method. It is shown that contact line friction significantly affects the contact line motion and the oscillation amplitude. The pinning process of the contact line was well represented by including the hysteresis effect in the contact angle models.
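For context, a minimal numerical illustration of the static Lippmann-Young relation that is commonly used as the equilibrium limit of such contact-angle models; this is not the authors' level-set code, and the material values below are assumed.

```python
# Static Lippmann-Young estimate of the electrowetting contact angle.
# Material values are assumed for illustration only.
import numpy as np

eps0 = 8.854e-12            # vacuum permittivity, F/m
eps_r = 3.0                 # assumed dielectric constant of the insulating layer
d = 1e-6                    # assumed dielectric thickness, m
gamma = 0.072               # liquid-vapour surface tension, N/m (water)
theta0 = np.radians(110.0)  # assumed Young angle at zero voltage

V = np.linspace(0.0, 60.0, 7)                                    # applied voltages, V
cos_theta = np.cos(theta0) + eps0 * eps_r * V**2 / (2.0 * gamma * d)
theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))     # crude saturation clipping

for v, t in zip(V, theta):
    print(f"V = {v:5.1f} V  ->  contact angle ~ {t:5.1f} deg")
```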
Modeling radium and radon transport through soil and vegetation
Kozak, J.A.; Reeves, H.W.; Lewis, B.A.
2003-01-01
A one-dimensional flow and transport model was developed to describe the movement of two fluid phases, gas and water, within a porous medium and the transport of 226Ra and 222Rn within and between these two phases. Included in this model is the vegetative uptake of water and aqueous 226Ra and 222Rn that can be extracted from the soil via the transpiration stream. The mathematical model is formulated through a set of phase balance equations and a set of species balance equations. Mass exchange, sink terms and the dependence of physical properties upon phase composition couple the two sets of equations. Numerical solution of each set, with iteration between the sets, is carried out, leading to a set-iterative compositional model. The Petrov-Galerkin finite element approach is used to allow for upstream weighting if required for a given simulation. Mass lumping improves solution convergence and stability behavior. The resulting numerical model was applied to four problems and was found to produce accurate, mass-conservative solutions when compared to published experimental and numerical results and theoretical column experiments. Preliminary results suggest that the model can be used as an investigative tool to determine the feasibility of phytoremediating radium- and radon-contaminated soil. © 2003 Elsevier Science B.V. All rights reserved.
Evaluation of a pumping test of the Snake River Plain aquifer using axial-flow numerical modeling
NASA Astrophysics Data System (ADS)
Johnson, Gary S.; Frederick, David B.; Cosgrove, Donna M.
2002-06-01
The Snake River Plain aquifer in southeast Idaho is hosted in a thick sequence of layered basalts and interbedded sediments. The degree to which the layering impedes vertical flow has not been well understood, yet is a feature that may exert a substantial control on the movement of contaminants. An axial-flow numerical model, RADFLOW, was calibrated to pumping test data collected by a straddle-packer system deployed at 23 depth intervals in four observation wells to evaluate conceptual models and estimate properties of the Snake River Plain aquifer at the Idaho National Engineering and Environmental Laboratory. A delayed water-table response observed in intervals beneath a sediment interbed was best reproduced with a three-layer simulation. The results demonstrate the hydraulic significance of this interbed as a semi-confining layer. Vertical hydraulic conductivity of the sediment interbed was estimated to be about three orders of magnitude less than vertical hydraulic conductivity of the lower basalt and upper basalt units. The numerical model was capable of representing aquifer conceptual models that could not be represented with any single analytical technique. The model proved to be a useful tool for evaluating alternative conceptual models and estimating aquifer properties in this application.
Computational and experimental model of transdermal iontophorethic drug delivery system.
Filipovic, Nenad; Saveljic, Igor; Rac, Vladislav; Graells, Beatriz Olalde; Bijelic, Goran
2017-11-30
The concept of iontophoresis is often applied to increase the transdermal transport of drugs and other bioactive agents into the skin or other tissues. It is a non-invasive drug delivery method which involves electromigration and electroosmosis in addition to diffusion, and it has been shown to be a viable alternative to conventional administration routes such as oral, hypodermic and intravenous injection. In this study we investigated, experimentally and numerically, in vitro drug delivery of dexamethasone sodium phosphate to porcine skin. Different current densities, delivery durations and drug loads were investigated experimentally and introduced as boundary conditions for numerical simulations. The Nernst-Planck equation was used for calculation of the active substance flux through an equivalent model of homogeneous hydrogel and skin layers. The obtained numerical results were in good agreement with experimental observations. A comprehensive in-silico platform, which includes appropriate numerical tools for fitting, could contribute to the design of iontophoretic drug-delivery devices and to correct dosage and drug clearance profiles, as well as making it possible to perform much faster in-silico experiments to better determine parameters and performance criteria of iontophoretic drug delivery. Copyright © 2017 Elsevier B.V. All rights reserved.
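A minimal sketch of the one-dimensional Nernst-Planck flux (diffusion plus electromigration) named above; the concentration and potential profiles, diffusion coefficient and charge number are assumed placeholders, not the study's fitted values.

```python
# 1D Nernst-Planck flux: J = -D dc/dx - (z F / (R T)) D c dphi/dx.
# All parameter values are assumed for illustration.
import numpy as np

F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol K)
T = 305.0        # skin temperature, K (assumed)
D = 5e-10        # drug ion diffusion coefficient, m^2/s (assumed)
z = -2           # assumed charge number of the dexamethasone phosphate anion

x = np.linspace(0.0, 1e-3, 201)          # 1 mm hydrogel/skin column
c = 10.0 * (1.0 - x / x[-1])             # assumed linear concentration profile, mol/m^3
phi = 0.5 * (1.0 - x / x[-1])            # assumed linear potential drop, V

dcdx = np.gradient(c, x)
dphidx = np.gradient(phi, x)

flux = -D * dcdx - (z * F / (R * T)) * D * c * dphidx
print("flux at the donor side ~ %.3e mol/(m^2 s)" % flux[0])
```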
NASA Astrophysics Data System (ADS)
Skibinski, Jakub; Caban, Piotr; Wejrzanowski, Tomasz; Kurzydlowski, Krzysztof J.
2014-10-01
In the present study, numerical simulations of the epitaxial growth of gallium nitride in the Metal Organic Vapor Phase Epitaxy reactor AIX-200/4RF-S are addressed. Epitaxial growth means crystal growth that progresses while inheriting the laminar structure and the orientation of the substrate crystals. One of the technological problems is to obtain a homogeneous growth rate over the main deposit area. Since there are many factors influencing the reaction over the crystal area, such as temperature, pressure, gas flow or reactor geometry, it is difficult to design an optimal process. Given that it is impossible to determine experimentally the exact distribution of heat and mass transfer inside the reactor during crystal growth, modeling is the only way to understand the process precisely. Numerical simulations make it possible to understand the epitaxial process by calculating the heat and mass transfer distributions during the growth of gallium nitride. Including chemical reactions in the numerical model allows the growth rate on the substrate to be calculated and the optimal process conditions for obtaining the most homogeneous product to be estimated.
NASA Technical Reports Server (NTRS)
Statler, Irving C. (Editor)
2007-01-01
The Aviation System Monitoring and Modeling (ASMM) Project was one of the projects within NASA's Aviation Safety Program from 1999 through 2005. The objective of the ASMM Project was to develop the technologies to enable the aviation industry to undertake a proactive approach to the management of its system-wide safety risks. The ASMM Project entailed four interdependent elements: (1) Data Analysis Tools Development - develop tools to convert numerical and textual data into information; (2) Intramural Monitoring - test and evaluate the data analysis tools in operational environments; (3) Extramural Monitoring - gain insight into the aviation system performance by surveying its front-line operators; and (4) Modeling and Simulations - provide reliable predictions of the system-wide hazards, their causal factors, and their operational risks that may result from the introduction of new technologies, new procedures, or new operational concepts. This report is a documentation of the history of this highly successful project and of its many accomplishments and contributions to improved safety of the aviation system.
Transposons As Tools for Functional Genomics in Vertebrate Models.
Kawakami, Koichi; Largaespada, David A; Ivics, Zoltán
2017-11-01
Genetic tools and mutagenesis strategies based on transposable elements are currently under development with a vision to link primary DNA sequence information to gene functions in vertebrate models. By virtue of their inherent capacity to insert into DNA, transposons can be developed into powerful tools for chromosomal manipulations. Transposon-based forward mutagenesis screens have numerous advantages including high throughput, easy identification of mutated alleles, and providing insight into genetic networks and pathways based on phenotypes. For example, the Sleeping Beauty transposon has become highly instrumental to induce tumors in experimental animals in a tissue-specific manner with the aim of uncovering the genetic basis of diverse cancers. Here, we describe a battery of mutagenic cassettes that can be applied in conjunction with transposon vectors to mutagenize genes, and highlight versatile experimental strategies for the generation of engineered chromosomes for loss-of-function as well as gain-of-function mutagenesis for functional gene annotation in vertebrate models, including zebrafish, mice, and rats. Copyright © 2017 Elsevier Ltd. All rights reserved.
SimVascular: An Open Source Pipeline for Cardiovascular Simulation.
Updegrove, Adam; Wilson, Nathan M; Merkow, Jameson; Lan, Hongzhi; Marsden, Alison L; Shadden, Shawn C
2017-03-01
Patient-specific cardiovascular simulation has become a paradigm in cardiovascular research and is emerging as a powerful tool in basic, translational and clinical research. In this paper we discuss the recent development of a fully open-source SimVascular software package, which provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation and analysis. This package serves as a research tool for cardiovascular modeling and simulation, and has contributed to numerous advances in personalized medicine, surgical planning and medical device design. The SimVascular software has recently been refactored and expanded to enhance functionality, usability, efficiency and accuracy of image-based patient-specific modeling tools. Moreover, SimVascular previously required several licensed components that hindered new user adoption and code management and our recent developments have replaced these commercial components to create a fully open source pipeline. These developments foster advances in cardiovascular modeling research, increased collaboration, standardization of methods, and a growing developer community.
NASA Astrophysics Data System (ADS)
Amarnath, N. S.; Pound, M. W.; Wolfire, M. G.
The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS, located at http://dustem.astro.umd.edu) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 4 years. DIRT uses results from a number of numerical models of astrophysical processes, and has an AWT based user interface. DIRT has been refactored to decouple data representation from plotting and curve fitting. This makes it easier to add new kinds of astrophysical models, use the plotter in other applications, migrate the user interface to Swing components, and modify the user interface to add functionality (for example, SIRTF tools). DIRT is now an extension of two generic libraries, one of which manages data representation and caching, and the second of which manages plotting and curve fitting. This project is an example of refactoring with no impact on user interface, so the existing user community was not affected.
Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts
NASA Astrophysics Data System (ADS)
Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo
This paper demonstrates the successful printing and the optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1 × 1 × 1 cm), we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model of the SLM process is constructed and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before printing any samples. Establishing a parameter window provides the user with freedom for parameter selection, such as choosing parameters that result in the fastest print speed.
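A minimal sketch of one way a density response surface can be fitted to such a three-parameter study by least squares; the sample data below are synthetic placeholders, not the measured densities, and the quadratic model form is an assumption rather than the paper's DOE model.

```python
# Least-squares fit of a quadratic response surface rho(P, v, h) to
# relative-density data from an SLM parameter study. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
P = rng.uniform(150, 350, 50)      # laser power, W
v = rng.uniform(400, 1200, 50)     # scan speed, mm/s
h = rng.uniform(0.08, 0.14, 50)    # hatch width, mm
E = P / (v * h * 0.03)             # energy-density proxy (0.03 mm layer, assumed)
rho = 99.5 - 0.002 * (E - 80.0) ** 2 + rng.normal(0, 0.1, 50)  # synthetic density, %

# rho ~ b0 + b1*P + b2*v + b3*h + b4*P^2 + b5*v^2 + b6*h^2
X = np.column_stack([np.ones_like(P), P, v, h, P**2, v**2, h**2])
coef, *_ = np.linalg.lstsq(X, rho, rcond=None)
print("fitted coefficients:", np.round(coef, 5))
```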
Camboulives, A-R; Velluet, M-T; Poulenard, S; Saint-Antonin, L; Michau, V
2018-02-01
The performance of an optical communication link between the ground and a geostationary satellite can be impaired by scintillation, beam wandering, and beam spreading due to propagation through atmospheric turbulence. These effects on the link performance can be mitigated by tracking and by error correction codes coupled with interleaving. Precise numerical tools capable of describing the irradiance fluctuations statistically and of creating an irradiance time series are needed to characterize the benefits of these techniques and to optimize them. Wave optics propagation methods have proven their capability of modeling the effects of atmospheric turbulence on a beam, but they are known to be computationally intensive. We present an analytical-numerical model which provides good results for the probability density functions of irradiance fluctuations as well as a time series, with substantial savings in time and computational resources.
NASA Astrophysics Data System (ADS)
Languy, Fabian; Vandenrijt, Jean-François; Saint-Georges, Philippe; Georges, Marc P.
2017-06-01
The manufacture of mirrors for space applications is expensive, and the requirements on optical performance increase over the years. To achieve higher performance, larger mirrors are manufactured, but the larger the mirror, the higher the sensitivity to temperature variation and therefore the greater the degradation of optical performance. To avoid the use of an expensive thermal regulation, we need to develop tools able to predict how optics behave under thermal constraints. This paper presents the comparison between experimental mirror surface deformation and theoretical results from a multiphysics model. The local displacements of the mirror surface have been measured using electronic speckle pattern interferometry (ESPI), and the deformation itself has been calculated by subtracting the rigid body motion. After validation of the mechanical model, experimental and numerical wavefront errors are compared.
Numerical investigation of wind loads on an operating heliostat
NASA Astrophysics Data System (ADS)
Ghanadi, Farzin; Yu, Jeremy; Emes, Matthew; Arjomandi, Maziar; Kelso, Richard
2017-06-01
The velocity fluctuations within the atmospheric boundary layer (ABL) and the wind direction are two important parameters which affect the resulting loads on heliostats. In this study, the drag force on a square heliostat within the ABL at different turbulence intensities is simulated. To this end, numerical analysis of the wind loads has been conducted by implementing a three-dimensional Embedded Large Eddy Simulation (ELES). The results show that, in contrast to other models which are too dissipative for highly turbulent flow, the present model can accurately predict boundary effects and calculate the peak loads on the heliostat at different elevation angles and turbulence intensities. It is therefore recommended that the model be used as a tool to provide new information about the relationship between wind loads and turbulence structures within the ABL, such as the vortex length scale.
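As a rough order-of-magnitude check alongside such simulations, a quasi-steady drag estimate with a gust peak factor can be computed in a few lines; this is not the ELES model, and the heliostat area, drag coefficient, wind speed, turbulence intensity and peak factor below are all assumed.

```python
# Quasi-steady mean and peak drag on a heliostat panel; all values assumed.
rho = 1.225          # air density, kg/m^3
A = 2.0 * 2.0        # square heliostat area, m^2 (assumed)
Cd = 1.2             # assumed drag coefficient at normal incidence
U_mean = 10.0        # mean wind speed at hinge height, m/s (assumed)
Iu = 0.2             # longitudinal turbulence intensity (assumed)
g = 3.5              # assumed gust peak factor

U_peak = U_mean * (1.0 + g * Iu)
F_mean = 0.5 * rho * Cd * A * U_mean**2
F_peak = 0.5 * rho * Cd * A * U_peak**2
print(f"mean drag ~ {F_mean:.0f} N, peak drag ~ {F_peak:.0f} N")
```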
NASA Astrophysics Data System (ADS)
Matiatos, Ioannis; Varouhakis, Emmanouil A.; Papadopoulou, Maria P.
2015-04-01
As the sustainable use of groundwater resources is a great challenge for many countries in the world, groundwater modeling has become a very useful and well-established tool for studying groundwater management problems. Based on the various methods used to numerically solve the algebraic equations representing groundwater flow and contaminant mass transport, numerical models are mainly divided into finite difference-based and finite element-based models. The present study aims at evaluating the performance of a finite difference-based (MODFLOW-MT3DMS), a finite element-based (FEFLOW) and a hybrid finite element and finite difference (Princeton Transport Code-PTC) groundwater numerical model simulating groundwater flow and nitrate mass transport in the alluvial aquifer of the Trizina region in NE Peloponnese, Greece. The calibration of groundwater flow in all models was performed using groundwater hydraulic head data from seven stress periods, and the validation was based on a series of hydraulic head data for two stress periods at a sufficient number of observation locations. The same periods were used for the calibration of nitrate mass transport. The calibration and validation of the three models revealed that the simulated values of hydraulic heads and nitrate mass concentrations coincide well with the observed ones. The models' performance was assessed by performing a statistical analysis of these different types of numerical algorithms. A number of metrics, such as Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Bias, Nash-Sutcliffe Model Efficiency (NSE) and Reliability Index (RI), were used, allowing direct comparison of the models' performance. Spatiotemporal Kriging (STRK) was also applied, using separable and non-separable spatiotemporal variograms, to predict the water table level and the nitrate concentration at each sampling station for two selected hydrological stress periods. The predictions were validated using the respective measured values. Maps of water table level and nitrate concentrations were produced and compared with those obtained from the groundwater flow and mass transport numerical models. Preliminary results showed similar efficiency of the spatiotemporal geostatistical method and the numerical models. However, the data requirements of the former were significantly lower. Advantages and disadvantages of the methods' performance were analysed and discussed, indicating the characteristics of the different approaches.
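A minimal sketch of the skill metrics listed above, computed for a pair of simulated and observed head series; the arrays are placeholders, and the reliability index is written in one common form (the Leggett and Williams definition), which may differ from the exact formulation used in the study.

```python
# MAE, RMSE, Bias, NSE and one common form of the Reliability Index.
import numpy as np

obs = np.array([12.3, 11.8, 12.9, 13.4, 12.1, 11.5])   # observed heads, m (placeholder)
sim = np.array([12.1, 12.0, 13.1, 13.0, 12.4, 11.7])   # simulated heads, m (placeholder)

mae = np.mean(np.abs(sim - obs))
rmse = np.sqrt(np.mean((sim - obs) ** 2))
bias = np.mean(sim - obs)
nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
ri = np.exp(np.sqrt(np.mean(np.log(obs / sim) ** 2)))   # Leggett & Williams-type RI

print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  Bias={bias:.3f}  NSE={nse:.3f}  RI={ri:.3f}")
```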
Srnec, R; Horák, Z; Sedláček, R; Sedlinská, M; Krbec, M; Nečas, A
2017-01-01
PURPOSE OF THE STUDY In developing new or modifying existing surgical treatment methods for spine conditions, an integral part of ex vivo experiments is the assessment of the mechanical, kinematic and dynamic properties of the created constructions. The aim of the study is to create an appropriately validated numerical model of the canine cervical spine in order to obtain a tool for basic research to be applied in cervical spine surgeries. For this purpose, the canine is a suitable model due to the occurrence of similar cervical spine conditions in some breeds of dogs and in humans. The obtained model can also be used in research and in clinical veterinary practice. MATERIAL AND METHODS In order to create a 3D spine model, the LightSpeed 16 (GE, Milwaukee, USA) multidetector computed tomography scanner was used to scan the cervical spine of a Doberman Pinscher. The data were transferred to Mimics 12 software (Materialise HQ, Belgium), in which the individual vertebrae were segmented on CT scans by thresholding. The vertebral geometry was exported to Rhinoceros software (McNeel North America, USA) for modelling, and subsequently the specialised software Abaqus (Dassault Systemes, France) was used to analyse the response of the physiological spine model to external load by the finite element method (FEM). All the FEM-based numerical simulations were treated as nonlinear contact static tasks. In the FEM analyses, angles between individual spinal segments were monitored as a function of ventroflexion/dorsiflexion. The data were validated using latero-lateral radiographs of the cervical spine of large-breed dogs with no evident clinical signs of cervical spine conditions. The radiographs covering the cervical spine range of motion were taken at three different positions: in the neutral position, in maximal ventroflexion and in maximal dorsiflexion. On the X-rays, vertebral inclination angles in the monitored spine positions were measured and compared with the results obtained from the FEM analyses of the numerical model. RESULTS It is obvious from the results that the physiological spine model tested by the finite element method shows very similar mechanical behaviour to the physiological canine spine. The biggest differences identified between the resulting values were in the C6-C7 segment in dorsiflexion (Δφ = 5.95%) and in the C4-C5 segment in ventroflexion (Δφ = -3.09%). CONCLUSIONS The comparison between the mobility of the cervical spine in ventroflexion/dorsiflexion on radiographs of the real models and in the numerical model simulated by the finite element method showed a high degree of conformity with minimal differences. Therefore, for future experiments the validated numerical model can be used as a basic research tool, on condition that the results of analyses carried out by the finite element method will be affected only by an insignificant error. The computer model, on the other hand, is merely a simplified system and, in comparison with the real situation, cannot fully evaluate the dynamics of the action of forces in time, their variability, or the individual effects of supportive skeletal tissues. Based on the above, it is obvious that restraint is needed in interpreting the obtained results. Key words: cervical spine, kinematics, numerical modelling, finite element method, canine.
NASA Astrophysics Data System (ADS)
Jougnot, D.; Roubinet, D.; Linde, N.; Irving, J.
2016-12-01
Quantifying fluid flow in fractured media is a critical challenge in a wide variety of research fields and applications. To this end, geophysics offers a variety of tools that can provide important information on subsurface physical properties in a noninvasive manner. Most geophysical techniques infer fluid flow by data or model differencing in time or space (i.e., they are not directly sensitive to flow occurring at the time of the measurements). An exception is the self-potential (SP) method. When water flows in the subsurface, an excess of charge in the pore water that counterbalances electric charges at the mineral-pore water interface gives rise to a streaming current and an associated streaming potential. The latter can be measured with the SP technique, meaning that the method is directly sensitive to fluid flow. Whereas numerous field experiments suggest that the SP method may allow for the detection of hydraulically active fractures, suitable tools for numerically modeling streaming potentials in fractured media do not exist. Here, we present a highly efficient two-dimensional discrete-dual-porosity approach for solving the fluid-flow and associated self-potential problems in fractured domains. Our approach is specifically designed for complex fracture networks that cannot be investigated using standard numerical methods due to computational limitations. We then simulate SP signals associated with pumping conditions for a number of examples to show that (i) accounting for matrix fluid flow is essential for accurate SP modeling and (ii) the sensitivity of SP to hydraulically active fractures is intimately linked with fracture-matrix fluid interactions. This implies that fractures associated with strong SP amplitudes are likely to be hydraulically conductive, attracting fluid flow from the surrounding matrix.
First approximations in avalanche model validations using seismic information
NASA Astrophysics Data System (ADS)
Roig Lafon, Pere; Suriñach, Emma; Bartelt, Perry; Pérez-Guillén, Cristina; Tapia, Mar; Sovilla, Betty
2017-04-01
Avalanche dynamics modelling is an essential tool for snow hazard management. Scenario-based numerical modelling provides quantitative arguments for decision-making. The software tool RAMMS (WSL Institute for Snow and Avalanche Research SLF) is one such tool, often used by government authorities and geotechnical offices. As avalanche models improve, the quality of the numerical results will depend increasingly on user experience in the specification of input (e.g. release and entrainment volumes, secondary releases, snow temperature and quality). New model developments must continue to be validated against data from real events in order to improve performance and reliability. The avalanche group from the University of Barcelona (RISKNAT - UB) has studied the seismic signals generated by avalanches since 1994. Presently, the group manages the seismic installation at SLF's Vallée de la Sionne experimental site (VDLS). At VDLS the recorded seismic signals can be correlated with other avalanche measurement techniques, including both advanced remote sensing methods (radars, videogrammetry) and obstacle-based sensors (pressure, capacitance, optical sender-reflector barriers). This comparison between different measurement techniques allows the group to address the question of whether seismic analysis can be used alone, on additional avalanche tracks, to gain insight into and validate numerical avalanche dynamics models in different terrain conditions. In this study, we aim to add the seismic data as an external record of the phenomena, able to validate RAMMS models. The seismic sensors are considerably easier and cheaper to install than other physical measuring tools, and are able to record data from the phenomena in all atmospheric conditions (e.g. bad weather, low light and freezing, which render photography and other kinds of sensors unusable). With seismic signals, we record the temporal evolution of the inner and denser parts of the avalanche. We are able to recognize the approximate position of the flow on the slope, and make observations of the internal flow dynamics, especially flow regime transitions, which depend on the slope-perpendicular energy fluxes induced by collisions at the basal boundary. The data recorded over several experimental seasons provide a catalogue of seismic data from different types and sizes of avalanches triggered at the VDLS experimental site. These avalanches are also recorded by the SLF instrumentation (FMCW radars, photography, photogrammetry, video, videogrammetry, pressure sensors). We select the best-quality avalanche data to model and establish comparisons. All this information allows us to calibrate the parameters governing the internal energy fluxes, especially the parameters governing the interaction of the avalanche with the incumbent snow cover. For the comparison between the seismic signal and the RAMMS models, we focus on the temporal evolution of the flow, trying to find the same arrival times of the front at the seismic sensor location in the avalanche path. We make direct quantitative comparisons between measurements and model outputs, using modelled flow height, normal stress, velocity, and pressure values, compared with the seismic signal, its envelope and its running spectrogram. In all cases, the first comparisons between the seismic signal and RAMMS outputs are very promising.
Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau
2008-07-01
There is an increasing interest in developing the next generation of simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize predictive capability, support advanced reactor designs, reduce uncertainty and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach number, high heat flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for a large-scale engineering CFD code, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, requiring very accurate and efficient numerical algorithms. The focus of this work is placed on a fully-implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from a weak formulation of the problem, which is inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework [?] to allow a fully-implicit solution of the problem.
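A minimal sketch of the core Jacobian-free Newton-Krylov idea: the Krylov solver only needs Jacobian-vector products, which are approximated by a finite difference of the residual, so the Jacobian is never formed. The toy residual below is a placeholder for the actual DG discretization.

```python
# Jacobian-free Newton-Krylov on a toy nonlinear system F(u) = 0.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres


def residual(u):
    """Toy residual standing in for the discretized Navier-Stokes equations."""
    return np.array([u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0])


def newton_jfnk(u, tol=1e-10, max_iter=20):
    for _ in range(max_iter):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))

        def jv(v):
            # J(u) v ~ (F(u + eps*v) - F(u)) / eps  (matrix-free Jacobian action)
            return (residual(u + eps * v) - F) / eps

        J = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
        du, _ = gmres(J, -F)          # Krylov solve of the Newton correction
        u = u + du
    return u


print(newton_jfnk(np.array([1.0, 1.0])))   # converges to (1, 2) for this toy system
```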
A numerical analysis on forming limits during spiral and concentric single point incremental forming
NASA Astrophysics Data System (ADS)
Gipiela, M. L.; Amauri, V.; Nikhare, C.; Marcondes, P. V. P.
2017-01-01
Sheet metal forming is one of the major manufacturing industries, producing numerous parts for the aerospace, automotive and medical industries. Due to high demand in the vehicle industry on the one hand and environmental regulations on fuel consumption on the other, researchers are developing energy-efficient sheet metal forming processes, in place of the conventionally used punch and die, to achieve lightweight parts. One of the most recognized manufacturing processes in this category is Single Point Incremental Forming (SPIF). SPIF is a die-less sheet metal forming process in which a single-point tool incrementally forces individual points of the sheet metal, at any process time, into the plastic deformation zone. In the present work, the finite element method (FEM) is applied to analyze the forming limits of a high strength low alloy steel formed by single point incremental forming (SPIF) with spiral and concentric tool paths. SPIF numerical simulations were modelled with 24 and 29 mm cup depths, and the results were compared with Nakajima results obtained by experiments and FEM. It was found that the cup formed with the Nakajima tool failed at 24 mm, while cups formed by SPIF surpassed the limit for both depths with both profiles. It was also noticed that the strains achieved with the concentric profile are lower than those with the spiral profile.
A new digitized reverse correction method for hypoid gears based on a one-dimensional probe
NASA Astrophysics Data System (ADS)
Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo
2017-12-01
In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
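A minimal sketch of the reverse-correction idea described above: given a sensitivity matrix mapping small changes of machine-tool settings to tooth-surface deviations, the setting corrections minimizing the residual deviations are obtained by least squares and applied iteratively. The sensitivity matrix and deviation values below are random placeholders; in practice they come from the gear generation model and the measured surface.

```python
# Iterative least-squares correction of machine-tool settings from deviations.
import numpy as np

rng = np.random.default_rng(1)
n_points, n_settings = 45, 6                        # measured grid points, settings
S = rng.normal(size=(n_points, n_settings))         # placeholder sensitivity matrix
true_error = rng.normal(scale=0.01, size=n_settings)

settings_corr = np.zeros(n_settings)
for it in range(3):                                  # trial-cut / correction cycles
    d = S @ (true_error + settings_corr)             # deviations after current correction
    delta, *_ = np.linalg.lstsq(S, -d, rcond=None)   # minimize ||S*delta + d||
    settings_corr += delta
    print(f"iteration {it}: RMS tooth-surface deviation = {np.sqrt(np.mean(d**2)):.2e}")
```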
Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Naiman, Cynthia
2006-01-01
The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.
Improve Problem Solving Skills through Adapting Programming Tools
NASA Technical Reports Server (NTRS)
Shaykhian, Linda H.; Shaykhian, Gholam Ali
2007-01-01
There are numerous ways for engineers and students to become better problem-solvers. The use of command-line and visual programming tools can help to model a problem and formulate a solution through visualization. The analysis of problem attributes and constraints provides insight into the scope and complexity of the problem. The visualization aspect of the problem-solving approach tends to make students and engineers more systematic in their thought process and helps them catch errors before proceeding too far in the wrong direction. The problem-solver identifies and defines important terms, variables, rules, and procedures required for solving a problem. Every step required to construct the problem solution can be defined in program commands that produce intermediate output. This paper advocates improved problem solving skills through the use of a programming tool. MATLAB, created by MathWorks, is an interactive numerical computing environment and programming language. It is a matrix-based system that easily lends itself to matrix manipulation and to plotting of functions and data. MATLAB can be used interactively from the command line, or as a sequence of commands saved in a file as a script or as named functions. Prior programming experience is not required to use MATLAB commands. GNU Octave, part of the GNU project and a free program for performing numerical computations, is comparable to MATLAB. MATLAB visual and command-line programming are presented here.
A tool for simulating collision probabilities of animals with marine renewable energy devices.
Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise
2017-01-01
The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. The necessary pre- and post-processing of the data created by the model is explained, and numerical details as well as potential issues and limitations in the application of the resulting probability distributions are highlighted.
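A deliberately simplified Monte Carlo sketch of the collision-probability idea: sample animal transits through the volume swept by a moving device and count intersections in space and time. The geometry and kinematics below are invented placeholders, far cruder than the tidal-kite example in the paper.

```python
# Monte Carlo estimate of collision probability for straight animal transits
# past a device moving on a circular path. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 20_000
dt, n_steps = 0.1, 200               # s, time steps per transit
r_device, r_animal = 1.0, 0.2        # effective radii, m (assumed)

hits = 0
for _ in range(n_trials):
    y0 = rng.uniform(-10.0, 10.0)                    # random lateral entry offset
    animal = np.array([-10.0, y0, 0.0])
    v_animal = np.array([1.0, 0.0, 0.0])             # straight transit at 1 m/s
    for k in range(n_steps):
        t = k * dt
        # device sweeps a circle of radius 5 m in the x-y plane (placeholder motion)
        device = 5.0 * np.array([np.cos(0.5 * t), np.sin(0.5 * t), 0.0])
        if np.linalg.norm(animal - device) < r_device + r_animal:
            hits += 1
            break
        animal = animal + v_animal * dt

print(f"estimated collision probability per transit ~ {hits / n_trials:.4f}")
```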
NASA Astrophysics Data System (ADS)
Dahdouh, S.; Varsier, N.; Serrurier, A.; De la Plata, J.-P.; Anquez, J.; Angelini, E. D.; Wiart, J.; Bloch, I.
2014-08-01
Fetal dosimetry studies require the development of accurate numerical 3D models of the pregnant woman and the fetus. This paper proposes a 3D articulated fetal growth model covering the main phases of pregnancy and a pregnant woman model combining the utero-fetal structures and a deformable non-pregnant woman body envelope. The structures of interest were automatically or semi-automatically (depending on the stage of pregnancy) segmented from a database of images and surface meshes were generated. By interpolating linearly between fetal structures, each one can be generated at any age and in any position. A method is also described to insert the utero-fetal structures in the maternal body. A validation of the fetal models is proposed, comparing a set of biometric measurements to medical reference charts. The usability of the pregnant woman model in dosimetry studies is also investigated, with respect to the influence of the abdominal fat layer.
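A minimal sketch of the linear interpolation between fetal surface meshes at two gestational ages, assuming the meshes share vertex correspondence (identical connectivity); the ages and tiny vertex arrays below are placeholders, not the database meshes.

```python
# Linear interpolation of corresponding mesh vertices between two ages.
import numpy as np

age_a, age_b = 20.0, 30.0                 # gestational ages in weeks (assumed)
verts_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # placeholder
verts_b = np.array([[0.0, 0.0, 0.1], [1.3, 0.0, 0.0], [0.0, 1.2, 0.1]])  # placeholder


def fetus_at(age_weeks: float) -> np.ndarray:
    """Vertex positions linearly interpolated between the two reference ages."""
    w = np.clip((age_weeks - age_a) / (age_b - age_a), 0.0, 1.0)
    return (1.0 - w) * verts_a + w * verts_b


print(fetus_at(26.0))
```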
Equation-free analysis of agent-based models and systematic parameter determination
NASA Astrophysics Data System (ADS)
Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.
2016-12-01
Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic, can be applied to any 'black-box' simulator, and determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order-of-magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABM models, revealing parameter dependence, bifurcation and stability analysis of these complex systems and giving a deep understanding of their dynamical behaviour in a way that is not otherwise easily obtainable. In each case we demonstrate our systematic parameter determination stage for configuring the system-specific EF parameters.
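A toy sketch of the equation-free lift-evolve-restrict loop that underlies such methods: the macroscopic variable is the ensemble mean of many noisy agents, and a coarse time-stepper built from short bursts of the microscopic simulator is used to locate a macroscopic steady state. The agent dynamics and parameter values are invented placeholders, not the authors' tool or models.

```python
# Equation-free coarse time-stepper: lift -> evolve -> restrict.
import numpy as np

rng = np.random.default_rng(3)
N, T, dt, mu = 2000, 1.0, 0.01, 0.8      # ensemble size, burst length, micro step, parameter


def lift(X):               # macro state -> ensemble of micro states
    return X + 0.1 * rng.normal(size=N)


def evolve(agents):        # microscopic simulator: noisy mean-reverting agents (placeholder)
    for _ in range(int(T / dt)):
        agents = agents + dt * (mu - agents**3) + np.sqrt(dt) * 0.05 * rng.normal(size=N)
    return agents


def restrict(agents):      # ensemble -> macro state
    return agents.mean()


def coarse_map(X):         # coarse time-stepper Phi_T(X)
    return restrict(evolve(lift(X)))


# damped fixed-point iteration for the macroscopic steady state Phi_T(X*) = X*
X = 0.0
for it in range(30):
    X = 0.5 * X + 0.5 * coarse_map(X)
print("macroscopic steady state ~", round(X, 3))
```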
Tabacu, Stefan
2015-01-01
In this paper, a methodology for the development and validation of a numerical model of the human head using generic procedures is presented. All required steps, starting with model generation and continuing through model validation and applications, are discussed. The proposed model may be considered a dual one due to its capability to switch from a deformable to a rigid body according to the application's requirements. The first step is to generate the numerical model of the human head using geometry files or medical images. The stiffness and damping required for the elastic connection used in the rigid body model are identified by performing a natural frequency analysis. The presented applications for model validation are related to impact analysis. The first case is related to Nahum's experiments (Nahum and Smith 1970): pressure data are evaluated and a pressure map is generated using the results from discrete elements. For the second case, the relative displacement between the brain and the skull is evaluated according to Hardy's experiments (Hardy WH, Foster CD, Mason MJ, Yang KH, King A, Tashman S. 2001. Investigation of head injury mechanisms using neutral density technology and high-speed biplanar X-ray. Stapp Car Crash J. 45:337-368, SAE Paper 2001-22-0016). The main objective is to validate the rigid model as a quick and versatile tool for acquiring the input data for specific brain analyses.
2011-01-01
Background: Electrotherapy is a relatively well-established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D model) generated by means of electrode arrays with shapes of different conic sections (ellipse, parabola and hyperbola). Methods: Analytical calculations of the potential and electric field distributions based on 2D models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by means of the finite element method in two dimensions. Results: Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays with shapes of a circle and of different conic sections (elliptic, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. Conclusion: The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different conic-section shapes by means of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode array with different conic sections. PMID:21943385
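A minimal finite-difference sketch of the kind of 2D Laplace problem described above, with the potential fixed on two electrode arcs; the geometry, voltages and circular electrode shape are assumed for illustration and are not taken from the paper (which uses analytical solutions and 2D finite elements).

```python
# Jacobi solution of Laplace's equation with Dirichlet electrode arcs.
import numpy as np

n = 101
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi = np.zeros((n, n))

r = np.sqrt(X**2 + Y**2)
anode = (np.abs(r - 0.6) < 0.03) & (X > 0)     # +1 V electrode arc (assumed)
cathode = (np.abs(r - 0.6) < 0.03) & (X < 0)   # -1 V electrode arc (assumed)

for _ in range(5000):                           # Jacobi iteration
    phi_new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi_new[anode], phi_new[cathode] = 1.0, -1.0                   # electrodes
    phi_new[0, :] = phi_new[-1, :] = phi_new[:, 0] = phi_new[:, -1] = 0.0  # outer boundary
    phi = phi_new

Ey, Ex = np.gradient(-phi, x, x)                # E = -grad(phi)
print("max |E| inside the domain ~ %.2f V per unit length" % np.hypot(Ex, Ey).max())
```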
Space-weather assets developed by the French space-physics community
NASA Astrophysics Data System (ADS)
Rouillard, A. P.; Pinto, R. F.; Brun, A. S.; Briand, C.; Bourdarie, S.; Dudok De Wit, T.; Amari, T.; Blelly, P.-L.; Buchlin, E.; Chambodut, A.; Claret, A.; Corbard, T.; Génot, V.; Guennou, C.; Klein, K. L.; Koechlin, L.; Lavarra, M.; Lavraud, B.; Leblanc, F.; Lemorton, J.; Lilensten, J.; Lopez-Ariste, A.; Marchaudon, A.; Masson, S.; Pariat, E.; Reville, V.; Turc, L.; Vilmer, N.; Zucarello, F. P.
2016-12-01
We present a short review of space-weather tools and services developed and maintained by the French space-physics community. They include unique data from ground-based observatories, advanced numerical models, automated identification and tracking tools, a range of space instrumentation and interconnected virtual observatories. The aim of the article is to highlight some advances achieved in this field of research at the national level over the last decade and how certain assets could be combined to produce better space-weather tools exploitable by space-weather centres and customers worldwide. This review illustrates the wide range of expertise developed nationally but is not a systematic review of all assets developed in France.
NASA Astrophysics Data System (ADS)
Ki, Seo Jin; Ray, Chittaranjan
2015-03-01
A regional screening tool-which is useful in cases where few site-specific parameters are available for complex vadose zone models-assesses the leaching potential of pollutants to groundwater over large areas. In this study, the previous pesticide leaching tool used in Hawaii was revised to account for the release of new volatile organic compounds (VOCs) from the soil surface. The tool was modified by introducing expanded terms into the traditional pesticide ranking indices (i.e., retardation and attenuation factors), allowing the estimation of the leaching fraction of volatile chemicals to be updated based on recharge, soil, and chemical properties. Results showed that the previous tool significantly overestimated the mass fraction of VOCs leached through soils as the recharge rates increased above 0.001801 m/d. In contrast, the revised tool successfully delineated areas vulnerable to the selected VOCs based on two reference chemicals, a known leacher and non-leacher, which were determined under local conditions. The sensitivity analysis with the Latin-Hypercube-One-factor-At-a-Time method revealed that the new leaching tool was most sensitive to changes in the soil organic carbon sorption coefficient, fractional organic carbon content, and Henry's law constant; and least sensitive to parameters such as the bulk density, water content at field capacity, and particle density in soils. When the revised tool was compared to the analytical (STANMOD) and numerical (HYDRUS-1D) models as a susceptibility measure, it ranked particular VOCs (e.g., benzene, carbofuran, and toluene) in a manner consistent with the other two models under the given conditions. Therefore, the new leaching tool can be widely used to address intrinsic groundwater vulnerability to contamination by pesticides and VOCs, along with the DRASTIC method or similar Tier 1 models such as SCI-GROW and WIN-PST.
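The retardation and attenuation factors mentioned here follow the standard index formulation used in pesticide screening; the sketch below shows how such indices are typically computed, with an extra vapour-phase term in the retardation factor for volatile compounds. The exact formulation used in the Hawaii tool and all parameter values below are assumptions for illustration only.

```python
import math

def retardation_factor(theta_fc, rho_b, f_oc, k_oc, theta_air, k_h):
    """RF with solid-phase sorption and (for VOCs) a vapour-phase term."""
    return 1.0 + (rho_b * f_oc * k_oc) / theta_fc + (theta_air * k_h) / theta_fc

def attenuation_factor(depth, recharge, theta_fc, half_life, rf):
    """AF = fraction of applied mass predicted to leach past `depth`."""
    travel_time = depth * rf * theta_fc / recharge      # days
    return math.exp(-0.693 * travel_time / half_life)

# Illustrative benzene-like parameters (not the study's calibrated values)
rf = retardation_factor(theta_fc=0.3, rho_b=1.4, f_oc=0.01, k_oc=60.0,
                        theta_air=0.1, k_h=0.22)
af = attenuation_factor(depth=3.0, recharge=0.0018, theta_fc=0.3,
                        half_life=180.0, rf=rf)
print(rf, af)
```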
Visualization in mechanics: the dynamics of an unbalanced roller
NASA Astrophysics Data System (ADS)
Cumber, Peter S.
2017-04-01
It is well known that mechanical engineering students often find mechanics a difficult area to grasp. This article describes a system of equations describing the motion of a balanced and an unbalanced roller constrained by a pivot arm. A wide range of dynamics can be simulated with the model. The equations of motion are embedded in a graphical user interface for its numerical solution in MATLAB. This allows a student's focus to be on the influence of different parameters on the system dynamics. The simulation tool can be used as a dynamics demonstrator in a lecture or as an educational tool driven by the imagination of the student. By way of demonstration the simulation tool has been applied to a range of roller-pivot arm configurations. In addition, approximations to the equations of motion are explored and a second-order model is shown to be accurate for a limited range of parameters.
Numerical Investigation of Plasma Detachment in Magnetic Nozzle Experiments
NASA Technical Reports Server (NTRS)
Sankaran, Kamesh; Polzin, Kurt A.
2008-01-01
At present there exists no generally accepted theoretical model that provides a consistent physical explanation of plasma detachment from an externally-imposed magnetic nozzle. To make progress towards that end, simulation of plasma flow in the magnetic nozzle of an arcjet experiment is performed using a multidimensional numerical simulation tool that includes theoretical models of the various dispersive and dissipative processes present in the plasma. This is an extension of the simulation tool employed in previous work by Sankaran et al. The aim is to compare the computational results with various proposed magnetic nozzle detachment theories to develop an understanding of the physical mechanisms that cause detachment. An applied magnetic field topology is obtained using a magnetostatic field solver (see Fig. I), and this field is superimposed on the time-dependent magnetic field induced in the plasma to provide a self-consistent field description. The applied magnetic field and model geometry match those found in experiments by Kuriki and Okada. This geometry is modeled because there is a substantial amount of experimental data that can be compared to the computational results, allowing for validation of the model. In addition, comparison of the simulation results with the experimentally obtained plasma parameters will provide insight into the mechanisms that lead to plasma detachment, revealing how they scale with different input parameters. Further studies will focus on modeling literature experiments both for the purpose of additional code validation and to extract physical insight regarding the mechanisms driving detachment.
TiConverter: A training image converting tool for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Fadlelmula F., Mohamed M.; Killough, John; Fraim, Michael
2016-11-01
TiConverter is a tool developed to ease the application of multiple-point geostatistics whether by the open source Stanford Geostatistical Modeling Software (SGeMS) or other available commercial software. TiConverter has a user-friendly interface and it allows the conversion of 2D training images into numerical representations in four different file formats without the need for additional code writing. These are the ASCII (.txt), the geostatistical software library (GSLIB) (.txt), the Isatis (.dat), and the VTK formats. It performs the conversion based on the RGB color system. In addition, TiConverter offers several useful tools including image resizing, smoothing, and segmenting tools. The purpose of this study is to introduce the TiConverter, and to demonstrate its application and advantages with several examples from the literature.
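As a rough sketch of the kind of RGB-based conversion TiConverter performs, the snippet below maps each unique colour in a 2D training image to an integer category and writes the grid as a single-variable GSLIB-style ASCII file. This is not TiConverter's own code; the file names are hypothetical, and Pillow and NumPy are assumed to be available.

```python
import numpy as np
from PIL import Image

def image_to_gslib(png_path, out_path, var_name="facies"):
    """Map each unique RGB colour in a 2D training image to an integer code
    and write the grid as a single-variable GSLIB ASCII file."""
    rgb = np.asarray(Image.open(png_path).convert("RGB"))
    ny, nx, _ = rgb.shape
    flat = rgb.reshape(-1, 3)
    colours, codes = np.unique(flat, axis=0, return_inverse=True)

    with open(out_path, "w") as f:
        f.write(f"training image {nx} x {ny} ({len(colours)} categories)\n")
        f.write("1\n")                 # number of variables
        f.write(f"{var_name}\n")
        for code in codes:             # row order follows the image; flipping to a
            f.write(f"{code}\n")       # bottom-up grid convention may be needed
    return colours

# colours = image_to_gslib("ti.png", "ti.gslib")  # hypothetical file names
```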
NASA Astrophysics Data System (ADS)
Daneshmend, L. K.; Pak, H. A.
1984-02-01
On-line monitoring of the cutting process in a CNC lathe is desirable to ensure unattended fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters characterising the cutting process. Direct monitoring of the cutting tool or workpiece is not feasible during machining. However, several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.
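A minimal sketch of this type of empirical classifier is shown below: a linear torque-to-flank-wear relationship is fitted from a training cycle (with wear measured off-line), and on-line torque readings are then classified against a wear threshold. All data, coefficients and the threshold are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np

# Training cycle: cutting torque (N*m) and flank wear VB (mm) measured off-line
torque_train = np.array([4.1, 4.4, 4.9, 5.3, 5.9, 6.6])
vb_train     = np.array([0.05, 0.10, 0.16, 0.21, 0.28, 0.36])

# Empirical linear model VB = a*torque + b, fitted by least squares
a, b = np.polyfit(torque_train, vb_train, 1)

VB_LIMIT = 0.30  # mm, illustrative tool-rejection criterion

def classify(torque_reading):
    """On-line classification of tool state from a torque measurement."""
    vb_est = a * torque_reading + b
    return ("worn" if vb_est >= VB_LIMIT else "ok"), vb_est

print(classify(6.8))
```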
Finite Element Modelling and Analysis of Conventional Pultrusion Processes
NASA Astrophysics Data System (ADS)
Akishin, P.; Barkanov, E.; Bondarchuk, A.
2015-11-01
Pultrusion is one of many composite manufacturing techniques and one of the most efficient methods for producing fiber-reinforced polymer composite parts with a constant cross-section. Numerical simulation is helpful for understanding the manufacturing process and developing scientific means for pultrusion tooling design. A numerical technique based on the finite element method has been developed for the simulation of pultrusion processes. It uses the general-purpose finite element software ANSYS Mechanical. It is shown that the developed technique predicts the temperature and cure profiles, which are in good agreement with those published in the open literature.
Gawande, Nitin A; Reinhart, Debra R; Yeh, Gour-Tsyh
2010-02-01
Biodegradation process modeling of municipal solid waste (MSW) bioreactor landfills requires the knowledge of various process reactions and corresponding kinetic parameters. Mechanistic models available to date are able to simulate biodegradation processes with the help of pre-defined species and reactions. Some of these models consider the effect of critical parameters such as moisture content, pH, and temperature. Biomass concentration is a vital parameter for any biomass growth model and often not compared with field and laboratory results. A more complex biodegradation model includes a large number of chemical and microbiological species. Increasing the number of species and user defined process reactions in the simulation requires a robust numerical tool. A generalized microbiological and chemical model, BIOKEMOD-3P, was developed to simulate biodegradation processes in three-phases (Gawande et al. 2009). This paper presents the application of this model to simulate laboratory-scale MSW bioreactors under anaerobic conditions. BIOKEMOD-3P was able to closely simulate the experimental data. The results from this study may help in application of this model to full-scale landfill operation.
Spray Cooling Processes for Space Applications
NASA Technical Reports Server (NTRS)
Kizito, John P.; VanderWal, Randy L.; Berger, Gordon; Tryggvason, Gretar
2004-01-01
The present paper reports ongoing work to develop numerical and modeling tools used to design efficient and effective spray cooling processes and to determine characteristic non-dimensional parametric dependence for practical fluids and conditions. In particular, we present data that will delineate conditions towards control of the impingement dynamics of droplets upon a heated substrate germane to practical situations.
The Polygonal Model: A Simple Representation of Biomolecules as a Tool for Teaching Metabolism
ERIC Educational Resources Information Center
Bonafe, Carlos Francisco Sampaio; Bispo, Jose Ailton Conceição; de Jesus, Marcelo Bispo
2018-01-01
Metabolism involves numerous reactions and organic compounds that the student must master to understand adequately the processes involved. Part of biochemical learning should include some knowledge of the structure of biomolecules, although the acquisition of such knowledge can be time-consuming and may require significant effort from the student.…
Biomolecular dynamics by computer analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eilbeck, J.C.; Lomdahl, P.S.; Scott, A.C.
1984-01-01
As numerical tools (computers and display equipment) become more powerful and the atomic structures of important biological molecules become known, the importance of detailed computation of nonequilibrium biomolecular dynamics increases. In this manuscript we report results from a well developed study of the hydrogen bonded polypeptide crystal acetanilide, a model protein. Directions for future research are suggested.
Integrated workflows for spiking neuronal network simulations
Antolík, Ján; Davison, Andrew P.
2013-01-01
The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages. PMID:24368902
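Mozaik builds on PyNN for model specification; the fragment below is a generic PyNN-style sketch (not Mozaik's own API) of the kind of model setup, recording and simulation steps that such a workflow system automates and wraps in declarative configuration files. The NEST backend, population sizes and parameter values are assumptions.

```python
import pyNN.nest as sim  # assumes the NEST backend is installed

sim.setup(timestep=0.1)

# A small excitatory population driven by Poisson input
pop = sim.Population(100, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
sim.Projection(noise, pop, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.01, delay=1.0))

pop.record(["spikes", "v"])   # recordings are returned as Neo data structures
sim.run(1000.0)               # simulation time in ms

data = pop.get_data().segments[0]
sim.end()
```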
The MeqTrees software system and its use for third-generation calibration of radio interferometers
NASA Astrophysics Data System (ADS)
Noordam, J. E.; Smirnov, O. M.
2010-12-01
Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensures that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on WSRT data. It is the only package that is specifically designed to handle what we propose to call third-generation calibration (3GC), which is needed for the new generation of giant radio telescopes, but can also improve the calibration of existing instruments.
An efficient numerical method for solving the Boltzmann equation in multidimensions
NASA Astrophysics Data System (ADS)
Dimarco, Giacomo; Loubère, Raphaël; Narski, Jacek; Rey, Thomas
2018-01-01
In this paper we deal with the extension of the Fast Kinetic Scheme (FKS) (Dimarco and Loubère, 2013 [26]), originally constructed for solving the BGK equation, to the more challenging case of the Boltzmann equation. The scheme combines a robust and fast method for treating the transport part, based on an innovative Lagrangian technique, with conservative fast spectral schemes that treat the collisional operator by means of an operator splitting approach. This approach, along with several implementation features related to the parallelization of the algorithm, permits the construction of an efficient simulation tool which is numerically tested against exact and reference solutions on classical problems arising in rarefied gas dynamics. We present results up to the 3D × 3D case for unsteady flows for the Variable Hard Sphere model, which may serve as a benchmark for future comparisons between different numerical methods for solving the multidimensional Boltzmann equation. For this reason, we also provide for each problem studied details on the computational cost and memory consumption as well as comparisons with the BGK model or the limit model of the compressible Euler equations.
Optimal implicit 2-D finite differences to model wave propagation in poroelastic media
NASA Astrophysics Data System (ADS)
Itzá, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.
2016-08-01
Numerical modeling of seismic waves in heterogeneous porous reservoir rocks is an important tool for the interpretation of seismic surveys in reservoir engineering. We apply globally optimal implicit staggered-grid finite differences (FD) to model 2-D wave propagation in heterogeneous poroelastic media at a low-frequency range (<10 kHz). We validate the numerical solution by comparing it to an analytical transient solution, obtaining clear seismic wavefields including fast P and slow P and S waves (for a porous medium saturated with fluid). The numerical dispersion and stability conditions are derived using von Neumann analysis, showing that over a wide range of porous materials the Courant condition governs the stability and that this optimal implicit scheme improves the stability of explicit schemes. High-order explicit FD can be replaced by lower-order optimal implicit FD, so the computational cost will not be as expensive while maintaining accuracy. Here, we compute weights for the optimal implicit FD scheme to attain an accuracy of γ = 10⁻⁸. The implicit spatial differentiation involves solving tridiagonal linear systems of equations through Thomas' algorithm.
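Since the implicit scheme solves a tridiagonal system at every spatial differentiation, a textbook sketch of Thomas' algorithm is shown below for reference; this is the standard form of the solver, not the authors' implementation.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (all length n; a[0], c[-1] unused)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```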
Jahantigh, Nabi; Keshavarz, Ali; Mirzaei, Masoud
2015-01-01
The aim of this study is to determine optimum hybrid heating system parameters, such as the temperature and surface area of a radiant heater and the vent area, to obtain thermal comfort conditions. The design-of-experiments (DOE) factorial design method is used to determine the optimum values for the input parameters. A 3D model of a virtual standing thermal manikin with real dimensions is considered in this study. Continuity, momentum, energy and species equations for turbulent flow, together with a physiological equation for thermal comfort, are numerically solved to study the heat, moisture and flow fields. The k-ε RNG model is used for turbulence modeling and the discrete ordinates (DO) method is used for radiation effects. Numerical results are in good agreement with the experimental data reported in the literature. The effect of various combinations of inlet parameters on thermal comfort is considered. According to the Pareto graph, some of these combinations have a significant effect on thermal comfort while requiring no additional energy, and can therefore be used as useful options. A better symmetrical velocity distribution around the manikin is also obtained with the hybrid system.
NASA Astrophysics Data System (ADS)
Roubinet, D.; Linde, N.; Jougnot, D.; Irving, J.
2016-05-01
Numerous field experiments suggest that the self-potential (SP) geophysical method may allow for the detection of hydraulically active fractures and provide information about fracture properties. However, a lack of suitable numerical tools for modeling streaming potentials in fractured media prevents quantitative interpretation and limits our understanding of how the SP method can be used in this regard. To address this issue, we present a highly efficient two-dimensional discrete-dual-porosity approach for solving the fluid flow and associated self-potential problems in fractured rock. Our approach is specifically designed for complex fracture networks that cannot be investigated using standard numerical methods. We then simulate SP signals associated with pumping conditions for a number of examples to show that (i) accounting for matrix fluid flow is essential for accurate SP modeling and (ii) the sensitivity of SP to hydraulically active fractures is intimately linked with fracture-matrix fluid interactions. This implies that fractures associated with strong SP amplitudes are likely to be hydraulically conductive, attracting fluid flow from the surrounding matrix.
Numerical modeling of the agricultural-hydrologic system in Punjab, India
NASA Astrophysics Data System (ADS)
Nyblade, M.; Russo, T. A.; Zikatanov, L.; Zipp, K.
2017-12-01
The goal of food security for India's growing population is threatened by the decline in freshwater resources due to unsustainable water use for irrigation. The issue is acute in parts of Punjab, India, where small landholders produce a large share of India's food with declining groundwater resources. To further complicate this problem, other regions of the state are experiencing waterlogging and salinization, and are reliant on canal systems for fresh water delivery. Due to the lack of water-use records, groundwater consumption for this study is estimated with available data on crop yields, climate, and total canal water delivery. The hydrologic and agricultural systems are modeled using appropriate numerical methods and software. This is a state-wide hydrologic numerical model of Punjab that accounts for multiple aquifer layers, agricultural water demands, and interactions between the surface canal system and groundwater. To more accurately represent the drivers of agricultural production and therefore water use, we couple an economic crop optimization model with the hydrologic model. These tools will be used to assess and optimize crop choice scenarios based on farmer income, food production, and hydrologic system constraints. The results of these combined models can be used to further understand the hydrologic system response to government crop procurement policies and climate change, and to assess the effectiveness of possible water conservation solutions.
An approach to solving large reliability models
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.
1988-01-01
This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
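For orientation, the sketch below solves a small continuous-time Markov reliability model numerically via the matrix exponential, which is the kind of state-space calculation generated from a fault tree in such tools. The three-state model, failure and repair rates, and mission time are illustrative assumptions, not taken from HARP.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state model: 0 = both components up, 1 = one failed, 2 = system failed
lam, mu = 1e-3, 1e-2        # failure and repair rates (per hour), illustrative
Q = np.array([[-2 * lam,     2 * lam,   0.0],
              [      mu, -(mu + lam),   lam],
              [     0.0,         0.0,   0.0]])   # state 2 is absorbing

p0 = np.array([1.0, 0.0, 0.0])   # start with both components operational
t = 1000.0                       # mission time in hours
p_t = p0 @ expm(Q * t)           # state probabilities at time t
unreliability = p_t[2]
print(unreliability)
```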
ModFossa: A library for modeling ion channels using Python.
Ferneyhough, Gareth B; Thibealut, Corey M; Dascalu, Sergiu M; Harris, Frederick C
2016-06-01
The creation and simulation of ion channel models using continuous-time Markov processes is a powerful and well-used tool in the field of electrophysiology and ion channel research. While several software packages exist for the purpose of ion channel modeling, most are GUI based, and none are available as a Python library. In an attempt to provide an easy-to-use, yet powerful Markov model-based ion channel simulator, we have developed ModFossa, a Python library supporting easy model creation and stimulus definition, complete with a fast numerical solver, and attractive vector graphics plotting.
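As a rough illustration (not ModFossa's API), the sketch below integrates the state-occupancy probabilities of a simple three-state closed-closed-open channel scheme with SciPy; the voltage-dependent rate expressions and all parameter values are assumptions for demonstration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rates(v):
    """Illustrative voltage-dependent transition rates (1/ms)."""
    alpha = 0.1 * np.exp(v / 25.0)     # forward (opening-direction) rate
    beta = 0.05 * np.exp(-v / 25.0)    # backward (closing-direction) rate
    return alpha, beta

def dpdt(t, p, v):
    a, b = rates(v)
    # Generator matrix for states (C1, C2, O); columns sum to zero
    Q = np.array([[-a,        b,  0.0],
                  [ a, -(a + b),    b],
                  [0.0,       a,   -b]])
    return Q @ p

p0 = np.array([1.0, 0.0, 0.0])           # all channels start in C1
sol = solve_ivp(dpdt, (0.0, 50.0), p0, args=(20.0,), max_step=0.1)
open_probability = sol.y[2]              # occupancy of the open state over time
```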
NASA Astrophysics Data System (ADS)
Tang, H.; McGuire, L.; Rengers, F. K.; Kean, J. W.; Staley, D. M.
2017-12-01
Wildfire significantly changes the hydrological characteristics of soil for a period of several years and increases the likelihood of flooding and debris flows during high-intensity rainfall in steep watersheds. Hazards related to post-fire flooding and debris flows increase as populations expand into mountainous areas that are susceptible to wildfire, post-wildfire flooding, and debris flows. However, our understanding of post-wildfire debris flows is limited due to a paucity of direct observations and measurements, partially due to the remote locations where debris flows tend to initiate. In these situations, numerical modeling becomes a very useful tool for studying post-wildfire debris flows. Research based on numerical modeling improves our understanding of the physical mechanisms responsible for the increase in erosion and consequent formation of debris flows in burned areas. In this contribution, we study changes in sediment transport efficiency with time since burning by combining terrestrial laser scanning (TLS) surveys of a hillslope burned during the 2016 Fish Fire with numerical modeling of overland flow and sediment transport. We also combine the numerical model with measurements of debris flow timing to explore relationships between post-wildfire rainfall characteristics, soil infiltration capacity, hillslope erosion, and debris flow initiation at the drainage basin scale. Field data show that an initial rill network developed on the hillslope, and became more efficient over time as the overall rill density decreased. Preliminary model results suggest that this can be achieved when flow driven detachment mechanisms dominate and raindrop-driven detachment is minimized. Results also provide insight into the hydrologic and geomorphic conditions that lead to debris flow initiation within recently burned areas.
Chalon, A; Favre, J; Piotrowski, B; Landmann, V; Grandmougin, D; Maureira, J-P; Laheurte, P; Tran, N
2018-06-01
Implantation of a Left Ventricular Assist Device (LVAD) may produce both excessive local tissue stress and resulting strain-induced tissue rupture, which are potential iatrogenic factors influencing the success of the surgical attachment of the LVAD to the myocardium. Using a computational simulation compared against mechanical tests, we sought to investigate the stress induced by suture material on porcine myocardium. Tensile strength experiments (n = 8) were performed on bulk left myocardium to establish a hyperelastic reduced polynomial constitutive law. Simultaneously, suture strength tests on left myocardium (n = 6) were performed with a standard tensile test setup. Experiments were made on the bulk ventricular wall with a single U-suture (polypropylene 3-0) and a PTFE pledget. Then, a finite element simulation of an LVAD suture case was performed. Strength versus displacement behavior was compared between the mechanical and numerical experiments. Local stress fields in the model were thus analyzed. A strong correlation between the experimental and the numerical responses was observed, validating the relevance of the numerical model. A secure damage limit of 100 kPa on heart tissue was defined from mechanical suture testing and used to describe the numerical results. The impact of suture on heart tissue could be accurately determined through new parameters of the numerical data (stress diffusion, triaxiality stress). Finally, an ideal spacing between sutures of 2 mm was proposed. Our computational model showed a reliable ability to provide and predict the various local tissue stresses created by suture penetration into the myocardium. In addition, this model contributed valuable information useful for designing less traumatic sutures for LVAD implantation. Therefore, our computational model is a promising tool to predict and optimize LVAD myocardial sutures.
Hybrid Rocket Performance Prediction with Coupling Method of CFD and Thermal Conduction Calculation
NASA Astrophysics Data System (ADS)
Funami, Yuki; Shimada, Toru
The final purpose of this study is to develop a design tool for hybrid rocket engines. This tool is a computer code which will be used to investigate rocket performance characteristics and unsteady phenomena lasting through the burning time, such as fuel regression or combustion oscillation. When describing the phenomena inside the combustion chamber, namely boundary-layer combustion, rigorous models are difficult to use because their calculation cost may be prohibitive. Therefore, simple models are required for this calculation. In this study, quasi-one-dimensional compressible Euler equations for the flowfield inside the chamber and the equation for thermal conduction inside the solid fuel are numerically solved. The energy balance equation at the solid fuel surface is solved to estimate the fuel regression rate. The heat feedback model is Karabeyoglu's model, which depends on the total mass flux. The combustion model is a global single-step reaction model for 4 chemical species or a chemical equilibrium model for 9 chemical species. As a first step, steady-state solutions are reported.
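To illustrate the coupling between port flow and fuel regression that such quasi-one-dimensional tools resolve, the sketch below marches a simple empirical regression law r = a G^n for a cylindrical port in time. It uses the oxidizer flux only (a simplification), and the coefficients and geometry are illustrative values, not those of Karabeyoglu's heat-feedback model.

```python
import math

# Illustrative regression-law coefficients and port geometry (not the study's values)
a, n = 1.0e-4, 0.62        # regression law r = a * G**n (SI-like units)
rho_fuel = 930.0           # kg/m^3, HTPB-like solid fuel density
mdot_ox = 0.5              # kg/s oxidizer mass flow
radius, length = 0.02, 0.5 # m, initial port radius and grain length
dt, t_burn = 0.01, 10.0    # s, time step and burn duration

t = 0.0
while t < t_burn:
    area = math.pi * radius ** 2
    g_flux = mdot_ox / area                          # oxidizer mass flux (simplified)
    r_dot = a * g_flux ** n                          # regression rate, m/s
    mdot_fuel = rho_fuel * r_dot * 2 * math.pi * radius * length
    radius += r_dot * dt                             # port opens up with time
    t += dt

print(radius, mdot_fuel)
```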
Development of an Unstructured, Three-Dimensional Material Response Design Tool
NASA Technical Reports Server (NTRS)
Schulz, Joseph; Stern, Eric; Palmer, Grant; Muppidi, Suman; Schroeder, Olivia
2017-01-01
A preliminary verification and validation of a new material response model is presented. This model, Icarus, is intended to serve as a design tool for the thermal protection systems of re-entry vehicles. Currently, the capability of the model is limited to simulating the pyrolysis of a material as a result of the radiative and convective surface heating imposed on the material by the surrounding high-enthalpy gas. Since the major focus behind the development of Icarus has been model extensibility, the hope is that additional physics can be quickly added. This extensibility is critical since thermal protection systems are becoming increasingly complex, e.g. woven carbon polymers. Additionally, as a three-dimensional, unstructured, finite-volume model, Icarus is capable of modeling complex geometries as well as multi-dimensional physics, which have been shown to be important in some scenarios and are not captured by one-dimensional models. In this paper, the mathematical and numerical formulation is presented, followed by a discussion of the software architecture and some preliminary verification and validation studies.
Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis
NASA Astrophysics Data System (ADS)
Ledoux, Yann; Sergent, Alain; Arrieux, Robert
2007-05-01
The finite element simulation is a very useful tool in the deep drawing industry. It is used in particular for the development and validation of new stamping tools, and allows the cost and time of tooling design and set-up to be reduced. One of the main difficulties in obtaining good agreement between the simulation and the real process comes from the definition of the numerical conditions (mesh, punch travel speed, limit conditions, ...) and the parameters which model the material behavior. Indeed, in the press shop, when the sheet batch changes, a variation of the formed part geometry is often observed, caused by the variability of the material properties between these different batches. This parameter probably represents one of the main sources of process deviation once the process is set up. This is why it is important to study the influence of material data variation on the geometry of a classical stamped part. The chosen geometry is an omega-shaped part, because of its simplicity and because it is representative of automotive industry parts (car body reinforcement). Moreover, it shows important springback deviations. An isotropic behaviour law is assumed. The impact of the statistical deviation of the three law coefficients characterizing the material, and of the friction coefficient, around their nominal values is tested. A Gaussian distribution is assumed and the impact on the geometry variation is studied by FE simulation. Another approach is also envisaged, consisting of modeling the process variability with a mathematical model; as a function of the input parameter variability, an analytical model is defined that yields the part geometry variability around the nominal shape. These two approaches allow the process capability to be predicted as a function of the material parameter variability.
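A minimal sketch of the sampling part of such an approach is shown below: the hardening-law and friction coefficients are drawn from Gaussian distributions around nominal values and propagated through a surrogate response model standing in for the full FE simulation. The Swift-type law, the chosen nominal values and the surrogate expression are purely illustrative assumptions, not the study's calibrated data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 1000

# Swift-type hardening law sigma = K * (eps0 + eps_p)**n_h plus friction mu,
# all sampled around illustrative nominal values.
K    = rng.normal(500.0, 15.0, n_draws)    # MPa
eps0 = rng.normal(0.01, 0.001, n_draws)
n_h  = rng.normal(0.20, 0.01, n_draws)
mu   = rng.normal(0.10, 0.01, n_draws)

def springback_angle(K, eps0, n_h, mu):
    """Illustrative surrogate response, standing in for the FE simulation."""
    return 2.0 + 0.004 * K - 3.0 * n_h - 5.0 * mu + 10.0 * eps0

angle = springback_angle(K, eps0, n_h, mu)
print(angle.mean(), angle.std())   # geometry variability around the nominal shape
```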
Numerical modelling to assess maintenance strategy management options for a small tidal inlet
NASA Astrophysics Data System (ADS)
Shaeri, Saeed; Tomlinson, Rodger; Etemad-Shahidi, Amir; Strauss, Darrell
2017-03-01
Small tidal inlets are found to be more sensitive to anthropogenic alteration than their larger counterparts. Such alterations, although typically supported by technical design reports, sometimes require amendments or modification. One of the most suitable tools to conduct the necessary studies in this regard is numerical modelling, since the behaviour of the inlet system in response to proposed remedial actions, can easily be identified. In this paper, various alternative proposals are investigated to determine the most practical and viable option to mitigate the need for ongoing maintenance at a typical small, jettied tidal inlet. The main tool to investigate the alternatives is the hydro-sedimentological modelling of the inlet system, which was performed using the Delft3D software package. The proposed alternative entrance modifications were based upon structural alterations of the inlet system (such as a jetty extension or submerged weir) and non-structural scenarios (such as a change of the time of the dredging campaign or the deposition location of the dredged material). It was concluded that whilst a detailed study is inevitable in order to achieve a comprehensive design plan, based upon the results of this study the construction of a submerged weir at the entrance channel can satisfy the needs of most of the stakeholders, with justifiable costs over a longer period.
Kimel-Naor, Shani; Abboud, Shimon; Arad, Marina
2016-08-01
Osteoporosis is defined as bone microstructure deterioration resulting in a decrease in bone strength. Measured bone mineral density (BMD) constitutes the main tool for osteoporosis diagnosis and management, and defines a patient's fracture risk. In the present study, a parametric electrical impedance tomography (pEIT) method was examined for monitoring BMD, using a computerized simulation model and preliminary real measurements. A numerical solver was developed to simulate surface potentials measured over a 3D computerized pelvis model. Varying cortical and cancellous BMD were simulated by changing bone conductivity and permittivity. Up to 35% and 16% changes were found in the real and imaginary modules of the calculated potential, respectively, while BMD changes from 100% (normal) to 60% (osteoporosis). Negligible BMD relative error was obtained with SNR > 60 dB. Position-change errors indicate that for long-term monitoring, measurements should be taken at the same geometrical configuration with great accuracy. The numerical simulations were compared to actual measurements acquired from a healthy male subject using a five-electrode belt bioimpedance device. The results suggest that pEIT may provide an inexpensive, easy-to-use tool for frequent monitoring of BMD in small clinics during pharmacological treatment, as a complementary method to the DEXA test.
Interactive Visualization to Advance Earthquake Simulation
NASA Astrophysics Data System (ADS)
Kellogg, Louise H.; Bawden, Gerald W.; Bernardin, Tony; Billen, Magali; Cowgill, Eric; Hamann, Bernd; Jadamec, Margarete; Kreylos, Oliver; Staadt, Oliver; Sumner, Dawn
2008-04-01
The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations.
Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2004-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
A Formal Approach to Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
Numerical study of water residence time in the Yueqing Bay based on the eulerian approach
NASA Astrophysics Data System (ADS)
Ying, Chao; Li, Xinwen; Liu, Yong; Yao, Wenwei; Li, Ruijie
2018-05-01
Yueqing Bay is a semi-enclosed bay located in the southeast of Zhejiang Province, China. Due to substantial anthropogenic influences since 1964, the water quality in the bay has deteriorated seriously, and urgent measures should be taken to protect the water body. In this study, a numerical model was calibrated for water surface elevation and tidal currents from August 14 to August 26, 2011. Comparisons of observed and simulated data showed that the model reproduced the tidal range and phase and the variations of the current at different periods fairly well. The calibrated model was then applied to investigate the spatial flushing pattern of the bay by calculation of residence time. The results obtained from a series of model experiments demonstrated that the residence time increased from 10 days at the bay mouth to more than 70 days in the upper bay. The average residence time over the whole bay was 49.5 days. In addition, the flushing homogeneity curve showed that the residence time in the bay varied smoothly. This study provides a numerical tool to quantify the transport timescale in Yueqing Bay and supports adaptive management of the bay by local authorities.
NASA Astrophysics Data System (ADS)
Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang
2017-03-01
Full waveform inversion and reverse time migration are active research areas in seismic exploration. Forward modeling in the time domain determines the precision of the results, and finite-difference numerical solutions have been widely adopted as an important mathematical tool for forward modeling. In this article, an optimal combination of window functions was designed for the finite-difference operator, using a truncated approximation of the spatial convolution series in pseudo-spectral space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the various window functions, providing better truncation results, but also allow the truncation error of the finite-difference operator to be controlled manually and visually by adjusting the combinations and analyzing the characteristics of the main and side lobes of the amplitude response. The error level and elastic forward modeling under the proposed combined system were compared with outcomes from conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed compared with the modified binomial window and conventional finite differences. Numerical simulation verifies the reliability of the proposed method.
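For context, the truncated pseudo-spectral series gives ideal centered first-derivative coefficients c_k = (-1)^(k+1)/(k h), which are then tapered by a window before use in the stencil. The sketch below applies a simple two-parameter blend of Hann and rectangular windows as an illustrative "combined" window; this is not the authors' optimized combination, and the blend weight is an assumption.

```python
import numpy as np

def combined_window(M, gamma=0.7):
    """Illustrative blend of a Hann and a rectangular window (not the paper's optimum)."""
    k = np.arange(1, M + 1)
    hann = 0.5 * (1.0 + np.cos(np.pi * k / (M + 1)))
    return gamma * hann + (1.0 - gamma)

def fd_coefficients(M, h, gamma=0.7):
    """Windowed truncation of the pseudo-spectral first-derivative series,
    ideal coefficients c_k = (-1)**(k+1) / (k*h)."""
    k = np.arange(1, M + 1)
    return (-1.0) ** (k + 1) / (k * h) * combined_window(M, gamma)

# Quick accuracy check on f(x) = sin(x); the exact derivative is cos(x)
x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
f, h = np.sin(x), x[1] - x[0]
c = fd_coefficients(M=6, h=h)
dfdx = sum(ck * (np.roll(f, -(k + 1)) - np.roll(f, k + 1)) for k, ck in enumerate(c))
print(np.abs(dfdx - np.cos(x)).max())
```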
NASA Astrophysics Data System (ADS)
Kim, J.; Park, K.
2016-12-01
In order to evaluate the performance of operational forecast models in the Korea operational oceanographic system (KOOS), which has been developed by the Korea Institute of Ocean Science and Technology (KIOST), a skill assessment (SA) tool has been developed that provides multiple skill metrics, including not only correlation and error statistics obtained by comparing predictions with observations but also pattern clustering of numerical model, satellite, and in situ data. The KOOS produces 72-hour forecasts of atmospheric and hydrodynamic variables (wind, pressure, current, tide, wave, temperature, and salinity) every 12 hours by operating numerical models such as WRF, ROMS, MOM5, WW-III, and SWAN, and the SA is carried out to evaluate these forecasts. Several kinds of numerical models, such as WRF, ROMS, MOM5, MOHID, and WW-III, are operated routinely. Quantitative assessment of operational ocean forecast models is essential to provide accurate forecast information to the general public and to support ocean-related applications. In this work, we propose a pattern-clustering method using machine learning and GIS-based spatial analytics to evaluate the spatial distribution of numerical model output and spatial observation data such as satellite and HF radar measurements. For the clustering, we use 10- to 15-year-long reanalysis data computed by the KOOS, ECMWF, and HYCOM to build best-matching clusters classified by physical meaning and time variation, and we then compare these clusters with the forecast data. Moreover, for evaluating currents, we develop a dominant-flow extraction method and apply it to hydrodynamic model output and HF radar sea-surface current data. The pattern-clustering method allows a more accurate and effective assessment of ocean forecast model performance by comparing not only specific observation positions determined by the stations but also the spatio-temporal distribution over the whole model domain. We believe the proposed method will be very useful for examining and evaluating large amounts of numerical modelling and satellite data.
NASA Astrophysics Data System (ADS)
Morelli, Andrea; Danecek, Peter; Molinari, Irene; Postpischl, Luca; Schivardi, Renata; Serretti, Paola; Tondi, Maria Rosaria
2010-05-01
Together with the building and maintenance of observational and data banking infrastructures - i.e. an integrated organization of coordinated sensor networks, in conjunction with connected data banks and efficient data retrieval tools - a strategic vision for bolstering the future development of geophysics in Europe should also address the essential issue of improving our current ability to model coherently the propagation of seismic waves across the European plate. This impacts on fundamental matters, such as correctly locating earthquakes, imaging detailed earthquake source properties, modeling ground shaking, and inferring geodynamic processes. To this extent, we need both detailed imaging of shallow and deep earth structure and accurate modeling of seismic waves by numerical methods. Our current abilities appear somewhat limited, but emerging technologies may soon enable a significant leap towards better accuracy and reliability. To contribute to this debate, we present here the state of the art of knowledge of earth structure and numerical wave modeling in the European plate, as the result of a comprehensive study towards the definition of a continental-scale reference model. Our model includes a description of crustal structure (EPcrust) merging information deriving from previous studies - large-scale compilations, seismic prospection, receiver functions, inversion of surface wave dispersion measurements and Green functions from noise correlation. We use a simple description of crustal structure, with laterally varying sediment and crystalline layer thickness, density, and seismic parameters. This a priori crustal model improves the overall fit to observed Bouguer anomaly maps over CRUST2.0. The new crustal model is then used as a constraint in the inversion for mantle shear wave speed, based on fitting Love and Rayleigh surface wave dispersion. The new mantle model appreciably improves over global S models in the imaging of shallow asthenospheric (slow) anomalies beneath the Alpine mobile belt, and fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We validate this new model through comparison of recorded seismograms with simulations based on numerical codes (SPECFEM3D). To ease and increase model usage, we also propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages, and provide tools for manipulating and visualising models, described in this standard format, in Google Earth and GEON IDV. In the next decade seismologists will be able to reap new possibilities offered by exciting progress in general computing power and algorithmic development in computational seismology. Structural models, still based on classical approaches and modeling just a few parameters in each seismogram, will benefit from emerging techniques - such as full waveform fitting and fully nonlinear inversion - that are now just showing their potential. This will require extensive availability of supercomputing resources to earth scientists in Europe, as a tool to match the planned new massive data flow. We need to make sure that the whole apparatus needed to fully exploit new data will be widely accessible.
To maximize this development - for instance, to enable us to promptly model ground shaking after a major earthquake - we will also need a better coordination framework that enables us to share and amalgamate the abundant local information on earth structure, which is most often available but difficult to retrieve, merge and use. Comprehensive knowledge of earth structure, and of best practices to model wave propagation, can by all means be considered an enabling technology for further geophysical progress.
Baker, Ronald J.; Reilly, Timothy J.; Lopez, Anthony R.; Romanok, Kristin M.; Wengrowski, Edward W
2015-01-01
A screening tool for quantifying the levels of concern posed to down-gradient receptors (streams, wetlands and residential lots) by contaminants detected in monitoring wells on or near landfills was developed and evaluated. The tool uses Quick Domenico Multi-scenario (QDM), a spreadsheet implementation of Domenico-based solute transport, to estimate concentrations of contaminants reaching receptors under steady-state conditions from a constant-strength source. Unlike most other available Domenico-based model applications, QDM calculates the time for down-gradient contaminant concentrations to approach steady state and appropriate dispersivity values, and allows for up to fifty simulations on a single spreadsheet. The sensitivity of QDM solutions to critical model parameters was quantified. The screening tool uses QDM results to categorize landfills as having high, moderate or low levels of concern, based on contaminant concentrations reaching receptors relative to regulatory concentrations. The application of this tool was demonstrated by assessing levels of concern (as defined by the New Jersey Pinelands Commission) for thirty closed, uncapped landfills in the New Jersey Pinelands National Reserve, using historic water-quality data from monitoring wells on and near landfills and hydraulic parameters from regional flow models. Twelve of these landfills are categorized as having high levels of concern, indicating a need for further assessment. This tool is not a replacement for a conventional numerically based transport model or other available Domenico-based applications, but is suitable for quickly assessing the level of concern posed by a landfill or other contaminant point source before expensive and lengthy monitoring or remediation measures are taken. In addition to quantifying the level of concern using historic groundwater-monitoring data, the tool allows for archiving model scenarios and adding refinements as new data become available.
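QDM implements a Domenico-type solution; as an illustration of the kind of calculation such a screening tool performs, the sketch below evaluates the textbook steady-state Domenico centerline concentration for a constant-strength source with first-order decay. The exact formulation used in QDM and all parameter values are assumptions.

```python
import math

def domenico_centerline(x, c0, v, ax, ay, az, Y, Z, lam):
    """Steady-state Domenico centerline concentration (textbook form):
    x: distance to receptor (m), c0: source concentration, v: seepage velocity (m/d),
    ax/ay/az: dispersivities (m), Y/Z: source width and depth (m),
    lam: first-order decay rate (1/d)."""
    decay = math.exp((x / (2.0 * ax)) * (1.0 - math.sqrt(1.0 + 4.0 * lam * ax / v)))
    spread_y = math.erf(Y / (4.0 * math.sqrt(ay * x)))
    spread_z = math.erf(Z / (2.0 * math.sqrt(az * x)))
    return c0 * decay * spread_y * spread_z

# Illustrative screening run: source 150 m up-gradient of a stream receptor
c_receptor = domenico_centerline(x=150.0, c0=1000.0, v=0.1,
                                 ax=15.0, ay=1.5, az=0.15,
                                 Y=30.0, Z=3.0, lam=0.001)
print(c_receptor, "concentration units at the receptor")
```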
NASA Astrophysics Data System (ADS)
Atangana, Abdon
2016-10-01
In order to describe more complex problems using the concept of fractional derivatives, we introduce in this paper the concept of fractional derivatives with two orders. The new definitions are based upon the concept of a power law together with the generalized Mittag-Leffler function. The first order is included in the power-law function and the second one is in the generalized Mittag-Leffler function. Each order therefore plays an important role when modeling, for instance, problems with two layers with different properties. This is the case, for instance, in thermal science for a reaction-diffusion process within a medium with two different layers with different properties. Another case is that of groundwater flowing within an aquifer where the geological formation consists of two layers with different properties. The paper presents new fractional operators that will open new doors for research and investigation in modeling real-world problems. Some useful properties of the new operators are presented, in particular their relationship with existing integral transforms, namely the Laplace, Sumudu, Mellin and Fourier transforms. The numerical approximation of the new fractional operators is presented. We apply the new fractional operators to a model of a groundwater plume with degradation and limited sorption and solve the new model numerically with some numerical simulations. The numerical simulations leave no doubt that the new fractional operators are powerful mathematical tools able to portray complex real-world problems.
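For readers unfamiliar with how fractional operators are discretized, the sketch below shows one common numerical approximation, the Grünwald-Letnikov scheme with recursive binomial weights; this is a standard construction for a single-order derivative, not the two-order operator introduced in the paper.

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of the
    samples f (uniform spacing h), using the recursive binomial weights
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.array([np.dot(w[: j + 1], f[j::-1]) for j in range(n)])
    return d / h ** alpha

t = np.linspace(0.0, 1.0, 101)
print(gl_fractional_derivative(t, alpha=0.5, h=t[1] - t[0])[-1])
# D^0.5 of f(t) = t is 2*sqrt(t/pi), roughly 1.128 at t = 1
```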
NASA Astrophysics Data System (ADS)
Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu
2005-10-01
A numerical model of the vector radiative transfer of the coupled ocean-atmosphere system, named PCOART, is developed based on the matrix-operator method. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) is split into a set of independent equations with the zenith angle as the only angular coordinate. Using the Gaussian quadrature method, the VRTE is finally transformed into a matrix equation, which is solved using the adding-doubling method. According to the reflective and refractive properties of the ocean-atmosphere interface, the vector radiative transfer models of ocean and atmosphere are coupled in PCOART. Comparison with the exact Rayleigh scattering look-up table of MODIS (Moderate Resolution Imaging Spectroradiometer) shows that PCOART is an exact numerical model and that its treatment of multiple scattering and polarization is correct. Validation against standard problems of radiative transfer in water further shows that PCOART can be used to calculate underwater radiative transfer. Therefore, PCOART is a useful tool for exactly calculating the vector radiative transfer of the coupled ocean-atmosphere system, and can be used to study the polarization properties of radiance in the whole ocean-atmosphere system and the remote sensing of the atmosphere and ocean.
Temperature Measurement and Numerical Prediction in Machining Inconel 718
Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar
2017-01-01
Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process and avoid workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the estimation of difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718, capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds, and the measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning. PMID:28665312
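For context, a minimal sketch of ratio (two-color) pyrometry under the Wien approximation is shown below: the temperature follows from the ratio of spectral intensities at two wavelengths, which suppresses the emissivity dependence when the emissivities at the two bands are similar. The wavelengths and greybody assumption are illustrative and do not reproduce the calibration of the sensor described here.

```python
import math

C2 = 1.4388e-2  # second radiation constant [m*K]

def two_color_temperature(i1, i2, lam1, lam2):
    """Temperature [K] from intensities i1, i2 at wavelengths lam1 < lam2 [m],
    assuming Wien's approximation and equal emissivity at both wavelengths."""
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * math.log(lam2 / lam1) - math.log(i1 / i2))

def wien_intensity(lam, T):
    """Greybody spectral intensity (arbitrary units) under Wien's approximation."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

# Round-trip check: synthesize intensities at 1000 K and recover the temperature
lam1, lam2 = 1.3e-6, 1.6e-6
print(two_color_temperature(wien_intensity(lam1, 1000.0),
                            wien_intensity(lam2, 1000.0), lam1, lam2))
```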
Comparisons between stellar models and reliability of the theoretical models
NASA Astrophysics Data System (ADS)
Lebreton, Yveline; Montalbán, Josefina
2010-07-01
The high quality of the asteroseismic data provided by space missions such as CoRoT (Michel et al. in The CoRoT Mission, ESA Spec. Publ. vol. 1306, p. 39, 2006) or expected from newly operating missions such as Kepler (Christensen-Dalsgaard et al. in Commun. Asteroseismol. 150:350, 2007) requires that stellar evolution codes provide accurate models whose numerical precision is better than the expected observational errors (i.e. below 0.1 μHz on the frequencies in the case of CoRoT). We present a review of some thorough comparisons of stellar models produced by different evolution codes involved in the CoRoT/ESTA activities (Monteiro in Evolution and Seismic Tools for Stellar Astrophysics, 2009). We examine the numerical aspects of the computations as well as the effects of different implementations of the same physics on the global quantities, physical structure and oscillation properties of the stellar models. We also discuss a few aspects of the input physics.
Development of a Rolling Process Design Tool for Use in Improving Hot Roll Slab Recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couch, R; Becker, R; Rhee, M
2004-09-24
Lawrence Livermore National Laboratory participated in a U.S. Department of Energy/Office of Industrial Technology sponsored research project, 'Development of a Rolling Process Design Tool for Use in Improving Hot Roll Slab Recovery', as Cooperative Agreement TC-02028 with the Alcoa Technical Center (ATC). The objective of the joint project with Alcoa is to develop a numerical modeling capability to optimize the hot rolling process used to produce aluminum plate. Product lost in the rolling process and subsequently recycled wastes the resources consumed in the energy-intensive steps of remelting and reprocessing the ingot. The modeling capability developed by the project partners will be used to produce plate more efficiently and with reduced product loss.
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Fotia, Matthew L.; Hoke, John; Schauer, Fred
2015-01-01
A quasi-two-dimensional, computational fluid dynamic (CFD) simulation of a rotating detonation engine (RDE) is described. The simulation operates in the detonation frame of reference and utilizes a relatively coarse grid such that only the essential primary flow field structure is captured. This construction and other simplifications yield rapidly converging, steady solutions. Viscous and heat transfer effects are modeled using source terms. The effects of potential inlet flow reversals are modeled using boundary conditions. Results from the simulation are compared to measured data from an experimental RDE rig with a converging-diverging nozzle added. The comparison is favorable for the two operating points examined. The utility of the code as a performance optimization tool and as a diagnostic tool is discussed.
Modeling of tool path for the CNC sheet cutting machines
NASA Astrophysics Data System (ADS)
Petunin, Aleksandr A.
2015-11-01
In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered, and a new classification of tool path problems is proposed. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one non-standard cutting technique (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as a discrete optimization problem (a generalized traveling salesman problem with additional constraints, GTSP). The formalization of some constraints for these tasks is described. To solve the GTSP we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
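As a toy illustration of the "one point per contour" structure of such problems, the brute-force sketch below enumerates contour (megalopolis) orders and, for each order, selects piercing points by a chain dynamic program. It is only viable for a handful of contours, ignores precedence constraints, and is not Prof. Chentsov's model; all geometry is made up.

```python
from itertools import permutations
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_path(start, clusters):
    """clusters: list of lists of (x, y) candidate piercing points, one list per contour."""
    best = (float("inf"), None)
    for order in permutations(range(len(clusters))):
        # cost[p] = cheapest travel to reach point p of the contour just visited
        cost = {start: 0.0}
        for ci in order:
            cost = {p: min(c + dist(q, p) for q, c in cost.items())
                    for p in clusters[ci]}
        total = min(cost.values())
        if total < best[0]:
            best = (total, order)
    return best

# Three contours, two candidate piercing points each (illustrative coordinates)
clusters = [[(0, 5), (2, 5)], [(8, 1), (8, 3)], [(4, 9), (6, 8)]]
print(best_path((0, 0), clusters))
```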
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
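A generic sketch of how such a linearization can be obtained numerically is given below: central-difference Jacobians of the nonlinear dynamics and observation functions about a trim point yield the A, B, C, D matrices. The toy pendulum dynamics and all names are assumptions, not the LINEAR program itself.

```python
import numpy as np

def jacobian(fun, z0, eps=1e-6):
    """Central-difference Jacobian of fun(z) about z0."""
    f0 = np.asarray(fun(z0))
    J = np.zeros((f0.size, z0.size))
    for j in range(z0.size):
        dz = np.zeros_like(z0)
        dz[j] = eps
        J[:, j] = (np.asarray(fun(z0 + dz)) - np.asarray(fun(z0 - dz))) / (2 * eps)
    return J

def linearize(f, h, x0, u0):
    """State-space matrices of dx/dt = f(x,u), y = h(x,u) about the point (x0, u0)."""
    A = jacobian(lambda x: f(x, u0), x0)
    B = jacobian(lambda u: f(x0, u), u0)
    C = jacobian(lambda x: h(x, u0), x0)
    D = jacobian(lambda u: h(x0, u), u0)
    return A, B, C, D

# Toy example: damped pendulum with torque input, observing the angle only
f = lambda x, u: np.array([x[1], -9.81 * np.sin(x[0]) - 0.1 * x[1] + u[0]])
h = lambda x, u: np.array([x[0]])
A, B, C, D = linearize(f, h, np.array([0.0, 0.0]), np.array([0.0]))
print(A, B, C, D, sep="\n")
```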
Impact of eliminating fracture intersection nodes in multiphase compositional flow simulation
NASA Astrophysics Data System (ADS)
Walton, Kenneth M.; Unger, Andre J. A.; Ioannidis, Marios A.; Parker, Beth L.
2017-04-01
Algebraic elimination of nodes at discrete fracture intersections via the star-delta technique has proven to be a valuable tool for making multiphase numerical simulations more tractable and efficient. This study examines the assumptions of the star-delta technique and exposes its effects in a 3-D, multiphase context for advective and dispersive/diffusive fluxes. Key issues of relative permeability-saturation-capillary pressure (kr-S-Pc) relationships and capillary barriers at fracture-fracture intersections are discussed. This study uses a multiphase compositional, finite difference numerical model in discrete fracture network (DFN) and discrete fracture-matrix (DFM) modes. It verifies that the numerical model replicates analytical solutions and performs adequately in convergence exercises (conservative and decaying tracer, one- and two-phase flow, DFM and DFN domains). The study culminates in simulations of a two-phase laboratory experiment in which a fluid invades a simple fracture intersection. The experiment and simulations evoke different invading fluid flow paths by varying fracture apertures as oil invades water-filled fractures and as water invades air-filled fractures. Results indicate that the node elimination technique as implemented in the numerical model correctly reproduces the long-term flow path of the invading fluid, but that the short-term temporal effects of the capillary traps and barriers arising from the intersection node are lost.
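The algebraic idea behind the elimination can be illustrated for plain single-phase transmissibilities with the star-delta (Y-Δ) transform sketched below; the multiphase kr-S-Pc and capillary-barrier subtleties discussed in the study are deliberately not captured.

```python
# Star-delta elimination of an intersection node: the node's incident
# conductances (transmissibilities) g_i are replaced by direct connections
# between its neighbours with g_ij = g_i * g_j / sum(g). Single-phase only.

def star_delta(g):
    """g: dict {neighbour: conductance between the neighbour and the eliminated node}."""
    total = sum(g.values())
    nodes = list(g)
    return {(a, b): g[a] * g[b] / total
            for i, a in enumerate(nodes) for b in nodes[i + 1:]}

# Three fracture branches meeting at one intersection node (illustrative values)
print(star_delta({"f1": 2.0, "f2": 1.0, "f3": 4.0}))
```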
NASA Astrophysics Data System (ADS)
Rigola, J.; Aljure, D.; Lehmkuhl, O.; Pérez-Segarra, C. D.; Oliva, A.
2015-08-01
The aim of this paper is to carry out a group of numerical experiments on the fluid flow through a valve reed, using the CFD&HT code TermoFluids, an unstructured and parallel object-oriented CFD code for accurate and reliable solving of industrial flows. Turbulent flow is a very complex problem because of the non-linear interaction between viscous and inertial effects, further complicated by its rotational nature, the three-dimensionality inherent in these types of flow, and the unsteadiness of the solutions. In this work, different meshes, geometrical conditions and LES turbulence models (WALE, VMS, QR and SIGMA) are tested and the results compared. The fluid flow boundary conditions are obtained by means of the numerical simulation tool for hermetic reciprocating compressors, the NEST-compressor code. The numerical results presented are based on a specific geometry where the valve gap opening is 11% of the hole diameter and the Reynolds number given by the one-dimensional model is 4.22 × 10^5, with meshes of approximately 8 million control volumes (CVs). Geometrical aspects related to the orifice's shape and its influence on fluid flow behaviour and pressure drop are analysed in detail; furthermore, flow results for different valve openings are also studied.
On the hyperbolicity and stability of 3+1 formulations of metric f( R) gravity
NASA Astrophysics Data System (ADS)
Mongwane, Bishop
2016-11-01
3+1 formulations of the Einstein field equations have become an invaluable tool in numerical relativity, having been used successfully in modeling spacetimes of black hole collisions, stellar collapse and other complex systems. It is plausible that similar considerations could prove fruitful for modified gravity theories. In this article, we pursue from a numerical-relativity viewpoint the 3+1 formulation of metric f(R) gravity as it arises from the fourth order equations of motion, without invoking the dynamical equivalence with Brans-Dicke theories. We present the resulting system of evolution and constraint equations for a generic function f(R), subject to the usual viability conditions. We confirm that the time propagation of the f(R) Hamiltonian and momentum constraints takes the same mathematical form as in general relativity, irrespective of the f(R) model. We further recast the 3+1 system in a form akin to the BSSNOK formulation of numerical relativity. Without assuming any specific model, we show that the ADM version of f(R) is weakly hyperbolic and is plagued by zero-speed modes similar to those in the general relativity case. On the other hand, the BSSNOK version is strongly hyperbolic and hence a promising formulation for numerical simulations in metric f(R) theories.
Impact of implementation choices on quantitative predictions of cell-based computational models
NASA Astrophysics Data System (ADS)
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
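To give a flavour of the implementation parameters in question (explicit time step, edge-length threshold for rearrangement), a minimal single-cell sketch with an overdamped forward-Euler update is shown below; the energy form, parameter values and single-cell setting are assumptions, not the models analysed in the study.

```python
import numpy as np

def forces(v, ka=1.0, a0=1.0, kp=0.2, p0=3.8, eps=1e-6):
    """Numerical gradient of E = ka*(A - a0)^2 + kp*(P - p0)^2 for one polygonal cell."""
    def energy(v):
        x, y = v[:, 0], v[:, 1]
        area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
        perim = np.sum(np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1))
        return ka * (area - a0) ** 2 + kp * (perim - p0) ** 2
    f = np.zeros_like(v)
    for i in range(v.size):
        d = np.zeros(v.size); d[i] = eps
        f.flat[i] = -(energy(v + d.reshape(v.shape)) - energy(v - d.reshape(v.shape))) / (2 * eps)
    return f

v = np.array([[0.0, 0.0], [1.2, 0.1], [1.1, 1.0], [-0.1, 0.9]])  # one cell
dt, l_min = 0.01, 0.05            # explicit time step and rearrangement threshold
for _ in range(1000):
    v += dt * forces(v)           # overdamped dynamics (forward Euler)
    lengths = np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1)
    if np.any(lengths < l_min):
        print("edge below threshold -> a T1 rearrangement would be triggered")
print(v)
```

Changing `dt` or `l_min` changes when (or whether) the rearrangement flag fires, which is exactly the kind of sensitivity the study quantifies.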
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating that no single optimal parameterization exists. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selecting one of the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address these challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit, metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied to the calibration of flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of automatic and manual strategies for parameter estimation of computationally expensive watershed models.
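A minimal sketch of the non-dominated (Pareto) filtering that underlies the selection among calibration alternatives is given below; the metric names and scores are invented for illustration, and neither GOMORS nor the visual analytics stages are reproduced.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows; objectives is (n_alternatives, n_criteria), lower is better."""
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i, row in enumerate(obj):
        # row is dominated if some other alternative is <= everywhere and < somewhere
        dominated = np.any(np.all(obj <= row, axis=1) & np.any(obj < row, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Toy example: 5 parameter sets scored on (1 - NSE, |bias|), both to be minimized
scores = [[0.20, 0.05], [0.15, 0.10], [0.30, 0.02], [0.25, 0.08], [0.15, 0.09]]
print(pareto_front(scores))   # -> indices of the alternatives worth presenting to the modeler
```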
A Novel Cylindrical Representation for Characterizing Intrinsic Properties of Protein Sequences.
Yu, Jia-Feng; Dou, Xiang-Hua; Wang, Hong-Bo; Sun, Xiao; Zhao, Hui-Ying; Wang, Ji-Hua
2015-06-22
The composition and sequence order of amino acid residues are the two most important characteristics used to describe a protein sequence. Graphical representations facilitate visualization of biological sequences and produce biologically useful numerical descriptors. In this paper, we propose a novel cylindrical representation obtained by placing the 20 amino acid residue types on a circle and the sequence positions along the z axis. This representation allows visualization of the composition and sequence order of amino acids at the same time. Ten numerical descriptors and one weighted numerical descriptor have been developed to quantitatively describe intrinsic properties of protein sequences on the basis of the cylindrical model. Their application to similarity/dissimilarity analysis of nine ND5 proteins indicated that these numerical descriptors are more effective than several classical numerical matrices. Thus, the cylindrical representation provides a new and useful tool for visualizing and characterizing protein sequences. An online server is available at http://biophy.dzu.edu.cn:8080/CNumD/input.jsp.
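A minimal sketch of the cylindrical mapping itself is given below; the ordering of residue types around the circle, the radius and the pitch are arbitrary assumptions, and the paper's ten descriptors are not reproduced.

```python
import math

# 20 residue types at fixed angles on a unit circle; residue index gives the height,
# so every sequence becomes a curve in 3-D (ordering around the circle is arbitrary here).
RESIDUES = "ACDEFGHIKLMNPQRSTVWY"
ANGLE = {aa: 2.0 * math.pi * i / len(RESIDUES) for i, aa in enumerate(RESIDUES)}

def cylindrical_coords(sequence, radius=1.0, pitch=1.0):
    """Return one (x, y, z) point per residue of the sequence."""
    return [(radius * math.cos(ANGLE[aa]),
             radius * math.sin(ANGLE[aa]),
             pitch * k)
            for k, aa in enumerate(sequence)]

print(cylindrical_coords("MKTAYIAK"))
```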
Brown, Raymond J.
1977-01-01
The present invention relates to a tool setting device for use with numerically controlled machine tools, such as lathes and milling machines. A reference position of the machine tool relative to the workpiece along both the X and Y axes is utilized by the control circuit for driving the tool through its program. This reference position is determined for both axes by displacing a single linear variable displacement transducer (LVDT) with the machine tool through a T-shaped pivotal bar. The use of the T-shaped bar allows the cutting tool to be moved sequentially in the X or Y direction for indicating the actual position of the machine tool relative to the predetermined desired position in the numerical control circuit by using a single LVDT.
MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms
NASA Technical Reports Server (NTRS)
Allred, Joel
2012-01-01
Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit which has been developed by the CCMC.
Hamm, V; Collon-Drouaillet, P; Fabriol, R
2008-02-19
The flooding of abandoned mines in the Lorraine Iron Basin (LIB) over the past 25 years has degraded the quality of the groundwater tapped for drinking water. High concentrations of dissolved sulphate have made the water unsuitable for human consumption. This problem has led to the development of numerical tools to support water-resource management in mining contexts. Here we examine two modelling approaches using different numerical tools, which we tested on the Saizerais flooded iron-ore mine (Lorraine, France). The first approach considers the Saizerais mine as a network of two chemical reactors (NCR). The second approach is based on a physically distributed pipe network model (PNM) built with the EPANET 2 software, which considers the mine as a network of pipes defined by their geometric and chemical parameters. Each reactor in the NCR model includes a detailed chemical model built to simulate the evolution of water quality in the flooded mine. However, in order to obtain a robust PNM, we simplified the detailed chemical model into a specific sulphate dissolution-precipitation model that is included as a sulphate source/sink in both the NCR model and the pipe network model. Both the NCR model and the PNM, based on different numerical techniques, give good post-calibration agreement between the simulated and measured sulphate concentrations in the drinking-water well and the overflow drift. The NCR model incorporating the detailed chemical model is useful when detailed chemical behaviour at the overflow is needed. The PNM incorporating the simplified sulphate dissolution-precipitation model provides better information about the physics controlling the effects of flow and low-flow zones and about the time needed for solid sulphate removal, whereas the NCR model underestimates clean-up time because of its complete-mixing assumption. In conclusion, the detailed NCR model gives a first assessment of the chemical processes at the overflow, and the PNM then provides more detailed information on flow and chemical behaviour (dissolved sulphate concentrations, remaining mass of solid sulphate) in the network. Nevertheless, both modelling methods require hydrological and chemical parameters (recharge flow rate, outflows, volume of mine voids, mass of solids, kinetic constants of the dissolution-precipitation reactions) that are commonly not available for a mine and therefore call for calibration data.
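A toy flavour of the network-of-reactors idea is sketched below: two completely mixed reservoirs in series, fed by recharge, each holding a finite stock of solid sulphate that dissolves towards a saturation concentration. The layout, rate law and every number are assumptions and do not reproduce the calibrated Saizerais models.

```python
def simulate(days=8000, dt=1.0):
    Q = 2000.0                      # recharge through the mine [m3/day]
    V = [5e6, 8e6]                  # reactor volumes [m3]
    solid = [4e7, 6e7]              # solid sulphate stocks [g]
    k, c_sat = 5e-4, 2000.0         # dissolution rate [1/day], saturation [g/m3]
    c = [0.0, 0.0]                  # dissolved sulphate in each reactor [g/m3]
    for _ in range(int(days / dt)):
        c_in = 0.0                  # recharge assumed sulphate-free
        for i in range(2):
            # first-order dissolution, slowed as the water approaches saturation
            dissol = k * solid[i] * (1.0 - c[i] / c_sat) if solid[i] > 0 else 0.0
            solid[i] = max(solid[i] - dissol * dt, 0.0)
            c[i] += (Q * (c_in - c[i]) + dissol) / V[i] * dt
            c_in = c[i]             # outflow of reactor i feeds reactor i+1
    return c, solid

print(simulate())
```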
Multiphase and multiscale approaches for modelling the injection of textured moulds
NASA Astrophysics Data System (ADS)
Nakhoul, Rebecca; Laure, Patrice; Silva, Luisa; Vincent, Michel
2016-10-01
Micro-injection moulding is frequently used for the mass production of devices in micro-medical technologies, micro-optics and micro-mechanics. This work focuses on providing numerical tools to model the injection of micro-textured moulds. Such tools can predict the different filling scenarios of the micro-details and consequently suggest optimal operating conditions (mould and melt temperatures, melt flow, stresses, etc.) for analysing the final part quality. To do so, a fully Eulerian approach is used to model the injection of textured moulds at both the macroscopic and microscopic scales, since standard industrial software cannot handle the filling of micro-details. Since heat transfer with the mould is very significant owing to the high cooling rates, the coupling between micro- and macro-simulations is essential to ensure a complete and accurate representation of textured mould injection.
Eruptive event generator based on the Gibson-Low magnetic configuration
NASA Astrophysics Data System (ADS)
Borovikov, D.; Sokolov, I. V.; Manchester, W. B.; Jin, M.; Gombosi, T. I.
2017-08-01
Coronal mass ejections (CMEs), a type of energetic solar eruption, are an integral subject of space weather research. Numerical magnetohydrodynamic (MHD) modeling, which requires powerful computational resources, is one of the primary means of studying the phenomenon. As such resources become more accessible, the demand grows for user-friendly tools that facilitate the process of simulating CMEs for scientific and operational purposes. The Eruptive Event Generator based on the Gibson-Low flux rope (EEGGL), a new publicly available computational model presented in this paper, is an effort to meet this demand. EEGGL allows one to compute the parameters of a model flux rope driving a CME via an intuitive graphical user interface. We provide a brief overview of the physical principles behind EEGGL and its functionality. Ways toward future improvements of the tool are outlined.
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
Large eddy simulation modeling of particle-laden flows in complex terrain
NASA Astrophysics Data System (ADS)
Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.
2017-12-01
The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.
NASA Astrophysics Data System (ADS)
Papasotiriou, P. J.; Geroyannis, V. S.
We implement Hartle's perturbation method for the computation of relativistic, rigidly rotating neutron star models. The program has been written in SCILAB (© INRIA ENPC), a matrix-oriented high-level programming language. The numerical method is described in great detail and is applied to many models in slow or fast rotation. We show that, although the method is perturbative, it gives accurate results for all practical purposes and should prove an efficient tool for computing rapidly rotating pulsars.
Diffusion model to describe osteogenesis within a porous titanium scaffold.
Schmitt, M; Allena, R; Schouman, T; Frasca, S; Collombet, J M; Holy, X; Rouch, P
2016-01-01
In this study, we develop a two-dimensional finite element model, derived from an animal experiment, that simulates osteogenesis within a porous titanium scaffold implanted in a ewe's hemi-mandible for 12 weeks. The cell activity is described through diffusion equations and regulated by the stress state of the structure. We compare our model to (i) histological observations and (ii) experimental data obtained from a mechanical test performed on the sacrificed animal. We show that our mechano-biological approach provides consistent numerical results and constitutes a useful tool to predict osteogenesis patterns.
A path integral approach to asset-liability management
NASA Astrophysics Data System (ADS)
Decamps, Marc; De Schepper, Ann; Goovaerts, Marc
2006-05-01
Functional integrals constitute a powerful tool in the investigation of financial models. In the recent econophysics literature, this technique was successfully used for the pricing of a number of derivative securities. In the present contribution, we introduce this approach to the field of asset-liability management. We work with a representation of cash flows by means of a two-dimensional delta-function perturbation, in the case of a Brownian model and a geometric Brownian model. We derive closed-form solutions for a finite horizon ALM policy. The results are numerically and graphically illustrated.
NASA Astrophysics Data System (ADS)
Gotz, M.; Karsch, L.; Pawelke, J.
2017-11-01
In order to describe the volume recombination in a pulsed radiation field of high dose-per-pulse this study presents a numerical solution of a 1D transport model of the liberated charges in a plane-parallel ionization chamber. In addition, measurements were performed on an Advanced Markus ionization chamber in a pulsed electron beam to obtain suitable data to test the calculation. The experiment used radiation pulses of 4 μs duration and variable dose-per-pulse values up to about 1 Gy, as well as pulses of variable duration up to 308 μs at constant dose-per-pulse values between 85 mGy and 400 mGy. Those experimental data were compared to the developed numerical model and existing descriptions of volume recombination. At low collection voltages the observed dose-per-pulse dependence of volume recombination can be approximated by the existing theory using effective parameters. However, at high collection voltages large discrepancies are observed. The developed numerical model shows much better agreement with the observations and is able to replicate the observed behavior over the entire range of dose-per-pulse values and collection voltages. Using the developed numerical model, the differences between observation and existing theory are shown to be the result of a large fraction of the charge being collected as free electrons and the resultant distortion of the electric field inside the chamber. Furthermore, the numerical solution is able to calculate recombination losses for arbitrary pulse durations in good agreement with the experimental data, an aspect not covered by current theory. Overall, the presented numerical solution of the charge transport model should provide a more flexible tool to describe volume recombination for high dose-per-pulse values as well as for arbitrary pulse durations and repetition rates.
Finite Element Modeling, Simulation, Tools, and Capabilities at Superform
NASA Astrophysics Data System (ADS)
Raman, Hari; Barnes, A. J.
2010-06-01
Over the past thirty years Superform has been a pioneer in the SPF arena, having developed a keen understanding of the process and a range of unique forming techniques to meet varying market needs. Superform's high-profile list of customers includes Boeing, Airbus, Aston Martin, Ford, and Rolls-Royce. One of the more recent additions to Superform's technical know-how is finite element modeling and simulation. Finite element modeling is a powerful numerical technique which, when applied to SPF, provides a host of benefits, including accurate prediction of strain levels in a part, detection of wrinkling, and pressure cycles optimized for time and part thickness. This paper outlines a brief history of finite element modeling applied to SPF and then reviews some of the modeling tools and techniques that Superform has applied, and continues to apply, to successfully form complex-shaped parts superplastically. The advantages of employing modeling at the design stage are discussed and illustrated with real-world examples.
NASA Astrophysics Data System (ADS)
Lin, S. Y.; Chung, C. T.; Cheng, Y. Y.
2011-01-01
The main objective of this study is to develop a thermo-elastic-plastic coupling model, based on a combination of ultrasonically assisted cutting and cryogenic cooling, under large deformation for the machining of Inconel 718 alloy. The extent of improvement in cutting performance and tool life can then be examined. The critical value of the strain energy density of the workpiece will be utilized as the criterion for chip separation and discontinuous chip segmentation. Forced convection cooling and a hydrodynamic lubrication model will be considered and formulated in the model. The finite element method will be applied to create a complete numerical solution for this ultrasonic vibration cutting model. During the analysis, the cutting tool is incrementally advanced, with superimposed ultrasonic vibration, in a back-and-forth step-by-step manner, from the incipient stage of tool-workpiece engagement to a steady state of chip formation; a full simulation of the orthogonal cutting process under plane-strain deformation is thus undertaken. Shear-angle fluctuation induced by high shear strength, high shear strain rates, variations in chip type and morphology, tool-chip contact length, the temperature distributions within the workpiece, chip and tool, and periodic fluctuations in cutting forces can be determined from the developed model. A complete comparison of machining characteristics between several combinations of ultrasonically assisted cutting and cryogenic cooling and conventional cutting can be acquired. Finally, high-speed turning experiments on Inconel 718 alloy will be carried out in the laboratory to validate the accuracy of the model, and the progressive flank wear, crater wear, notching and chipping of the tool edge can also be measured in the experiments.
Finite element analysis of hysteresis effects in piezoelectric transducers
NASA Astrophysics Data System (ADS)
Simkovics, Reinhard; Landes, Hermann; Kaltenbacher, Manfred; Hoffelner, Johann; Lerch, Reinhard
2000-06-01
The design of ultrasonic transducers for high power applications, e.g. in medical therapy or production engineering, calls for effective computer-aided design tools to analyze the nonlinear effects that occur. In this paper the finite-element/boundary-element package CAPA is presented, which allows modeling of different types of electromechanical sensors and actuators. These transducers are based on various physical coupling effects, such as piezoelectricity or magneto-mechanical interactions. Their computer modeling requires the numerical solution of a multifield problem, such as coupled electric-mechanical fields or magnetic-mechanical fields as well as coupled mechanical-acoustic fields. With the reported software environment we are able to compute the dynamic behavior of electromechanical sensors and actuators by taking into account geometric nonlinearities, nonlinear wave propagation and ferroelectric as well as magnetic material nonlinearities. After a short introduction to the basic theory of the numerical calculation schemes, two practical examples demonstrate the applicability of the numerical simulation tool. As a first example, an ultrasonic thickness-mode transducer consisting of a piezoceramic material used for high power ultrasound production is examined. Due to ferroelectric hysteresis, higher order harmonics can be detected in the actuator's input current. Also, in the case of electrical and mechanical prestressing, a resonance frequency shift occurs, caused by ferroelectric hysteresis and nonlinear dependencies of the material coefficients on electric field and mechanical stresses. As a second example, a power ultrasound transducer used in HIFU therapy (high intensity focused ultrasound) is presented. Due to the compressibility and losses in the propagating fluid, nonlinear shock wave generation can be observed. For both examples a good agreement between numerical simulation and experimental data has been achieved.
Accurate modelling of unsteady flows in collapsible tubes.
Marchandise, Emilie; Flaud, Patrice
2010-01-01
The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to the flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe's Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference methods or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental simulations and discuss the sensitivity of our model to its parameters.
Simulations of binary black hole mergers
NASA Astrophysics Data System (ADS)
Lovelace, Geoffrey
2017-01-01
Advanced LIGO's observations of merging binary black holes have inaugurated the era of gravitational wave astronomy. Accurate models of binary black holes and the gravitational waves they emit are helping Advanced LIGO to find as many gravitational waves as possible and to learn as much as possible about the waves' sources. These models require numerical-relativity simulations of binary black holes, because near the time when the black holes merge, all analytic approximations break down. Following breakthroughs in 2005, many research groups have built numerical-relativity codes capable of simulating binary black holes. In this talk, I will discuss current challenges in simulating binary black holes for gravitational-wave astronomy, and I will discuss the tremendous progress that has already enabled such simulations to become an essential tool for Advanced LIGO.
Numerical Modeling Tools for the Prediction of Solution Migration Applicable to Mining Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martell, M.; Vaughn, P.
1999-01-06
Mining has always had an important influence on the cultures and traditions of communities around the globe and throughout history. Today, because mining legislation places heavy emphasis on environmental protection, there is great interest in having a comprehensive understanding of ancient mining and mining sites. Multi-disciplinary approaches (e.g., Pb isotopes as tracers) are being used to explore the distribution of metals in natural environments. Another successful approach is to model solution migration numerically. A proven method to simulate solution migration in natural rock salt has been applied to project, through time for 10,000 years, the system performance and solution concentrations surrounding a proposed nuclear waste repository. This capability is readily adaptable to simulate solution migration around mining sites.
A multilevel control system for the large space telescope. [numerical analysis/optimal control
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.
1975-01-01
A multilevel scheme was proposed for control of the Large Space Telescope (LST), modeled by a three-axis, sixth-order nonlinear equation. Local controllers were used on the subsystem level to stabilize the motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed on the subsystem level, and global control was again used to reduce (nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general tools for design and then used in the design of the LST control system. The methods are entirely computerized, so that they can accommodate higher order LST models with both conceptual and numerical advantages over standard straightforward design techniques.
Numerical study of the geometry of the phase space of the Augmented Hill Three-Body problem
NASA Astrophysics Data System (ADS)
Farrés, Ariadna; Jorba, Àngel; Mondelo, Josep-Maria
2017-09-01
The Augmented Hill Three-Body problem is an extension of the classical Hill problem that, among other applications, has been used to model the motion of a solar sail around an asteroid. This model is a 3 degrees of freedom (3DoF) Hamiltonian system that depends on four parameters. This paper describes the bounded motions (periodic orbits and invariant tori) in an extended neighbourhood of some of the equilibrium points of the model. An interesting feature is the existence of equilibrium points with a 1:1 resonance, whose neighbourhood we also describe. The main tools used are the computation of periodic orbits (including their stability and bifurcations), the reduction of the Hamiltonian to centre manifolds at equilibria, and the numerical approximation of invariant tori. It is remarkable how the combination of these techniques allows the description of the dynamics of a 3DoF Hamiltonian system.
Simulation of two-dimensional turbulent flows in a rotating annulus
NASA Astrophysics Data System (ADS)
Storey, Brian D.
2004-05-01
Rotating water tank experiments have been used to study fundamental processes of atmospheric and geophysical turbulence in a controlled laboratory setting. When these tanks are undergoing strong rotation the forced turbulent flow becomes highly two dimensional along the axis of rotation. An efficient numerical method has been developed for simulating the forced quasi-geostrophic equations in an annular geometry to model current laboratory experiments. The algorithm employs a spectral method with Fourier series and Chebyshev polynomials as basis functions. The algorithm has been implemented on a parallel architecture to allow modelling of a wide range of spatial scales over long integration times. This paper describes the derivation of the model equations, numerical method, testing and performance of the algorithm. Results provide reasonable agreement with the experimental data, indicating that such computations can be used as a predictive tool to design future experiments.
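As a minimal illustration of the Fourier part of such a spectral method, the sketch below differentiates a periodic field by multiplying its FFT coefficients by ik; the test field is illustrative and the Chebyshev treatment of the radial direction is not shown.

```python
import numpy as np

n = 64
theta = 2.0 * np.pi * np.arange(n) / n
u = np.exp(np.sin(theta))                       # smooth periodic test field
k = np.fft.fftfreq(n, d=1.0 / n)                # integer wavenumbers 0, 1, ..., -1
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
print(np.max(np.abs(du - np.cos(theta) * u)))   # error vs. the exact derivative
```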
Heat and Mass Transfer with Condensation in Capillary Porous Bodies
2014-01-01
The present work concerns the analysis of the wetting process caused by condensation phenomena in capillary porous materials, using numerical simulation. Special emphasis is given to the study of the mechanisms involved and the evaluation of classical theoretical models used as predictive tools. The distribution of the liquid phase in both its pendular and funicular states, and its consequences for the diffusion coefficients of the mathematical model used, are further discussed. Beyond the complexity of the interaction effects between vaporisation and condensation processes at the gas-liquid interfaces, the comparison between experiments and numerical simulations makes it possible to identify the specific contribution and relative importance of the mass and energy transport parameters. This analysis allows us to understand the contribution of each part of the mathematical model used and to simplify the study. PMID:24688366
Advanced Computational Modeling of Vapor Deposition in a High-Pressure Reactor
NASA Technical Reports Server (NTRS)
Cardelino, Beatriz H.; Moore, Craig E.; McCall, Sonya D.; Cardelino, Carlos A.; Dietz, Nikolaus; Bachmann, Klaus
2004-01-01
In search of novel approaches to produce new materials for electro-optic technologies, advances have been achieved in the development of computer models for vapor deposition reactors in space. Numerical simulations are invaluable tools for costly and difficult processes, such as experiments designed for high pressures and microgravity conditions. Indium nitride is a candidate compound for high-speed laser and photo diodes for optical communication systems, as well as for semiconductor lasers operating into the blue and ultraviolet regions. But InN and other nitride compounds exhibit large thermal decomposition at their optimum growth temperatures. In addition, epitaxy at lower temperatures and subatmospheric pressures incorporates indium droplets into the InN films. However, surface stabilization data indicate that InN could be grown at 900 K in high nitrogen pressures, and microgravity could provide laminar flow conditions. Numerical models for chemical vapor deposition have been developed, coupling complex chemical kinetics with fluid dynamic properties.
Belcher, Wayne R.; Sweetkind, Donald S.; Faunt, Claudia C.; Pavelko, Michael T.; Hill, Mary C.
2017-01-19
Since the original publication of the Death Valley regional groundwater flow system (DVRFS) numerical model in 2004, more information on the regional groundwater flow system, in the form of new data and interpretations, has been compiled. Cooperators such as the Bureau of Land Management, National Park Service, U.S. Fish and Wildlife Service, the Department of Energy, and Nye County, Nevada, recognized a need to update the existing regional numerical model to maintain its viability as a groundwater management tool for regional stakeholders. The existing DVRFS numerical flow model was converted to MODFLOW-2005, updated with the latest available data, and recalibrated. Five main data sets were revised: (1) recharge from precipitation varying in time and space, (2) pumping data, (3) water-level observations, (4) an updated regional potentiometric map, and (5) a revision to the digital hydrogeologic framework model. The resulting DVRFS version 2.0 (v. 2.0) numerical flow model simulates groundwater flow conditions for the Death Valley region from 1913 to 2003 to correspond to the time frame of the most recently published (2008) water-use data. The DVRFS v. 2.0 model was calibrated by using the Tikhonov regularization functionality in the parameter estimation and predictive uncertainty software PEST. In order to assess the accuracy of the numerical flow model in simulating regional flow, the fit of simulated to target values (consisting of hydraulic heads and flows, including evapotranspiration and spring discharge, flow across the model boundary, and interbasin flow; the regional water budget; values of parameter estimates; and sensitivities) was evaluated. This evaluation showed that DVRFS v. 2.0 simulates conditions similar to DVRFS v. 1.0. Comparisons of the target values with simulated values also indicate that they match reasonably well and in some cases (boundary flows and discharge) significantly better than in DVRFS v. 1.0.
ADOPT: A tool for automatic detection of tectonic plates at the surface of convection models
NASA Astrophysics Data System (ADS)
Mallard, C.; Jacquet, B.; Coltice, N.
2017-08-01
Mantle convection models with plate-like behavior produce surface structures comparable to Earth's plate boundaries. However, analyzing those structures is a difficult task, since convection models produce, as on Earth, diffuse deformation and elusive plate boundaries. We therefore present and share a quantitative tool to identify plate boundaries and produce plate polygon layouts from the results of numerical convection models: Automatic Detection Of Plate Tectonics (ADOPT). This digital tool operates within the free open-source visualization software Paraview. It is based on image segmentation techniques for object detection. The fundamental algorithm used in ADOPT is the watershed transform: the output of convection models is transformed into a topographic map, the crest lines being the regions of deformation (plate boundaries) and the catchment basins being the plate interiors. We propose two generic protocols (the field and the distance methods) that we test against an independent visual detection of plate polygons. We show that ADOPT is effective in identifying the smaller plates and in closing plate polygons in areas where boundaries are diffuse or elusive. ADOPT allows the export of plate polygons in the standard OGR-GMT format for visualization, modification, and analysis under generic software such as GMT or GPlates.
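A generic analogue of ADOPT's core step can be sketched with common Python imaging libraries: treat a deformation field as topography and let the watershed transform carve it into basins. The synthetic field, marker threshold and library choice below are assumptions and do not reproduce ADOPT's field/distance protocols or its Paraview integration.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx]
# Synthetic "deformation rate" with two ridge-like boundaries (illustrative only)
deformation = (np.exp(-((x - 40) ** 2) / 50.0) +
               np.exp(-((y - 0.8 * x - 10) ** 2) / 80.0))

# Seeds: one marker per low-deformation region (candidate plate interior)
smooth = ndimage.gaussian_filter(deformation, 3)
markers, n_plates = ndimage.label(smooth < 0.05)

# Catchment basins of the deformation "topography" become plate polygons
plates = watershed(deformation, markers)
print(n_plates, np.unique(plates))
```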
Experimental validation of ultrasonic NDE simulation software
NASA Astrophysics Data System (ADS)
Dib, Gerges; Larche, Michael; Diaz, Aaron A.; Crawford, Susan L.; Prowant, Matthew S.; Anderson, Michael T.
2016-02-01
Computer modeling and simulation is becoming an essential tool for transducer design and insight into ultrasonic nondestructive evaluation (UT-NDE). As the popularity of simulation tools for UT-NDE increases, it becomes important to assess their reliability to model acoustic responses from defects in operating components and provide information that is consistent with in-field inspection data. This includes information about the detectability of different defect types for a given UT probe. Recently, a cooperative program between the Electric Power Research Institute and the U.S. Nuclear Regulatory Commission was established to validate numerical modeling software commonly used for simulating UT-NDE of nuclear power plant components. In the first phase of this cooperative program, extensive experimental UT measurements were conducted on machined notches with varying depth, length, and orientation in stainless steel plates. The notches were then modeled in CIVA, a semi-analytical NDE simulation platform developed by the French Commissariat à l'Énergie Atomique, and their responses compared with the experimental measurements. Discrepancies between experimental and simulation results are due either to improper inputs to the simulation model or to incorrect approximations and assumptions in the numerical models. To address the former, a variation study was conducted on the different parameters that are required as inputs for the model, specifically the specimen and transducer properties. Then, the ability of the simulations to give accurate predictions regarding the detectability of the different defects was demonstrated. This includes results in terms of the variations in defect amplitude indications and the ratios between tip-diffracted and specular signal amplitudes.
NASA Astrophysics Data System (ADS)
Aktan, A. Emin
2003-08-01
Although the interconnected systems nature of the infrastructures, and the complexity of interactions between their engineered, socio-technical and natural constituents, have been recognized for some time, the principles of effectively operating, protecting and preserving such systems by taking full advantage of the "modeling, simulations, optimization, control and decision making" tools developed by the systems engineering and operations research community have not been adequately studied or discussed by many engineers, including the writer. Differential and linear equation systems, numerical and finite element modeling techniques, and statistical and probabilistic representations are universal; however, different disciplines have developed their own distinct approaches to conceptualizing, idealizing and modeling the systems they commonly deal with. The challenge is in adapting and integrating the deterministic and stochastic, geometric and numerical, physics-based and "soft" (data- or knowledge-based), macroscopic or microscopic models developed by various disciplines for simulating infrastructure systems. There is a lot to be learned by studying how different disciplines have studied, improved and optimized the systems relating to various processes and products in their domains. Operations research has become a fifty-year-old discipline addressing complex systems problems. Its mathematical tools range from linear programming to decision processes and game theory. These tools are used extensively in management and finance, as well as by industrial engineers for optimization and quality control. Progressive civil engineering academic programs have adopted "systems engineering" as a focal area. However, most civil engineering systems programs remain focused on constructing and analyzing highly idealized, often generic models relating to the planning or operation of transportation, water or waste systems, maintenance management, waste management or general infrastructure hazards risk management. We further note that in the last decade there have been efforts towards "agent-based" modeling of synthetic infrastructure systems by taking advantage of supercomputers at various DOE laboratories. However, whether there is any similitude between such synthetic and actual systems needs further investigation.
NASA Astrophysics Data System (ADS)
Huang, Chien-Jung; White, Susan M.; Huang, Shao-Ching; Mallya, Sanjay; Eldredge, Jeff D.
2014-11-01
Obstructive sleep apnea (OSA) is a medical condition characterized by repetitive partial or complete occlusion of the airway during sleep. The soft tissues in the airway of OSA patients are prone to collapse under the low pressure loads incurred during breathing. Numerical simulation with patient-specific upper airway models can assist diagnosis and treatment assessment. The eventual goal of this research is the development of a numerical tool for air-tissue interactions in the upper airway of patients with OSA. This tool is expected to capture collapse of the airway under respiratory flow conditions, as well as the effects of various treatment protocols. Here, we present our ongoing progress toward this goal. A sharp-interface embedded boundary method is used on Cartesian grids for resolving the air-tissue interface in the complex patient-specific airway geometries. For the structure simulation, a cut-cell finite element method is used. Non-linear Green strains are used to properly resolve the large tissue displacements in the soft palate structures. The fluid and structure solvers are strongly coupled. Preliminary results will be shown, including flow simulation inside the 3D rigid upper airway of patients with OSA and several validation problems for the fluid-structure coupling.
ERIC Educational Resources Information Center
Boldt, Milton; Pokorny, Harry
Thirty-three machine shop instructors from 17 states participated in an 8-week seminar to develop the skills and knowledge essential for teaching the operation of numerically controlled machine tools. The seminar was given from June 20 to August 12, 1966, with college credit available through Stout State University. The participants completed an…
Ice Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel
NASA Technical Reports Server (NTRS)
Broeren, Andy; Potapczuk, Mark; Lee, Sam; Malone, Adam; Paul, Ben; Woodard, Brian
2016-01-01
The design and certification of modern transport airplanes for flight in icing conditions increasingly relies on three-dimensional numerical simulation tools for ice accretion prediction. There is currently no publicly available, high-quality ice accretion database upon which to evaluate the performance of icing simulation tools for large-scale swept wings that are representative of modern commercial transport airplanes. The purpose of this presentation is to present the results of a series of icing wind tunnel test campaigns whose aim was to provide an ice accretion database for large-scale, swept wings.
CFD Methods and Tools for Multi-Element Airfoil Analysis
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; George, Michael W. (Technical Monitor)
1995-01-01
This lecture will discuss the computational tools currently available for high-lift multi-element airfoil analysis. It will present an overview of a number of different numerical approaches, their current capabilities, shortcomings, and computational costs. The lecture will be limited to viscous methods, including inviscid/boundary-layer coupling methods, and incompressible and compressible Reynolds-averaged Navier-Stokes methods. Both structured and unstructured grid generation approaches will be presented. Two different structured grid procedures are outlined: one uses multi-block patched grids, the other overset chimera grids. Turbulence and transition modeling will be discussed.
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite-precision definition for an approximation of an infinite-precision numerical function implemented in a processor, in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite-precision numerical function. The verification tool splits the domain into at least two segments, each non-overlapping with any other segment, and converts, for each segment, the polynomial of bounded functions for that segment into a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment, and reports the segments that violate a bounding condition.
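The workflow described above (split the domain, bound the approximation error on each segment, and report violations) can be illustrated with a short sketch. The example below is a hypothetical stand-in, not the patented tool: it checks a Taylor polynomial for exp(x) against an error bound by dense sampling on each segment, whereas the tool itself computes rigorous upper bounds of the simplified formulas.

```python
# Illustrative sketch only: segment-wise error checking of a polynomial
# approximation of exp(x). Dense sampling stands in for the rigorous
# upper-bound computation performed by the verification tool.
import numpy as np

def max_error_on_segment(coeffs, lo, hi, n=10001):
    x = np.linspace(lo, hi, n)
    return np.max(np.abs(np.polyval(coeffs, x) - np.exp(x)))

def verify(coeffs, domain=(-1.0, 1.0), segments=4, bound=1e-6):
    edges = np.linspace(domain[0], domain[1], segments + 1)
    violations = []
    for lo, hi in zip(edges[:-1], edges[1:]):      # non-overlapping segments
        err = max_error_on_segment(coeffs, lo, hi)
        if err > bound:                            # bounding condition violated
            violations.append((lo, hi, err))
    return violations

# Degree-6 Taylor polynomial of exp(x), coefficients in numpy's high-to-low order.
taylor6 = [1/720, 1/120, 1/24, 1/6, 1/2, 1.0, 1.0]
print(verify(taylor6, bound=1e-6))   # segments near the domain edges are reported
```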
Application of modern radiative transfer tools to model laboratory quartz emissivity
NASA Astrophysics Data System (ADS)
Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.
2005-08-01
Planetary remote sensing of regolith surfaces requires the use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models by comparison with a trusted set of nadir-incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + a numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm⁻¹) wave number range. We show that Mie theory, a widely used but poor approximation for irregular grain shapes, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to the single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for densely packed grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.
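As a concrete illustration of one of the corrections compared above, the sketch below applies a standard form of the diffraction subtraction to Mie-derived efficiencies, assuming the large-particle limit in which the forward diffraction peak contributes an extinction efficiency of about 1 and an asymmetry parameter near 1. The input values are placeholders, not data from this study.

```python
# Hedged sketch of a standard diffraction-subtraction correction for large,
# closely packed grains; the input efficiencies below are placeholder values.
def diffraction_subtraction(q_ext, q_sca, g):
    """Remove the forward diffraction peak (Q_diff ~ 1, g_diff ~ 1)."""
    q_sca_c = q_sca - 1.0
    ssa_c = q_sca_c / (q_ext - 1.0)          # corrected single scattering albedo
    g_c = (g * q_sca - 1.0) / q_sca_c        # corrected asymmetry parameter
    return ssa_c, g_c

ssa, g = diffraction_subtraction(q_ext=2.1, q_sca=1.3, g=0.85)
print(f"corrected albedo {ssa:.2f}, corrected asymmetry {g:.2f}")
```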
NASA Astrophysics Data System (ADS)
Henderson, Michael
1997-08-01
The Numerical Analysis Objects (NAO) project is a project in the Mathematics Department of IBM's TJ Watson Research Center. While there are plenty of numerical tools available today, it is not an easy task to combine them into a custom application. NAO is directed at the dual problems of building applications from a set of tools and creating those tools. There are several "reuse" projects that focus on the problems of identifying and cataloging tools; NAO is directed at the specific context of scientific computing. Because the types of tools are restricted, problems such as incompatible input and output data structures, and dissimilar interfaces among tools that solve similar problems, can be addressed. The approach we have taken is to define interfaces to the objects used in numerical analysis, such as geometries, functions and operators, and to start collecting (and building) a set of tools that use these interfaces. We have written a class library (a set of abstract classes and implementations) in C++ that demonstrates the approach. Besides the classes, the library includes "stub" routines that allow it to be used from C or Fortran, and an interface to a Visual Programming Language. The library has been used to build a simulator for petroleum reservoirs, using a set of tools we have written for discretizing nonlinear differential equations, and it includes "wrapped" versions of packages from the Netlib repository. Documentation can be found on the Web at "http://www.research.ibm.com/nao". I will describe the objects and their interfaces, and give examples ranging from mesh generation to solving differential equations.
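The interface-first approach described above is easiest to see in miniature. The sketch below is a hypothetical Python analogue of the idea (the actual NAO library is a C++ class library, and these names are illustrative): tools agree on abstract Function and Operator interfaces so that, for example, a differentiation tool can operate on any function object regardless of where it came from.

```python
# Hypothetical Python analogue of the NAO interface idea; names are illustrative.
from abc import ABC, abstractmethod
import numpy as np

class Function(ABC):
    """Common interface that numerical tools agree on for exchanging functions."""
    @abstractmethod
    def evaluate(self, x): ...

class Operator(ABC):
    """An operator maps one Function to another (e.g. a discrete derivative)."""
    @abstractmethod
    def apply(self, f: Function) -> Function: ...

class SampledFunction(Function):
    def __init__(self, grid, values):
        self.grid, self.values = np.asarray(grid), np.asarray(values)
    def evaluate(self, x):
        return np.interp(x, self.grid, self.values)

class CentralDifference(Operator):
    def apply(self, f: SampledFunction) -> SampledFunction:
        return SampledFunction(f.grid, np.gradient(f.values, f.grid))

x = np.linspace(0.0, np.pi, 101)
dfdx = CentralDifference().apply(SampledFunction(x, np.sin(x)))
print(dfdx.evaluate(0.0))   # ~1.0, the derivative of sin at x = 0
```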
Coeli M. Hoover; James E. Smith
2017-01-01
The focus on forest carbon estimation accompanying the implementation of increased regulatory and reporting requirements is fostering the development of numerous tools and methods to facilitate carbon estimation. One such well-established mechanism is the Forest Vegetation Simulator (FVS), a growth and yield modeling system used by public and private land managers...
NASA Astrophysics Data System (ADS)
Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.
2016-12-01
Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions about the physics, the governing equations and the numerical methods used to solve them, the discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information, and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, which is not sufficient to enable the desired capabilities. In order to address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets and smartphones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF (Resource Description Framework) files. This ontology is based on core concepts such as variables, objects, quantities, operations, processes and assumptions. The purpose of this talk is to present details of the new ontology and then demonstrate the MCM Tool for several hydrologic models.
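To make the "deep description" idea concrete, the sketch below records a few such metadata statements as RDF triples with the Python rdflib package. The namespace, property names, and example variable are illustrative assumptions, not the actual CSDMS/MCM ontology terms.

```python
# Illustrative only: storing model metadata as RDF triples with rdflib.
# The namespace, properties and values below are assumed, not the MCM ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/model-metadata#")
g = Graph()
g.bind("ex", EX)

model = EX.ExampleHydrologicModel
g.add((model, RDF.type, EX.ComputationalModel))
g.add((model, EX.hasAssumption, Literal("shallow-water approximation")))
g.add((model, EX.usesNumericalMethod, Literal("explicit finite volume")))
g.add((model, EX.hasTimeSteppingScheme, Literal("forward Euler")))
g.add((model, EX.hasOutputVariable, Literal("land_surface_water__depth")))

print(g.serialize(format="turtle"))
```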
Programming biological models in Python using PySB.
Lopez, Carlos F; Muhlich, Jeremy L; Bachman, John A; Sorger, Peter K
2013-01-01
Mathematical equations are fundamental to modeling biological networks, but as networks grow large and revisions become frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling, but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa, and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis.
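As a flavor of what "models are programs" means in practice, the following minimal sketch defines a single reversible binding step in PySB and simulates it with the SciPy-based ODE simulator. The monomer names, rate constants, and initial amounts are placeholders, and the example uses an explicit Rule rather than the macro library mentioned above.

```python
# Minimal PySB sketch: one reversible ligand-receptor binding step.
# All names and parameter values are placeholders for illustration.
import numpy as np
from pysb import Model, Monomer, Parameter, Initial, Rule, Observable
from pysb.simulator import ScipyOdeSimulator

Model()  # PySB injects the model and its components into this namespace

Monomer('L', ['b'])            # ligand with one binding site
Monomer('R', ['b'])            # receptor with one binding site
Parameter('kf', 1e-3)
Parameter('kr', 1e-2)
Parameter('L_0', 100)
Parameter('R_0', 200)

Initial(L(b=None), L_0)
Initial(R(b=None), R_0)

# The rule is executable code; PySB expands it into the reaction network.
Rule('L_binds_R', L(b=None) + R(b=None) | L(b=1) % R(b=1), kf, kr)
Observable('LR_complex', L(b=1) % R(b=1))

tspan = np.linspace(0, 100, 101)
result = ScipyOdeSimulator(model, tspan).run()
print(result.observables['LR_complex'][-1])   # bound complex at the final time
```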
Choi, Woo June; Qin, Wan; Chen, Chieh-Li; Wang, Jingang; Zhang, Qinqin; Yang, Xiaoqi; Gao, Bruce Z.; Wang, Ruikang K.
2016-01-01
Optical microangiography (OMAG) is a powerful optical angiographic tool for visualizing microvascular flow in vivo. Despite numerous demonstrations over the past several years of the qualitative relationship between OMAG and flow, no convincing quantitative relationship has been established. In this paper, we attempt to quantitatively correlate the OMAG signal with flow. Specifically, we develop a simplified analytical model of complex OMAG, suggesting that the OMAG signal is a product of the number of particles in an imaging voxel and the decorrelation of the OCT (optical coherence tomography) signal, which is determined by flow velocity, inter-frame time interval, and wavelength of the light source. Numerical simulation with the proposed model reveals that if the OCT amplitudes are correlated, the OMAG signal is related to the total number of particles crossing the imaging voxel cross-section per unit time (flux); otherwise it saturates, with a strength proportional to the number of particles in the imaging voxel (concentration). The relationship is validated using microfluidic flow phantoms with various preset flow metrics. This work suggests OMAG is a promising quantitative tool for the assessment of vascular flow. PMID:27446700
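The qualitative behavior described above (flux-like dependence while inter-frame decorrelation is small, saturation at the voxel particle count once frames fully decorrelate) can be sketched numerically. The Gaussian decorrelation form and all parameter values below are assumptions for illustration, not the paper's analytical model.

```python
# Hedged sketch of an OMAG-like signal: (particle count) x (inter-frame
# decorrelation). The decorrelation form and parameters are assumed.
import numpy as np

wavelength = 1.3e-6     # m, light-source centre wavelength (assumed)
dt = 3.6e-3             # s, inter-frame time interval (assumed)
n_particles = 50        # particles in the imaging voxel (concentration)

def omag_signal(velocity):
    displacement = velocity * dt
    # Decorrelation grows with inter-frame displacement and saturates at 1
    # once particles move by roughly a wavelength between frames.
    decorrelation = 1.0 - np.exp(-(2.0 * np.pi * displacement / wavelength) ** 2)
    return n_particles * decorrelation

for v in (1e-5, 1e-4, 1e-3, 1e-2):        # flow velocity, m/s
    print(f"v = {v:.0e} m/s -> signal ~ {omag_signal(v):5.1f}")
```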
Numerical study of electromagnetic waves generated by a prototype dielectric logging tool
To understand the electromagnetic waves generated by a prototype dielectric logging tool, a numerical study was conducted using both the finite-difference time-domain method and a frequency-wavenumber method. When the propagation velocity in the borehole was greater than th...
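For readers unfamiliar with the first of the two methods mentioned, the sketch below is a generic one-dimensional finite-difference time-domain (Yee) update loop in free space; it is only an illustration of the method, not the borehole or logging-tool model used in the study.

```python
# Generic 1D FDTD (Yee) sketch in free space; not the borehole-tool model.
import numpy as np

EPS0, MU0, C = 8.854e-12, 4e-7 * np.pi, 299792458.0
nz, nt = 400, 600
dz = 1e-3                    # 1 mm cells
dt = dz / (2.0 * C)          # Courant-stable time step

ez = np.zeros(nz)            # electric field on integer nodes
hy = np.zeros(nz - 1)        # magnetic field on half nodes

for n in range(nt):
    hy += (dt / (MU0 * dz)) * (ez[1:] - ez[:-1])          # update H from curl E
    ez[1:-1] += (dt / (EPS0 * dz)) * (hy[1:] - hy[:-1])   # update E from curl H
    ez[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)        # soft Gaussian source

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3e}")
```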