Sample records for simulation problem analysis

  1. Analysis, preliminary design and simulation systems for control-structure interaction problems

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.

    1991-01-01

    Software aspects of control-structure interaction (CSI) analysis are discussed. The following subject areas are covered: (1) implementation of a partitioned algorithm for simulation of large CSI problems; (2) second-order discrete Kalman filtering equations for CSI simulations; and (3) parallel computations and control of adaptive structures.
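
    As an illustration of item (2), the predict/update structure of discrete Kalman filtering can be sketched in a few lines; this is a minimal generic sketch, not the second-order CSI formulation of the report, and all matrices below are hypothetical placeholders.

      import numpy as np

      def kalman_step(x, P, A, Q, H, R, z):
          # Predict: propagate the state estimate and covariance one step.
          x_pred = A @ x
          P_pred = A @ P @ A.T + Q
          # Update: correct the prediction with the measurement z.
          S = H @ P_pred @ H.T + R              # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new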

  2. Simulation and Analysis of Converging Shock Wave Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsey, Scott D.; Shashkov, Mikhail J.

    2012-06-21

    Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axisymmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.

  3. Dynamic extension of the Simulation Problem Analysis Kernel (SPANK)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, E.F.; Buhl, W.F.

    1988-07-15

    The Simulation Problem Analysis Kernel (SPANK) is an object-oriented simulation environment for general simulation purposes. Among its unique features is use of the directed graph as the primary data structure, rather than the matrix. This allows straightforward use of graph algorithms for matching variables and equations, and reducing the problem graph for efficient numerical solution. The original prototype implementation demonstrated the principles for systems of algebraic equations, allowing simulation of steady-state, nonlinear systems (Sowell 1986). This paper describes how the same principles can be extended to include dynamic objects, allowing simulation of general dynamic systems. The theory is developed and an implementation is described. An example is taken from the field of building energy system simulation. 2 refs., 9 figs.
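
    The matching step described above (pairing each equation with the variable it will be solved for) can be illustrated with a bipartite graph; the sketch below assumes the networkx library and a made-up three-equation system, not SPANK's actual data structures.

      import networkx as nx

      # Hypothetical system: each equation lists the variables it involves.
      incidence = {
          "eq1": ["T_out", "m_dot"],
          "eq2": ["T_out", "Q"],
          "eq3": ["Q", "m_dot"],
      }
      G = nx.Graph()
      G.add_nodes_from(incidence, bipartite=0)
      for eq, variables in incidence.items():
          for v in variables:
              G.add_edge(eq, v)

      # Match each equation to the variable it will be solved for.
      matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=incidence.keys())
      print({eq: var for eq, var in matching.items() if eq in incidence})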

  4. An Analysis of Collaborative Problem-Solving Activities Mediated by Individual-Based and Collaborative Computer Simulations

    ERIC Educational Resources Information Center

    Chang, C.-J.; Chang, M.-H.; Liu, C.-C.; Chiu, B.-C.; Fan Chiang, S.-H.; Wen, C.-T.; Hwang, F.-K.; Chao, P.-Y.; Chen, Y.-L.; Chai, C.-S.

    2017-01-01

    Researchers have indicated that the collaborative problem-solving space afforded by collaborative systems significantly impacts the problem-solving process. However, recent investigations into collaborative simulations, which allow a group of students to jointly manipulate a problem in a shared problem space, have yielded divergent results…

  5. Paper simulation techniques in user requirements analysis for interactive computer systems

    NASA Technical Reports Server (NTRS)

    Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.

    1979-01-01

    This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist, but is simulated by the experimenters. This allows simulated problem solving early in the design effort, and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.

  6. A Simulated Research Problem for Undergraduate Metamorphic Petrology.

    ERIC Educational Resources Information Center

    Amenta, Roddy V.

    1984-01-01

    Presents a laboratory problem in metamorphic petrology designed to simulate a research experience. The problem deals with data on scales ranging from a geologic map to hand specimens to thin sections. Student analysis includes identifying metamorphic index minerals, locating their isograds on the map, and determining the folding sequence. (BC)

  7. Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin E.; Zeiler, Tom

    2012-01-01

    This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF) and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable to a general trajectory optimization problem. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided, as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF TLI trajectory optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.
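
    For readers unfamiliar with direct transcription, the sketch below shows the general pattern (collocation defect constraints plus boundary conditions handed to an NLP solver) on a toy double-integrator problem; it uses SciPy's SLSQP in place of SNOPT, and every quantity in it is a simplified stand-in for the MASTIF/TLI setup.

      import numpy as np
      from scipy.optimize import minimize

      # Toy double-integrator transfer: states x, v and control u at N nodes.
      N, T = 20, 1.0
      h = T / (N - 1)

      def unpack(z):
          return np.split(z, 3)            # x, v, u arrays of length N

      def defects(z):
          # Trapezoidal collocation residuals for x' = v, v' = u.
          x, v, u = unpack(z)
          dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
          dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
          return np.concatenate([dx, dv])

      def boundary(z):
          # Rest-to-rest transfer from position 0 to position 1.
          x, v, _ = unpack(z)
          return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

      cost = lambda z: h * np.sum(unpack(z)[2] ** 2)   # approx. integral of u^2
      sol = minimize(cost, np.zeros(3 * N), method="SLSQP",
                     constraints=[{"type": "eq", "fun": defects},
                                  {"type": "eq", "fun": boundary}])
      print(sol.success, sol.fun)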

  8. Design and Application of Interactive Simulations in Problem-Solving in University-Level Physics Education

    NASA Astrophysics Data System (ADS)

    Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel

    2016-08-01

    In recent years, interactive computer simulations have been progressively integrated in the teaching of the sciences and have contributed significant improvements in the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving teaching materials and assess their effectiveness in improving students' ability to solve problems in university-level physics. Firstly, we analyze the effect of using simulation-based materials in the development of students' skills in employing procedures that are typically used in the scientific method of problem-solving. We found that a significant percentage of the experimental students used expert-type scientific procedures such as qualitative analysis of the problem, making hypotheses, and analysis of results. At the end of the course, only a minority of the students persisted with habits based solely on mathematical equations. Secondly, we compare the effectiveness in terms of problem-solving of the experimental group students with the students who are taught conventionally. We found that the implementation of the problem-solving strategy improved experimental students' results regarding obtaining a correct solution from the academic point of view, in standard textbook problems. Thirdly, we explore students' satisfaction with simulation-based problem-solving teaching materials and we found that the majority appear to be satisfied with the methodology proposed and took on a favorable attitude to learning problem-solving. The research was carried out among first-year Engineering Degree students.

  9. Enhanced Verification Test Suite for Physics Simulation Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, J R; Brock, J S; Brandon, S T

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code be evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.
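
    A staple calculation in such verification analyses is the observed order of convergence obtained from solutions on successively refined grids; a minimal sketch, with hypothetical code outputs, follows.

      import numpy as np

      def observed_order(coarse, medium, fine, refinement_ratio=2.0):
          # Richardson-style estimate of the convergence rate from three grid levels.
          return np.log(abs(coarse - medium) / abs(medium - fine)) / np.log(refinement_ratio)

      # Hypothetical code results on grids refined by a factor of two.
      print(observed_order(1.105, 1.026, 1.006))   # ~2 indicates second-order accuracy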

  10. Proposal of a micromagnetic standard problem for ferromagnetic resonance simulations

    NASA Astrophysics Data System (ADS)

    Baker, Alexander; Beg, Marijan; Ashton, Gregory; Albert, Maximilian; Chernyshenko, Dmitri; Wang, Weiwei; Zhang, Shilei; Bisotti, Marc-Antonio; Franchin, Matteo; Hu, Chun Lian; Stamps, Robert; Hesjedal, Thorsten; Fangohr, Hans

    2017-01-01

    Nowadays, micromagnetic simulations are a common tool for studying a wide range of different magnetic phenomena, including ferromagnetic resonance. A technique for evaluating the reliability and validity of different micromagnetic simulation tools is the simulation of proposed standard problems. We propose a new standard problem by providing a detailed specification and analysis of a sufficiently simple problem. By analyzing the magnetization dynamics in a thin permalloy square sample, triggered by a well-defined excitation, we obtain the ferromagnetic resonance spectrum and identify the resonance modes via Fourier transform. Simulations are performed using both finite difference and finite element numerical methods, with the OOMMF and Nmag simulators, respectively. We report the effects of initial conditions and simulation parameters on the character of the observed resonance modes for this standard problem. We provide detailed instructions and code to assist in using the results for evaluation of new simulator tools, and to help with numerical calculation of ferromagnetic resonance spectra and modes in general.
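
    The post-processing described here (record the spatially averaged magnetization after a well-defined excitation, then Fourier-transform it to obtain the spectrum) reduces to a few lines; the damped oscillation below is a synthetic stand-in for OOMMF or Nmag output, with hypothetical frequency and decay values.

      import numpy as np

      dt, n = 5e-12, 4096                    # assumed sampling step and length
      t = np.arange(n) * dt
      f0, tau = 8.25e9, 1.5e-9               # hypothetical mode frequency / decay
      m_y = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

      # Power spectrum of the ringdown gives the resonance peaks.
      spectrum = np.abs(np.fft.rfft(m_y - m_y.mean())) ** 2
      freqs = np.fft.rfftfreq(n, dt)
      print(f"peak at {freqs[spectrum.argmax()] / 1e9:.2f} GHz")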

  11. Finite element analysis (FEA) analysis of the preflex beam

    NASA Astrophysics Data System (ADS)

    Wan, Lijuan; Gao, Qilang

    2017-10-01

    The development of finite element analysis (FEA) is relatively mature, and FEA is one of the important means of structural analysis. The method removes the need, common in earlier research on complex structures, for large numbers of physical experiments. Through the finite element method, numerical simulation of a structure can support a variety of static and dynamic analyses of mechanical problems, and it is also convenient for studying the influence of structural parameters. Combined with a limited number of validation experiments, the simulation model can meet needs that previously required full experimental studies. The nonlinear finite element method is used to simulate the flexural behavior of prestressed composite beams with corrugated steel webs, and the finite element analysis is used to understand the mechanical properties of the structure under bending load.

  12. Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids

    NASA Astrophysics Data System (ADS)

    Sezer-Uzol, Nilay

    In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is studied by using stationary or moving unstructured grids. Two main engineering problems are investigated. The first problem is the unsteady simulation of a ship airwake, where helicopter operations become even more challenging, by using stationary unstructured grids. The second problem is the unsteady simulation of wind turbine rotor flow fields by using moving unstructured grids which are rotating with the whole three-dimensional rigid rotor geometry. The three-dimensional, unsteady, parallel, unstructured, finite volume flow solver, PUMA2, is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies aimed at providing new tools and insights to the aerospace and wind energy scientific communities are carried out during this research, focusing on the coupling of ship airwake CFD simulations with helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with aeroacoustic analysis, and the analysis of these time-dependent and large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.

  13. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Hooper, Russell

    2016-11-01

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. This manual offers Consortium for Advanced Simulation of Light Water Reactors (LWRs) (CASL) partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.

  14. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  15. Traffic Flow Density Distribution Based on FEM

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Cui, Jianming

    In the analysis of normal traffic flow, static or dynamic models based on fluid mechanics are usually used for numerical analysis. However, such handling involves massive modeling and data processing, and its accuracy is not high. The Finite Element Method (FEM) is a product of the combination of modern mathematics, mechanics, and computer technology, and it has been widely applied in engineering and other domains. Based on the existing theory of traffic flow, ITS, and the development of FEM, a simulation theory of FEM that solves the problems existing in traffic flow analysis is put forward. Based on this theory, and using existing Finite Element Analysis (FEA) software, traffic flow is simulated and analyzed with fluid mechanics and dynamics. The massive data-processing problem of manual modeling and numerical analysis is solved, and the authenticity of the simulation is enhanced.

  16. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
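
    A minimal example of the approach: estimating the empirical Type I error rate of a one-sample t-test by drawing repeated samples from a known non-normal population. The population, sample size, and replication count are arbitrary illustrative choices.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_reps, n, alpha = 10000, 15, 0.05
      rejections = 0
      for _ in range(n_reps):
          sample = rng.exponential(scale=1.0, size=n)   # true mean is 1.0
          _, p = stats.ttest_1samp(sample, popmean=1.0)
          rejections += p < alpha
      print(f"empirical Type I error: {rejections / n_reps:.3f}")   # nominal 0.05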

  17. Integrating Problem-Based Learning and Simulation: Effects on Student Motivation and Life Skills.

    PubMed

    Roh, Young Sook; Kim, Sang Suk

    2015-07-01

    Previous research has suggested that a teaching strategy integrating problem-based learning and simulation may be superior to traditional lecture. The purpose of this study was to assess learner motivation and life skills before and after taking a course involving problem-based learning and simulation. The design used repeated measures with a convenience sample of 83 second-year nursing students who completed the integrated course. Data from a self-administered questionnaire measuring learner motivation and life skills were collected at pretest, post-problem-based-learning, and post-simulation time points. Repeated-measures analysis of variance determined that the mean scores for total learner motivation (F=6.62, P=.003), communication (F=8.27, P<.001), problem solving (F=6.91, P=.001), and self-directed learning (F=4.45, P=.016) differed significantly between time points. Post hoc tests using the Bonferroni correction revealed that total learner motivation and total life skills significantly increased both from pretest to post-simulation test and from post-problem-based-learning test to post-simulation test. Subscales of learner motivation and life skills (intrinsic goal orientation, self-efficacy for learning and performance, problem-solving skills, and self-directed learning skills) significantly increased both from pretest to post-simulation test and from post-problem-based-learning test to post-simulation test. The results demonstrate that an integrated problem-based learning and simulation course elicits significant improvement in learner motivation and life skills. Simulation plus problem-based learning is more effective than problem-based learning alone at increasing intrinsic goal orientation, task value, self-efficacy for learning and performance, problem solving, and self-directed learning.

  18. Analysis of the Space Propulsion System Problem Using RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis; Rabiti, Cristian

    This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte-Carlo sampling of random distributed events and Event Tree based analysis. In order to facilitate the input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN also allows interfacing with several numerical codes such as RELAP5 and RELAP-7 and ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator has been developed, written in the Python language, and then interfaced to RAVEN. Such a simulator fully models both deterministic behaviors (e.g., system dynamics and interactions between system components) and stochastic behaviors (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte-Carlo). Such analysis is accomplished both to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As also indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to generate risk-informed insights such as conditions under which different strategies can be followed.

  19. Simulation of Neural Firing Dynamics: A Student Project.

    ERIC Educational Resources Information Center

    Kletsky, E. J.

    This paper describes a student project in digital simulation techniques that is part of a graduate systems analysis course entitled Biosimulation. The students chose different simulation techniques to solve a problem related to the neuron model. (MLH)

  20. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
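
    As a sketch of the Continuous Time Markov Chain framework mentioned above, the following simulates a small CTMC by drawing exponential dwell times from a generator matrix; the three-state model (up/degraded/failed) and its rates are hypothetical, and no importance sampling or parallelism is shown.

      import numpy as np

      Q = np.array([[-0.10,  0.08,  0.02],   # hypothetical generator matrix
                    [ 0.50, -0.60,  0.10],
                    [ 1.00,  0.00, -1.00]])

      rng = np.random.default_rng(1)
      state, t, horizon, downtime = 0, 0.0, 1000.0, 0.0
      while t < horizon:
          rate = -Q[state, state]
          dwell = rng.exponential(1.0 / rate)
          if state == 2:                      # accumulate time spent failed
              downtime += min(dwell, horizon - t)
          t += dwell
          state = rng.choice(3, p=Q[state].clip(min=0.0) / rate)
      print(f"estimated unavailability: {downtime / horizon:.4f}")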

  1. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
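
    Speckle contrast itself is just the ratio of local standard deviation to local mean intensity; a minimal NumPy sketch follows, with a synthetic exponential-intensity frame standing in for a real camera image (for fully developed speckle with no spatial averaging, K should be near 1).

      import numpy as np
      from numpy.lib.stride_tricks import sliding_window_view

      def speckle_contrast(image, window=7):
          # Local contrast K = sigma / mean over a sliding window.
          patches = sliding_window_view(image, (window, window))
          return patches.std(axis=(-2, -1)) / patches.mean(axis=(-2, -1))

      rng = np.random.default_rng(0)
      frame = rng.exponential(scale=100.0, size=(64, 64))   # synthetic speckle
      print(speckle_contrast(frame).mean())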

  2. Corrected goodness-of-fit test in covariance structure analysis.

    PubMed

    Hayakawa, Kazuhiko

    2018-05-17

    Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases.

  3. Solving iTOUGH2 simulation and optimization problems using the PEST protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.; Zhang, Y.

    2011-02-01

    The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.

  4. The simulators: truth and power in the psychiatry of José Ingenieros.

    PubMed

    Caponi, Sandra

    2016-01-01

    Using Michel Foucault's lectures on "Psychiatric power" as its starting point, this article analyzes the book Simulación de la locura (The simulation of madness), published in 1903 by the Argentine psychiatrist José Ingenieros. Foucault argues that the problem of simulation permeates the entire history of modern psychiatry. After initial analysis of José Ingenieros's references to the question of simulation in the struggle for existence, the issue of simulation in pathological states in general is examined, and lastly the simulation of madness and the problem of degeneration. Ingenieros participates in the epistemological and political struggle that took place between expert psychiatrists and simulators over the question of truth.

  5. Phantom Effects in Multilevel Compositional Analysis: Problems and Solutions

    ERIC Educational Resources Information Center

    Pokropek, Artur

    2015-01-01

    This article combines statistical and applied research perspectives, showing problems that might arise when measurement error in multilevel compositional effects analysis is ignored. The article focuses on data where independent variables are constructed measures. Simulation studies are conducted evaluating methods that could overcome the…

  6. Convergence of sampling in protein simulations

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2002-03-01

    With molecular dynamics, protein dynamics can be simulated in atomic detail. Current computers are not fast enough to probe all available conformations, but fluctuations around one conformation can be sampled to a reasonable extent. The motions with the largest fluctuations can be filtered out of a simulation using covariance or principal component analysis. A problem with this analysis is that random diffusion can appear as correlated motion. An analysis is presented of how long a simulation should be to obtain relevant results for global motions. The analysis reveals that the cosine content of the principal components is a good indicator for bad sampling.
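
    The cosine-content indicator compares a principal component's time series against a cosine of matching period; a minimal numerical sketch of the definition in Hess (2002), using a plain rectangle-rule sum for the integrals, is:

      import numpy as np

      def cosine_content(p, i):
          # c_i = (2/T) * (integral cos(i*pi*t/T) p(t) dt)^2 / integral p(t)^2 dt;
          # values near 1 indicate random-diffusion-like (poorly sampled) motion.
          T = len(p)
          t = np.arange(T)
          cos_term = np.sum(np.cos(i * np.pi * t / T) * p)
          return 2.0 / T * cos_term ** 2 / np.sum(p * p)

      t = np.linspace(0.0, 1.0, 1000)
      print(cosine_content(np.cos(np.pi * t), 1))   # pure cosine: content ~1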

  7. JASMINE simulator

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Gouda, N.; Yano, T.; Sako, N.; Hatsutori, Y.; Tanaka, T.; Yamauchi, M.

    We explain simulation tools in JASMINE project (JASMINE simulator). The JASMINE project stands at the stage where its basic design will be determined in a few years. Then it is very important to simulate the data stream generated by astrometric fields at JASMINE in order to support investigations of error budgets, sampling strategy, data compression, data analysis, scientific performances, etc. Of course, component simulations are needed, but total simulations which include all components from observation target to satellite system are also very important. We find that new software technologies, such as Object Oriented (OO) methodologies are ideal tools for the simulation system of JASMINE (the JASMINE simulator). The simulation system should include all objects in JASMINE such as observation techniques, models of instruments and bus design, orbit, data transfer, data analysis etc. in order to resolve all issues which can be expected beforehand and make it easy to cope with some unexpected problems which might occur during the mission of JASMINE. So, the JASMINE Simulator is designed as handling events such as photons from astronomical objects, control signals for devices, disturbances for satellite attitude, by instruments such as mirrors and detectors, successively. The simulator is also applied to the technical demonstration "Nano-JASMINE". The accuracy of ordinary sensor is not enough for initial phase attitude control. Mission instruments may be a good sensor for this purpose. The problem of attitude control in initial phase is a good example of this software because the problem is closely related to both mission instruments and satellite bus systems.

  8. JASMINE Simulator

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Gouda, N.; Yano, T.; Kobayashi, Y.; Suganuma, M.; Tsujimoto, T.; Sako, N.; Hatsutori, Y.; Tanaka, T.

    2006-08-01

    We explain simulation tools in JASMINE project (JASMINE simulator). The JASMINE project stands at the stage where its basic design will be determined in a few years. Then it is very important to simulate the data stream generated by astrometric fields at JASMINE in order to support investigations of error budgets, sampling strategy, data compression, data analysis, scientific performances, etc. Of course, component simulations are needed, but total simulations which include all components from observation target to satellite system are also very important. We find that new software technologies, such as Object Oriented (OO) methodologies are ideal tools for the simulation system of JASMINE (the JASMINE simulator). The simulation system should include all objects in JASMINE such as observation techniques, models of instruments and bus design, orbit, data transfer, data analysis etc. in order to resolve all issues which can be expected beforehand and make it easy to cope with some unexpected problems which might occur during the mission of JASMINE. So, the JASMINE Simulator is designed as handling events such as photons from astronomical objects, control signals for devices, disturbances for satellite attitude, by instruments such as mirrors and detectors, successively. The simulator is also applied to the technical demonstration "Nano-JASMINE". The accuracy of ordinary sensor is not enough for initial phase attitude control. Mission instruments may be a good sensor for this purpose. The problem of attitude control in initial phase is a good example of this software because the problem is closely related to both mission instruments and satellite bus systems.

  9. Early Design Choices: Capture, Model, Integrate, Analyze, Simulate

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2004-01-01

    I. Designs are constructed incrementally to meet requirements and solve problems: a) Requirement types: objectives, scenarios, constraints, -ilities, etc. b) Problem/issue types: risk/safety, cost/difficulty, interaction, conflict, etc. II. Capture requirements, problems and solutions: a) Collect design and analysis products and make them accessible for integration and analysis; b) Link changes in design requirements, problems and solutions; and c) Harvest design data for design models and choice structures. III. System designs are constructed by multiple groups designing interacting subsystems: a) Diverse problems, choice criteria, analysis methods and point solutions. IV. Support integration and global analysis of repercussions: a) System implications of point solutions; b) Broad analysis of interactions beyond totals of mass, cost, etc.

  10. The Effects of Stress on Pilot Judgment in a MIDIS Simulator

    DTIC Science & Technology

    1989-02-01

    stress were relatively independent of problem demands for working memory and knowledge. Keywords: Decision making; Stress psychology; Pilot judgment; Divided attention; Cognitive task analysis; Flight simulators.

  11. electromagnetics, eddy current, computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, David

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  12. Selected bibliography on the modeling and control of plant processes

    NASA Technical Reports Server (NTRS)

    Viswanathan, M. M.; Julich, P. M.

    1972-01-01

    A bibliography of information pertinent to the problem of simulating plants is presented. Detailed simulations of constituent pieces are necessary to justify simple models which may be used for analysis. Thus, this area of study is necessary to support the Earth Resources Program. The report sums up the present state of the problem of simulating vegetation. This area holds the hope of major benefits to mankind through understanding the ecology of a region and improving agricultural yield.

  13. Solving the forward problem of magnetoacoustic tomography with magnetic induction by means of the finite element method

    NASA Astrophysics Data System (ADS)

    Li, Xun; Li, Xu; Zhu, Shanan; He, Bin

    2009-05-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a recently proposed imaging modality to image the electrical impedance of biological tissue. It combines the good contrast of electrical impedance tomography with the high spatial resolution of sonography. In this paper, a three-dimensional MAT-MI forward problem was investigated using the finite element method (FEM). The corresponding FEM formulae describing the forward problem are introduced. In the finite element analysis, magnetic induction in an object with conductivity values close to biological tissues was first carried out. The stimulating magnetic field was simulated as that generated from a three-dimensional coil. The corresponding acoustic source and field were then simulated. Computer simulation studies were conducted using both concentric and eccentric spherical conductivity models with different geometric specifications. In addition, the grid size for finite element analysis was evaluated for the model calibration and evaluation of the corresponding acoustic field.

  14. Solving the Forward Problem of Magnetoacoustic Tomography with Magnetic Induction by Means of the Finite Element Method

    PubMed Central

    Li, Xun; Li, Xu; Zhu, Shanan; He, Bin

    2010-01-01

    Magnetoacoustic Tomography with Magnetic Induction (MAT-MI) is a recently proposed imaging modality to image the electrical impedance of biological tissue. It combines the good contrast of electrical impedance tomography with the high spatial resolution of sonography. In this paper, a three-dimensional MAT-MI forward problem was investigated using the finite element method (FEM). The corresponding FEM formulas describing the forward problem are introduced. In the finite element analysis, magnetic induction in an object with conductivity values close to biological tissues was first carried out. The stimulating magnetic field was simulated as that generated from a three-dimensional coil. The corresponding acoustic source and field were then simulated. Computer simulation studies were conducted using both concentric and eccentric spherical conductivity models with different geometric specifications. In addition, the grid size for finite element analysis was evaluated for model calibration and evaluation of the corresponding acoustic field. PMID:19351978

  15. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

    Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
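
    A sketch of the wavelet-filtering step, assuming the PyWavelets (pywt) package and a universal soft threshold; the test image, noise level, and wavelet choice are hypothetical, and the paper's actual filter may differ.

      import numpy as np
      import pywt

      def wavelet_denoise(image, wavelet="db4", level=3, sigma=1.0):
          # Soft-threshold the detail coefficients of a 2-D wavelet decomposition.
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          thr = sigma * np.sqrt(2.0 * np.log(image.size))   # universal threshold
          denoised = [coeffs[0]] + [
              tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
              for detail in coeffs[1:]
          ]
          return pywt.waverec2(denoised, wavelet)

      rng = np.random.default_rng(0)
      clean = np.outer(np.hanning(128), np.hanning(128))    # smooth test signal
      noisy = clean + 0.05 * rng.standard_normal(clean.shape)
      print(np.abs(wavelet_denoise(noisy, sigma=0.05) - clean).mean())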

  16. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    DOE PAGES

    Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.

    2012-01-01

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
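
    The technique itself is a C++ template idiom, but its core idea (operator overloading turns an unmodified calculation into one that also computes derivatives) can be mimicked in Python with a minimal dual-number class; this is an illustrative analogue, not the Trilinos implementation.

      class Dual:
          # Forward-mode AD value: arithmetic propagates derivatives through
          # a generic calculation, the transformation TBGP achieves via templates.
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
          __rmul__ = __mul__

      def residual(x):
          # Written once, usable with plain floats or Dual values unchanged.
          return 3.0 * x * x + 2.0 * x + 1.0

      r = residual(Dual(2.0, 1.0))   # seed dx/dx = 1
      print(r.val, r.der)            # value 17.0, derivative 14.0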

  17. The analysis of delays in simulator digital computing systems. Volume 1: Formulation of an analysis approach using a central example simulator model

    NASA Technical Reports Server (NTRS)

    Heffley, R. K.; Jewell, W. F.; Whitbeck, R. F.; Schulman, T. M.

    1980-01-01

    The effects of spurious delays in real time digital computing systems are examined. Various sources of spurious delays are defined and analyzed using an extant simulator system as an example. A specific analysis procedure is set forth and four cases are viewed in terms of their time and frequency domain characteristics. Numerical solutions are obtained for three single rate one- and two-computer examples, and the analysis problem is formulated for a two-rate, two-computer example.

  18. Enhanced verification test suite for physics simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.

    2008-09-01

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.

  19. Lee-Yang zero analysis for the study of QCD phase structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ejiri, Shinji

    2006-03-01

    We comment on the Lee-Yang zero analysis for the study of the phase structure of QCD at high temperature and baryon number density by Monte-Carlo simulations. We find that the sign problem for nonzero density QCD induces a serious problem in the finite volume scaling analysis of the Lee-Yang zeros for the investigation of the order of the phase transition. If the sign problem occurs at large volume, the Lee-Yang zeros will always approach the real axis of the complex parameter plane in the thermodynamic limit. This implies that a scaling behavior which would suggest a crossover transition will not be obtained. To clarify this problem, we discuss the Lee-Yang zero analysis for SU(3) pure gauge theory as a simple example without the sign problem, and then consider the case of nonzero density QCD. It is suggested that the distribution of the Lee-Yang zeros in the complex parameter space obtained by each simulation could be more important information for the investigation of the critical endpoint in the (T, μ_q) plane than the finite volume scaling behavior.
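
    As a toy illustration of the idea (not a lattice QCD computation): for a finite system the grand partition function is a polynomial in the fugacity z, and its complex roots are the Lee-Yang zeros, which pinch the real axis near a phase transition. With hypothetical canonical weights:

      import numpy as np

      Z_n = np.array([1.0, 4.0, 6.5, 4.0, 1.0])   # hypothetical weights Z_n
      # Z(z) = sum_n Z_n z^n; np.roots wants the highest-degree coefficient first.
      zeros = np.roots(Z_n[::-1])
      for z in zeros:
          print(f"{z.real:+.3f} {z.imag:+.3f}i   |Im z| = {abs(z.imag):.3f}")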

  20. A Case Study Using Modeling and Simulation to Predict Logistics Supply Chain Issues

    NASA Technical Reports Server (NTRS)

    Tucker, David A.

    2007-01-01

    Optimization of critical supply chains to deliver thousands of parts, materials, sub-assemblies, and vehicle structures as needed is vital to the success of the Constellation Program. Thorough analysis needs to be performed on the integrated supply chain processes to plan, source, make, deliver, and return critical items efficiently. Process modeling provides simulation technology-based, predictive solutions for supply chain problems which enable decision makers to reduce costs, accelerate cycle time and improve business performance. For example, United Space Alliance, LLC utilized this approach in late 2006 to build simulation models that recreated shuttle orbiter thruster failures and predicted the potential impact of thruster removals on logistics spare assets. The main objective was the early identification of possible problems in providing thruster spares for the remainder of the Shuttle Flight Manifest. After extensive analysis the model results were used to quantify potential problems and led to improvement actions in the supply chain. Similarly the proper modeling and analysis of Constellation parts, materials, operations, and information flows will help ensure the efficiency of the critical logistics supply chains and the overall success of the program.

  1. Application of multiphase modelling for vortex occurrence in vertical pump intake - a review

    NASA Astrophysics Data System (ADS)

    Samsudin, M. L.; Munisamy, K. M.; Thangaraju, S. K.

    2015-09-01

    Vortex formation within pump intakes is one of the common problems faced in power plant cooling water systems. This phenomenon, categorised as surface and sub-surface vortices, can lead to several operational problems and increased maintenance costs. Physical model studies are recommended in published guidelines but prove to be time- and resource-consuming. Hence, the use of Computational Fluid Dynamics (CFD) is an attractive alternative for managing the problem. At the early stage, flow analysis was conducted using single-phase simulation and found to be in good agreement with observations from physical model studies. With the development of computing power, multiphase simulation brought further enhancement, obtaining accurate results for representing air entrainment and sub-surface vortices, which were earlier not well predicted by single-phase simulation. The purpose of this paper is to describe the application of multiphase modelling with CFD analysis for investigating vortex formation in a vertically inverted pump intake. In applying multiphase modelling, a balance must be struck between acceptable computational time and resources and the degree of accuracy and realism expected from the analysis.

  2. User guidelines and best practices for CASL VUQ analysis using Dakota.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Swiler, Laura Painton; Hooper, Russell

    2014-03-01

    Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. This manual offers Consortium for Advanced Simulation of Light Water Reactors (LWRs) (CASL) partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem. This SAND report constitutes the product of CASL milestone L3:VUQ.V&V.P8.01 and is also being released as a CASL unlimited release report with number CASL-U-2014-0038-000.

  3. Hygrothermal Simulation: A Tool for Building Envelope Design Analysis

    Treesearch

    Samuel V. Glass; Anton TenWolde; Samuel L. Zelinka

    2013-01-01

    Is it possible to gauge the risk of moisture problems while designing the building envelope? This article provides a brief introduction to computer-based hygrothermal (heat and moisture) simulation, shows how simulation can be useful as a design tool, and points out a number of important considerations regarding model inputs and limitations. Hygrothermal simulation...

  4. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on introducing the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS) and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost-effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability analysis along with deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability methods, followed by simulation techniques performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation incorporating sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing computational time and function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in 2D finite element truss problems and a planar beam problem is presented and discussed.
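
    The simulation technique referred to, in its simplest form, estimates a probability of failure by sampling a limit state g = capacity - demand; the distributions below are hypothetical placeholders for sampled material properties and loads, not the Kevlar 49 data.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      n = 200_000
      capacity = rng.lognormal(mean=np.log(50.0), sigma=0.10, size=n)
      demand = rng.normal(loc=35.0, scale=5.0, size=n)
      p_f = np.mean(capacity - demand < 0.0)        # fraction of failed samples
      print(f"p_f ~ {p_f:.4f}, reliability index ~ {-norm.ppf(p_f):.2f}")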

  5. Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application

    DTIC Science & Technology

    2016-02-26

    ... Karniadakis, "Resilient algorithms for reconstructing and simulating gappy flow fields in CFD," Fluid Dynamic Research, vol. 47, 051402, 2015. Keywords: simulation, domain decomposition, CFD, gappy data, estimation theory, gap-tooth algorithm. The objective of this project was to develop a general CFD framework for multifidelity simulations to target multiscale problems but also resilience in ...

  6. Simulation and Analysis of One-time Forming Process of Automobile Steering Ball Head

    NASA Astrophysics Data System (ADS)

    Shi, Peicheng; Zhang, Xujun; Xu, Zengwei; Zhang, Rongyun

    2018-03-01

    Aiming at problems in the die forging of ball pins such as large machining allowance, low production efficiency, and material waste, the cold extrusion process for the ball head was studied, and the forming process was simulated using the finite element analysis software DEFORM-3D. Through analysis of the equivalent stress and strain, the velocity vector field, and the load-displacement curve, the flow behavior of the metal during cold extrusion of the ball pin was clarified, and possible forming defects were predicted. The results showed that this process can solve the forming problem of the ball pin and provides a theoretical basis for actual production.

  7. Toward Theory-Based Instruction in Scientific Problem Solving.

    ERIC Educational Resources Information Center

    Heller, Joan I.; And Others

    Several empirical and theoretical analyses related to scientific problem-solving are reviewed, including: detailed studies of individuals at different levels of expertise, and computer models simulating some aspects of human information processing during problem solving. Analysis of these studies has revealed many facets about the nature of the…

  8. A Markov Model Analysis of Problem-Solving Progress.

    ERIC Educational Resources Information Center

    Vendlinski, Terry

    This study used a computerized simulation and problem-solving tool along with artificial neural networks (ANN) as pattern recognizers to identify the common types of strategies high school and college undergraduate chemistry students would use to solve qualitative chemistry problems. Participants were 134 high school chemistry students who used…

  9. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...

  10. Cognitive, Social, and Literacy Competencies: The Chelsea Bank Simulation Project. Year One: Final Report. [Volume 2]: Appendices.

    ERIC Educational Resources Information Center

    Duffy, Thomas; And Others

    This supplementary volume presents appendixes A-E associated with a 1-year study which determined what secondary school students were doing as they engaged in the Chelsea Bank computer software simulation activities. Appendixes present the SCANS Analysis Coding Sheet; coding problem analysis of 50 video segments; student and teacher interview…

  11. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    Problems related to project assignment are popular practical problems that appear nowadays. The challenge of solving such problems rises with the complexity of preferences, the existence of real-world constraints, and increased problem size. This study focuses on solving a chambering student-case assignment problem, classified under the project assignment problem, by using a simulated annealing algorithm. The project assignment problem is considered a hard combinatorial optimization problem, and solving it using a metaheuristic approach has the advantage of returning a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In the proposed problem, it is essential for law graduates to read in chambers before they are qualified to become legal counselors. Thus, assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. This study employed a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of solution quality, as sketched below. The analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improved the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
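
    A minimal sketch of the simulated annealing scheme on an assignment-type problem follows; the completion times, neighborhood move, cooling schedule, and the makespan reading of "total completion time" are illustrative assumptions, not the paper's exact formulation.

      import math
      import random

      random.seed(0)
      n_students, n_cases = 5, 20
      # Hypothetical data: time_sc[s][c] = time student s needs on case c.
      time_sc = [[random.randint(1, 10) for _ in range(n_cases)]
                 for _ in range(n_students)]

      def cost(assign):
          # Completion time read as the latest-finishing student (makespan).
          loads = [0] * n_students
          for c, s in enumerate(assign):
              loads[s] += time_sc[s][c]
          return max(loads)

      assign = [random.randrange(n_students) for _ in range(n_cases)]
      cur, T = cost(assign), 10.0
      while T > 0.01:
          c = random.randrange(n_cases)
          old = assign[c]
          assign[c] = random.randrange(n_students)  # random reassignment move
          delta = cost(assign) - cur
          if delta <= 0 or random.random() < math.exp(-delta / T):
              cur += delta                          # accept the move
          else:
              assign[c] = old                       # reject it and restore
          T *= 0.999                                # geometric cooling
      print("final makespan:", cur)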

  12. Exploring Simulator Use in the Preparation of Chemical Engineers

    ERIC Educational Resources Information Center

    Yerrick, Randy; Lund, Carl; Lee, Yonghee

    2013-01-01

    In this manuscript, we report the impact of students' usage of a simulator in the preparation of chemical engineers. This case study was conducted using content pretest and posttests, survey questionnaires, interviews, classroom observations, and an analysis of students' written response to design problems. Results showed the use of simulator was…

  13. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.

  14. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics.

    PubMed

    Shinzato, Takashi

    2016-12-01

    The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.

  15. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-12-01

    The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.

  16. Stochastic Simulation Tool for Aerospace Structural Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.; Moore, David F.

    2006-01-01

    Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.

  17. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.

  18. Prevalence of DSM-IV major depression among U.S. military personnel: Meta-analysis and simulation

    PubMed Central

    Gadermann, Anne M.; Engel, COL Charles C.; Naifeh, James A.; Nock, Matthew K.; Petukhova, Maria; Santiago, LCDR Patcho N.; Benjamin, Wu; Zaslavsky, Alan M.; Kessler, Ronald C.

    2014-01-01

    A meta-analysis of 25 epidemiological studies estimated the prevalence of recent DSM-IV major depression among U.S. military personnel. Best estimates of recent prevalence (standard error) were 12.0 percent (1.2) among currently deployed, 13.1 percent (1.8) among previously deployed and 5.7 percent (1.2) among never deployed. Consistent correlates of prevalence were being female, enlisted, young (ages 17 to 25), unmarried and having less than a college education. Simulation of data from a national general population survey was used to estimate expected lifetime prevalence of major depression among respondents with the socio-demographic profile and none of the enlistment exclusions of Army personnel. In this simulated sample, 16.2 percent (3.1) of respondents had lifetime major depression and 69.7 percent (8.5) of first onsets occurred before expected age of enlistment. Numerous methodological problems limit the results of the meta-analysis and simulation. The paper closes with a discussion of recommendations for correcting these problems in future surveillance and operational stress studies. PMID:22953441

  19. Prevalence of DSM-IV major depression among U.S. military personnel: meta-analysis and simulation.

    PubMed

    Gadermann, Anne M; Engel, Charles C; Naifeh, James A; Nock, Matthew K; Petukhova, Maria; Santiago, Patcho N; Wu, Benjamin; Zaslavsky, Alan M; Kessler, Ronald C

    2012-08-01

    A meta-analysis of 25 epidemiological studies estimated the prevalence of recent Diagnostic and Statistical Manual of Mental Disorders-IV (DSM-IV) major depression (MD) among U.S. military personnel. Best estimates of recent prevalence (standard error) were 12.0% (1.2) among currently deployed, 13.1% (1.8) among previously deployed, and 5.7% (1.2) among never deployed. Consistent correlates of prevalence were being female, enlisted, young (ages 17-25), unmarried, and having less than a college education. Simulation of data from a national general population survey was used to estimate expected lifetime prevalence of MD among respondents with the sociodemographic profile and none of the enlistment exclusions of Army personnel. In this Simulated sample, 16.2% (3.1) of respondents had lifetime MD and 69.7% (8.5) of first onsets occurred before expected age of enlistment. Numerous methodological problems limit the results of the meta-analysis and simulation. The article closes with a discussion of recommendations for correcting these problems in future surveillance and operational stress studies.

  20. Tutoring electronic troubleshooting in a simulated maintenance work environment

    NASA Technical Reports Server (NTRS)

    Gott, Sherrie P.

    1987-01-01

    A series of intelligent tutoring systems, or intelligent maintenance simulators, is being developed based on expert and novice problem solving data. A graded series of authentic troubleshooting problems provides the curriculum, and adaptive instructional treatments foster active learning in trainees who engage in extensive fault isolation practice and thus in conditionalizing what they know. A proof of concept training study involving human tutoring was conducted as a precursor to the computer tutors to assess this integrated, problem based approach to task analysis and instruction. Statistically significant improvements in apprentice technicians' troubleshooting efficiency were achieved after approximately six hours of training.

  1. HEAP: Heat Energy Analysis Program, a computer model simulating solar receivers. [solving the heat transfer problem

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1979-01-01

    A computer program which can distinguish between different receiver designs, and predict transient performance under variable solar flux, or ambient temperatures, etc. has a basic structure that fits a general heat transfer problem, but with specific features that are custom-made for solar receivers. The code is written in MBASIC computer language. The methodology followed in solving the heat transfer problem is explained. A program flow chart, an explanation of input and output tables, and an example of the simulation of a cavity-type solar receiver are included.

  2. Coupled electromagnetic-thermodynamic simulations of microwave heating problems using the FDTD algorithm.

    PubMed

    Kopyt, Paweł; Celuch, Małgorzata

    2007-01-01

    A practical implementation of a hybrid simulation system capable of modeling coupled electromagnetic-thermodynamic problems typical in microwave heating is described. The paper presents two approaches to modeling such problems. Both are based on an FDTD-based commercial electromagnetic solver coupled to an external thermodynamic analysis tool required for calculations of heat diffusion. The first approach utilizes a simple FDTD-based thermal solver while in the second it is replaced by a universal commercial CFD solver. The accuracy of the two modeling systems is verified against the original experimental data as well as the measurement results available in literature.

  3. Composing problem solvers for simulation experimentation: a case study on steady state estimation.

    PubMed

    Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M

    2014-01-01

    Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.

  4. Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merzari, E.; Shemon, E. R.; Yu, Y. Q.

    This report describes to employ SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models ofmore » a full-core model of the Advanced Burner Test Reactor have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully-integrated simulation.« less

  5. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    NASA Technical Reports Server (NTRS)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  6. Analysis of an Air Conditioning Coolant Solution for Metal Contamination Using Atomic Absorption Spectroscopy: An Undergraduate Instrumental Analysis Exercise Simulating an Industrial Assignment

    ERIC Educational Resources Information Center

    Baird, Michael J.

    2004-01-01

    A real-life analytical assignment is presented to students, who had to examine an air conditioning coolant solution for metal contamination using an atomic absorption spectroscopy (AAS). This hands-on access to a real problem exposed the undergraduate students to the mechanism of AAS, and promoted participation in a simulated industrial activity.

  7. Environmental Planning in Jonah's Basin: A Simulation Game and Experimental Analysis.

    ERIC Educational Resources Information Center

    Horsley, Doyne

    1982-01-01

    Described is a successfully field tested simulation which will help high school or college level students become familiar with flood hazards. Students assume the roles of members of the Jonah's Basin planning commission and plan solutions to the area's flood problems. (RM)

  8. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    Aiming at the problem that the complicated structure and the shortage of fault statistics information in hydraulic systems, a multi value testability analysis method based on simulation model is proposed. Based on the simulation model of AMESim, this method injects the simulated faults and records variation of test parameters ,such as pressure, flow rate, at each test point compared with those under normal conditions .Thus a multi-value fault-test dependency matrix is established. Then the fault detection rate (FDR) and fault isolation rate (FIR) are calculated based on the dependency matrix. Finally the system of testability and fault diagnosis capability are analyzed and evaluated, which can only reach a lower 54%(FDR) and 23%(FIR). In order to improve testability performance of the system,. number and position of the test points are optimized on the system. Results show the proposed test placement scheme can be used to solve the problems that difficulty, inefficiency and high cost in the system maintenance.

  9. Analysis of Plane-Parallel Electron Beam Propagation in Different Media by Numerical Simulation Methods

    NASA Astrophysics Data System (ADS)

    Miloichikova, I. A.; Bespalov, V. I.; Krasnykh, A. A.; Stuchebrov, S. G.; Cherepennikov, Yu. M.; Dusaev, R. R.

    2018-04-01

    Simulation by the Monte Carlo method is widely used to calculate the character of ionizing radiation interaction with substance. A wide variety of programs based on the given method allows users to choose the most suitable package for solving computational problems. In turn, it is important to know exactly restrictions of numerical systems to avoid gross errors. Results of estimation of the feasibility of application of the program PCLab (Computer Laboratory, version 9.9) for numerical simulation of the electron energy distribution absorbed in beryllium, aluminum, gold, and water for industrial, research, and clinical beams are presented. The data obtained using programs ITS and Geant4 being the most popular software packages for solving the given problems and the program PCLab are presented in the graphic form. A comparison and an analysis of the results obtained demonstrate the feasibility of application of the program PCLab for simulation of the absorbed energy distribution and dose of electrons in various materials for energies in the range 1-20 MeV.

  10. Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications

    DTIC Science & Technology

    2016-10-17

    finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and...approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of...laws, finite element method, Bernstein-Bezier finite elements , weakly interacting particle systems, accelerated Monte Carlo, stochastic networks 16

  11. System dynamics and simulation of LSS

    NASA Technical Reports Server (NTRS)

    Ryan, R. F.

    1978-01-01

    Large Space Structures have many unique problems arising from mission objectives and the resulting configuration. Inherent in these configurations is a strong coupling among several of the designing disciplines. In particular, the coupling between structural dynamics and control is a key design consideration. The solution to these interactive problems requires efficient and accurate analysis, simulation and test techniques, and properly planned and conducted design trade studies. The discussion presented deals with these subjects and concludes with a brief look at some NASA capabilities which can support these technology studies.

  12. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Padula, Sharon L.

    1990-01-01

    Inaccuracies in the length of members and the diameters of joints of large space structures may produce unacceptable levels of surface distortion and internal forces. Here, two discrete optimization problems are formulated, one to minimize surface distortion (DSQRMS) and the other to minimize internal forces (FSQRMS). Both of these problems are based on the influence matrices generated by a small-deformation linear analysis. Good solutions are obtained for DSQRMS and FSQRMS through the use of a simulated annealing heuristic.

  13. Asymptotic analysis of SPTA-based algorithms for no-wait flow shop scheduling problem with release dates.

    PubMed

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.

  14. Asymptotic Analysis of SPTA-Based Algorithms for No-Wait Flow Shop Scheduling Problem with Release Dates

    PubMed Central

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774

  15. FAST: A multi-processed environment for visualization of computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon V.; Merritt, Fergus J.; Plessel, Todd C.; Kelaita, Paul G.; Mccabe, R. Kevin

    1991-01-01

    Three-dimensional, unsteady, multi-zoned fluid dynamics simulations over full scale aircraft are typical of the problems being investigated at NASA Ames' Numerical Aerodynamic Simulation (NAS) facility on CRAY2 and CRAY-YMP supercomputers. With multiple processor workstations available in the 10-30 Mflop range, we feel that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These larger, more complex problems create a need for new visualization techniques not possible with the existing software or systems available as of this writing. The visualization techniques will change as the supercomputing environment, and hence the scientific methods employed, evolves even further. The Flow Analysis Software Toolkit (FAST), an implementation of a software system for fluid mechanics analysis, is discussed.

  16. The problem of natural funnel asymmetries: a simulation analysis of meta-analysis in macroeconomics.

    PubMed

    Callot, Laurent; Paldam, Martin

    2011-06-01

    Effect sizes in macroeconomic are estimated by regressions on data published by statistical agencies. Funnel plots are a representation of the distribution of the resulting regression coefficients. They are normally much wider than predicted by the t-ratio of the coefficients and often asymmetric. The standard method of meta-analysts in economics assumes that the asymmetries are because of publication bias causing censoring and adjusts the average accordingly. The paper shows that some funnel asymmetries may be 'natural' so that they occur without censoring. We investigate such asymmetries by simulating funnels by pairs of data generating processes (DGPs) and estimating models (EMs), in which the EM has the problem that it disregards a property of the DGP. The problems are data dependency, structural breaks, non-normal residuals, non-linearity, and omitted variables. We show that some of these problems generate funnel asymmetries. When they do, the standard method often fails. Copyright © 2011 John Wiley & Sons, Ltd. Copyright © 2011 John Wiley & Sons, Ltd.

  17. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very di cult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version, of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  18. FOOD RISK ANALYSIS

    USDA-ARS?s Scientific Manuscript database

    Food risk analysis is a holistic approach to food safety because it considers all aspects of the problem. Risk assessment modeling is the foundation of food risk analysis. Proper design and simulation of the risk assessment model is important to properly predict and control risk. Because of knowl...

  19. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation leads to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.

  20. A combined Eulerian-volume of fraction-Lagrangian method for atomization simulation

    NASA Technical Reports Server (NTRS)

    Seung, S. P.; Chen, C. P.; Ziebarth, John P.

    1994-01-01

    The tracking of free surfaces between liquid and gas phases and analysis of the interfacial phenomena between the two during the atomization and breakup process of a liquid fuel jet is modeled. Numerical modeling of liquid-jet atomization requires the resolution of different conservation equations. Detailed formulation and validation are presented for the confined dam broken problem, the water surface problem, the single droplet problem, a jet breakup problem, and the liquid column instability problem.

  1. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to performance of the KL approach and brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on the high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT -- COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty; and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. 
In this dissertation a subspace-based gradient-free and nonlinear algorithm for inverse uncertainty quantification namely the Target Accuracy Assessment (TAA) has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated using SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed her were applied to perform UQ and DA for assembly level (CASL progression problem number 6) and core wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9) modeled via simulated using VERA-CS which consists of several multi-physics coupled models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed tool kit algorithms for Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).

  2. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+ with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.

  3. Development of a Aerothermoelastic-Acoustics Simulation Capability of Flight Vehicles

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Choi, S. B.; Ibrahim, A.

    2010-01-01

    A novel numerical, finite element based analysis methodology is presented in this paper suitable for accurate and efficient simulation of practical, complex flight vehicles. An associated computer code, developed in this connection, is also described in some detail. Thermal effects of high speed flow obtained from a heat conduction analysis are incorporated in the modal analysis which in turn affects the unsteady flow arising out of interaction of elastic structures with the air. Numerical examples pertaining to representative problems are given in much detail testifying to the efficacy of the advocated techniques. This is a unique implementation of temperature effects in a finite element CFD based multidisciplinary simulation analysis capability involving large scale computations.

  4. Strategic Mobility 21: Modeling, Simulation, and Analysis

    DTIC Science & Technology

    2010-04-14

    using AnyLogic , which is a Java programmed, multi-method simulation modeling tool developed by XJ Technologies. The last section examines the academic... simulation model from an Arena platform to an AnyLogic based Web Service. MATLAB is useful for small problems with few nodes, but GAMS/CPLEX is better... Transportation Modeling Studio TM . The SCASN modeling and simulation program was designed to be generic in nature to allow for use by both commercial and

  5. Asphalt pavement aging and temperature dependent properties using functionally graded viscoelastic model

    NASA Astrophysics Data System (ADS)

    Dave, Eshan V.

    Asphalt concrete pavements are inherently graded viscoelastic structures. Oxidative aging of asphalt binder and temperature cycling due to climatic conditions being the major cause of non-homogeneity. Current pavement analysis and simulation procedures dwell on the use of layered approach to account for these non-homogeneities. The conventional finite-element modeling (FEM) technique discretizes the problem domain into smaller elements, each with a unique constitutive property. However the assignment of unique material property description to an element in the FEM approach makes it an unattractive choice for simulation of problems with material non-homogeneities. Specialized elements such as "graded elements" allow for non-homogenous material property definitions within an element. This dissertation describes the development of graded viscoelastic finite element analysis method and its application for analysis of asphalt concrete pavements. Results show that the present research improves efficiency and accuracy of simulations for asphalt pavement systems. Some of the practical implications of this work include the new technique's capability for accurate analysis and design of asphalt pavements and overlay systems and for the determination of pavement performance with varying climatic conditions and amount of in-service age. Other application areas include simulation of functionally graded fiber-reinforced concrete, geotechnical materials, metal and metal composites at high temperatures, polymers, and several other naturally existing and engineered materials.

  6. Cost component analysis.

    PubMed

    Lörincz, András; Póczos, Barnabás

    2003-06-01

    In optimizations the dimension of the problem may severely, sometimes exponentially increase optimization time. Parametric function approximatiors (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA) is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem to density estimation. Structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems and (ii) separating (partitioning) the original optimization problem into subproblems may serve interpretation. Most importantly, (iii) CCA may give rise to high gains in optimization time. Numerical simulations illustrate the working of the algorithm.

  7. Computational plasticity algorithm for particle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  8. A Simulation Investigation of Principal Component Regression.

    ERIC Educational Resources Information Center

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  9. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  10. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  11. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application.

    PubMed

    Li, Shuai; Li, Yangming; Wang, Zheng

    2013-03-01

    This paper presents a class of recurrent neural networks to solve quadratic programming problems. Different from most existing recurrent neural networks for solving quadratic programming problems, the proposed neural network model converges in finite time and the activation function is not required to be a hard-limiting function for finite convergence time. The stability, finite-time convergence property and the optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winner-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Development of Improved Models, Stochasticity, and Frameworks for the MIT Extensible Air Network Simulation

    NASA Technical Reports Server (NTRS)

    Clarke, John-Paul

    2004-01-01

    MEANS, the MIT Extensible Air Network Simulation, was created in February of 2001, and has been developed with support from NASA Ames since August of 2001. MEANS is a simulation tool which is designed to maximize fidelity without requiring data of such a low level as to preclude easy examination of alternative scenarios. To this end, MEANS is structured in a modular fashion to allow more detailed components to be brought in when desired, and left out when they would only be an impediment. Traditionally, one of the difficulties with high-fidelity models is that they require a level of detail in their data that is difficult to obtain. For analysis of past scenarios, the required data may not have been collected, or may be considered proprietary and thus difficult for independent researchers to obtain. For hypothetical scenarios, generation of the data is sufficiently difficult to be a task in and of itself. Often, simulations designed by a researcher will model exactly one element of the problem well and in detail, while assuming away other parts of the problem which are not of interest or for which data is not available. While these models are useful for working with the task at hand, they are very often not applicable to future problems. The MEAN Simulation attempts to address these problems by using a modular design which provides components of varying fidelity for each aspect of the simulation. This allows for the most accurate model for which data is available to be used. It also provides for easy analysis of sensitivity to data accuracy. This can be particularly useful in the case where accurate data is available for some subset of the situations that are to be considered. Furthermore, the ability to use the same model while examining effects on different parts of a system reduces the time spent learning the simulation, and provides for easier comparisons between changes to different parts of the system.

  13. Computer Simulation Is an Undervalued Tool for Genetic Analysis: A Historical View and Presentation of SHIMSHON – A Web-Based Genetic Simulation Package

    PubMed Central

    Greenberg, David A.

    2011-01-01

    Computer simulation methods are under-used tools in genetic analysis because simulation approaches have been portrayed as inferior to analytic methods. Even when simulation is used, its advantages are not fully exploited. Here, I present SHIMSHON, our package of genetic simulation programs that have been developed, tested, used for research, and used to generated data for Genetic Analysis Workshops (GAW). These simulation programs, now web-accessible, can be used by anyone to answer questions about designing and analyzing genetic disease studies for locus identification. This work has three foci: (1) the historical context of SHIMSHON's development, suggesting why simulation has not been more widely used so far. (2) Advantages of simulation: computer simulation helps us to understand how genetic analysis methods work. It has advantages for understanding disease inheritance and methods for gene searches. Furthermore, simulation methods can be used to answer fundamental questions that either cannot be answered by analytical approaches or cannot even be defined until the problems are identified and studied, using simulation. (3) I argue that, because simulation was not accepted, there was a failure to grasp the meaning of some simulation-based studies of linkage. This may have contributed to perceived weaknesses in linkage analysis; weaknesses that did not, in fact, exist. PMID:22189467

  14. Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sawyer, Darren Charles

    1994-01-01

    The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload of application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, not possible to model accurately with analytical approaches.

  15. Differential evolution-simulated annealing for multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Addawe, R. C.; Addawe, J. M.; Sueño, M. R. K.; Magadia, J. C.

    2017-10-01

    Multiple sequence alignments (MSA) are used in the analysis of molecular evolution and sequence structure relationships. In this paper, a hybrid algorithm, Differential Evolution - Simulated Annealing (DESA) is applied in optimizing multiple sequence alignments (MSAs) based on structural information, non-gaps percentage and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and SA-like selection scheme of the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem of the hybrid evolutionary algorithm, DESA. Thus, we name the algorithm as DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates and insertion rates. The proposed hybrid algorithm obtained acceptable solutions particularly for the MSA problem evaluated based on the three objectives.

  16. Simulated annealing with probabilistic analysis for solving traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Simulated Annealing (SA) is a widely used meta-heuristic that was inspired from the annealing process of recrystallization of metals. Therefore, the efficiency of SA is highly affected by the annealing schedule. As a result, in this paper, we presented an empirical work to provide a comparable annealing schedule to solve symmetric traveling salesman problems (TSP). Randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA and thus, we propose the best found annealing schedule based on the Post Hoc test. SA was tested on seven selected benchmarked problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically alongside with benchmark solutions and simple analysis to validate the quality of solutions. Computational results show that the proposed annealing schedule provides a good quality of solution.

  17. Nonisothermal Analysis of Solution Kinetics by Spreadsheet Simulation

    ERIC Educational Resources Information Center

    de Levie, Robert

    2012-01-01

    A fast and generally applicable alternative solution to the problem of determining the useful shelf life of medicinal solutions is described. It illustrates the power and convenience of the combination of numerical simulation and nonlinear least squares with a practical pharmaceutical application of chemical kinetics and thermodynamics, validated…

  18. SimilarityExplorer: A visual inter-comparison tool for multifaceted climate data

    Treesearch

    J. Poco; A. Dasgupta; Y. Wei; W. Hargrove; C. Schwalm; R. Cook; E. Bertini; C. Silva

    2014-01-01

    Inter-comparison and similarity analysis to gauge consensus among multiple simulation models is a critical visualization problem for understanding climate change patterns. Climate models, specifically, Terrestrial Biosphere Models (TBM) represent time and space variable ecosystem processes, for example, simulations of photosynthesis and respiration, using algorithms...

  19. New Approaches to Motion Cuing in Flight Simulators

    DTIC Science & Technology

    1991-09-01

    iv Table of Contents 1.0 Introduction ............................. .......... ...... 1 1.1 The Problem of Motion Cuing in Flight Simulation...the Report ................ ................... 7 2.0 A Conceptual Model of Pilot Control .......... ............ 9 2.1 Introduction ...33 3.4 Task Analysis ................ ...................... .. 34 3.4.1 Introduction ................ ...................... 34 3.4.2 Discussion

  20. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2013-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  1. Application of Hybrid Real-Time Power System Simulator for Designing and Researching of Relay Protection and Automation

    NASA Astrophysics Data System (ADS)

    Borovikov, Yu S.; Sulaymanov, A. O.; Andreev, M. V.

    2015-10-01

    Development, research and operation of smart grids (SG) with active-adaptive networks (AAS) are actual tasks for today. Planned integration of high-speed FACTS devices greatly complicates complex dynamic properties of power systems. As a result the operating conditions of equipment of power systems are significantly changing. Such situation creates the new actual problem of development and research of relay protection and automation (RPA) which will be able to adequately operate in the SGs and adapt to its regimes. Effectiveness of solution of the problem depends on using tools - different simulators of electric power systems. Analysis of the most famous and widely exploited simulators led to the conclusion about the impossibility of using them for solution of the mentioned problem. In Tomsk Polytechnic University developed the prototype of hybrid multiprocessor software and hardware system - Hybrid Real-Time Power System Simulator (HRTSim). Because of its unique features this simulator can be used for solution of mentioned tasks. This article introduces the concept of development and research of relay protection and automation with usage of HRTSim.

  2. Numerical Analysis of Constrained Dynamical Systems, with Applications to Dynamic Contact of Solids, Nonlinear Elastodynamics and Fluid-Structure Interactions

    DTIC Science & Technology

    2000-12-01

    Numerical Simulations ..... ................. .... 42 1.4.1. Impact of a rod on a rigid wall ..... ................. .... 42 1.4.2. Impact of two...dissipative properties of the proposed scheme . . . . 81 II.4. Representative Numerical Simulations ...... ................. ... 84 11.4.1. Forging of...Representative numerical simulations ...... ............. .. 123 111.3. Model Problem II: a Simplified Model of Thin Beams ... ......... ... 127 III

  3. F-14 modeling study

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Baron, S.

    1984-01-01

    Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.

  4. Implicitly solving phase appearance and disappearance problems using two-fluid six-equation model

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-01-25

    The phase appearance and disappearance issue presents serious numerical challenges in two-phase flow simulations using the two-fluid six-equation model. Numerical challenges arise from the singular equation system when one phase is absent, as well as from the discontinuity in the solution space when one phase appears or disappears. In this work, a high-resolution spatial discretization scheme on staggered grids and fully implicit methods were applied for the simulation of two-phase flow problems using the two-fluid six-equation model. A Jacobian-free Newton-Krylov (JFNK) method was used to solve the discretized nonlinear problem. An improved numerical treatment was proposed and proved to be effective in handling the numerical challenges. The treatment scheme is conceptually simple, easy to implement, and does not require explicit truncations on solutions, which is essential to conserve mass and energy. Various types of phase appearance and disappearance problems relevant to thermal-hydraulics analysis have been investigated, including a sedimentation problem, an oscillating manometer problem, a non-condensable gas injection problem, a single-phase flow with heat addition problem and a subcooled flow boiling problem. Successful simulations of these problems demonstrate the capability and robustness of the proposed numerical methods and numerical treatments. As a result, the volume fraction of the absent phase can be calculated effectively as zero.
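
    As a hedged illustration of the Jacobian-free Newton-Krylov technique named above (not of the two-fluid six-equation model itself), the sketch below solves a toy nonlinear boundary-value problem with SciPy's newton_krylov, which approximates Jacobian-vector products by finite-differencing the residual; the equation and grid are invented for the example.

        import numpy as np
        from scipy.optimize import newton_krylov

        # Toy nonlinear boundary-value problem on n interior nodes:
        # -u'' + u**3 = 1 on (0, 1) with u(0) = u(1) = 0.
        n = 100
        h = 1.0 / (n + 1)

        def residual(u):
            d2u = np.empty_like(u)
            d2u[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
            d2u[0] = (u[1] - 2.0 * u[0]) / h**2        # u(0) = 0 boundary
            d2u[-1] = (u[-2] - 2.0 * u[-1]) / h**2     # u(1) = 0 boundary
            return -d2u + u**3 - 1.0

        # newton_krylov never forms the Jacobian matrix: matrix-vector
        # products are approximated by finite differences of the residual.
        u = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
        print("max |u|:", np.abs(u).max())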

  5. Probability, Problem Solving, and "The Price is Right."

    ERIC Educational Resources Information Center

    Wood, Eric

    1992-01-01

    This article discusses the analysis of a decision-making process faced by contestants on the television game show "The Price is Right". The included analyses of the original and related problems concern pattern searching, inductive reasoning, quadratic functions, and graphing. Computer simulation programs in BASIC and tables of…

  6. Triple Value System Dynamics Modeling to Help Stakeholders Engage with Food-Energy-Water Problems

    EPA Science Inventory

    Triple Value (3V) Community scoping projects and Triple Value Simulation (3VS) models help decision makers and stakeholders apply systems-analysis methodology to complex problems related to food production, water quality, and energy use. 3VS models are decision support tools that...

  7. Research and applications: Artificial intelligence

    NASA Technical Reports Server (NTRS)

    Chaitin, L. J.; Duda, R. O.; Johanson, P. A.; Raphael, B.; Rosen, C. A.; Yates, R. A.

    1970-01-01

    The program is reported for developing techniques in artificial intelligence and their application to the control of mobile automatons for carrying out tasks autonomously. Visual scene analysis, short-term problem solving, and long-term problem solving are discussed along with the PDP-15 simulator, LISP-FORTRAN-MACRO interface, resolution strategies, and cost effectiveness.

  8. Cognitive simulation as a tool for cognitive task analysis.

    PubMed

    Roth, E M; Woods, D D; Pople, H E

    1992-10-01

    Cognitive simulations are runnable computer programs that represent models of human cognitive activities. We show how one cognitive simulation built as a model of some of the cognitive processes involved in dynamic fault management can be used in conjunction with small-scale empirical data on human performance to uncover the cognitive demands of a task, to identify where intention errors are likely to occur, and to point to improvements in the person-machine system. The simulation, called Cognitive Environment Simulation or CES, has been exercised on several nuclear power plant accident scenarios. Here we report one case to illustrate how a cognitive simulation tool such as CES can be used to clarify the cognitive demands of a problem-solving situation as part of a cognitive task analysis.

  9. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, D.; Alfonsi, A.; Talbot, P.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactors' primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
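
    A minimal sketch of the surrogate-model idea described above, assuming a Gaussian-process surrogate as one concrete choice (the article surveys reduced order modeling techniques generally): a handful of expensive simulation runs train a regressor that then answers queries far faster than the simulation code. The two-input response function is a hypothetical stand-in for a real simulation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(0)

        def expensive_simulation(x):
            # Stand-in for an hours-long physics run (hypothetical response).
            return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        # A small design-of-experiments set of real runs...
        X_train = rng.uniform(-1.0, 1.0, size=(40, 2))
        y_train = expensive_simulation(X_train)

        # ...trains a surrogate that then answers queries in microseconds.
        surrogate = GaussianProcessRegressor(ConstantKernel() * RBF(),
                                             normalize_y=True)
        surrogate.fit(X_train, y_train)

        X_query = rng.uniform(-1.0, 1.0, size=(100_000, 2))
        y_pred, y_std = surrogate.predict(X_query, return_std=True)
        print("max predictive standard deviation:", y_std.max())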

  10. Space-based infrared sensors of space target imaging effect analysis

    NASA Astrophysics Data System (ADS)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of studying target detection and recognition. This paper first establishes a point-source imaging model for a space-based infrared sensor observing a ballistic target above the atmosphere; it then simulates the infrared imaging of the exo-atmospheric ballistic target from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyzes the imaging effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target.

  11. PSAMM: A Portable System for the Analysis of Metabolic Models

    PubMed Central

    Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying

    2016-01-01

    The genome-scale models of metabolic networks have been broadly applied in phenotype prediction, evolutionary reconstruction, community functional analysis, and metabolic engineering. Despite the development of tools that support individual steps along the modeling procedure, it is still difficult to associate mathematical simulation results with the annotation and biological interpretation of metabolic models. In order to solve this problem, here we developed a Portable System for the Analysis of Metabolic Models (PSAMM), a new open-source software package that supports the integration of heterogeneous metadata in model annotations and provides a user-friendly interface for the analysis of metabolic models. PSAMM is independent of paid software environments like MATLAB, and all its dependencies are freely available for academic users. Compared to existing tools, PSAMM significantly reduced the running time of constraint-based analysis and enabled flexible settings of simulation parameters using simple one-line commands. The integration of heterogeneous, model-specific annotation information in PSAMM is achieved with a novel format of YAML-based model representation, which has several advantages, such as providing a modular organization of model components and simulation settings, enabling model version tracking, and permitting the integration of multiple simulation problems. PSAMM also includes a number of quality checking procedures to examine stoichiometric balance and to identify blocked reactions. Applying PSAMM to 57 models collected from current literature, we demonstrated how the software can be used for managing and simulating metabolic models. We identified a number of common inconsistencies in existing models and constructed an updated model repository to document the resolution of these inconsistencies. PMID:26828591

  12. Solution to the indexing problem of frequency domain simulation experiments

    NASA Technical Reports Server (NTRS)

    Mitra, Mousumi; Park, Stephen K.

    1991-01-01

    A frequency domain simulation experiment is one in which selected system parameters are oscillated sinusoidally to induce oscillations in one or more system statistics of interest. A spectral (Fourier) analysis of these induced oscillations is then performed. To perform this spectral analysis, all oscillation frequencies must be referenced to a common, independent variable - an oscillation index. In a discrete-event simulation, the global simulation clock is the most natural choice for the oscillation index. However, past efforts to reference all frequencies to the simulation clock generally yielded unsatisfactory results. The reason for these unsatisfactory results is explained in this paper and a new methodology which uses the simulation clock as the oscillation index is presented. Techniques for implementing this new methodology are demonstrated by performing a frequency domain simulation experiment for a network of queues.
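
    A hedged sketch of the core idea - oscillating a parameter sinusoidally against the simulation clock as the common oscillation index, then spectrally analyzing the induced oscillations - using a synthetic response in place of a discrete-event queueing network; the driving frequency, response model and noise level are invented.

        import numpy as np

        # Oscillate a parameter theta sinusoidally in *simulation-clock*
        # time t (the common oscillation index) and spectrally analyze the
        # induced oscillations in a response statistic y.
        t = np.linspace(0.0, 200.0, 4096)        # uniform clock samples
        f_drive = 0.05                           # cycles per clock unit
        theta = 1.0 + 0.2 * np.sin(2 * np.pi * f_drive * t)

        # Hypothetical nonlinear response plus noise; the nonlinearity puts
        # power at harmonics of the driving frequency.
        rng = np.random.default_rng(7)
        y = theta ** 2 + 0.05 * rng.standard_normal(t.size)

        # Peaks at f_drive (influence) and 2 * f_drive (nonlinearity).
        spectrum = np.abs(np.fft.rfft(y - y.mean())) ** 2
        freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
        for k in np.argsort(spectrum)[-3:][::-1]:
            print(f"power {spectrum[k]:9.1f} at {freqs[k]:.4f} cycles/clock unit")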

  13. The numerical modelling and process simulation for the fault diagnosis of rotary kiln incinerator.

    PubMed

    Roh, S D; Kim, S W; Cho, W S

    2001-10-01

    Numerical modelling and process simulation for the fault diagnosis of a rotary kiln incinerator were accomplished. In the numerical modelling, two models are applied within the kiln: a combustion chamber model, comprising the mass and energy balance equations for the two combustion chambers, and a 3D thermal model. The combustion chamber model predicts the temperature within the kiln, the flue gas composition and flux, and the heat of combustion. Using the combustion chamber model and the 3D thermal model, production rules for the process simulation can be obtained through an interrelation analysis between control and operation variables, and the process simulation is then driven by these production rules for automatic operation. The process simulation aims to provide fundamental solutions to problems in the incineration process by introducing an online expert control system that brings integrity to process control and management. Knowledge-based expert control systems use symbolic logic and heuristic rules to find solutions for various types of problems. The system was implemented as a hybrid intelligent expert control system, mutually connected with the process control systems, with the capability of process diagnosis, analysis and control.

  14. An approach for coupled-code multiphysics core simulations from a common input

    DOE PAGES

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...

    2014-12-10

    This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.

  15. Shuttle environmental and thermal control/life support system computer program, supplement 1

    NASA Technical Reports Server (NTRS)

    Ayotte, W. J.

    1975-01-01

    The computer programs developed to simulate the RSECS (Representative Shuttle Environmental Control System) were described. These programs were prepared to provide pretest predictions, post-test analysis and real time problem analysis for RSECS test planning and evaluation.

  16. Pattern classification of fMRI data: applications for analysis of spatially distributed cortical networks.

    PubMed

    Yourganov, Grigori; Schmah, Tanya; Churchill, Nathan W; Berman, Marc G; Grady, Cheryl L; Strother, Stephen C

    2014-08-01

    The field of fMRI data analysis is rapidly growing in sophistication, particularly in the domain of multivariate pattern classification. However, the interaction between the properties of the analytical model and the parameters of the BOLD signal (e.g. signal magnitude, temporal variance and functional connectivity) is still an open problem. We addressed this problem by evaluating a set of pattern classification algorithms on simulated and experimental block-design fMRI data. The set of classifiers consisted of linear and quadratic discriminants, linear support vector machine, and linear and nonlinear Gaussian naive Bayes classifiers. For linear discriminant, we used two methods of regularization: principal component analysis, and ridge regularization. The classifiers were used (1) to classify the volumes according to the behavioral task that was performed by the subject, and (2) to construct spatial maps that indicated the relative contribution of each voxel to classification. Our evaluation metrics were: (1) accuracy of out-of-sample classification and (2) reproducibility of spatial maps. In simulated data sets, we performed an additional evaluation of spatial maps with ROC analysis. We varied the magnitude, temporal variance and connectivity of simulated fMRI signal and identified the optimal classifier for each simulated environment. Overall, the best performers were linear and quadratic discriminants (operating on principal components of the data matrix) and, in some rare situations, a nonlinear Gaussian naïve Bayes classifier. The results from the simulated data were supported by within-subject analysis of experimental fMRI data, collected in a study of aging. This is the first study that systematically characterizes interactions between analysis model and signal parameters (such as magnitude, variance and correlation) on the performance of pattern classifiers for fMRI. Copyright © 2014 Elsevier Inc. All rights reserved.
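
    As a self-contained sketch of the comparison this record describes - linear classifiers with different regularization schemes evaluated by out-of-sample accuracy - the snippet below pits a PCA-regularized linear discriminant against a linear support vector machine on synthetic two-class data. The dimensions, effect size and component count are hypothetical stand-ins for the paper's simulated fMRI volumes.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        # Synthetic "volumes": 200 samples x 2000 voxels, two tasks whose
        # mean activation differs in a small subset of voxels.
        n, p = 200, 2000
        y = np.repeat([0, 1], n // 2)
        X = rng.standard_normal((n, p))
        X[y == 1, :50] += 0.4                    # task effect in 50 voxels

        classifiers = {
            "PCA + LDA": make_pipeline(PCA(n_components=40),
                                       LinearDiscriminantAnalysis()),
            "linear SVM": LinearSVC(C=1.0),
        }
        for name, clf in classifiers.items():
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{name}: {acc:.2f} out-of-sample accuracy")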

  17. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, Kimberly A.

    2009-08-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

  18. Learning and Learning-to-Learn by Doing: Simulating Corporate Practice in Law School.

    ERIC Educational Resources Information Center

    Okamoto, Karl S.

    1995-01-01

    A law school course in advanced corporate legal practice is described. The course, a series of simulated lawyering tasks centered on a hypothetical leveraged buyout transaction, is designed to go beyond basic legal analysis to develop professional expertise in legal problem solving. The course description includes goals, syllabus design,…

  19. From petascale to exascale, the future of simulated climate data (Invited)

    NASA Astrophysics Data System (ADS)

    Lawrence, B.; Juckes, M. N.

    2013-12-01

    Coleridge ought to have said: data, data, everywhere, and all the data centres groan, data data everywhere, nor any I should clone. Except of course, he didn't say it, and we do clone data! While we've been dealing with terabytes of simulated datasets, downloading ("cloning") and analysing them has been a plausible way forward. In doing so, we have set up systems that support four broad classes of activities: personal and institutional data analysis, federated data systems, and data portals. We use metadata to manage the migration of data between these (and their communities), and we have built software systems. However, our metadata and software solutions are fragile, often based on soft money and loose governance arrangements. We often download data with minimal provenance, and many of us download the same data. In the not too distant future we can imagine exabytes of data being produced, and all these problems will get worse. Arguably we have no plausible methods of effectively exploiting such data - particularly if the analysis requires intercomparison. Yet of course, we know full well that intercomparison is at the heart of climate science. In this talk, we review the current status of simulation data management, with special emphasis on accessibility and usability. We talk about file formats, bundles of files, real and virtual, and simulation metadata. We introduce the InfraStructure for the European Network for Earth Simulation (IS-ENES) and its relationship with the Earth System Grid Federation (ESGF), as well as JASMIN, the UK Joint Analysis System. There will be a small digression on parallel data analysis - locally and distributed. We then progress to the near-term problems (and solutions) for climate data before scoping out the problems of the future, both for data handling and for the models that produce the data. The way we think about data, computing, models, even ensemble design, may need to change.

  20. An Analysis of Delay and Travel Times at Sao Paulo International Airport (AISP/GRU): Planning Based on Simulation Model

    NASA Technical Reports Server (NTRS)

    Santana, Erico Soriano Martins; Mueller, Carlos

    2003-01-01

    The occurrence of flight delays in Brazil, mostly at the airfield, causes serious disruptions at the airport level and triggers problems throughout the airport system, also affecting the airspace. The present study analyzes delays and travel times on the airfield of Sao Paulo International Airport/Guarulhos (AISP/GRU) using a simulation model. Different physical and operational airport scenarios were analyzed by means of simulation with SIMMOD Plus 4.0, a computational tool developed to represent aircraft operations in the airspace and on the airside of airports. The study focused mainly on aircraft ground operations on the airport runway, taxi-lanes and aprons, and visualization of the operations under increasing demand facilitated the analyses. The results certify the viability of the methodology and indicate solutions capable of resolving the delay problem through travel-time analysis, thereby reducing costs for users, mainly the airport authority. They also indicate alternatives for airport operations, supporting the decision-making process and the appropriate timing of the proposed changes to the existing infrastructure.

  1. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations and interpolation between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
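
    A weak-scaling study of this kind grows the problem size with the core count, so ideal behavior is constant wall time; efficiency is the single-unit time divided by the time at scale. The sketch below computes weak-scaling efficiency from hypothetical timings (not MFiX measurements).

        # Weak-scaling efficiency from hypothetical timing data: the problem
        # grows with the core count, so ideal behavior is constant wall time.
        timings = {1: 100.0, 8: 104.0, 64: 113.0, 512: 131.0, 4096: 190.0}
        t_base = timings[1]                      # cores -> seconds
        for cores, t in sorted(timings.items()):
            print(f"{cores:5d} cores: weak-scaling efficiency {t_base / t:5.1%}")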

  2. Simulators for Maintenance Training: Some Issues, Problems and Areas for Future Research

    DTIC Science & Technology

    1978-07-01

    [Abstract extraction residue; only fragments survive: "…trainer into a full-scale, three-dimensional simulation of one cabinet of the NIKE HIPAR system. Test points for troubleshooting were located on simulated…"; "…described was used to teach maintenance of the NIKE HIPAR system. It too was considered to be a general purpose trainer in that its basic features could be…"; "…types of maintenance simulators based on a detailed task analysis of the NIKE HIPAR system as it existed one year before it was scheduled to become…".]

  3. Simulator sickness in a helicopter flight training school.

    PubMed

    Webb, Catherine M; Bass, Julie M; Johnson, David M; Kelley, Amanda M; Martin, Christopher R; Wildzunas, Robert M

    2009-06-01

    Simulator sickness (SS) is a common problem during flight training and can affect both instructor pilots (IP) and student pilots (SP). This study was conducted in response to complaints about a high incidence of SS associated with use of new simulators for rotary-wing aircraft. The problem was evaluated using the Simulator Sickness Questionnaire (SSQ) to collect data on 73 IP and 129 SP who used the new simulators. Based on analysis of these data, operator comments, and a search of the literature, we recommended limiting simulator flights to 2 h, removing unusual or unnatural maneuvers, turning off the sidescreens to reduce the field-of-view, avoiding use of improperly calibrated simulators until repaired, and stressing proper rest and health discipline among the pilots. The success of these measures was evaluated 1 yr later by collecting SSQ data on 25 IP and 50 SP. There was a main effect of time, in that after the recommendations were implemented, there was a significant reduction in nausea, oculomotor, and total SSQ scores from the pre-study to the post-study. There was also a main effect of experience, as IP reported significantly greater SS than SP for the same scores. Implementation of the recommendations reduced SS in the new simulators at the cost of limiting session duration and shutting down some simulator features. Although the optimal solution to the SS problem lies in addressing SS during a simulator's design stage, these recommendations can be used as interim solutions to reduce SS.

  4. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
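
    The tutorial's examples are in MATLAB and R; the sketch below shows the same embarrassingly parallel pattern in Python, with independent Monte Carlo replications farmed out across cores. The loss model and replication count are invented for illustration.

        import multiprocessing as mp
        import numpy as np

        def one_replication(seed):
            """One independent replication of a (hypothetical) risk model:
            annual loss = sum of a Poisson number of lognormal event losses."""
            rng = np.random.default_rng(seed)
            n_events = rng.poisson(3.0)
            return rng.lognormal(mean=10.0, sigma=1.0, size=n_events).sum()

        if __name__ == "__main__":
            seeds = range(100_000)   # distinct seeds keep replications independent
            with mp.Pool() as pool:  # farm replications out across all cores
                losses = pool.map(one_replication, seeds)
            print("99th-percentile annual loss:", np.quantile(losses, 0.99))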

  5. Performance of uncertainty quantification methodologies and linear solvers in cardiovascular simulations

    NASA Astrophysics Data System (ADS)

    Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison

    2017-11-01

    Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.

  6. Development of students' conceptual thinking by means of video analysis and interactive simulations at technical universities

    NASA Astrophysics Data System (ADS)

    Hockicko, Peter; Krišťák, Ľuboš; Němec, Miroslav

    2015-03-01

    Video analysis using the program Tracker (Open Source Physics) introduces a new creative method of teaching physics into the educational process and makes the natural sciences more interesting for students. This way of exploring the laws of nature can amaze students, because this illustrative and interactive educational software inspires them to think creatively, improves their performance and helps them in studying physics. This paper deals with increasing key engineering competencies by analysing videos of real-life situations - physical problems - with the video analysis and modelling tools of the program Tracker, together with simulations of physical phenomena from The Physics Education Technology (PhET™) Project (the VAS method of problem tasks). Statistical testing using the t-test confirmed the significance of the differences in knowledge between the experimental and control groups that resulted from applying the interactive method.

  7. Simulation of short-term electric load using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Ivanin, O. A.

    2018-01-01

    When solving the task of optimizing the operation modes and equipment composition of small energy complexes, or other tasks connected with energy planning, it is necessary to have data on the energy loads of the consumer. Real load charts and detailed information about the consumer are usually difficult to obtain, so a method for simulating load charts on the basis of minimal information should be developed. An analysis of work devoted to short-term load prediction suggests artificial neural networks as the most suitable mathematical instrument for solving this problem. The article provides an overview of applied short-term load simulation methods, describes the advantages of artificial neural networks, and proposes a neural network structure for simulating the electric loads of residential buildings. The results of modeling loads with the proposed method and an estimate of its error are presented.
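
    A minimal sketch of the kind of network the article proposes, assuming a small multilayer perceptron with a cyclic hour-of-day encoding (the article's actual architecture and inputs may differ); the synthetic residential load series stands in for real consumer data.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic hourly load for a residential building: a daily shape
        # plus noise (a stand-in for the minimal consumer data available).
        hours = np.arange(24 * 365)
        load = (1.0 + 0.4 * np.sin(2 * np.pi * ((hours % 24) - 18) / 24)
                + 0.1 * rng.standard_normal(hours.size))

        # Encode hour-of-day cyclically so hour 23 sits next to hour 0.
        hod = hours % 24
        X = np.column_stack([np.sin(2 * np.pi * hod / 24),
                             np.cos(2 * np.pi * hod / 24)])

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0)
        model.fit(X[:-24], load[:-24])      # train on all but the last day
        mae = np.abs(model.predict(X[-24:]) - load[-24:]).mean()
        print(f"mean absolute error on the held-out day: {mae:.3f}")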

  8. Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python

    USGS Publications Warehouse

    Laura, Jason R.; Rey, Sergio J.

    2017-01-01

    Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.

  9. Solving search problems by strongly simulating quantum circuits

    PubMed Central

    Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.

    2013-01-01

    Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently simulable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585

  10. Applications of Bayesian spectrum representation in acoustics

    NASA Astrophysics Data System (ADS)

    Botts, Jonathan M.

    This dissertation utilizes a Bayesian inference framework to enhance the solution of inverse problems whose forward models map to acoustic spectra. A Bayesian solution to filter design inverts an acoustic spectrum to the pole-zero locations of a discrete-time filter model. Spatial sound field analysis with a spherical microphone array is a data analysis problem that requires inversion of spatio-temporal spectra to directions of arrival. As with many inverse problems, a probabilistic analysis yields richer solutions than can be achieved with ad-hoc methods. In the filter design problem, the Bayesian inversion results in globally optimal coefficient estimates as well as an estimate of the most concise filter capable of representing the given spectrum, within a single framework. This approach is demonstrated on synthetic spectra, head-related transfer function spectra, and measured acoustic reflection spectra. The Bayesian model-based analysis of spatial room impulse responses is presented as an analogous problem with an equally rich solution. The model selection mechanism provides an estimate of the number of arrivals, which is necessary to properly infer the directions of simultaneous arrivals. Although spectrum inversion problems are fairly ubiquitous, the scope of this dissertation has been limited to these two and derivative problems.

    The Bayesian approach to filter design is demonstrated on an artificial spectrum to illustrate the model comparison mechanism and then on measured head-related transfer functions to show the potential range of application. Coupled with sampling methods, the Bayesian approach is shown to outperform the least-squares filter design methods commonly used in commercial software, confirming the need for a global search of the parameter space. The resulting designs are comparable to those produced by global optimization methods, but the Bayesian approach has the added advantage of a filter length estimate within the same unified framework. The application to reflection data is useful for representing frequency-dependent impedance boundaries in finite difference acoustic simulations. Furthermore, since the filter transfer function is a parametric model, it can be modified to incorporate arbitrary frequency weighting, account for the band-limited nature of measured reflection spectra, and compensate for dispersive error in the finite difference simulation as part of the filter design process.

    Stemming from the filter boundary problem, the implementation of pressure sources in finite difference simulation is addressed in order to ensure that schemes properly converge. A class of parameterized source functions is proposed and shown to offer straightforward control of residual error in the simulation. Guided by the notion that the solution to be approximated affects the approximation error, sources are designed which reduce residual dispersive error to the size of round-off errors.

    The early part of a room impulse response can be characterized by a series of isolated plane waves. Measured with an array of microphones, plane waves map to a directional response of the array, or spatial intensity map. Probabilistic inversion of this response results in estimates of the number and directions of image source arrivals. The model-based inversion is shown to avoid ambiguities associated with peak-finding or inspection of the spatial intensity map. For this problem, determining the number of arrivals in a given frame is critical for properly inferring the state of the sound field. This analysis is effectively a compression of the spatial room response, which is useful for analysis or encoding of the spatial sound field. Parametric, model-based formulations of these problems enhance the solution in all cases, and a Bayesian interpretation provides a principled approach to model comparison and parameter estimation.
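
    A hedged sketch of the filter-design inversion described above, with a global optimizer (differential evolution) standing in for the dissertation's Bayesian sampling: fit the magnitude response of a low-order IIR filter to a target reflection spectrum while rejecting unstable candidates. The target filter, order and coefficient bounds are invented; a Bayesian treatment would additionally compare filter orders and return coefficient uncertainties.

        import numpy as np
        from scipy.optimize import differential_evolution
        from scipy.signal import freqz

        # Target: a (hypothetical) measured reflection magnitude response,
        # generated here from a known second-order filter.
        w = np.linspace(0.01, np.pi, 256)
        target = np.abs(freqz([0.8, -0.5, 0.2], [1.0, -0.6, 0.3], worN=w)[1])

        def loss(p):
            b, a = p[:3], np.concatenate([[1.0], p[3:]])
            if np.max(np.abs(np.roots(a))) >= 1.0:   # reject unstable poles
                return 1e6
            h = np.abs(freqz(b, a, worN=w)[1])
            return float(np.sum((h - target) ** 2))

        # Global search over the coefficient space; the dissertation's
        # Bayesian sampler would explore this space probabilistically and
        # also compare filter orders within the same framework.
        bounds = [(-2.0, 2.0)] * 5
        result = differential_evolution(loss, bounds, seed=0, tol=1e-10)
        print("coefficients:", np.round(result.x, 3), "residual:", result.fun)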

  11. Preliminary Evaluation of BIM-based Approaches for Schedule Delay Analysis

    NASA Astrophysics Data System (ADS)

    Chou, Hui-Yu; Yang, Jyh-Bin

    2017-10-01

    The problem of schedule delay commonly occurs in construction projects. The quality of a delay analysis depends on the availability of schedule-related information and delay evidence; more information used in the analysis usually produces more accurate and fairer results. How to use innovative techniques to improve the quality of schedule delay analysis has received much attention recently. As Building Information Modeling (BIM) has developed rapidly, the use of BIM and 4D simulation techniques has been proposed and implemented, with obvious benefits achieved especially in identifying and solving construction sequence problems in advance of construction. This study performs an intensive literature review to discuss the problems encountered in schedule delay analysis and the possibility of using BIM as a tool in developing a BIM-based approach for schedule delay analysis. This study believes that most of the identified problems can be dealt with by BIM techniques. The research results could serve as a foundation for developing new approaches to resolving schedule delay disputes.

  12. Virtual Environment Computer Simulations to Support Human Factors Engineering and Operations Analysis for the RLV Program

    NASA Technical Reports Server (NTRS)

    Lunsford, Myrtis Leigh

    1998-01-01

    The Army-NASA Virtual Innovations Laboratory (ANVIL) was recently created to provide virtual reality tools for performing Human Engineering and operations analysis for both NASA and the Army. The author's summer research project consisted of developing and refining these tools for NASA's Reusable Launch Vehicle (RLV) program. Several general simulations were developed for use by the ANVIL for the evaluation of the X34 Engine Changeout procedure. These simulations were developed with the software tool dVISE 4.0.0 produced by Division Inc. All software was run on an SGI Indigo2 High Impact. This paper describes the simulations, various problems encountered with the simulations, other summer activities, and possible work for the future. We first begin with a brief description of virtual reality systems.

  13. The discovery of the causes of leprosy: A computational analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corruble, V.; Ganascia, J.G.

    1996-12-31

    The role played by inductive inference has been studied extensively in the field of Scientific Discovery. The work presented here tackles the problem of induction in medical research: the discovery of the causes of leprosy is analyzed and simulated using computational means. An inductive algorithm is proposed which successfully simulates some essential steps in the progress of the understanding of the disease. It also allows us to simulate the false reasoning of previous centuries through the introduction of medical a priori assumptions inherited from archaic medicine. Corroborating previous research, this problem illustrates the importance of the social and cultural environment on the way inductive inference is performed in medicine.

  14. A study on the operation analysis of the power conditioning system with real HTS SMES coil

    NASA Astrophysics Data System (ADS)

    Kim, A. R.; Jung, H. Y.; Kim, J. H.; Ali, Mohd. Hasan; Park, M.; Yu, I. K.; Kim, H. J.; Kim, S. H.; Seong, K. C.

    2008-09-01

    Voltage sag caused by suddenly increasing loads is one of the major problems in the utility network, and power compensation devices have been widely developed to mitigate it. Compensating a voltage sag requires an energy source to supply the energy deficit it causes. Since a SMES device is characterized by very fast charge and discharge response, it has been researched and developed for more than 20 years. However, before a SMES is installed in the utility network, a system analysis has to be carried out with a suitable simulation tool. This paper presents a real-time simulation algorithm for the SMES system using a miniaturized SMES model coil whose properties match those of a real-size SMES coil. With this method, researchers can easily analyse the performance of a SMES connected to the utility network by abstracting the properties from the real modeled SMES coil and using the virtual simulated power network in RSCAD/RTDS.

  15. The feasibility of well-logging measurements of arsenic levels using neutron-activation analysis

    USGS Publications Warehouse

    Oden, C.P.; Schweitzer, J.S.; McDowell, G.M.

    2006-01-01

    Arsenic is an extremely toxic metal, which poses a significant problem in many mining environments. Arsenic contamination is also a major problem in ground and surface waters. A feasibility study was conducted to determine if neutron-activation analysis is a practical method of measuring in situ arsenic levels. The response of hypothetical well-logging tools to arsenic was simulated using a readily available Monte Carlo simulation code (MCNP). Simulations were made for probes with both hyperpure germanium (HPGe) and bismuth germanate (BGO) detectors using accelerator and isotopic neutron sources. Both sources produce similar results; however, the BGO detector is much more susceptible to spectral interference than the HPGe detector. Spectral interference from copper can preclude low-level arsenic measurements when using the BGO detector. Results show that a borehole probe could be built that would measure arsenic concentrations of 100 ppm by weight to an uncertainty of 50 ppm in about 15 min. © 2006 Elsevier Ltd. All rights reserved.
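
    The quoted "100 ppm to within 50 ppm in about 15 min" trade-off is governed by Poisson counting statistics: the relative uncertainty of a net peak area falls as one over the square root of the counting time. The sketch below illustrates that scaling with invented count rates, not the MCNP-derived values of the study.

        import numpy as np

        # Poisson counting statistics: the relative uncertainty of a net
        # peak area falls as 1/sqrt(counting time). Count rates below are
        # invented, not the MCNP-derived values of the study.
        signal_rate = 0.4       # arsenic peak counts/s at 100 ppm
        background_rate = 5.0   # continuum counts/s under the peak

        for t in (60, 300, 900, 3600):      # counting times, seconds
            sig = signal_rate * t
            bkg = background_rate * t
            sigma = np.sqrt(sig + 2 * bkg)  # net-area sigma with bkg subtraction
            ppm_unc = 100.0 * sigma / sig   # propagate to ppm (linear response)
            print(f"t = {t:4d} s: 100 ppm measured to +/- {ppm_unc:.0f} ppm")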

  16. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  17. The Effectiveness and Efficiency of Two Types of Simulation as Functions of Level of Elementary Education Training. Final Report.

    ERIC Educational Resources Information Center

    Girod, Gerald R.

    An experiment was performed to determine the efficiency of simulation teaching techniques in training elementary education teachers to identify and correct classroom management problems. The two presentation modes compared were film and audiotape. Twelve hypotheses were tested via analysis of variance to determine the relative efficiency of these…

  18. The Virtual Genetics Lab II: Improvements to a Freely Available Software Simulation of Genetics

    ERIC Educational Resources Information Center

    White, Brian T.

    2012-01-01

    The Virtual Genetics Lab II (VGLII) is an improved version of the highly successful genetics simulation software, the Virtual Genetics Lab (VGL). The software allows students to use the techniques of genetic analysis to design crosses and interpret data to solve realistic genetics problems involving a hypothetical diploid insect. This is a brief…

  19. Novel probabilistic neuroclassifier

    NASA Astrophysics Data System (ADS)

    Hong, Jiang; Serpen, Gursel

    2003-09-01

    A novel probabilistic potential function neural network classifier algorithm to deal with classes which are multi-modally distributed and formed from sets of disjoint pattern clusters is proposed in this paper. The proposed classifier has a number of desirable properties which distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and the pseudocode is presented. Simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better for a subset of problems for which other neural classifiers perform relatively poorly.

  20. Extended Importance Sampling for Reliability Analysis under Evidence Theory

    NASA Astrophysics Data System (ADS)

    Yuan, X. K.; Chen, B.; Zhang, B. Q.

    2018-05-01

    In early engineering practice, the lack of data and information makes uncertainty difficult to deal with. Evidence theory has been proposed to handle uncertainty with limited information as an alternative to traditional probability theory. In this contribution, a simulation-based approach called 'extended importance sampling' is proposed based on evidence theory to handle problems with epistemic uncertainty. The proposed approach stems from traditional importance sampling for reliability analysis under probability theory, and is developed to handle problems with epistemic uncertainty. It first introduces a nominal instrumental probability density function (PDF) for every epistemic uncertainty variable, so that an 'equivalent' reliability problem under probability theory is obtained. Samples of these variables are then generated by importance sampling, and from these samples the plausibility and belief (upper and lower bounds of the failure probability) can be estimated. The approach is more efficient than direct Monte Carlo simulation. Numerical and engineering examples are given to illustrate the efficiency and feasibility of the proposed approach.
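
    A minimal sketch of the bounding idea, under the simplifying assumption of a single focal interval for one epistemic variable and an analytic limit state (both invented): sample from an instrumental density, reweight to the target density, and evaluate the failure indicator at the extreme points of the epistemic interval to bracket the failure probability between belief and plausibility.

        import numpy as np
        from scipy import stats

        # Limit state g(x, e) <= 0 means failure; x is aleatory (standard
        # normal), e is epistemic, known only as the interval [2.5, 3.5]
        # (one focal element with unit basic probability assignment).
        # All numbers here are invented for illustration.
        e_lo, e_hi = 2.5, 3.5
        def g(x, e):
            return e - x

        # Nominal instrumental density shifted toward the failure region.
        h = stats.norm(loc=3.0, scale=1.0)
        x = h.rvs(size=200_000, random_state=0)
        weights = stats.norm.pdf(x) / h.pdf(x)   # importance weights

        # Plausibility: failure under the most pessimistic e in the interval;
        # belief: failure under the most optimistic e.
        plausibility = np.mean(weights * (g(x, e_lo) <= 0))
        belief = np.mean(weights * (g(x, e_hi) <= 0))
        print(f"belief {belief:.2e} <= P(fail) <= plausibility {plausibility:.2e}")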

  1. Harmonic analysis of electrified railway based on improved HHT

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-04-01

    In this paper, the causes and harms of harmonics in the electric locomotive electrical system are first studied and analyzed. Based on the characteristics of these harmonics, the Hilbert-Huang transform (HHT) method is introduced. Building on an in-depth analysis of the empirical mode decomposition method and the Hilbert transform, the causes of, and solutions to, the endpoint effect and the modal aliasing problem in the HHT method are explored. For the endpoint effect, this paper uses a point-symmetric extension method to extend the collected data; for the modal aliasing problem, it preprocesses the signal with a high-frequency auxiliary harmonic and gives an empirical formula for that harmonic. Finally, combining the suppression of the HHT endpoint effect and modal aliasing, an improved HHT method is proposed and simulated in MATLAB. The simulation results show that the improved HHT is effective for the electric locomotive power supply system.
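
    A hedged sketch of the point-symmetric extension step described above (the paper's exact formulation may differ): each end of the record is extended by odd reflection about the endpoint, EMD is run on the extended signal, and the padding is discarded so envelope distortion stays outside the retained window.

        import numpy as np

        def point_symmetric_extend(x, n_ext):
            """Extend a signal at both ends by odd (point) reflection about
            the endpoints, a common remedy for EMD end effects."""
            left = 2 * x[0] - x[n_ext:0:-1]            # mirror about (0, x[0])
            right = 2 * x[-1] - x[-2:-n_ext - 2:-1]    # mirror about (N-1, x[-1])
            return np.concatenate([left, x, right])

        t = np.linspace(0.0, 1.0, 500)
        sig = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 250 * t)
        ext = point_symmetric_extend(sig, 50)
        # EMD would be run on `ext`; the 50 added samples at each end are
        # then discarded so spline-envelope distortion stays outside the
        # retained window.
        print(sig.size, ext.size)   # 500 -> 600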

  2. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs is required. An important aspect is that even though computational power is steadily growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analysis by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.

  3. Analysis of Inlet-Compressor Acoustic Interactions Using Coupled CFD Codes

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Townsend, S. E.; Cole, G. L.; Slater, J. W.; Chima, R.

    1998-01-01

    A problem that arises in the numerical simulation of supersonic inlets is the lack of a suitable boundary condition at the engine face. In this paper, a coupled approach, in which the inlet computation is coupled dynamically to a turbomachinery computation, is proposed as a means to overcome this problem. The specific application chosen for validation of this approach is the collapsing bump experiment performed at the University of Cincinnati. The computed results are found to be in reasonable agreement with experimental results. The coupled simulation results could also be used to aid development of a simplified boundary condition.

  4. Vectorization of a Monte Carlo simulation scheme for nonequilibrium gas dynamics

    NASA Technical Reports Server (NTRS)

    Boyd, Iain D.

    1991-01-01

    Significant improvement has been obtained in the numerical performance of a Monte Carlo scheme for the analysis of nonequilibrium gas dynamics through an implementation of the algorithm which takes advantage of vector hardware, as presently demonstrated through application to three different problems. These are (1) a 1D standing-shock wave; (2) the flow of an expanding gas through an axisymmetric nozzle; and (3) the hypersonic flow of Ar gas over a 3D wedge. Problem (3) is illustrative of the greatly increased number of molecules which the simulation may involve, thanks to improved algorithm performance.

  5. Energy Efficient Data Transmission for Sensors with Wireless Charging

    PubMed Central

    Luo, Junzhou; Wu, Weiwei; Gao, Hong

    2018-01-01

    This paper studies the problem of maximizing the energy utilization for data transmission in sensors with a periodical wireless charging process while taking into account the thermal effect. Two classes of problems are analyzed: one is the case where wireless charging can proceed for only a limited period of time, and the other is the case where wireless charging can proceed for a long enough time. Algorithms are proposed to solve the problems and analyses of these algorithms are also provided. For the first problem, three subproblems are studied, and, for the general problem, we give an algorithm that can derive a performance bound of (1 − 1/(2m))(OPT − E) compared to an optimal solution. In addition, for the second problem, we provide an algorithm with a (2m/(2m − 1))·OPT + 1 performance bound for the general problem. Simulations confirm the analysis of the algorithms. PMID:29419770

  6. Energy Efficient Data Transmission for Sensors with Wireless Charging.

    PubMed

    Fang, Xiaolin; Luo, Junzhou; Wu, Weiwei; Gao, Hong

    2018-02-08

    This paper studies the problem of maximizing the energy utilization for data transmission in sensors with a periodical wireless charging process while taking into account the thermal effect. Two classes of problems are analyzed: one is the case where wireless charging can proceed for only a limited period of time, and the other is the case where wireless charging can proceed for a long enough time. Algorithms are proposed to solve the problems and analyses of these algorithms are also provided. For the first problem, three subproblems are studied, and, for the general problem, we give an algorithm that can derive a performance bound of (1 − 1/(2m))(OPT − E) compared to an optimal solution. In addition, for the second problem, we provide an algorithm with a (2m/(2m − 1))·OPT + 1 performance bound for the general problem. Simulations confirm the analysis of the algorithms.

  7. Digital data processing system dynamic loading analysis

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Tucker, A. E.

    1976-01-01

    Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation modeling of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.

  8. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the ascent test (OFT) configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.

  9. Lemonade's the Name, Simulation's the Game.

    ERIC Educational Resources Information Center

    Friel, Susan

    1983-01-01

    Provides a detailed description of Lemonade, a business game designed to introduce elementary and secondary students to the basics of business; i.e., problem solving strategies, hypothesis formulation and testing, trend analysis, prediction, comparative analysis, and effects of such factors as advertising and climatic conditions on sales and…

  10. Applying Uncertainty Analysis to a Risk Assessment for the Pesticide Permethrin

    EPA Science Inventory

    We discuss the application of methods of uncertainty analysis from our previous poster to the problem of a risk assessment for exposure to the food-use pesticide permethrin resulting from residential pesticide crack and crevice application. Exposures are simulated by the SHEDS (S...

  11. FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna

    2016-01-01

    Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/ Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  12. Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composite

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu

    2015-01-01

    Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/ Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMAC/CARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  13. Xyce parallel electronic simulator users guide, version 6.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas; Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers; A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models; Device models that are specifically tailored to meet Sandia's needs, including some radiationaware devices (for Sandia users only); and Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase-a message passing parallel implementation-which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less

  14. Xyce parallel electronic simulator users' guide, Version 6.0.1.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandias needs, including some radiationaware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase a message passing parallel implementation which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less

  15. Xyce parallel electronic simulator users guide, version 6.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandias needs, including some radiationaware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase a message passing parallel implementation which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less

  16. On the ambiguity of conformational states: A B&S-LEUS simulation study of the helical conformations of decaalanine in water

    NASA Astrophysics Data System (ADS)

    Bieler, Noah S.; Hünenberger, Philippe H.

    2015-04-01

    Estimating the relative stabilities of different conformational states of a (bio-)molecule using molecular dynamics simulations involves two challenging problems: the conceptual problem of how to define the states of interest and the technical problem of how to properly sample these states, along with achieving a sufficient number of interconversion transitions. In this study, the two issues are addressed in the context of a decaalanine peptide in water, by considering the 310-, α-, and π-helical states. The simulations rely on the ball-and-stick local-elevation umbrella-sampling (B&S-LEUS) method. In this scheme, the states are defined as hyperspheres (balls) in a (possibly high dimensional) collective-coordinate space and connected by hypercylinders (sticks) to ensure transitions. A new object, the pipe, is also introduced here to handle curvilinear pathways. Optimal sampling within the so-defined space is ensured by confinement and (one-dimensional) memory-based biasing potentials associated with the three different kinds of objects. The simulation results are then analysed in terms of free energies using reweighting, possibly relying on two distinct sets of collective coordinates for the state definition and analysis. The four possible choices considered for these sets are Cartesian coordinates, hydrogen-bond distances, backbone dihedral angles, or pairwise sums of successive backbone dihedral angles. The results concerning decaalanine underline that the concept of conformational state may be extremely ambiguous, and that its tentative absolute definition as a free-energy basin remains subordinated to the choice of a specific analysis space. For example, within the force-field employed and depending on the analysis coordinates selected, the 310-helical state may refer to weakly overlapping collections of conformations, differing by as much as 25 kJ mol-1 in terms of free energy. As another example, the π-helical state appears to correspond to a free-energy basin for three choices of analysis coordinates, but to be unstable with the fourth one. The problem of conformational-state definition may become even more intricate when comparison with experiment is involved, where the state definition relies on spectroscopic or functional observables.

  17. On the ambiguity of conformational states: A B&S-LEUS simulation study of the helical conformations of decaalanine in water.

    PubMed

    Bieler, Noah S; Hünenberger, Philippe H

    2015-04-28

    Estimating the relative stabilities of different conformational states of a (bio-)molecule using molecular dynamics simulations involves two challenging problems: the conceptual problem of how to define the states of interest and the technical problem of how to properly sample these states, along with achieving a sufficient number of interconversion transitions. In this study, the two issues are addressed in the context of a decaalanine peptide in water, by considering the 310-, α-, and π-helical states. The simulations rely on the ball-and-stick local-elevation umbrella-sampling (B&S-LEUS) method. In this scheme, the states are defined as hyperspheres (balls) in a (possibly high dimensional) collective-coordinate space and connected by hypercylinders (sticks) to ensure transitions. A new object, the pipe, is also introduced here to handle curvilinear pathways. Optimal sampling within the so-defined space is ensured by confinement and (one-dimensional) memory-based biasing potentials associated with the three different kinds of objects. The simulation results are then analysed in terms of free energies using reweighting, possibly relying on two distinct sets of collective coordinates for the state definition and analysis. The four possible choices considered for these sets are Cartesian coordinates, hydrogen-bond distances, backbone dihedral angles, or pairwise sums of successive backbone dihedral angles. The results concerning decaalanine underline that the concept of conformational state may be extremely ambiguous, and that its tentative absolute definition as a free-energy basin remains subordinated to the choice of a specific analysis space. For example, within the force-field employed and depending on the analysis coordinates selected, the 310-helical state may refer to weakly overlapping collections of conformations, differing by as much as 25 kJ mol(-1) in terms of free energy. As another example, the π-helical state appears to correspond to a free-energy basin for three choices of analysis coordinates, but to be unstable with the fourth one. The problem of conformational-state definition may become even more intricate when comparison with experiment is involved, where the state definition relies on spectroscopic or functional observables.

  18. Enabling Microscopic Simulators to Perform System Level Tasks: A System-Identification Based, Closure-on-Demand Toolkit for Multiscale Simulation Stability/Bifurcation Analysis, Optimization and Control

    DTIC Science & Technology

    2006-10-01

    The objective was to construct a bridge between existing and future microscopic simulation codes ( kMC , MD, MC, BD, LB etc.) and traditional, continuum...kinetic Monte Carlo, kMC , equilibrium MC, Lattice-Boltzmann, LB, Brownian Dynamics, BD, or general agent-based, AB) simulators. It also, fortuitously...cond-mat/0310460 at arXiv.org. 27. Coarse Projective kMC Integration: Forward/Reverse Initial and Boundary Value Problems", R. Rico-Martinez, C. W

  19. Aviation Safety Program Atmospheric Environment Safety Technologies (AEST) Project

    NASA Technical Reports Server (NTRS)

    Colantonio, Ron

    2011-01-01

    Engine Icing: Characterization and Simulation Capability: Develop knowledge bases, analysis methods, and simulation tools needed to address the problem of engine icing; in particular, ice-crystal icing Airframe Icing Simulation and Engineering Tool Capability: Develop and demonstrate 3-D capability to simulate and model airframe ice accretion and related aerodynamic performance degradation for current and future aircraft configurations in an expanded icing environment that includes freezing drizzle/rain Atmospheric Hazard Sensing and Mitigation Technology Capability: Improve and expand remote sensing and mitigation of hazardous atmospheric environments and phenomena

  20. FEAMAC-CARES Software Coupling Development Effort for CMC Stochastic-Strength-Based Damage Simulation

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Walton, Owen

    2015-01-01

    Reported here is a coupling of two NASA developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MACGMC composite material analysis code. The resulting code is called FEAMACCARES and is constructed as an Abaqus finite element analysis UMAT (user defined material). Here we describe the FEAMACCARES code and an example problem (taken from the open literature) of a laminated CMC in off-axis loading is shown. FEAMACCARES performs stochastic-strength-based damage simulation response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  1. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

    A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.

  2. ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.

    1999-03-01

    ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. The ITOUGH2 can be used for estimating parameters by automatic modeling calibration, for sensitivity analyses, and for uncertainity propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also bemore » run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal resevoir engineering, and vadose zone hydrology.« less

  3. Interfacing modules for integrating discipline specific structural mechanics codes

    NASA Technical Reports Server (NTRS)

    Endres, Ned M.

    1989-01-01

    An outline of the organization and capabilities of the Engine Structures Computational Simulator (Simulator) at NASA Lewis Research Center is given. One of the goals of the research at Lewis is to integrate various discipline specific structural mechanics codes into a software system which can be brought to bear effectively on a wide range of engineering problems. This system must possess the qualities of being effective and efficient while still remaining user friendly. The simulator was initially designed for the finite element simulation of gas jet engine components. Currently, the simulator has been restricted to only the analysis of high pressure turbine blades and the accompanying rotor assembly, although the current installation can be expanded for other applications. The simulator presently assists the user throughout its procedures by performing information management tasks, executing external support tasks, organizing analysis modules and executing these modules in the user defined order while maintaining processing continuity.

  4. Development of an expert system for analysis of Shuttle atmospheric revitalization and pressure control subsystem anomalies

    NASA Technical Reports Server (NTRS)

    Lafuse, Sharon A.

    1991-01-01

    The paper describes the Shuttle Leak Management Expert System (SLMES), a preprototype expert system developed to enable the ECLSS subsystem manager to analyze subsystem anomalies and to formulate flight procedures based on flight data. The SLMES combines the rule-based expert system technology with the traditional FORTRAN-based software into an integrated system. SLMES analyzes the data using rules, and, when it detects a problem that requires simulation, it sets up the input for the FORTRAN-based simulation program ARPCS2AT2, which predicts the cabin total pressure and composition as a function of time. The program simulates the pressure control system, the crew oxygen masks, the airlock repress/depress valves, and the leakage. When the simulation has completed, other SLMES rules are triggered to examine the results of simulation contrary to flight data and to suggest methods for correcting the problem. Results are then presented in form of graphs and tables.

  5. Qualitative Evaluation of a Role Play Bullying Simulation

    PubMed Central

    Gillespie, Gordon L.; Brown, Kathryn; Grubb, Paula; Shay, Amy; Montoya, Karen

    2015-01-01

    Bullying against nurses is becoming a pervasive problem. In this article, a role play simulation designed for undergraduate nursing students is described. In addition, the evaluation findings from a subsample of students who participated in a role play simulation addressing bullying behaviors are reported. Focus group sessions were completed with a subset of eight students who participated in the intervention. Sessions were audiorecorded, transcribed verbatim, and analyzed using Colaizzi’s procedural steps for qualitative analysis. Themes derived from the data were “The Experience of Being Bullied”, “Implementation of the Program”, “Desired Outcome of the Program”, and “Context of Bullying in the Nursing Profession”. Role play simulation was an effective and active learning strategy to diffuse education on bullying in nursing practice. Bullying in nursing was identified as a problem worthy of incorporation into the undergraduate nursing curriculum. To further enhance the learning experience with role play simulation, adequate briefing instructions, opportunity to opt out of the role play, and comprehensive debriefing are essential. PMID:26504502

  6. Qualitative Evaluation of a Role Play Bullying Simulation.

    PubMed

    Gillespie, Gordon L; Brown, Kathryn; Grubb, Paula; Shay, Amy; Montoya, Karen

    Bullying against nurses is becoming a pervasive problem. In this article, a role play simulation designed for undergraduate nursing students is described. In addition, the evaluation findings from a subsample of students who participated in a role play simulation addressing bullying behaviors are reported. Focus group sessions were completed with a subset of eight students who participated in the intervention. Sessions were audiorecorded, transcribed verbatim, and analyzed using Colaizzi's procedural steps for qualitative analysis. Themes derived from the data were "The Experience of Being Bullied", "Implementation of the Program", "Desired Outcome of the Program", and "Context of Bullying in the Nursing Profession". Role play simulation was an effective and active learning strategy to diffuse education on bullying in nursing practice. Bullying in nursing was identified as a problem worthy of incorporation into the undergraduate nursing curriculum. To further enhance the learning experience with role play simulation, adequate briefing instructions, opportunity to opt out of the role play, and comprehensive debriefing are essential.

  7. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approach the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.

  8. System analysis in forest resources: proceedings of the 2003 symposium.

    Treesearch

    Michael Bevers; Tara M. Barrett

    2005-01-01

    The 2003 symposium of systems analysis in forest resources brought together researchers and practitioners who apply methods of optimization, simulation, management science, and systems analysis to forestry problems. This was the 10th symposium in the series, with previous conferences held in 1975, 1985, 1988, 1991, 1993, 1994, 1997, 2000, and 2002. The forty-two papers...

  9. Multi-mission space vehicle subsystem analysis tools

    NASA Technical Reports Server (NTRS)

    Kordon, M.; Wood, E.

    2003-01-01

    Spacecraft engineers often rely on specialized simulation tools to facilitate the analysis, design and operation of space systems. Unfortunately these tools are often designed for one phase of a single mission and cannot be easily adapted to other phases or other misions. The Multi-Mission Pace Vehicle Susbsystem Analysis Tools are designed to provide a solution to this problem.

  10. Hybrid-finite-element analysis of some nonlinear and 3-dimensional problems of engineering fracture mechanics

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.; Nakagaki, M.; Kathiresan, K.

    1980-01-01

    In this paper, efficient numerical methods for the analysis of crack-closure effects on fatigue-crack-growth-rates, in plane stress situations, and for the solution of stress-intensity factors for arbitrary shaped surface flaws in pressure vessels, are presented. For the former problem, an elastic-plastic finite element procedure valid for the case of finite deformation gradients is developed and crack growth is simulated by the translation of near-crack-tip elements with embedded plastic singularities. For the latter problem, an embedded-elastic-singularity hybrid finite element method, which leads to a direct evaluation of K-factors, is employed.

  11. RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Andrs; Ray Berry; Derek Gaston

    The document contains the simulation results of a steady state model PWR problem with the RELAP-7 code. The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework - MOOSE (Multi-Physics Object-Oriented Simulation Environment). This report summarizes the initial results of simulating a model steady-state single phase PWR problem using the current version of the RELAP-7 code. The major purpose of this demonstration simulation is to show that RELAP-7 code can be rapidly developed to simulate single-phase reactor problems. RELAP-7more » is a new project started on October 1st, 2011. It will become the main reactor systems simulation toolkit for RISMC (Risk Informed Safety Margin Characterization) and the next generation tool in the RELAP reactor safety/systems analysis application series (the replacement for RELAP5). The key to the success of RELAP-7 is the simultaneous advancement of physical models, numerical methods, and software design while maintaining a solid user perspective. Physical models include both PDEs (Partial Differential Equations) and ODEs (Ordinary Differential Equations) and experimental based closure models. RELAP-7 will eventually utilize well posed governing equations for multiphase flow, which can be strictly verified. Closure models used in RELAP5 and newly developed models will be reviewed and selected to reflect the progress made during the past three decades. RELAP-7 uses modern numerical methods, which allow implicit time integration, higher order schemes in both time and space, and strongly coupled multi-physics simulations. RELAP-7 is written with object oriented programming language C++. Its development follows modern software design paradigms. The code is easy to read, develop, maintain, and couple with other codes. Most importantly, the modern software design allows the RELAP-7 code to evolve with time. RELAP-7 is a MOOSE-based application. MOOSE (Multiphysics Object-Oriented Simulation Environment) is a framework for solving computational engineering problems in a well-planned, managed, and coordinated way. By leveraging millions of lines of open source software packages, such as PETSC (a nonlinear solver developed at Argonne National Laboratory) and LibMesh (a Finite Element Analysis package developed at University of Texas), MOOSE significantly reduces the expense and time required to develop new applications. Numerical integration methods and mesh management for parallel computation are provided by MOOSE. Therefore RELAP-7 code developers only need to focus on physics and user experiences. By using the MOOSE development environment, RELAP-7 code is developed by following the same modern software design paradigms used for other MOOSE development efforts. There are currently over 20 different MOOSE based applications ranging from 3-D transient neutron transport, detailed 3-D transient fuel performance analysis, to long-term material aging. Multi-physics and multiple dimensional analyses capabilities can be obtained by coupling RELAP-7 and other MOOSE based applications and by leveraging with capabilities developed by other DOE programs. This allows restricting the focus of RELAP-7 to systems analysis-type simulations and gives priority to retain and significantly extend RELAP5's capabilities.« less

  12. Stochastic simulation of spatially correlated geo-processes

    USGS Publications Warehouse

    Christakos, G.

    1987-01-01

    In this study, developments in the theory of stochastic simulation are discussed. The unifying element is the notion of Radon projection in Euclidean spaces. This notion provides a natural way of reconstructing the real process from a corresponding process observable on a reduced dimensionality space, where analysis is theoretically easier and computationally tractable. Within this framework, the concept of space transformation is defined and several of its properties, which are of significant importance within the context of spatially correlated processes, are explored. The turning bands operator is shown to follow from this. This strengthens considerably the theoretical background of the geostatistical method of simulation, and some new results are obtained in both the space and frequency domains. The inverse problem is solved generally and the applicability of the method is extended to anisotropic as well as integrated processes. Some ill-posed problems of the inverse operator are discussed. Effects of the measurement error and impulses at origin are examined. Important features of the simulated process as described by geomechanical laws, the morphology of the deposit, etc., may be incorporated in the analysis. The simulation may become a model-dependent procedure and this, in turn, may provide numerical solutions to spatial-temporal geologic models. Because the spatial simu??lation may be technically reduced to unidimensional simulations, various techniques of generating one-dimensional realizations are reviewed. To link theory and practice, an example is computed in detail. ?? 1987 International Association for Mathematical Geology.

  13. Meshless Method for Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means for analyzing the engineering problems and the cases that experimental analysis is not practical. There are so many sophisticated and accurate numerical schemes, which do these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques. Mesh generation is an essential preprocessing part to discretize the computation domain for these conventional methods. However, when dealing with mesh-based complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust, yet simple numerical approach is used to simulate problems in an easier manner for even complex problem. The meshless, or meshfree, method is one such development that is becoming the focus of much research in the recent years. The biggest advantage of meshfree methods is to circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and understandable for everyone. These algorithms have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge was considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high speed compressible flow problems. To solve this discontinuity problem, this research study deals with the implementation of a conservative meshless method and its applications in computational fluid dynamics (CFD). One of the most common types of collocating meshless method the RBF-DQ, is used to approximate the spatial derivatives. The issue with meshless methods when dealing with highly convective cases is that they cannot distinguish the influence of fluid flow from upstream or downstream and some methodology is needed to make the scheme stable. Therefore, an upwinding scheme similar to one used in the finite volume method is added to capture steep gradient or shocks. This scheme creates a flexible algorithm within which a wide range of numerical flux schemes, such as those commonly used in the finite volume method, can be employed. In addition, a blended RBF is used to decrease the dissipation ensuing from the use of a low shape parameter. All of these steps are formulated for the Euler equation and a series of test problems used to confirm convergence of the algorithm. The present scheme was first employed on several incompressible benchmarks to validate the framework. The application of this algorithm is illustrated by solving a set of incompressible Navier-Stokes problems. Results from the compressible problem are compared with the exact solution for the flow over a ramp and compared with solutions of finite volume discretization and the discontinuous Galerkin method, both requiring a mesh. 
The applicability of the algorithm and its robustness are shown to be applied to complex problems.

  14. Numerical design and analysis of parasitic mode oscillations for 95 GHz gyrotron beam tunnel

    NASA Astrophysics Data System (ADS)

    Kumar, Nitin; Singh, Udaybir; Yadav, Vivek; Kumar, Anil; Sinha, A. K.

    2013-05-01

    The beam tunnel, equipped with the high lossy ceramics, is designed for 95 GHz gyrotron. The geometry of the beam tunnel is optimized considering the maximum RF absorption (ideally 100%) and the suppression of parasitic oscillations. The excitation of parasitic modes is a concerning problem for high frequency, high power gyrotrons. Considering the problem of parasitic mode excitation in beam tunnel, a detail analysis is performed for the suppression of these kinds of modes. Trajectory code EGUN and CST Microwave Studio are used for the simulations of electron beam trajectory and electromagnetic analysis, respectively.

  15. Twentieth Annual Conference on Manual Control, Volume 2

    NASA Technical Reports Server (NTRS)

    Hart, S. G. (Compiler); Hartzell, E. J. (Compiler)

    1984-01-01

    Volume II contains thirty two complete manuscripts and five abstracts. The topics covered include the application of event-related brain potential analysis to operational problems, the subjective evaluation of workload, mental models, training, crew interaction analysis, multiple task performance, and the measurement of workload and performance in simulation.

  16. Parallel AFSA algorithm accelerating based on MIC architecture

    NASA Astrophysics Data System (ADS)

    Zhou, Junhao; Xiao, Hong; Huang, Yifan; Li, Yongzhao; Xu, Yuanrui

    2017-05-01

    Analysis AFSA past for solving the traveling salesman problem, the algorithm efficiency is often a big problem, and the algorithm processing method, it does not fully responsive to the characteristics of the traveling salesman problem to deal with, and therefore proposes a parallel join improved AFSA process. The simulation with the current TSP known optimal solutions were analyzed, the results showed that the AFSA iterations improved less, on the MIC cards doubled operating efficiency, efficiency significantly.

  17. Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment

    NASA Astrophysics Data System (ADS)

    Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.

    2017-03-01

    Lot streaming in Hybrid flowshop [HFS] is encountered in many real world problems. This paper deals with a heuristic approach for Lot streaming based on critical machine consideration for a two stage Hybrid Flowshop. The first stage has two identical parallel machines and the second stage has only one machine. In the second stage machine is considered as a critical by valid reasons these kind of problems is known as NP hard. A mathematical model developed for the selected problem. The simulation modelling and analysis were carried out in Extend V6 software. The heuristic developed for obtaining optimal lot streaming schedule. The eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real time simulation experiments. All possible lot streaming strategies and possible sequence under each lot streaming strategy were simulated and examined. The heuristic consistently yielded optimal schedule consistently in all eleven cases. The identification procedure for select best lot streaming strategy was suggested.

  18. Assessing the detail needed to capture rainfall-runoff dynamics with physics-based hydrologic response simulation

    USGS Publications Warehouse

    Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.

    2011-01-01

    Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.

  19. Lattice Boltzmann computation of creeping fluid flow in roll-coating applications

    NASA Astrophysics Data System (ADS)

    Rajan, Isac; Kesana, Balashanker; Perumal, D. Arumuga

    2018-04-01

    Lattice Boltzmann Method (LBM) has advanced as a class of Computational Fluid Dynamics (CFD) methods used to solve complex fluid systems and heat transfer problems. It has ever-increasingly attracted the interest of researchers in computational physics to solve challenging problems of industrial and academic importance. In this current study, LBM is applied to simulate the creeping fluid flow phenomena commonly encountered in manufacturing technologies. In particular, we apply this novel method to simulate the fluid flow phenomena associated with the "meniscus roll coating" application. This prevalent industrial problem encountered in polymer processing and thin film coating applications is modelled as standard lid-driven cavity problem to which creeping flow analysis is applied. This incompressible viscous flow problem is studied in various speed ratios, the ratio of upper to lower lid speed in two different configurations of lid movement - parallel and anti-parallel wall motion. The flow exhibits interesting patterns which will help in design of roll coaters.

  20. A Program for Solving the Brain Ischemia Problem

    PubMed Central

    DeGracia, Donald J.

    2013-01-01

    Our recently described nonlinear dynamical model of cell injury is here applied to the problems of brain ischemia and neuroprotection. We discuss measurement of global brain ischemia injury dynamics by time course analysis. Solutions to proposed experiments are simulated using hypothetical values for the model parameters. The solutions solve the global brain ischemia problem in terms of “master bifurcation diagrams” that show all possible outcomes for arbitrary durations of all lethal cerebral blood flow (CBF) decrements. The global ischemia master bifurcation diagrams: (1) can map to a single focal ischemia insult, and (2) reveal all CBF decrements susceptible to neuroprotection. We simulate measuring a neuroprotectant by time course analysis, which revealed emergent nonlinear effects that set dynamical limits on neuroprotection. Using over-simplified stroke geometry, we calculate a theoretical maximum protection of approximately 50% recovery. We also calculate what is likely to be obtained in practice and obtain 38% recovery; a number close to that often reported in the literature. The hypothetical examples studied here illustrate the use of the nonlinear cell injury model as a fresh avenue of approach that has the potential, not only to solve the brain ischemia problem, but also to advance the technology of neuroprotection. PMID:24961411

  1. Spectral simulation of unsteady compressible flow past a circular cylinder

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Gottlieb, David

    1990-01-01

    An unsteady compressible viscous wake flow past a circular cylinder was successfully simulated using spectral methods. A new approach in using the Chebyshev collocation method for periodic problems is introduced. It was further proved that the eigenvalues associated with the differentiation matrix are purely imaginary, reflecting the periodicity of the problem. It was been shown that the solution of a model problem has exponential growth in time if improper boundary conditions are used. A characteristic boundary condition, which is based on the characteristics of the Euler equations of gas dynamics, was derived for the spectral code. The primary vortex shedding frequency computed agrees well with the results in the literature for Mach = 0.4, Re = 80. No secondary frequency is observed in the power spectrum analysis of the pressure data.

  2. A Random Variable Approach to Nuclear Targeting and Survivability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Undem, Halvor A.

    We demonstrate a common mathematical formalism for analyzing problems in nuclear survivability and targeting. This formalism, beginning with a random variable approach, can be used to interpret past efforts in nuclear-effects analysis, including targeting analysis. It can also be used to analyze new problems brought about by the post Cold War Era, such as the potential effects of yield degradation in a permanently untested nuclear stockpile. In particular, we illustrate the formalism through four natural case studies or illustrative problems, linking these to actual past data, modeling, and simulation, and suggesting future uses. In the first problem, we illustrate themore » case of a deterministically modeled weapon used against a deterministically responding target. Classic "Cookie Cutter" damage functions result. In the second problem, we illustrate, with actual target test data, the case of a deterministically modeled weapon used against a statistically responding target. This case matches many of the results of current nuclear targeting modeling and simulation tools, including the result of distance damage functions as complementary cumulative lognormal functions in the range variable. In the third problem, we illustrate the case of a statistically behaving weapon used against a deterministically responding target. In particular, we show the dependence of target damage on weapon yield for an untested nuclear stockpile experiencing yield degradation. Finally, and using actual unclassified weapon test data, we illustrate in the fourth problem the case of a statistically behaving weapon used against a statistically responding target.« less

  3. Statistical Emulator for Expensive Classification Simulators

    NASA Technical Reports Server (NTRS)

    Ross, Jerret; Samareh, Jamshid A.

    2016-01-01

    Expensive simulators prevent any kind of meaningful analysis to be performed on the phenomena they model. To get around this problem the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980's. Presently, simulators have become more and more complex and as a result running a single example on these simulators is very expensive and can take days to weeks or even months. Many new techniques have been introduced, termed criteria, which sequentially select the next best (most informative to the emulator) point that should be run on the simulator. These criteria methods allow for the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.

  4. Processor farming in two-level analysis of historical bridge

    NASA Astrophysics Data System (ADS)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macro-scopic integration point or each finite element is connected with a certain meso-scopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem provides then effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real world masonry structure is illustrated by an analysis of Charles bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3. and it is a part of a complex hygro-thermo-mechanical analysis which has been developed to determine the influence of climatic loading on the current state of the bridge.

  5. Efficient Integration of Coupled Electrical-Chemical Systems in Multiscale Neuronal Simulations

    PubMed Central

    Brocke, Ekaterina; Bhalla, Upinder S.; Djurfeldt, Mikael; Hellgren Kotaleski, Jeanette; Hanke, Michael

    2016-01-01

    Multiscale modeling and simulations in neuroscience is gaining scientific attention due to its growing importance and unexplored capabilities. For instance, it can help to acquire better understanding of biological phenomena that have important features at multiple scales of time and space. This includes synaptic plasticity, memory formation and modulation, homeostasis. There are several ways to organize multiscale simulations depending on the scientific problem and the system to be modeled. One of the possibilities is to simulate different components of a multiscale system simultaneously and exchange data when required. The latter may become a challenging task for several reasons. First, the components of a multiscale system usually span different spatial and temporal scales, such that rigorous analysis of possible coupling solutions is required. Then, the components can be defined by different mathematical formalisms. For certain classes of problems a number of coupling mechanisms have been proposed and successfully used. However, a strict mathematical theory is missing in many cases. Recent work in the field has not so far investigated artifacts that may arise during coupled integration of different approximation methods. Moreover, in neuroscience, the coupling of widely used numerical fixed step size solvers may lead to unexpected inefficiency. In this paper we address the question of possible numerical artifacts that can arise during the integration of a coupled system. We develop an efficient strategy to couple the components comprising a multiscale test problem in neuroscience. We introduce an efficient coupling method based on the second-order backward differentiation formula (BDF2) numerical approximation. The method uses an adaptive step size integration with an error estimation proposed by Skelboe (2000). The method shows a significant advantage over conventional fixed step size solvers used in neuroscience for similar problems. We explore different coupling strategies that define the organization of computations between system components. We study the importance of an appropriate approximation of exchanged variables during the simulation. The analysis shows a substantial impact of these aspects on the solution accuracy in the application to our multiscale neuroscientific test problem. We believe that the ideas presented in the paper may essentially contribute to the development of a robust and efficient framework for multiscale brain modeling and simulations in neuroscience. PMID:27672364

  6. Efficient Integration of Coupled Electrical-Chemical Systems in Multiscale Neuronal Simulations.

    PubMed

    Brocke, Ekaterina; Bhalla, Upinder S; Djurfeldt, Mikael; Hellgren Kotaleski, Jeanette; Hanke, Michael

    2016-01-01

    Multiscale modeling and simulations in neuroscience is gaining scientific attention due to its growing importance and unexplored capabilities. For instance, it can help to acquire better understanding of biological phenomena that have important features at multiple scales of time and space. This includes synaptic plasticity, memory formation and modulation, homeostasis. There are several ways to organize multiscale simulations depending on the scientific problem and the system to be modeled. One of the possibilities is to simulate different components of a multiscale system simultaneously and exchange data when required. The latter may become a challenging task for several reasons. First, the components of a multiscale system usually span different spatial and temporal scales, such that rigorous analysis of possible coupling solutions is required. Then, the components can be defined by different mathematical formalisms. For certain classes of problems a number of coupling mechanisms have been proposed and successfully used. However, a strict mathematical theory is missing in many cases. Recent work in the field has not so far investigated artifacts that may arise during coupled integration of different approximation methods. Moreover, in neuroscience, the coupling of widely used numerical fixed step size solvers may lead to unexpected inefficiency. In this paper we address the question of possible numerical artifacts that can arise during the integration of a coupled system. We develop an efficient strategy to couple the components comprising a multiscale test problem in neuroscience. We introduce an efficient coupling method based on the second-order backward differentiation formula (BDF2) numerical approximation. The method uses an adaptive step size integration with an error estimation proposed by Skelboe (2000). The method shows a significant advantage over conventional fixed step size solvers used in neuroscience for similar problems. We explore different coupling strategies that define the organization of computations between system components. We study the importance of an appropriate approximation of exchanged variables during the simulation. The analysis shows a substantial impact of these aspects on the solution accuracy in the application to our multiscale neuroscientific test problem. We believe that the ideas presented in the paper may essentially contribute to the development of a robust and efficient framework for multiscale brain modeling and simulations in neuroscience.

  7. Cloud-Based Orchestration of a Model-Based Power and Data Analysis Toolchain

    NASA Technical Reports Server (NTRS)

    Post, Ethan; Cole, Bjorn; Dinkel, Kevin; Kim, Hongman; Lee, Erich; Nairouz, Bassem

    2016-01-01

    The proposed Europa Mission concept contains many engineering and scientific instruments that consume varying amounts of power and produce varying amounts of data throughout the mission. System-level power and data usage must be well understood and analyzed to verify design requirements. Numerous cross-disciplinary tools and analysis models are used to simulate the system-level spacecraft power and data behavior. This paper addresses the problem of orchestrating a consistent set of models, tools, and data in a unified analysis toolchain when ownership is distributed among numerous domain experts. An analysis and simulation environment was developed as a way to manage the complexity of the power and data analysis toolchain and to reduce the simulation turnaround time. A system model data repository is used as the trusted store of high-level inputs and results while other remote servers are used for archival of larger data sets and for analysis tool execution. Simulation data passes through numerous domain-specific analysis tools and end-to-end simulation execution is enabled through a web-based tool. The use of a cloud-based service facilitates coordination among distributed developers and enables scalable computation and storage needs, and ensures a consistent execution environment. Configuration management is emphasized to maintain traceability between current and historical simulation runs and their corresponding versions of models, tools and data.

  8. Partition functions of thermally dissociating diatomic molecules and related momentum problem

    NASA Astrophysics Data System (ADS)

    Buchowiecki, Marcin

    2017-11-01

    The anharmonicity and ro-vibrational coupling in ro-vibrational partition functions of diatomic molecules are analyzed for the high temperatures of the thermal dissociation regime. The numerically exact partition functions and thermal energies are calculated. At the high temperatures the proper integration of momenta is important if the partition function of the molecule, understood as bounded system, is to be obtained. The problem of proper treatment of momentum is crucial for correctness of high temperature molecular simulations as the decomposition of simulated molecule have to be avoided; the analysis of O2, H2+, and NH3 molecules allows to show importance of βDe value.

  9. High Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    In order to predict the dynamic response of a flexible structure in a fluid flow, the equations of motion of the structure and the fluid must be solved simultaneously. In this paper, we present several partitioned procedures for time-integrating this focus coupled problem and discuss their merits in terms of accuracy, stability, heterogeneous computing, I/O transfers, subcycling, and parallel processing. All theoretical results are derived for a one-dimensional piston model problem with a compressible flow, because the complete three-dimensional aeroelastic problem is difficult to analyze mathematically. However, the insight gained from the analysis of the coupled piston problem and the conclusions drawn from its numerical investigation are confirmed with the numerical simulation of the two-dimensional transient aeroelastic response of a flexible panel in a transonic nonlinear Euler flow regime.

  10. Techniques for shuttle trajectory optimization

    NASA Technical Reports Server (NTRS)

    Edge, E. R.; Shieh, C. J.; Powers, W. F.

    1973-01-01

    The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.

  11. DLTPulseGenerator: A library for the simulation of lifetime spectra based on detector-output pulses

    NASA Astrophysics Data System (ADS)

    Petschke, Danny; Staab, Torsten E. M.

    2018-01-01

    The quantitative analysis of lifetime spectra relevant in both life and materials sciences presents one of the ill-posed inverse problems and, hence, leads to most stringent requirements on the hardware specifications and the analysis algorithms. Here we present DLTPulseGenerator, a library written in native C++ 11, which provides a simulation of lifetime spectra according to the measurement setup. The simulation is based on pairs of non-TTL detector output-pulses. Those pulses require the Constant Fraction Principle (CFD) for the determination of the exact timing signal and, thus, the calculation of the time difference i.e. the lifetime. To verify the functionality, simulation results were compared to experimentally obtained data using Positron Annihilation Lifetime Spectroscopy (PALS) on pure tin.

  12. Space Shuttle/TDRSS communication and tracking systems analysis

    NASA Astrophysics Data System (ADS)

    Lindsey, W. C.; Chie, C. M.; Cideciyan, R.; Dessouky, K.; Su, Y. T.; Tsang, C. S.

    1986-04-01

    In order to evaluate the technical and operational problem areas and provide a recommendation, the enhancements to the Tracking and Data Relay Satellite System (TDRSS) and Shuttle must be evaluated through simulation and analysis. These enhancement techniques must first be characterized, then modeled mathematically, and finally updated into LinCsim (analytical simulation package). The LinCsim package can then be used as an evaluation tool. Three areas of potential enhancements were identified: shuttle payload accommodations, TDRSS SSA and KSA services, and shuttle tracking system and navigation sensors. Recommendations for each area were discussed.

  13. Space Shuttle/TDRSS communication and tracking systems analysis

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.; Chie, C. M.; Cideciyan, R.; Dessouky, K.; Su, Y. T.; Tsang, C. S.

    1986-01-01

    In order to evaluate the technical and operational problem areas and provide a recommendation, the enhancements to the Tracking and Data Relay Satellite System (TDRSS) and Shuttle must be evaluated through simulation and analysis. These enhancement techniques must first be characterized, then modeled mathematically, and finally updated into LinCsim (analytical simulation package). The LinCsim package can then be used as an evaluation tool. Three areas of potential enhancements were identified: shuttle payload accommodations, TDRSS SSA and KSA services, and shuttle tracking system and navigation sensors. Recommendations for each area were discussed.

  14. Design and Analysis of Windmill Simulation and Pole by Solidwork Program

    NASA Astrophysics Data System (ADS)

    Mulyana, Tatang; Sebayang, Darwin; R, Akmal Muamar. D.; A, Jauharah H. D.; Yahya Shomit, M.

    2018-03-01

    Indonesia, an archipelagic country, has great wind energy potential. For micro-scale power generation, the energy obtained from a windmill can be connected directly to the electrical load and used without problems. For macro-scale power generation, however, problems arise, such as the design of the blade shape: accurate simulation and experiment are needed to produce blades with a special shape that can capture wind energy. In addition, daily and yearly wind-rate calculations are required to determine the best latitude and longitude at which to build windmills. This paper presents a solution to the problem of producing a windmill that is practical to build and mobile enough to be relocated. Before a windmill prototype is built, the best windmill design should already have been obtained; simulation of the designed windmill is therefore of crucial importance. SolidWorks SimulationXpress is a tool that generates simulations of a design. Factors that can affect a design result include the strength of each part, material selection, the applied load, the safety of the design, and changes in shape due to the load applied to the design. In this paper, static and thermal simulations of the designed windmill are presented. The simulation results show that the design is very satisfactory, so that the prototyping and fabrication process can proceed.

  15. Risk Analysis for Resource Planning Optimization

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming

    2008-01-01

    The main purpose of this paper is to introduce a risk management approach that allows planners to quantify the risk and efficiency tradeoff in the presence of uncertainties, and to make forward-looking choices in the development and execution of the plan. The paper demonstrates a planning and risk analysis framework that tightly integrates mathematical optimization, empirical simulation, and theoretical analysis techniques to solve complex problems.

  16. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  17. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  18. Influence of Multidimensionality on Convergence of Sampling in Protein Simulation

    NASA Astrophysics Data System (ADS)

    Metsugi, Shoichi

    2005-06-01

    We study the problem of convergence of sampling in protein simulation originating in the multidimensionality of protein’s conformational space. Since several important physical quantities are given by second moments of dynamical variables, we attempt to obtain the time of simulation necessary for their sufficient convergence. We perform a molecular dynamics simulation of a protein and the subsequent principal component (PC) analysis as a function of simulation time T. As T increases, PC vectors with smaller amplitude of variations are identified and their amplitudes are equilibrated before identifying and equilibrating vectors with larger amplitude of variations. This sequential identification and equilibration mechanism makes protein simulation a useful method although it has an intrinsic multidimensional nature.

  19. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
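
    To make the sequential burden concrete, the following sketch (hypothetical, not from the report) shows the defining structure of a QSTS study: each power-flow solution depends on controller state carried over from the previous step, which is why a yearlong 1-second series of roughly 31.5 million solutions cannot simply be parallelized across time.

      def solve_power_flow(load_kw, tap):
          # Toy stand-in for a real distribution power-flow solver:
          # returns a per-unit bus voltage for a given load and tap position.
          return 1.0 - 1.0e-4 * load_kw + 6.25e-3 * tap

      def qsts(loads, v_lo=0.95, v_hi=1.05):
          tap, voltages = 0, []                # regulator state persists across steps
          for load in loads:                   # one step per second of the year
              v = solve_power_flow(load, tap)
              if v < v_lo:
                  tap += 1                     # time-dependent controller action
              elif v > v_hi:
                  tap -= 1
              voltages.append(v)
          return voltages

      demo = [400.0 + 300.0 * t / 3600.0 for t in range(3600)]   # one hour of load
      v = qsts(demo)
      print(min(v), max(v))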

  20. Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH

    NASA Astrophysics Data System (ADS)

    Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.

    2018-01-01

    The smoothed particle hydrodynamics (SPH) method with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, in most simulations, uniform particle distributions are used, and multi-resolution, which can markedly improve local accuracy and overall computational efficiency, has seldom been applied. In this paper, a dynamic particle splitting method is applied that allows for the simulation of both hydrostatic and hydrodynamic problems. In the splitting algorithm, when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. In the particle splitting process, conservation of mass, momentum, and energy is ensured. Based on an error analysis, the splitting technique is designed to allow optimal accuracy at the interface between coarse and refined particles, which is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated on five basic cases, which demonstrate that the present SPH model with the particle splitting technique has high accuracy and efficiency and is capable of simulating a wide range of hydrodynamic problems.
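
    A minimal sketch of the splitting step (our illustration; the daughter stencil and spacing coefficient are assumptions, not the paper's exact scheme): each daughter receives a quarter of the mother's mass and inherits her velocity and specific internal energy, so total mass, momentum, and energy are conserved by construction.

      import numpy as np

      def split_particle(pos, mass, vel, e_int, dx, alpha=0.35):
          # Place four daughters on a symmetric square around the mother.
          offsets = alpha * dx * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
          return [{"pos": pos + off,
                   "mass": mass / 4.0,       # mass conservation
                   "vel": vel.copy(),        # momentum conservation
                   "e": e_int}               # energy conservation (specific energy)
                  for off in offsets]

      daughters = split_particle(np.zeros(2), 4.0, np.array([1.0, 0.0]), 2.5, dx=0.1)
      print(sum(p["mass"] for p in daughters))   # 4.0, as for the mother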

  1. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
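
    As an illustration of the correlation portion of such a study (a toy response function, not the BISON benchmark), Pearson and Spearman coefficients can be computed per input from the same kind of sample matrix:

      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(300, 3))           # 300 samples of 3 toy inputs
      y = 2.0 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

      for j in range(X.shape[1]):              # rank inputs by linear/monotone effect
          p = pearsonr(X[:, j], y)[0]
          s = spearmanr(X[:, j], y)[0]
          print(f"input {j}: Pearson={p:+.2f}  Spearman={s:+.2f}")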

  2. Dynamic analysis of space structures including elastic, multibody, and control behavior

    NASA Technical Reports Server (NTRS)

    Pinson, Larry; Soosaar, Keto

    1989-01-01

    The problem is to develop analysis methods, modeling strategies, and simulation tools to predict with assurance the on-orbit performance and integrity of large complex space structures that cannot be verified on the ground. The problem must incorporate large reliable structural models, multi-body flexible dynamics, multi-tier controller interaction, environmental models including 1g and atmosphere, various on-board disturbances, and linkage to mission-level performance codes. All areas are in serious need of work, but the weakest link is multi-body flexible dynamics.

  3. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
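
    The ɛ-constraint idea underlying the method can be sketched on a toy bi-objective problem (our illustration; the simple halving of ɛ below stands in for the paper's adaptive bisection rule): one objective is minimized while the other is bounded by ɛ, and sweeping ɛ traces the Pareto front.

      import numpy as np
      from scipy.optimize import minimize

      f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2        # e.g. fuel burn
      f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2        # e.g. flight time

      def eps_constraint(eps):
          # Minimize f1 subject to f2(x) <= eps (SLSQP handles the inequality).
          con = {"type": "ineq", "fun": lambda x: eps - f2(x)}
          res = minimize(f1, x0=[0.5, 0.5], constraints=[con])
          return f1(res.x), f2(res.x)

      eps = 2.0
      for _ in range(5):                                  # halve eps to walk the front
          v1, v2 = eps_constraint(eps)
          print(f"eps={eps:.3f}  f1={v1:.3f}  f2={v2:.3f}")
          eps *= 0.5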

  4. An examination of problem-based teaching and learning in population genetics and evolution using EVOLVE, a computer simulation

    NASA Astrophysics Data System (ADS)

    Soderberg, Patti; Price, Frank

    2003-01-01

    This study describes a lesson in which students engaged in inquiry in evolutionary biology in order to develop a better understanding of the concepts and reasoning skills necessary to support knowledge claims about changes in the genetic structure of populations, also known as microevolution. This paper describes how a software simulation called EVOLVE can be used to foster discussions about the conceptual knowledge used by advanced secondary or introductory college students when investigating the effects of natural selection on hypothetical populations over time. An experienced professor's use and rationale of a problem-based lesson using the simulation is examined. Examples of student misconceptions and naïve (incomplete) conceptions are described and an analysis of the procedural knowledge for experimenting with the computer model is provided. The results of this case study provide a model of how EVOLVE can be used to engage students in a complex problem-solving experience that encourages student meta-cognitive reflection about their understanding of evolution at the population level. Implications for teaching are provided and ways to improve student learning and problem solving in population genetics are suggested.

  5. Principle Component Analysis with Incomplete Data: A simulation of R pcaMethods package in Constructing an Environmental Quality Index with Missing Data

    EPA Science Inventory

    Missing data is a common problem in the application of statistical techniques. In principal component analysis (PCA), a technique for dimensionality reduction, incomplete data points are either discarded or imputed using interpolation methods. Such approaches are less valid when ...

  6. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, M.; Klimeck, G.; Hanks, D.

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment.

  7. Design and simulation of a gyroklystron amplifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauhan, M. S., E-mail: mschauhan.rs.ece@iitbhu.ac.in; Swati, M. V.; Jain, P. K.

    2015-03-15

    In the present paper, a design methodology for the gyroklystron amplifier has been described and subsequently used for the design of a typically selected 200 kW, Ka-band, four-cavity gyroklystron amplifier. This conceptual device design has been validated through 3D particle-in-cell (PIC) simulation and nonlinear analysis. The commercially available PIC simulation code “MAGIC” has been used for the electromagnetic study at different locations of the device RF interaction structure for the beam-absent case, i.e., the eigenmode study, as well as for the electron beam and RF wave interaction behaviour study in the beam-present case of the gyroklystron. In addition, the practical problem of misalignment of the RF cavities with the drift tubes within the tube has also been investigated and its effect on device performance studied. The analytical and simulation results confirmed the validity of the gyroklystron device design. The PIC simulation of the present gyroklystron produced a stable RF output power of ∼218 kW for 0% velocity spread at 35 GHz, with ∼45 dB gain, 37% efficiency, and a bandwidth of 0.3% for a 70 kV, 8.2 A gyrating electron beam. The simulated values of RF output power were found to agree with the nonlinear analysis results within ∼5%. Further, the PIC simulation was extended to study the practical problem of misalignment between the cavity axis and the drift tube axis of the gyroklystron amplifier; it was found that the RF output power is more sensitive to misalignments than the device bandwidth. The gyroklystron device design, nonlinear analysis, and 3D PIC simulation using a commercially available code, systematically described in the present paper, should be of use to high-power gyro-amplifier tube designers and research scientists.

  8. Teaching Workflow Analysis and Lean Thinking via Simulation: A Formative Evaluation

    PubMed Central

    Campbell, Robert James; Gantt, Laura; Congdon, Tamara

    2009-01-01

    This article presents the rationale for the design and development of a video simulation used to teach lean thinking and workflow analysis to health services and health information management students enrolled in a course on the management of health information. The discussion includes a description of the design process, a brief history of the use of simulation in healthcare, and an explanation of how video simulation can be used to generate experiential learning environments. Based on the results of a survey given to 75 students as part of a formative evaluation, the video simulation was judged effective because it allowed students to visualize a real-world process (concrete experience), contemplate the scenes depicted in the video along with the concepts presented in class in a risk-free environment (reflection), develop hypotheses about why problems occurred in the workflow process (abstract conceptualization), and develop solutions to redesign a selected process (active experimentation). PMID:19412533

  9. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
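
    A minimal sketch of the actor pattern the paper evaluates (illustrative Python, not the paper's code): each actor owns its state and a mailbox and runs in its own thread of control, and all communication is via messages rather than direct method calls.

      import threading, queue

      class Actor(threading.Thread):
          # An active object: private state, a mailbox, its own thread.
          def __init__(self):
              super().__init__(daemon=True)
              self.mailbox = queue.Queue()

          def send(self, msg):
              self.mailbox.put(msg)            # the only way to talk to an actor

          def run(self):
              while True:
                  msg = self.mailbox.get()
                  if msg is None:              # poison pill terminates the actor
                      break
                  self.handle(msg)

      class Integrator(Actor):
          def __init__(self, name):
              super().__init__()
              self.name, self.state = name, 0.0

          def handle(self, dt):
              self.state += dt                 # toy "integration step"
              print(self.name, self.state)

      a = Integrator("component-A"); a.start()
      for dt in (0.1, 0.2, 0.3):
          a.send(dt)
      a.send(None); a.join()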

  10. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
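
    The flavor of such regularization can be conveyed with a far simpler shrinkage estimator (our stand-in, not the paper's hierarchical model): the sample covariance is pulled toward a structured target, trading a little bias for a large variance reduction when the number of variables far exceeds the number of samples.

      import numpy as np

      def shrunk_covariance(X, lam=0.2):
          # Convex combination of the sample covariance and a scaled identity.
          S = np.cov(X, rowvar=False)
          target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
          return (1.0 - lam) * S + lam * target

      rng = np.random.default_rng(0)
      X = rng.normal(size=(30, 100))           # n=30 samples, p=100 variables
      S = np.cov(X, rowvar=False)              # rank-deficient, unusable directly
      S_hat = shrunk_covariance(X)             # full rank, well conditioned
      print(np.linalg.matrix_rank(S), np.linalg.matrix_rank(S_hat))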

  11. Mapping an Experiment-Based Assessment of Collaborative Behavior onto Collaborative Problem Solving in PISA 2015: A Cluster Analysis Approach for Collaborator Profiles

    ERIC Educational Resources Information Center

    Herborn, Katharina; Mustafic, Maida; Greiff, Samuel

    2017-01-01

    Collaborative problem solving (CPS) assessment is a new academic research field with a number of educational implications. In 2015, the Programme for International Student Assessment (PISA) assessed CPS with a computer-simulated human-agent (H-A) approach that claimed to measure 12 individual CPS skills for the first time. After reviewing the…

  12. MSC products for the simulation of tire behavior

    NASA Technical Reports Server (NTRS)

    Muskivitch, John C.

    1995-01-01

    The modeling of tires and the simulation of tire behavior are complex problems. The MacNeal-Schwendler Corporation (MSC) has a number of finite element analysis products that can be used to address the complexities of tire modeling and simulation. While there are many similarities between the products, each product has a number of capabilities that uniquely enable it to be used for a specific aspect of tire behavior. This paper discusses the following programs: (1) MSC/NASTRAN - general purpose finite element program for linear and nonlinear static and dynamic analysis; (2) MSC/ABAQUS - nonlinear statics and dynamics finite element program; (3) MSC/PATRAN AFEA (Advanced Finite Element Analysis) - general purpose finite element program with a subset of linear and nonlinear static and dynamic analysis capabilities with an integrated version of MSC/PATRAN for pre- and post-processing; and (4) MSC/DYTRAN - nonlinear explicit transient dynamics finite element program.

  13. Lattice Commissioning Strategy Simulation for the B Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M.; Whittum, D.; Yan, Y.

    2011-08-26

    To prepare for the PEP-II turn on, we have studied one commissioning strategy with simulated lattice errors. Features such as difference and absolute orbit analysis and correction are discussed. To prepare for the commissioning of the PEP-II injection line and high energy ring (HER), we have developed a system for on-line orbit analysis by merging two existing codes: LEGO and RESOLVE. With the LEGO-RESOLVE system, we can study the problem of finding quadrupole alignment and beam position monitor (BPM) offset errors with simulated data. We have increased the speed and versatility of the orbit analysis process by using a command file written in a script language designed specifically for RESOLVE. In addition, we have interfaced the LEGO-RESOLVE system to the control system of the B-Factory. In this paper, we describe online analysis features of the LEGO-RESOLVE system and present examples of practical applications.

  14. Towards Solving the Mixing Problem in the Decomposition of Geophysical Time Series by Independent Component Analysis

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)

    2000-01-01

    The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
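
    The experiment is easy to reproduce in miniature with scikit-learn (a toy mixing of two sources, not the authors' geophysical data): PCA tends to return decorrelated but still mixed components, while ICA recovers the original sources almost exactly.

      import numpy as np
      from sklearn.decomposition import PCA, FastICA

      t = np.linspace(0.0, 8.0, 2000)
      s1 = np.sign(np.sin(3.0 * t))                       # independent source 1
      s2 = np.sin(5.0 * t)                                # independent source 2
      X = np.c_[s1, s2] @ np.array([[1.0, 0.6],
                                    [0.4, 1.0]])          # observed linear sum

      pcs = PCA(n_components=2).fit_transform(X)
      ics = FastICA(n_components=2, random_state=0).fit_transform(X)

      corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
      best = lambda Y: max(corr(Y[:, 0], s1), corr(Y[:, 1], s1))
      print("PCA:", round(best(pcs), 2), " ICA:", round(best(ics), 2))  # ICA ~ 1.0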

  15. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    PubMed

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher's discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply amounts to solving a convex programming problem, and a closed-form solution is guaranteed for this problem. Moreover, we also generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, yielding the proposed L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  16. Verification assessment of piston boundary conditions for Lagrangian simulation of compressible flow similarity solutions

    DOE PAGES

    Ramsey, Scott D.; Ivancic, Philip R.; Lilieholm, Jennifer F.

    2015-12-10

    This work is concerned with the use of similarity solutions of the compressible flow equations as benchmarks or verification test problems for finite-volume compressible flow simulation software. In practice, this effort can be complicated by the infinite spatial/temporal extent of many candidate solutions or “test problems.” Methods can be devised with the intention of ameliorating this inconsistency with the finite nature of computational simulation; the exact strategy will depend on the code and problem archetypes under investigation. For example, self-similar shock wave propagation can be represented in Lagrangian compressible flow simulations as rigid boundary-driven flow, even if no such “piston” is present in the counterpart mathematical similarity solution. The purpose of this work is to investigate in detail the methodology of representing self-similar shock wave propagation as a piston-driven flow in the context of various test problems featuring simple closed-form solutions of infinite spatial/temporal extent. The closed-form solutions allow for the derivation of similarly closed-form piston boundary conditions (BCs) for use in Lagrangian compressible flow solvers. Finally, the consequences of utilizing these BCs (as opposed to directly initializing the self-similar solution in a computational spatial grid) are investigated in terms of common code verification analysis metrics (e.g., shock strength/position errors and global convergence rates).
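
    One of the metrics named above, the global convergence rate, reduces to a small computation (illustrative numbers, not the paper's results): fit the error-versus-resolution data in log-log space and read off the slope as the observed order of accuracy.

      import numpy as np

      def observed_order(h, err):
          # Fit err ~ C * h^p in log space; the slope p is the observed rate.
          p, _ = np.polyfit(np.log(h), np.log(err), 1)
          return p

      h = np.array([0.04, 0.02, 0.01, 0.005])             # grid spacings
      err = np.array([8.1e-3, 2.2e-3, 5.6e-4, 1.4e-4])    # e.g. shock-position errors
      print(f"observed order ~ {observed_order(h, err):.2f}")   # close to 2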

  17. Verification assessment of piston boundary conditions for Lagrangian simulation of compressible flow similarity solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsey, Scott D.; Ivancic, Philip R.; Lilieholm, Jennifer F.

    This work is concerned with the use of similarity solutions of the compressible flow equations as benchmarks or verification test problems for finite-volume compressible flow simulation software. In practice, this effort can be complicated by the infinite spatial/temporal extent of many candidate solutions or “test problems.” Methods can be devised with the intention of ameliorating this inconsistency with the finite nature of computational simulation; the exact strategy will depend on the code and problem archetypes under investigation. For example, self-similar shock wave propagation can be represented in Lagrangian compressible flow simulations as rigid boundary-driven flow, even if no such “piston” is present in the counterpart mathematical similarity solution. The purpose of this work is to investigate in detail the methodology of representing self-similar shock wave propagation as a piston-driven flow in the context of various test problems featuring simple closed-form solutions of infinite spatial/temporal extent. The closed-form solutions allow for the derivation of similarly closed-form piston boundary conditions (BCs) for use in Lagrangian compressible flow solvers. Finally, the consequences of utilizing these BCs (as opposed to directly initializing the self-similar solution in a computational spatial grid) are investigated in terms of common code verification analysis metrics (e.g., shock strength/position errors and global convergence rates).

  18. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G′ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed and optimized for parallel execution using the mapping G → G′, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed and profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors; the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphics Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - The Department of Energy has many simulation codes that must compute faster to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.

  19. Xyce Parallel Electronic Simulator Users' Guide Version 6.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  20. Discrete-event system simulation on small and medium enterprises productivity improvement

    NASA Astrophysics Data System (ADS)

    Sulistio, J.; Hidayah, N. A.

    2017-12-01

    Small and medium industries in Indonesia are currently developing. The problem faced by SMEs is the difficulty of meeting the growing demand coming into the company. Therefore, SMEs need an analysis and evaluation of their production process in order to meet all orders. The purpose of this research is to increase the productivity of the SME production floor by applying discrete-event system simulation. This method is preferred because it can solve complex problems arising from the dynamic and stochastic nature of the system. To increase the credibility of the simulation, the model was validated by comparing the averages of two trials, the variances of two trials, and a chi-square test. Afterwards, the Bonferroni method was applied to develop several alternatives. The article concludes that the productivity of the SME production floor can be increased by up to 50% by adding to the capacity of the dyeing and drying machines.
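
    The kind of experiment described can be prototyped in a few lines with a discrete-event library such as SimPy (an illustrative two-stage dyeing/drying line with made-up process times, not the study's model); doubling machine capacity shortens the batch makespan:

      import random
      import simpy

      def job(env, dye, dry, done):
          with dye.request() as r:                         # dyeing stage
              yield r
              yield env.timeout(random.expovariate(1 / 5.0))
          with dry.request() as r:                         # drying stage
              yield r
              yield env.timeout(random.expovariate(1 / 4.0))
          done.append(env.now)

      def makespan(dye_cap, dry_cap, n_jobs=200):
          random.seed(7)
          env, done = simpy.Environment(), []
          dye = simpy.Resource(env, capacity=dye_cap)
          dry = simpy.Resource(env, capacity=dry_cap)
          for _ in range(n_jobs):
              env.process(job(env, dye, dry, done))
          env.run()                                        # run until all jobs finish
          return max(done)

      print(makespan(1, 1), makespan(2, 2))                # added capacity helps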

  1. Simulation Study on Missile Penetration Based on LS - DYNA

    NASA Astrophysics Data System (ADS)

    Tang, Jue; Sun, Xinli

    2017-12-01

    Penetrating the shell armor is an effective means of destroying hard targets with multiple layers of protection. The penetration process belongs to the research category of high-speed impact dynamics, involving high pressure, high temperature, high speed, and internal material damage, including plugging, penetration, spalling, caving, splashing, and other complex failure forms; its analysis is therefore one of the difficulties in the study of impact dynamics. In this paper, the Lagrangian algorithm and the SPH algorithm are used to analyze a missile penetrating a steel plate. The penetration model of the rocket penetrating the steel plate, the failure modes of the steel plate and the missile, and the advantages and disadvantages of the Lagrangian and SPH algorithms in the simulation of high-speed collision problems are analyzed and compared, providing a reference for the study of collision simulation problems.

  2. Development of Constraint Force Equation Methodology for Application to Multi-Body Dynamics Including Launch Vehicle Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Toniolo, Matthew D.; Tartabini, Paul V.; Roithmayr, Carlos M.; Albertson, Cindy W.; Karlgaard, Christopher D.

    2016-01-01

    The objective of this report is to develop and implement a physics based method for analysis and simulation of multi-body dynamics including launch vehicle stage separation. The constraint force equation (CFE) methodology discussed in this report provides such a framework for modeling constraint forces and moments acting at joints when the vehicles are still connected. Several stand-alone test cases involving various types of joints were developed to validate the CFE methodology. The results were compared with ADAMS(Registered Trademark) and Autolev, two different industry standard benchmark codes for multi-body dynamic analysis and simulations. However, these two codes are not designed for aerospace flight trajectory simulations. After this validation exercise, the CFE algorithm was implemented in Program to Optimize Simulated Trajectories II (POST2) to provide a capability to simulate end-to-end trajectories of launch vehicles including stage separation. The POST2/CFE methodology was applied to the STS-1 Space Shuttle solid rocket booster (SRB) separation and Hyper-X Research Vehicle (HXRV) separation from the Pegasus booster as a further test and validation for its application to launch vehicle stage separation problems. Finally, to demonstrate end-to-end simulation capability, POST2/CFE was applied to the ascent, orbit insertion, and booster return of a reusable two-stage-to-orbit (TSTO) vehicle concept. With these validation exercises, POST2/CFE software can be used for performing conceptual level end-to-end simulations, including launch vehicle stage separation, for problems similar to those discussed in this report.

  3. OASIS - ORBIT ANALYSIS AND SIMULATION SOFTWARE

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1994-01-01

    The Orbit Analysis and Simulation Software, OASIS, is a software system developed for covariance and simulation analyses of problems involving earth satellites, especially the Global Positioning System (GPS). It provides a flexible, versatile and efficient accuracy analysis tool for earth satellite navigation and GPS-based geodetic studies. To make future modifications and enhancements easy, the system is modular, with five major modules: PATH/VARY, REGRES, PMOD, FILTER/SMOOTHER, and OUTPUT PROCESSOR. PATH/VARY generates satellite trajectories. Among the factors taken into consideration are: 1) the gravitational effects of the planets, moon and sun; 2) space vehicle orientation and shapes; 3) solar pressure; 4) solar radiation reflected from the surface of the earth; 5) atmospheric drag; and 6) space vehicle gas leaks. The REGRES module reads the user's input, then determines if a measurement should be made based on geometry and time. PMOD modifies a previously generated REGRES file to facilitate various analysis needs. FILTER/SMOOTHER is especially suited to a multi-satellite precise orbit determination and geodetic-type problems. It can be used for any situation where parameters are simultaneously estimated from measurements and a priori information. Examples of nonspacecraft areas of potential application might be Very Long Baseline Interferometry (VLBI) geodesy and radio source catalogue studies. OUTPUT PROCESSOR translates covariance analysis results generated by FILTER/SMOOTHER into user-desired easy-to-read quantities, performs mapping of orbit covariances and simulated solutions, transforms results into different coordinate systems, and computes post-fit residuals. The OASIS program was developed in 1986. It is designed to be implemented on a DEC VAX 11/780 computer using VAX VMS 3.7 or higher. It can also be implemented on a Micro VAX II provided sufficient disk space is available.

  4. Ultrahigh-Dimensional Multiclass Linear Discriminant Analysis by Pairwise Sure Independence Screening

    PubMed Central

    Pan, Rui; Wang, Hansheng; Li, Runze

    2016-01-01

    This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis under ultrahigh dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real life example on handwritten Chinese character recognition. PMID:28127109
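
    The pairwise idea can be sketched as follows (a simplified illustration, not the authors' exact statistic): score each feature by its largest absolute two-sample t-statistic over all class pairs, and keep only the top-ranked features before running the discriminant analysis.

      import numpy as np
      from itertools import combinations

      def pairwise_screen(X, y, keep=10):
          # Max over class pairs of the per-feature two-sample t-statistic.
          scores = np.zeros(X.shape[1])
          for a, b in combinations(np.unique(y), 2):
              Xa, Xb = X[y == a], X[y == b]
              t = (Xa.mean(0) - Xb.mean(0)) / np.sqrt(
                  Xa.var(0) / len(Xa) + Xb.var(0) / len(Xb) + 1e-12)
              scores = np.maximum(scores, np.abs(t))
          return np.argsort(scores)[::-1][:keep]

      rng = np.random.default_rng(0)
      X = rng.normal(size=(150, 2000))
      y = rng.integers(0, 3, size=150)
      X[y == 2, :5] += 1.5                     # make features 0-4 the relevant ones
      print(pairwise_screen(X, y, keep=5))     # should mostly recover 0-4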

  5. On the Role of Surface Friction in Tropical Intraseasonal Oscillation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Chen, Baode

    1999-01-01

    The Madden-Julian oscillation (MJO), or the tropical intraseasonal oscillation, has attracted much attention, ever since its discovery in the early seventies, for reasons of both scientific understanding and practical forecasts. Among the theoretical interpretations of the MJO, the wave-CISK (conditional instability of the second kind) mechanism is the most popular. The basic idea of the wave-CISK interpretation is that the cooperation between the low-level convergence associated with the eastward moving Kelvin wave and the cumulus convection generates an eastward moving Kelvin-wave-like mode. Later it was recognized that the MJO has an important Rossby-wave-like component. However, linear analysis and numerical simulations based on it (even when conditional heating is used) have revealed two problems with the wave-CISK interpretation, i.e., excessive speed and the most preferred scale being zero or grid scale. Chao (1995) presented a discussion of these problems and attributed them to the particular type of expression for the cumulus heating used in the linear analyses and numerical studies (i.e., convective heating proportional to low-level convergence with a fixed vertical heating profile). It should be pointed out that in the relatively successful simulations of the MJO with general circulation models, the problem of grid scale being the most preferred scale does not appear, and the problem of excessive speed is not as severe as in the linear analysis.

  6. Dakota Graphical User Interface v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman-Hill, Ernest; Glickman, Matthew; Gibson, Marcus

    Graphical analysis environment for Sandia’s Dakota software for optimization and uncertainty quantification. The Dakota GUI is an interactive graphical analysis environment for creating, running, and interpreting Dakota optimization and uncertainty quantification studies. It includes problem (Dakota study) set-up, option specification, simulation interfacing, analysis execution, and results visualization. Through the use of wizards, templates, and views, the Dakota GUI helps users navigate Dakota’s complex capability landscape.

  7. Computational Analysis and Simulation of Empathic Behaviors: a Survey of Empathy Modeling with Behavioral Signal Processing Framework.

    PubMed

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis; Atkins, David C; Narayanan, Shrikanth S

    2016-05-01

    Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, and facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation and offer a series of open problems for future research.

  8. Exploring Young People's Civic Identities through Gamification: A Case Study of Finnish, Swedish and Norwegian Adolescents Playing a Social Simulation Game

    ERIC Educational Resources Information Center

    Eränpalo, Tommi

    2014-01-01

    This article is based on a case study where groups of Finnish, Swedish and Norwegian young people played a simulation game that stimulated collective deliberation on social issues. The game has been designed to provoke students to deliberate and to reflect on social problems relating to issues of citizenship and democracy. The analysis of the…

  9. Dynamic analysis of flexible mechanical systems using LATDYN

    NASA Technical Reports Server (NTRS)

    Wu, Shih-Chin; Chang, Che-Wei; Housner, Jerrold M.

    1989-01-01

    A 3-D, finite element based simulation tool for flexible multibody systems is presented. Hinge degrees of freedom are built into the equations of motion to reduce geometric constraints. The approach avoids the difficulty of selecting deformation modes for flexible components that arises with the assumed mode method. The tool is applied to simulate a practical space structure deployment problem. The results of the examples demonstrate the capability of the code and the approach.

  10. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualization. Previously published approaches to abstracting simulator engines have not received wide-spread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597

  11. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis or using the above procedure for design become prohibitive.

  12. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity where more and more complex flow problems can be tackled with this approach. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by a contra-rotating open rotor. This is a first-of-its-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the methodology of applying the immersed boundary method to this moving boundary problem, we provide a detailed validation of the aeroacoustic analysis approach employing the Launch Ascent and Vehicle Aerodynamics (LAVA) solver. Two free-stream Mach numbers, M=0.2 and M=0.78, are considered in this analysis, based on the nominal take-off and cruise flow conditions. The simulation data are compared to available experimental data and other computational results employing more conventional CFD methods. Spectral analysis is used to determine the dominant wave propagation pattern in the acoustic near-field.

  13. Identification of vehicle suspension parameters by design optimization

    NASA Astrophysics Data System (ADS)

    Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.

    2014-05-01

    The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to match the simulation results against the experimental kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a McPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank value is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as of ride and handling performance, can be implemented for further optimization.
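
    The identification loop itself is compact when written with the reference cma package (a toy three-parameter model and made-up "measurements", not the paper's 49-parameter suspension model): CMA-ES searches for the parameter vector minimizing the misfit against the measured responses.

      import numpy as np
      import cma   # reference CMA-ES implementation (pip install cma)

      measured = np.array([1.2, 0.8, 0.5])     # stand-in K&C test measurements

      def model(p):
          k, c, b = p                          # hypothetical stiffness/damping/bushing
          return np.array([k + 0.1 * c, c - 0.05 * b, b])

      def misfit(p):
          # Sum of squared deviations between simulated and measured responses.
          return float(np.sum((model(p) - measured) ** 2))

      es = cma.CMAEvolutionStrategy(3 * [0.5], 0.3)   # initial guess, step size
      es.optimize(misfit)
      print(es.result.xbest, es.result.fbest)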

  14. Theoretical analysis of the mechanisms of a gender differentiation in the propensity for orthostatic intolerance after spaceflight

    PubMed Central

    2010-01-01

    Background A tendency to develop reentry orthostasis after a prolonged exposure to microgravity is a common problem among astronauts. The problem is 5 times more prevalent in female astronauts as compared to their male counterparts. The mechanisms responsible for this gender differentiation are poorly understood despite many detailed and complex investigations directed toward an analysis of the physiologic control systems involved. Methods In this study, a series of computer simulation studies using a mathematical model of cardiovascular functioning were performed to examine the proposed hypothesis that this phenomenon could be explained by basic physical forces acting through the simple common anatomic differences between men and women. In the computer simulations, the circulatory components and hydrostatic gradients of the model were allowed to adapt to the physical constraints of microgravity. After a simulated period of one month, the model was returned to the conditions of earth's gravity and the standard postflight tilt test protocol was performed while the model output depicting the typical vital signs was monitored. Conclusions The analysis demonstrated that a 15% lowering of the longitudinal center of gravity in the anatomic structure of the model was all that was necessary to prevent the physiologic compensatory mechanisms from overcoming the propensity for reentry orthostasis leading to syncope. PMID:20298577

  15. A density-adaptive SPH method with kernel gradient correction for modeling explosive welding

    NASA Astrophysics Data System (ADS)

    Liu, M. B.; Zhang, Z. L.; Feng, D. L.

    2017-09-01

    Explosive welding involves processes like the detonation of explosive, the impact of metal structures, and strong fluid-structure interaction, yet the whole process of explosive welding has not been well modeled before. In this paper, a novel smoothed particle hydrodynamics (SPH) model is developed to simulate explosive welding. In the SPH model, a kernel gradient correction algorithm is used to achieve better computational accuracy. A density-adapting technique which can effectively treat large density ratios is also proposed. The developed SPH model is first validated by simulating a benchmark problem of one-dimensional TNT detonation and an impact welding problem. The SPH model is then successfully applied to simulate the whole process of explosive welding. It is demonstrated that the presented SPH method can capture the typical physics in explosive welding, including the explosion wave, welding surface morphology, jet flow, and acceleration of the flyer plate. The welding angle obtained from the SPH simulation agrees well with that from a kinematic analysis.
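
    The kernel gradient correction named above is a standard construction and can be sketched directly (a 2-D illustration with an assumed cubic spline kernel, not the authors' code): a per-particle matrix is assembled from neighbor contributions and its inverse is applied to each kernel gradient, restoring first-order consistency on irregular particle distributions.

      import numpy as np

      def kernel_grad(rij, h):
          # Gradient of the 2-D cubic spline kernel with respect to particle i.
          q = np.linalg.norm(rij) / h
          if q < 1e-12 or q >= 2.0:
              return np.zeros(2)
          sigma = 10.0 / (7.0 * np.pi * h ** 2)
          dwdq = -3.0 * q + 2.25 * q ** 2 if q < 1.0 else -0.75 * (2.0 - q) ** 2
          return sigma * dwdq / h * rij / (q * h)

      def corrected_gradients(r_i, neighbors, volumes, h):
          # L_i = [ sum_j V_j (grad W_ij) outer (r_j - r_i) ]^(-1)
          L, grads = np.zeros((2, 2)), []
          for r_j, V_j in zip(neighbors, volumes):
              g = kernel_grad(r_i - r_j, h)
              L += V_j * np.outer(g, r_j - r_i)
              grads.append(g)
          L_inv = np.linalg.inv(L)
          return [L_inv @ g for g in grads]

      nbrs = [np.array(p) for p in ([0.1, 0.0], [0.0, 0.12], [-0.11, 0.02], [0.02, -0.1])]
      print(corrected_gradients(np.zeros(2), nbrs, [0.01] * 4, h=0.15)[0])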

  16. Multi-dimensional high order essentially non-oscillatory finite difference methods in generalized coordinates

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1992-01-01

    The nonlinear stability of compact schemes for shock calculations is investigated. In recent years, compact schemes have been used in various numerical simulations, including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problem of spurious numerical oscillation and nonlinear instability. A framework to apply nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total variation (1D) or maximum norm (multi-D) stable and produces good numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to the SIAM Journal on Numerical Analysis. Research was continued on issues related to two- and three-dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, in preparation for a full combustion simulation; and direct numerical simulation of compressible sheared turbulence.

  17. Solution of AntiSeepage for Mengxi River Based on Numerical Simulation of Unsaturated Seepage

    PubMed Central

    Ji, Youjun; Zhang, Linzhi; Yue, Jiannan

    2014-01-01

    Lessening the leakage of surface water can reduce the waste of water resources and ground water pollution. To solve the problem that Mengxi River could not store water enduringly, geology investigation, theoretical analysis, experiment research, and numerical simulation analysis were carried out. Firstly, the seepage mathematical model was established based on unsaturated seepage theory; secondly, the experimental equipment for testing hydraulic conductivity of unsaturated soil was developed to obtain the curve of two-phase flow. The numerical simulation of leakage in natural conditions proves the previous inference and leakage mechanism of river. At last, the seepage control capacities of different impervious materials were compared by numerical simulations. According to the engineering actuality, the impervious material was selected. The impervious measure in this paper has been proved to be effectible by hydrogeological research today. PMID:24707199

  18. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

    Reservoir history matching is frequently viewed as an optimization problem which involves minimizing misfit between simulated and observed data. Many gradient and evolutionary strategy based optimization algorithms have been proposed to solve this problem which typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study which uses forward and inverse Gaussian processes (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high dimensional history matching problems. Empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on forward GP models and via predictions of inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology on PUNQ-S3 reservoir is presented in which it is shown that GP-VARS provides history match solutions in approximately four times less numerical simulations as compared to the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement to the true values for the PUNQ-S3 reservoir.

  19. OpenGeoSys: Performance-Oriented Computational Methods for Numerical Modeling of Flow in Large Hydrogeological Systems

    NASA Astrophysics Data System (ADS)

    Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.

    2014-12-01

    OpenGeoSys (OGS) is a scientific open source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems for applications in geoscience and hydrology as e.g. for CO2 storage applications, geothermal power plant forecast simulation, salt water intrusion, water resources management, etc. Advances in computational mathematics have revolutionized the variety and nature of the problems that can be addressed by environmental scientists and engineers nowadays and an intensive code development in the last years enables in the meantime the solutions of much larger numerical problems and applications. However, solving environmental processes along the water cycle at large scales, like for complete catchment or reservoirs, stays computationally still a challenging task. Therefore, we started a new OGS code development with focus on execution speed and parallelization. In the new version, a local data structure concept improves the instruction and data cache performance by a tight bundling of data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated / saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.

  20. Developing a Problem-Based Learning Simulation: An Economics Unit on Trade

    ERIC Educational Resources Information Center

    Maxwell, Nan L.; Mergendoller, John R.; Bellisimo, Yolanda

    2004-01-01

    This article argues that the merger of simulations and problem-based learning (PBL) can enhance both active-learning strategies. Simulations benefit by using a PBL framework to promote student-directed learning and problem-solving skills to explain a simulated dilemma with multiple solutions. PBL benefits because simulations structure the…

  1. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems particularly for non-ignorable missingness, where full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about missing data, i.e. missing data mechanism. We call models with the problem of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice we need to define some simple and interpretable statistical quantities to assess the sensitivity models and make evidence based analysis. We propose a novel approach in this paper on attempting to investigate the possibility of each missing data mechanism model assumption, by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using the K-nearest-neighbour distances. Some asymptotic theory has also been provided. A key step of this method is to plug in a plausibility evaluation system towards each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper including meta-analysis model with publication bias, analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.

  2. Projection-based stabilization of interface Lagrange multipliers in immersogeometric fluid-thin structure interaction analysis, with application to heart valve modeling.

    PubMed

    Kamensky, David; Evans, John A; Hsu, Ming-Chen; Bazilevs, Yuri

    2017-11-01

    This paper discusses a method of stabilizing Lagrange multiplier fields used to couple thin immersed shell structures and surrounding fluids. The method retains essential conservation properties by stabilizing only the portion of the constraint orthogonal to a coarse multiplier space. This stabilization can easily be applied within iterative methods or semi-implicit time integrators that avoid directly solving a saddle point problem for the Lagrange multiplier field. Heart valve simulations demonstrate applicability of the proposed method to 3D unsteady simulations. An appendix sketches the relation between the proposed method and a high-order-accurate approach for simpler model problems.

  3. Simulation of realistic abnormal SPECT brain perfusion images: application in semi-quantitative analysis

    NASA Astrophysics Data System (ADS)

    Ward, T.; Fleming, J. S.; Hoffmann, S. M. A.; Kemp, P. M.

    2005-11-01

    Simulation is useful in the validation of functional image analysis methods, particularly when considering the number of analysis techniques currently available lacking thorough validation. Problems exist with current simulation methods due to long run times or unrealistic results making it problematic to generate complete datasets. A method is presented for simulating known abnormalities within normal brain SPECT images using a measured point spread function (PSF), and incorporating a stereotactic atlas of the brain for anatomical positioning. This allows for the simulation of realistic images through the use of prior information regarding disease progression. SPECT images of cerebral perfusion have been generated consisting of a control database and a group of simulated abnormal subjects that are to be used in a UK audit of analysis methods. The abnormality is defined in the stereotactic space, then transformed to the individual subject space, convolved with a measured PSF and removed from the normal subject image. The dataset was analysed using SPM99 (Wellcome Department of Imaging Neuroscience, University College, London) and the MarsBaR volume of interest (VOI) analysis toolbox. The results were evaluated by comparison with the known ground truth. The analysis showed improvement when using a smoothing kernel equal to system resolution over the slightly larger kernel used routinely. Significant correlation was found between effective volume of a simulated abnormality and the detected size using SPM99. Improvements in VOI analysis sensitivity were found when using the region median over the region mean. The method and dataset provide an efficient methodology for use in the comparison and cross validation of semi-quantitative analysis methods in brain SPECT, and allow the optimization of analysis parameters.

  4. Decision-problem state analysis methodology

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1980-01-01

    A methodology for analyzing a decision-problem state is presented. The methodology is based on the analysis of an incident in terms of the set of decision-problem conditions encountered. By decomposing the events that preceded an unwanted outcome, such as an accident, into the set of decision-problem conditions that were resolved, a more comprehensive understanding is possible. All human-error accidents are not caused by faulty decision-problem resolutions, but it appears to be one of the major areas of accidents cited in the literature. A three-phase methodology is presented which accommodates a wide spectrum of events. It allows for a systems content analysis of the available data to establish: (1) the resolutions made, (2) alternatives not considered, (3) resolutions missed, and (4) possible conditions not considered. The product is a map of the decision-problem conditions that were encountered as well as a projected, assumed set of conditions that should have been considered. The application of this methodology introduces a systematic approach to decomposing the events that transpired prior to the accident. The initial emphasis is on decision and problem resolution. The technique allows for a standardized method of accident into a scenario which may used for review or the development of a training simulation.

  5. Numerical Simulation of Cast Distortion in Gas Turbine Engine Components

    NASA Astrophysics Data System (ADS)

    Inozemtsev, A. A.; Dubrovskaya, A. S.; Dongauser, K. A.; Trufanov, N. A.

    2015-06-01

    In this paper the process of multiple airfoilvanes manufacturing through investment casting is considered. The mathematical model of the full contact problem is built to determine stress strain state in a cast during the process of solidification. Studies are carried out in viscoelastoplastic statement. Numerical simulation of the explored process is implemented with ProCASTsoftware package. The results of simulation are compared with the real production process. By means of computer analysis the optimization of technical process parameters is done in order to eliminate the defect of cast walls thickness variation.

  6. Application of the GERTS II simulator in the industrial environment.

    NASA Technical Reports Server (NTRS)

    Whitehouse, G. E.; Klein, K. I.

    1971-01-01

    GERT was originally developed to aid in the analysis of stochastic networks. GERT can be used to graphically model and analyze complex systems. Recently a simulator model, GERTS II, has been developed to solve GERT Networks. The simulator language used in the development of this model was GASP II A. This paper discusses the possible application of GERTS II to model and analyze (1) assembly line operations, (2) project management networks, (3) conveyor systems and (4) inventory systems. Finally, an actual application dealing with a job shop loading problem is presented.

  7. Improved mapping of the travelling salesman problem for quantum annealing

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias; Heim, Bettina; Brown, Ethan; Wecker, David

    2015-03-01

    We consider the quantum adiabatic algorithm as applied to the travelling salesman problem (TSP). We introduce a novel mapping of TSP to an Ising spin glass Hamiltonian and compare it to previous known mappings. Through direct perturbative analysis, unitary evolution, and simulated quantum annealing, we show this new mapping to be significantly superior. We discuss how this advantage can translate to actual physical implementations of TSP on quantum annealers.

  8. Visualization Techniques Applied to 155-mm Projectile Analysis

    DTIC Science & Technology

    2014-11-01

    semi-infinite Riemann problems are used in CFD++ to provide upwind flux information to the underlying transport scheme. Approximate Riemann solvers ...characteristics-based inflow/outflow boundary condition, which is based on solving a Riemann problem at the boundary. 2.3 Numerics Rolling/spinning is the...the solution files generated by the computational fluid dynamics (CFD) solver for the time-accurate rolling simulations at each timestep for the Mach

  9. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  10. Finite-difference model to simulate the areal flow of saltwater and fresh water separated by an interface

    USGS Publications Warehouse

    Mercer, James W.; Larson, S.P.; Faust, Charles R.

    1980-01-01

    Model documentation is presented for a two-dimensional (areal) model capable of simulating ground-water flow of salt water and fresh water separated by an interface. The partial differential equations are integrated over the thicknesses of fresh water and salt water resulting in two equations describing the flow characteristics in the areal domain. These equations are approximated using finite-difference techniques and the resulting algebraic equations are solved for the dependent variables, fresh water head and salt water head. An iterative solution method was found to be most appropriate. The program is designed to simulate time-dependent problems such as those associated with the development of coastal aquifers, and can treat water-table conditions or confined conditions with steady-state leakage of fresh water. The program will generally be most applicable to the analysis of regional aquifer problems in which the zone between salt water and fresh water can be considered a surface (sharp interface). Example problems and a listing of the computer code are included. (USGS).

  11. An Integrated Analysis of the Physiological Effects of Space Flight: Executive Summary

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1985-01-01

    A large array of models were applied in a unified manner to solve problems in space flight physiology. Mathematical simulation was used as an alternative way of looking at physiological systems and maximizing the yield from previous space flight experiments. A medical data analysis system was created which consist of an automated data base, a computerized biostatistical and data analysis system, and a set of simulation models of physiological systems. Five basic models were employed: (1) a pulsatile cardiovascular model; (2) a respiratory model; (3) a thermoregulatory model; (4) a circulatory, fluid, and electrolyte balance model; and (5) an erythropoiesis regulatory model. Algorithms were provided to perform routine statistical tests, multivariate analysis, nonlinear regression analysis, and autocorrelation analysis. Special purpose programs were prepared for rank correlation, factor analysis, and the integration of the metabolic balance data.

  12. An efficiency improvement in warehouse operation using simulation analysis

    NASA Astrophysics Data System (ADS)

    Samattapapong, N.

    2017-11-01

    In general, industry requires an efficient system for warehouse operation. There are many important factors that must be considered when designing an efficient warehouse system. The most important is an effective warehouse operation system that can help transfer raw material, reduce costs and support transportation. By all these factors, researchers are interested in studying about work systems and warehouse distribution. We start by collecting the important data for storage, such as the information on products, information on size and location, information on data collection and information on production, and all this information to build simulation model in Flexsim® simulation software. The result for simulation analysis found that the conveyor belt was a bottleneck in the warehouse operation. Therefore, many scenarios to improve that problem were generated and testing through simulation analysis process. The result showed that an average queuing time was reduced from 89.8% to 48.7% and the ability in transporting the product increased from 10.2% to 50.9%. Thus, it can be stated that this is the best method for increasing efficiency in the warehouse operation.

  13. Applications of numerical methods to simulate the movement of contaminants in groundwater.

    PubMed Central

    Sun, N Z

    1989-01-01

    This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on the numerical methods of advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models are introduced, including the multiple cell balance method that can be considered a variation of the finite element method. The multiple cell balance method is easy to understand and convenient for solving field problems. When the advection transport dominates the dispersion transport, two kinds of numerical difficulties, overshoot and numerical dispersion, are always involved in solving standard, finite difference methods and finite element methods. To overcome these numerical difficulties, various numerical techniques are developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given and we also mention the problems of parameter identification, reliability analysis, and optimal-experiment design that are absolutely necessary for constructing a practical model. PMID:2695327

  14. A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review.

    PubMed

    Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha

    2017-04-01

    To systematically review the current literature of simulation in healthcare including the structured steps in the emergency healthcare sector by proposing a framework for simulation in the emergency department. For the purpose of collecting the data, PubMed and ACM databases were used between the years 2003 and 2013. The inclusion criteria were to select English-written articles available in full text with the closest objectives from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. The studies focused on the reduction of waiting time and patient stay, optimization of resources allocation, creation of crisis and maximum demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of the system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after an expert's evaluation. The 10-steps approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems.

  15. Kinematic simulation and analysis of robot based on MATLAB

    NASA Astrophysics Data System (ADS)

    Liao, Shuhua; Li, Jiong

    2018-03-01

    The history of industrial automation is characterized by quick update technology, however, without a doubt, the industrial robot is a kind of special equipment. With the help of MATLAB matrix and drawing capacity in the MATLAB environment each link coordinate system set up by using the d-h parameters method and equation of motion of the structure. Robotics, Toolbox programming Toolbox and GUIDE to the joint application is the analysis of inverse kinematics and path planning and simulation, preliminary solve the problem of college students the car mechanical arm positioning theory, so as to achieve the aim of reservation.

  16. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2017-11-01

    The modern state of approximate integral methods used in applications, where the processes of heat conduction and heat and mass transfer are of first importance, is considered. Integral methods have found a wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefantype problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal definition of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Gudman integral method of heat balance, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse that the accuracy of numerical solutions.

  17. Rational Solutions for Challenges of the New Mellennium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gover, J.; Guray, P.G.

    We have reviewed ten major public problems challenging our Nation as it enters the new millennium. These are defense, healthcare costs, education, aging population, energy and environment, crime, low productivity growth services, income distribution, regulations, and infrastructure. These problems share several features. First, each is so large, if it were soIved; it would have major impact on the U.S. economy. Second, each is resident in a socioeconomic system containing non-linear feedback loops and an adaptive human element. Third, each can only be solved by our political system, yet these problems are not responsive to piecemeal problem solving, the approach traditionallymore » used by policy makers. However, unless each problem is addressed in the context of the system in which it resides, the solution maybe worse than the problem. Our political system is immersed in reams of disconnected, unintelligible information skewed by various special interests to suggest policies favoring their particular needs. Help is needed, if rational solutions that serve public interests are to be forged for these ten probIems, The simulation and modeIing tools of physical scientists, engineers, economists, social scientists, public policy experts, and others, bolstered by the recent explosive growth in massively parallel computing power, must be blended together to synthesize models of the complex systems in which these problems are resident. These models must simulate the seemingly chaotic human element inherent in these systems and support policymakers in making informed decKlons about the future. We propose altering the policy development process by incorporating more modeling, simulation and analysis to bring about a revolution in policy making that takes advantage of the revolution in engineering emerging from simulation and modeling. While we recommend major research efforts to address each of these problems, we also observe these to be very complex, highly interdependent, multi-disciplinary problems; it will challenge the U.S. community of individual investigator researchers to make the cultural transformation necessary to address these problems in a team environment. Furthermore, models that simulate future behavior of these complex systems will not be exacq therefore, researchers must be prepared to use the modeling and simulation tools they develop to propose experiments to Congress. We recommend that ten laboratories owned by the American public be selected in an interagency competition to each manage and host a $1 billion/yertr National effort, each focused on one of these ten problems. Much of the supporting research and subsystem modeling work will be conducted at U.S. universities and at private firms with relevant expertise. Success of the Manhattan Project at the middle of the 20th century provides evidence this leadership model works.« less

  18. SYSTID - A flexible tool for the analysis of communication systems.

    NASA Technical Reports Server (NTRS)

    Dawson, C. T.; Tranter, W. H.

    1972-01-01

    Description of the System Time Domain Simulation (SYSTID) computer-aided analysis program which is specifically structured for communication systems analysis. The SYSTID program is user oriented so that very little knowledge of computer techniques and very little programming ability are required for proper application. The program is designed so that the user can go from a system block diagram to an accurate simulation by simply programming a single English language statement for each block in the system. The mathematical and functional models available in the SYSTID library are presented. An example problem is given which illustrates the ease of modeling communication systems. Examples of the outputs available are presented, and proposed improvements are summarized.

  19. Use of Monte Carlo simulation for the interpretation and analysis of diffuse scattering

    NASA Astrophysics Data System (ADS)

    Welberry, T. R.; Chan, E. J.; Goossens, D. J.; Heerdegen, A. P.

    2010-02-01

    With the development of computer simulation methods there is, for the first time, the possibility of having a single general method that can be used for any diffuse scattering problem in any type of system. As computers get ever faster it is expected that current methods will become increasingly powerful and applicable to a wider and wider range of problems and materials and provide results in increasingly fine detail. In this article we discuss two contrasting recent examples. The first is concerned with the two polymorphic forms of the pharmaceutical compound benzocaine. The strong and highly structured diffuse scattering in these is shown to be symptomatic of the presence of highly correlated molecular motions. The second concerns Ag+ fast ion conduction in the pearceite/polybasite family of mineral solid electrolytes. Here Monte-Carlo simulation is used to model the diffuse scattering and gain insight into how the ionic conduction arises.

  20. APPLICATION OF FLOW SIMULATION FOR EVALUATION OF FILLING-ABILITY OF SELF-COMPACTING CONCRETE

    NASA Astrophysics Data System (ADS)

    Urano, Shinji; Nemoto, Hiroshi; Sakihara, Kohei

    In this paper, MPS method was applied to fluid an alysis of self-compacting concrete. MPS method is one of the particle method, and it is suitable for the simulation of moving boundary or free surface problems and large deformation problems. The constitutive equation of self-compacting concrete is assumed as bingham model. In order to investigate flow Stoppage and flow speed of self-compacting concrete, numerical analysis examples of slump flow and L-flow test were performed. In addition, to evaluate verification of compactability of self-compacting concrete, numerical analys is examples of compaction at the part of CFT diaphragm were performed. As a result, it was found that the MPS method was suitable for the simulation of compaction of self-compacting concrete, and a just appraisal was obtained by setting shear strain rate of flow-limit πc and limitation point of segregation.

  1. Analytical solutions for coagulation and condensation kinetics of composite particles

    NASA Astrophysics Data System (ADS)

    Piskunov, Vladimir N.

    2013-04-01

    The processes of composite particles formation consisting of a mixture of different materials are essential for many practical problems: for analysis of the consequences of accidental releases in atmosphere; for simulation of precipitation formation in clouds; for description of multi-phase processes in chemical reactors and industrial facilities. Computer codes developed for numerical simulation of these processes require optimization of computational methods and verification of numerical programs. Kinetic equations of composite particle formation are given in this work in a concise form (impurity integrated). Coagulation, condensation and external sources associated with nucleation are taken into account. Analytical solutions were obtained in a number of model cases. The general laws for fraction redistribution of impurities were defined. The results can be applied to develop numerical algorithms considerably reducing the simulation effort, as well as to verify the numerical programs for calculation of the formation kinetics of composite particles in the problems of practical importance.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peratt, A.L.; Mostrom, M.A.

    With the availability of 80--125 MHz microprocessors, the methodology developed for the simulation of problems in pulsed power and plasma physics on modern day supercomputers is now amenable to application on a wide range of platforms including laptops and workstations. While execution speeds with these processors do not match those of large scale computing machines, resources such as computer-aided-design (CAD) and graphical analysis codes are available to automate simulation setup and process data. This paper reports on the adaptation of IVORY, a three-dimensional, fully-electromagnetic, particle-in-cell simulation code, to this platform independent CAD environment. The primary purpose of this talk ismore » to demonstrate how rapidly a pulsed power/plasma problem can be scoped out by an experimenter on a dedicated workstation. Demonstrations include a magnetically insulated transmission line, power flow in a graded insulator stack, a relativistic klystron oscillator, and the dynamics of a coaxial thruster for space applications.« less

  3. Bio-inspired group modeling and analysis for intruder detection in mobile sensor/robotic networks.

    PubMed

    Fu, Bo; Xiao, Yang; Liang, Xiannuan; Philip Chen, C L

    2015-01-01

    Although previous bio-inspired models have concentrated on invertebrates (such as ants), mammals such as primates with higher cognitive function are valuable for modeling the increasingly complex problems in engineering. Understanding primates' social and communication systems, and applying what is learned from them to engineering domains is likely to inspire solutions to a number of problems. This paper presents a novel bio-inspired approach to determine group size by researching and simulating primate society. Group size does matter for both primate society and digital entities. It is difficult to determine how to group mobile sensors/robots that patrol in a large area when many factors are considered such as patrol efficiency, wireless interference, coverage, inter/intragroup communications, etc. This paper presents a simulation-based theoretical study on patrolling strategies for robot groups with the comparison of large and small groups through simulations and theoretical results.

  4. The problem of fuzzy cause-specific death rates in mortality context analysis: the case of Panama City.

    PubMed

    Bock, S; Gans, P

    1993-05-01

    In studies of mortality, small and fluctuating numbers of deaths are problems which are caused by infrequent reporting and small spatial unit reporting. To use Panama City as an example, the paper will introduce a Monte Carlo simulation which allows for the analysis of mortality even with small absolute numbers. In addition, Panama City will be used as an example where good medical care is available in every city district, so that social class differences between the districts have a negligible effect on most cause-specific death rates and infant mortality.

  5. Defect-phase-dynamics approach to statistical domain-growth problem of clock models

    NASA Technical Reports Server (NTRS)

    Kawasaki, K.

    1985-01-01

    The growth of statistical domains in quenched Ising-like p-state clock models with p = 3 or more is investigated theoretically, reformulating the analysis of Ohta et al. (1982) in terms of a phase variable and studying the dynamics of defects introduced into the phase field when the phase variable becomes multivalued. The resulting defect/phase domain-growth equation is applied to the interpretation of Monte Carlo simulations in two dimensions (Kaski and Gunton, 1983; Grest and Srolovitz, 1984), and problems encountered in the analysis of related Potts models are discussed. In the two-dimensional case, the problem is essentially that of a purely dissipative Coulomb gas, with a sq rt t growth law complicated by vertex-pinning effects at small t.

  6. Aerodynamic preliminary analysis system. Part 2: User's manual and program description

    NASA Technical Reports Server (NTRS)

    Divan, P.; Dunn, K.; Kojima, J.

    1978-01-01

    A comprehensive aerodynamic analysis program based on linearized potential theory is described. The solution treats thickness and attitude problems at subsonic and supersonic speeds. Three dimensional configurations with or without jet flaps having multiple nonplanar surfaces of arbitrary planform and open or closed slender bodies or noncircular contour are analyzed. Longitudinal and lateral-directional static and rotary derivative solutions are generated. The analysis is implemented on a time sharing system in conjunction with an input tablet digitizer and an interactive graphics input/output display and editing terminal to maximize its responsiveness to the preliminary analysis problem. Nominal case computation time of 45 CPU seconds on the CDC 175 for a 200 panel simulation indicates the program provides an efficient analysis for systematically performing various aerodynamic configuration tradeoff and evaluation studies.

  7. Parameters Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method

    NASA Astrophysics Data System (ADS)

    Ciszkiewicz, A.; Knapczyk, J.

    2015-08-01

    Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning the displacement and loads analysis in the knee joint.

  8. Free Mesh Method: fundamental conception, algorithms and accuracy study

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The finite element method (FEM) has been commonly employed in a variety of fields as a computer simulation method to solve such problems as solid, fluid, electro-magnetic phenomena and so on. However, creation of a quality mesh for the problem domain is a prerequisite when using FEM, which becomes a major part of the cost of a simulation. It is natural that the concept of meshless method has evolved. The free mesh method (FMM) is among the typical meshless methods intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, especially on parallel processors. FMM is an efficient node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm for the finite element calculations. In this paper, FMM and its variation are reviewed focusing on their fundamental conception, algorithms and accuracy. PMID:21558752

  9. Fast calculation of the sensitivity matrix in magnetic induction tomography by tetrahedral edge finite elements and the reciprocity theorem.

    PubMed

    Hollaus, K; Magele, C; Merwa, R; Scharfetter, H

    2004-02-01

    Magnetic induction tomography of biological tissue is used to reconstruct the changes in the complex conductivity distribution by measuring the perturbation of an alternating primary magnetic field. To facilitate the sensitivity analysis and the solution of the inverse problem a fast calculation of the sensitivity matrix, i.e. the Jacobian matrix, which maps the changes of the conductivity distribution onto the changes of the voltage induced in a receiver coil, is needed. The use of finite differences to determine the entries of the sensitivity matrix does not represent a feasible solution because of the high computational costs of the basic eddy current problem. Therefore, the reciprocity theorem was exploited. The basic eddy current problem was simulated by the finite element method using symmetric tetrahedral edge elements of second order. To test the method various simulations were carried out and discussed.

  10. A Simulation Based Approach to Optimize Berth Throughput Under Uncertainty at Marine Container Terminals

    NASA Technical Reports Server (NTRS)

    Golias, Mihalis M.

    2011-01-01

    Berth scheduling is a critical function at marine container terminals and determining the best berth schedule depends on several factors including the type and function of the port, size of the port, location, nearby competition, and type of contractual agreement between the terminal and the carriers. In this paper we formulate the berth scheduling problem as a bi-objective mixed-integer problem with the objective to maximize customer satisfaction and reliability of the berth schedule under the assumption that vessel handling times are stochastic parameters following a discrete and known probability distribution. A combination of an exact algorithm, a Genetic Algorithms based heuristic and a simulation post-Pareto analysis is proposed as the solution approach to the resulting problem. Based on a number of experiments it is concluded that the proposed berth scheduling policy outperforms the berth scheduling policy where reliability is not considered.

  11. Accelerating Approximate Bayesian Computation with Quantile Regression: application to cosmological redshift distributions

    NASA Astrophysics Data System (ADS)

    Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.

    2018-02-01

    Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of quantiles of distance measure as a function of input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is then repeated as more simulations are available. We apply it to the practical problem of estimation of redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly same posterior as the basic ABC. It uses, however, only 20% of the number of simulations compared to basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.

  12. Dynamic simulations of geologic materials using combined FEM/DEM/SPH analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, J P; Johnson, S M

    2008-03-26

    An overview of the Lawrence Discrete Element Code (LDEC) is presented, and results from a study investigating the effect of explosive and impact loading on geologic materials using the Livermore Distinct Element Code (LDEC) are detailed. LDEC was initially developed to simulate tunnels and other structures in jointed rock masses using large numbers of polyhedral blocks. Many geophysical applications, such as projectile penetration into rock, concrete targets, and boulder fields, require a combination of continuum and discrete methods in order to predict the formation and interaction of the fragments produced. In an effort to model this class of problems, LDECmore » now includes implementations of Cosserat point theory and cohesive elements. This approach directly simulates the transition from continuum to discontinuum behavior, thereby allowing for dynamic fracture within a combined finite element/discrete element framework. In addition, there are many application involving geologic materials where fluid-structure interaction is important. To facilitate solution of this class of problems a Smooth Particle Hydrodynamics (SPH) capability has been incorporated into LDEC to simulate fully coupled systems involving geologic materials and a saturating fluid. We will present results from a study of a broad range of geomechanical problems that exercise the various components of LDEC in isolation and in tandem.« less

  13. Some Current Problems in Simulator Design, Testing and Use.

    ERIC Educational Resources Information Center

    Caro, Paul W.

    Concerned with the general problem of the effectiveness of simulator training, this report reflects information developed during the conduct of aircraft simulator training research projects sponsored by the Air Force, Army, Navy, and Coast Guard. Problems are identified related to simulator design, testing, and use, all of which impact upon…

  14. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on their broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of system of systems become even more complex and are beyond human capacity to effectively learn their behavior. IBM has developed machines that can learn and compete successfully with a chess grandmaster and most successful jeopardy contestants. These machines are capable of learning some complex problems much faster than humans can learn. In this paper, we propose using artificial neural network to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications of computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We will show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without including any additional information about their mathematical form or applied discretization approaches.

  15. A review on recent contribution of meshfree methods to structure and fracture mechanics applications.

    PubMed

    Daxini, S D; Prajapati, J M

    2014-01-01

    Meshfree methods are viewed as next generation computational techniques. With evident limitations of conventional grid based methods, like FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention by researchers. A number of meshfree methods have been proposed till now for analyzing complex problems in various fields of engineering. Present work attempts to review recent developments and some earlier applications of well-known meshfree methods like EFG and MLPG to various types of structure mechanics and fracture mechanics applications like bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis and some typical applications like vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to complex nature of meshfree shape functions and evaluation of integrals in domain, meshless methods are computationally expensive as compared to conventional mesh based methods. Some improved versions of original meshfree methods and other techniques suggested by researchers to improve computational efficiency of meshfree methods are also reviewed here.

  16. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    PubMed

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.

  17. Massively Parallel Processing for Fast and Accurate Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu

    2005-08-01

    The competitive automotive market drives automotive manufacturers to speed up the vehicle development cycles and reduce the lead-time. Fast tooling development is one of the key areas to support fast and short vehicle development programs (VDP). In the past ten years, the stamping simulation has become the most effective validation tool in predicting and resolving all potential formability and quality problems before the dies are physically made. The stamping simulation and formability analysis has become an critical business segment in GM math-based die engineering process. As the simulation becomes as one of the major production tools in engineering factory, the simulation speed and accuracy are the two of the most important measures for stamping simulation technology. The speed and time-in-system of forming analysis becomes an even more critical to support the fast VDP and tooling readiness. Since 1997, General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass production analysis applications. By 2001, this technology was matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DM0P/MPP technology as well as performance benchmarks are discussed in this publication.

  18. Computational Analysis and Simulation of Empathic Behaviors: A Survey of Empathy Modeling with Behavioral Signal Processing Framework

    PubMed Central

    Xiao, Bo; Imel, Zac E.; Georgiou, Panayiotis; Atkins, David C.; Narayanan, Shrikanth S.

    2017-01-01

    Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation, and offer a series of open problems for future research. PMID:27017830

  19. Forest management and economics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buongiorno, J.; Gilless, J.K.

    1987-01-01

    This volume provides a survey of quantitative methods, guiding the reader through formulation and analysis of models that address forest management problems. The authors use simple mathematics, graphics, and short computer programs to explain each method. Emphasizing applications, they discuss linear, integer, dynamic, and goal programming; simulation; network modeling; and econometrics, as these relate to problems of determining economic harvest schedules in even-aged and uneven-aged forests, the evaluation of forest policies, multiple-objective decision making, and more.

  20. Modeling and Simulation of Avionics Systems and Command, Control and Communications Systems

    DTIC Science & Technology

    1980-01-01

    analytical and operational talent into a cohesive study group . This group becomes our critical mass for innovative analysis. For command and control problems...that focusing small integrated groups on specific aspects of a command and control problem sucoseds best. For example, Air Force Studies and Analyses...phase so called " study groups " should define "tactical requirement-papers", These study groups will be supported by operational analyses and by

  1. Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation- aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.« less

  2. Computer simulations and experimental study on crash box of automobile in low speed collision

    NASA Astrophysics Data System (ADS)

    Liu, Yanjie; Ding, Lin; Yan, Shengyuan; Yang, Yongsheng

    2008-11-01

    Based on the problems of energy-absorbing components in the automobile low speed collision process, according to crash box frontal crash test in low speed as the example, the simulation analysis of crash box impact process was carried out by Hyper Mesh and LS-DYNA. Each parameter on the influence modeling was analyzed by mathematics analytical solution and test comparison, which guaranteed that the model was accurate. Combination of experiment and simulation result had determined the weakness part of crash box structure crashworthiness aspect, and improvement method of crash box crashworthiness was discussed. Through numerical simulation of the impact process of automobile crash box, the obtained analysis result was used to optimize the design of crash box. It was helpful to improve the vehicles structure and decrease the collision accident loss at most. And it was also provided a useful method for the further research on the automobile collision.

  3. Lightweight Object Oriented Structure analysis: Tools for building Tools to Analyze Molecular Dynamics Simulations

    PubMed Central

    Romo, Tod D.; Leioatts, Nicholas; Grossfield, Alan

    2014-01-01

    LOOS (Lightweight Object-Oriented Structure-analysis) is a C++ library designed to facilitate making novel tools for analyzing molecular dynamics simulations by abstracting out the repetitive tasks, allowing developers to focus on the scientifically relevant part of the problem. LOOS supports input using the native file formats of most common biomolecular simulation packages, including CHARMM, NAMD, Amber, Tinker, and Gromacs. A dynamic atom selection language based on the C expression syntax is included and is easily accessible to the tool-writer. In addition, LOOS is bundled with over 120 pre-built tools, including suites of tools for analyzing simulation convergence, 3D histograms, and elastic network models. Through modern C++ design, LOOS is both simple to develop with (requiring knowledge of only 4 core classes and a few utility functions) and is easily extensible. A python interface to the core classes is also provided, further facilitating tool development. PMID:25327784
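
    Since the abstract highlights the atom selection language and the Python bindings, a short sketch of the kind of tool LOOS enables is given below. This is a hedged illustration, not code from the paper: the file names are placeholders, and the exact call signatures should be checked against the LOOS documentation.

    ```python
    # Hypothetical minimal LOOS-based tool: track the C-alpha centroid over a trajectory.
    # File names are placeholders; API usage follows the LOOS Python bindings
    # (createSystem, selectAtoms, pyloos.Trajectory) as commonly documented.
    import loos
    import loos.pyloos

    model = loos.createSystem("model.pdb")                  # structure/topology
    traj = loos.pyloos.Trajectory("trajectory.dcd", model)  # attach coordinates
    calphas = loos.selectAtoms(model, 'name == "CA"')       # C-expression-style selection

    for frame in traj:             # model coordinates update in place each frame
        print(calphas.centroid())  # geometric center of the selection
    ```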

  4. Analysis and numerical simulation research of the heating process in the oven

    NASA Astrophysics Data System (ADS)

    Chen, Yawei; Lei, Dingyou

    2016-10-01

    How to use the oven to bake delicious food is the most concerned problem of the designers and users of the oven. For this intent, this paper analyzed the heat distribution in the oven based on the basic operation principles and proceeded the data simulation of the temperature distribution on the rack section. Constructing the differential equation model of the temperature distribution changes in the pan when the oven works based on the heat radiation and heat transmission, based on the idea of utilizing cellular automation to simulate heat transfer process, used ANSYS software to proceed the numerical simulation analysis to the rectangular, round-cornered rectangular, elliptical and circular pans and giving out the instantaneous temperature distribution of the corresponding shapes of the pans. The temperature distribution of the rectangular and circular pans proves that the product gets overcooked easily at the corners and edges of rectangular pans but not of a round pan.

  5. Lightweight object oriented structure analysis: tools for building tools to analyze molecular dynamics simulations.

    PubMed

    Romo, Tod D; Leioatts, Nicholas; Grossfield, Alan

    2014-12-15

    LOOS (Lightweight Object Oriented Structure-analysis) is a C++ library designed to facilitate making novel tools for analyzing molecular dynamics simulations by abstracting out the repetitive tasks, allowing developers to focus on the scientifically relevant part of the problem. LOOS supports input using the native file formats of most common biomolecular simulation packages, including CHARMM, NAMD, Amber, Tinker, and Gromacs. A dynamic atom selection language based on the C expression syntax is included and is easily accessible to the tool-writer. In addition, LOOS is bundled with over 140 prebuilt tools, including suites of tools for analyzing simulation convergence, three-dimensional histograms, and elastic network models. Through modern C++ design, LOOS is both simple to develop with (requiring knowledge of only four core classes and a few utility functions) and is easily extensible. A python interface to the core classes is also provided, further facilitating tool development. © 2014 Wiley Periodicals, Inc.

  6. On-Orbit Range Set Applications

    NASA Astrophysics Data System (ADS)

    Holzinger, M.; Scheeres, D.

    2011-09-01

    The history and methodology of Δv range set computation are briefly reviewed, followed by a short summary of the Δv-optimal spacecraft servicing problem literature. Service vehicle placement is approached from a Δv range set viewpoint, providing a framework under which the analysis becomes quite geometric and intuitive. The optimal servicing tour design problem is shown to be a specific instantiation of the metric Traveling Salesman Problem (TSP), which in general is an NP-hard problem. The Δv-TSP is argued to be quite similar to the Euclidean TSP, for which approximate optimal solutions may be found in polynomial time. Applications of range sets are demonstrated using analytical and simulation results.

  7. Finite element analysis in fluids; Proceedings of the Seventh International Conference on Finite Element Methods in Flow Problems, University of Alabama, Huntsville, Apr. 3-7, 1989

    NASA Technical Reports Server (NTRS)

    Chung, T. J. (Editor); Karr, Gerald R. (Editor)

    1989-01-01

    Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.

  8. Potential applications of computational fluid dynamics to biofluid analysis

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Chang, J. L. C.; Rogers, S. E.; Rosenfeld, M.

    1988-01-01

    Computational fluid dynamics was developed to the stage where it has become an indispensable part of aerospace research and design. In view of advances made in aerospace applications, the computational approach can be used for biofluid mechanics research. Several flow simulation methods developed for aerospace problems are briefly discussed for potential applications to biofluids, especially to blood flow analysis.

  9. Algorithms and architecture for multiprocessor based circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, J.T.

    Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.

  10. A Crisis in Space--A Futuristic Simulation Using Creative Problem Solving.

    ERIC Educational Resources Information Center

    Clode, Linda

    1992-01-01

    An enrichment program developed for sixth-grade gifted students combined creative problem solving with future studies in a way that would simulate real life crisis problem solving. The program involved forecasting problems of the future requiring evacuation of Earth, assuming roles on a spaceship, and simulating crises as the spaceship traveled to…

  11. Robust Mediation Analysis Based on Median Regression

    PubMed Central

    Yuan, Ying; MacKinnon, David P.

    2014-01-01

    Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
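
    To make the idea concrete, here is a minimal sketch (not the authors' code) of median-regression mediation: the a path (X to M) and the b path (M to Y, adjusting for X) are estimated by quantile regression at the median, and the indirect effect is their product. The simulated data and variable names are illustrative assumptions.

    ```python
    # Median-regression mediation sketch using statsmodels' quantile regression.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    m = 0.5 * x + rng.standard_t(df=3, size=n)           # heavy-tailed errors
    y = 0.4 * m + 0.2 * x + rng.standard_t(df=3, size=n)
    data = pd.DataFrame({"x": x, "m": m, "y": y})

    a = smf.quantreg("m ~ x", data).fit(q=0.5).params["x"]      # a path at the median
    b = smf.quantreg("y ~ m + x", data).fit(q=0.5).params["m"]  # b path at the median
    print("robust indirect effect (a*b):", a * b)
    ```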

  12. Apollo experience report: Guidance and control systems. Engineering simulation program

    NASA Technical Reports Server (NTRS)

    Gilbert, D. W.

    1973-01-01

    The Apollo Program experience from early 1962 to July 1969 with respect to the engineering-simulation support and the problems encountered is summarized in this report. Engineering simulation in support of the Apollo guidance and control system is discussed in terms of design analysis and verification, certification of hardware in closed-loop operation, verification of hardware/software compatibility, and verification of both software and procedures for each mission. The magnitude, time, and cost of the engineering simulations are described with respect to hardware availability, NASA and contractor facilities (for verification of the command module, the lunar module, and the primary guidance, navigation, and control system), and scheduling and planning considerations. Recommendations are made regarding implementation of similar, large-scale simulations for future programs.

  13. Pattern Formation in Complex Fluids

    NASA Astrophysics Data System (ADS)

    Shelley, Michael

    2000-03-01

    Classical fluid instabilities -- such as the Saffman-Taylor instability in a Hele-Shaw cell -- are dramatically modified by using complex fluids. For example, polymeric liquids driven in a Hele-Shaw cell yield "dendritic" patterns with an apparent directional anisotropy. The dynamics of complex liquids can also lead to new instabilities and patterns, such as space-filling patterns formed by successive bucklings of growing "elastica" seen in the phase transition of a liquid crystalline material. Understanding such problems requires an interplay between physical modeling, mathematical analysis, and sophisticated nonlinear simulation. For the first problem, I will discuss a non-Newtonian version of Darcy's law for Hele-Shaw flow. This yields a free-boundary problem for the pattern formation, and requires the solution of a nonlinear elliptic equation in a time-dependent domain. This is pushing the development of adaptive grid methods that represent the geometry accurately and efficiently. Our simulations yield insight into how shear-thinning, as is evinced by polymeric liquids, can produce patterns reminiscent of experiment, with "dendritic fingers", side-branching, and reduced tip-splitting. In the second problem, a long filament in a smectic-A phase grows within an isotropic fluid. The splay deformation of the material gives this filament an elastic response. The macroscopic model describes the dynamics of a growing, elastic filament immersed in a Stokesian fluid. The model marries filament elasticity and tensile forces with a numerically tractable nonlocal slender-body theory. Analysis shows that growth of the filament, despite fluid drag, produces a buckling instability. When coupled to a nonlocal hydrodynamic self-interaction, our fully nonlinear simulations show that such instabilities iterate along the filament, and give "space-filling" patterns.

  14. Multigrid accelerated simulations for Twisted Mass fermions

    NASA Astrophysics Data System (ADS)

    Bacchio, Simone; Alexandrou, Constantia; Finkerath, Jacob

    2018-03-01

    Simulations at physical quark masses are affected by the critical slowing down of the solvers. Multigrid preconditioning has proved to deal effectively with this problem. Multigrid accelerated simulations at the physical value of the pion mass are being performed to generate Nf = 2 and Nf = 2 + 1 + 1 gauge ensembles using twisted mass fermions. The adaptive aggregation-based domain decomposition multigrid solver, referred to as the DD-αAMG method, is employed for these simulations. Our simulation strategy consists of a hybrid approach of different solvers, involving the Conjugate Gradient (CG), multi-mass-shift CG and DD-αAMG solvers. We present an analysis of the multigrid performance during the simulations, discussing the stability of the method. This significantly speeds up the Hybrid Monte Carlo simulation by more than a factor of 4 at the physical pion mass compared to the usage of the CG solver.

  15. The effects of computer-simulated experiments on high school biology students' problem-solving skills and achievement

    NASA Astrophysics Data System (ADS)

    Carmack, Gay Lynn Dickinson

    2000-10-01

    This two-part quasi-experimental repeated measures study examined whether computer simulated experiments have an effect on the problem solving skills of high school biology students in a school-within-a-school magnet program. Specifically, the study identified episodes in a simulation sequence where problem solving skills improved. In the Fall academic semester, experimental group students (n = 30) were exposed to two simulations: CaseIt! and EVOLVE!. Control group students participated in an internet research project and a paper Hardy-Weinberg activity. In the Spring academic semester, experimental group students were exposed to three simulations: Genetics Construction Kit, CaseIt! and EVOLVE! . Spring control group students participated in a Drosophila lab, an internet research project, and Advanced Placement lab 8. Results indicate that the Fall and Spring experimental groups experienced significant gains in scientific problem solving after the second simulation in the sequence. These gains were independent of the simulation sequence or the amount of time spent on the simulations. These gains were significantly greater than control group scores in the Fall. The Spring control group significantly outscored all other study groups on both pretest measures. Even so, the Spring experimental group problem solving performance caught up to the Spring control group performance after the third simulation. There were no significant differences between control and experimental groups on content achievement. Results indicate that CSE is as effective as traditional laboratories in promoting scientific problem solving and that CSE is a useful tool for improving students' scientific problem solving skills. Moreover, retention of problem solving skills is enhanced by utilizing more than one simulation.

  16. Dynamic Stability of Uncertain Laminated Beams Under Subtangential Loads

    NASA Technical Reports Server (NTRS)

    Goyal, Vijay K.; Kapania, Rakesh K.; Adelman, Howard (Technical Monitor); Horta, Lucas (Technical Monitor)

    2002-01-01

    Because of the inherent complexity of fiber-reinforced laminated composites, it can be challenging to manufacture composite structures according to their exact design specifications, resulting in unwanted material and geometric uncertainties. In this research, we focus on the deterministic and probabilistic stability analysis of laminated structures subject to subtangential loading, a combination of conservative and nonconservative tangential loads, using the dynamic criterion. Thus a shear-deformable laminated beam element, including warping effects, is derived to study the deterministic and probabilistic response of laminated beams. This twenty-one-degree-of-freedom element can be used for solving both static and dynamic problems. In the first-order shear deformable model used here we have employed a more accurate method to obtain the transverse shear correction factor. The dynamic version of the principle of virtual work for laminated composites is expressed in its nondimensional form, and the element tangent stiffness and mass matrices are obtained using analytical integration. The stability is studied by giving the structure a small disturbance about an equilibrium configuration and observing whether the resulting response remains small. In order to study the dynamic behavior while including uncertainties in the problem, three models were developed: Exact Monte Carlo Simulation, Sensitivity Based Monte Carlo Simulation, and Probabilistic FEA. These methods were integrated into the developed finite element analysis. Also, perturbation and sensitivity analysis have been used to study nonconservative problems as well as stability, using the dynamic criterion.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitney, S.E.; McCorkle, D.; Yang, C.

    Process modeling and simulation tools are widely used for the design and operation of advanced power generation systems. These tools enable engineers to solve the critical process systems engineering problems that arise throughout the lifecycle of a power plant, such as designing a new process, troubleshooting a process unit or optimizing operations of the full process. To analyze the impact of complex thermal and fluid flow phenomena on overall power plant performance, the Department of Energy’s (DOE) National Energy Technology Laboratory (NETL) has developed the Advanced Process Engineering Co-Simulator (APECS). The APECS system is an integrated software suite that combines process simulation (e.g., Aspen Plus) and high-fidelity equipment simulations such as those based on computational fluid dynamics (CFD), together with advanced analysis capabilities including case studies, sensitivity analysis, stochastic simulation for risk/uncertainty analysis, and multi-objective optimization. In this paper we discuss the initial phases of the integration of the APECS system with the immersive and interactive virtual engineering software, VE-Suite, developed at Iowa State University and Ames Laboratory. VE-Suite uses the ActiveX (OLE Automation) controls in the Aspen Plus process simulator wrapped by the CASI library developed by Reaction Engineering International to run process/CFD co-simulations and query for results. This integration represents a necessary step in the development of virtual power plant co-simulations that will ultimately reduce the time, cost, and technical risk of developing advanced power generation systems.

  18. Radar image enhancement and simulation as an aid to interpretation and training

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Stiles, J. A.; Holtzman, J. C.; Dellwig, L. F.; Held, D. N.

    1980-01-01

    Greatly increased activity in the field of radar image applications in the coming years demands that techniques of radar image analysis, enhancement, and simulation be developed now. Since the statistical nature of radar imagery differs from that of photographic imagery, one finds that the required digital image processing algorithms (e.g., for improved viewing and feature extraction) differ from those currently existing. This paper addresses these problems and discusses work at the Remote Sensing Laboratory in image simulation and processing, especially for systems comparable to the formerly operational SEASAT synthetic aperture radar.

  19. The Utilization of the Behavioral Sciences in Long Range Forecasting and Policy Planning

    DTIC Science & Technology

    1973-07-30

    73 - December 31, 1973. The report will be divided into six major sections. The first will describe the analysis initiated and completed during...the half year. Results of special significance will be highlighted. Methodological problems that have arisen during the analysis will be discussed...of this analysis has been the development of a modular computer simulation of such operations. The oil simulation module is then to be used

  20. The Domain-Specific Software Architecture Program

    DTIC Science & Technology

    1992-06-01

    Kang, K.C.; Cohen, S.G.; Hess, J.A.; Novak, W.E.; Peterson, A.S. Feature-Oriented Domain Analysis (FODA) Feasibility Study. (CMU/SEI-90-TR-21, ADA235785)...perspective of a controls engineer solving a problem using an iterative process of simulation and analysis. The CMU/SEI-92-SR-9...for schedulability analysis and Markov processes for the determination of reliability. Software architectures are derived from these formal models. ORA

  1. LES of flow in the street canyon

    NASA Astrophysics Data System (ADS)

    Fuka, Vladimír; Brechler, Josef

    2012-04-01

    Results of a computer simulation of flow over a series of street canyons are presented in this paper. The setup is adapted from an experimental study [4] with two different shapes of buildings. The problem is simulated by the LES model CLMM (Charles University Large Eddy Microscale Model) and the results are analysed using proper orthogonal decomposition and spectral analysis. The results in the channel (the layout from the experiment) are compared with results obtained with a free top boundary.

  2. Real-Time Wing-Vortex and Pressure Distribution Estimation on Wings Via Displacements and Strains in Unsteady and Transitional Flight Conditions

    DTIC Science & Technology

    2016-09-07

    approach in co-simulation with fluid-dynamics solvers is used. An original variational formulation is developed for the inverse problem of...by the inverse solution meshing. The same approach is used to map the structural and fluid interface kinematics and loads during the fluid-structure...co-simulation. The inverse analysis is verified by reconstructing the deformed solution obtained with a corresponding direct formulation, based on

  3. Numerical Simulation of Hysteretic Live Load Effect in a Soil-Steel Bridge

    NASA Astrophysics Data System (ADS)

    Sobótka, Maciej

    2014-03-01

    The paper presents a numerical simulation of the hysteretic live load effect in a soil-steel bridge. The effect was originally identified experimentally by Machelski [1], [2]. In the full-scale test performed, a truck crossed the bridge one way and then the other, while displacements and stresses in the shell were measured. The major conclusion from the research was that the measured quantities formed hysteretic loops. A numerical simulation of that effect is addressed in the present work. The analysis was performed using the Flac finite difference code. The methodology for solving mechanical problems implemented in Flac enables us to handle a sequence of loads and the non-linear mechanical behaviour of the structure. The numerical model incorporates linear elastic constitutive relations for the soil backfill, the steel shell and the sheet piles, the latter being a flexible substructure for the shell. The contact zone between the shell and the soil backfill is assumed to follow an elastic-plastic constitutive model, with the maximum shear stress in the contact zone limited by the Coulomb condition (see the expression below). The plastic flow rule is described by dilation angle ψ = 0. The obtained results of the numerical analysis are in fair agreement with the experimental evidence. The primary finding from the performed simulation is that slip in the interface can be considered an explanation of the hysteresis observed in the charts of displacement and stress in the shell.
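
    For reference, the Coulomb limit on contact shear stress takes its usual form; the cohesion c and friction angle φ are not reported in the abstract, so the expression below is simply the standard condition together with the stated non-dilatant flow rule:

    ```latex
    |\tau| \le c + \sigma_n \tan\varphi , \qquad \psi = 0
    ```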

  4. Modelling, simulation and applications of longitudinal train dynamics

    NASA Astrophysics Data System (ADS)

    Cole, Colin; Spiryagin, Maksym; Wu, Qing; Sun, Yan Quan

    2017-10-01

    Significant developments in longitudinal train simulation and an overview of the approaches to train models and modelling vehicle force inputs are firstly presented. The most important modelling task, that of the wagon connection, consisting of energy absorption devices such as draft gears and buffers, draw gear stiffness, coupler slack and structural stiffness, is then presented. Detailed attention is given to the modelling approaches for friction wedge damped and polymer draft gears. A significant issue in longitudinal train dynamics is the modelling and calculation of the input forces - the co-dimensional problem. The need to push traction performance higher has led to research on, and improvement in, the accuracy of traction modelling, which is discussed. A co-simulation method that combines longitudinal train simulation, locomotive traction control and locomotive vehicle dynamics is presented. The modelling of other forces (braking, propulsion resistance, curve drag and grade forces) is also discussed. As extensions to conventional longitudinal train dynamics, lateral forces and coupler impacts are examined with regard to their interaction with wagon lateral and vertical dynamics. Various applications of longitudinal train dynamics are then presented. As an alternative to the traditional single-wagon-mass approach to longitudinal train dynamics, an example incorporating fully detailed wagon dynamics is presented for a crash analysis problem. Further applications to starting traction, air braking, distributed power, energy analysis and tippler operation are also presented.
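
    In the conventional single-wagon-mass formulation mentioned above, each vehicle i obeys a longitudinal equation of motion of the following general form (a generic textbook form, not quoted from the paper):

    ```latex
    m_i \, \ddot{x}_i = F_{c,\,i-1} - F_{c,\,i} + F_{t/b,\,i} - F_{r,\,i} - F_{g,\,i} ,
    ```

    where F_{c,i} are the (nonlinear, hysteretic) coupler forces at the wagon connections, F_{t/b} the traction or braking force, F_r the propulsion resistance, and F_g the grade and curve drag forces.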

  5. An Analysis Platform for Multiscale Hydrogeologic Modeling with Emphasis on Hybrid Multiscale Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheibe, Timothy D.; Murphy, Ellyn M.; Chen, Xingyuan

    2015-01-01

    One of the most significant challenges facing hydrogeologic modelers is the disparity between those spatial and temporal scales at which fundamental flow, transport and reaction processes can best be understood and quantified (e.g., microscopic to pore scales, seconds to days) and those at which practical model predictions are needed (e.g., plume to aquifer scales, years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this paper, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flow chart (Multiscale Analysis Platform or MAP), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and may become a viable alternative to conventional single-scale models in the near future.

  6. An analysis platform for multiscale hydrogeologic modeling with emphasis on hybrid multiscale methods.

    PubMed

    Scheibe, Timothy D; Murphy, Ellyn M; Chen, Xingyuan; Rice, Amy K; Carroll, Kenneth C; Palmer, Bruce J; Tartakovsky, Alexandre M; Battiato, Ilenia; Wood, Brian D

    2015-01-01

    One of the most significant challenges faced by hydrogeologic modelers is the disparity between the spatial and temporal scales at which fundamental flow, transport, and reaction processes can best be understood and quantified (e.g., microscopic to pore scales and seconds to days) and at which practical model predictions are needed (e.g., plume to aquifer scales and years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this article, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flowchart (Multiscale Analysis Platform), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and also a viable alternative to conventional single-scale models in the near future. © 2014, National Ground Water Association.

  7. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
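
    The construction can be sketched in variance-based terms (our paraphrase of the idea; the paper's exact notation may differ): the process sensitivity index for process k is the fraction of total output variance attributable jointly to process k's competing models M_k and their parameters θ_k,

    ```latex
    PS_k \;=\; \frac{\operatorname{Var}_{M_k,\,\theta_k}\!\big(\operatorname{E}_{\sim k}\!\left[\,\Delta \mid M_k, \theta_k\,\right]\big)}{\operatorname{Var}(\Delta)} ,
    ```

    where Δ is the model output and the inner expectation averages over the models and parameters of all other processes.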

  8. Augmented design and analysis of computer experiments: a novel tolerance embedded global optimization approach applied to SWIR hyperspectral illumination design.

    PubMed

    Keresztes, Janos C; John Koshel, R; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter

    2016-12-26

    A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies of increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. The method is further applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models applied within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with 1% improvement compared to simplex and SA, and more importantly requires only half the number of simulations. Finally, a concurrent tolerance analysis is done within DACEDA for the five-dimensional case such that further simulations are not required.
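
    The core loop (a space-filling first phase, then augmentation concentrated around the incumbent optimum) can be sketched as follows. This is a loose illustration of the design-augmentation idea, not the DACEDA algorithm itself; the one-dimensional merit function and the sampling rule are invented for the example.

    ```python
    # Illustrative two-phase design augmentation with a Gaussian-process surrogate.
    # The toy merit function stands in for an expensive ray-trace simulation.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def merit(x):
        return np.sin(3 * x) + 0.5 * x ** 2   # assumed merit space

    rng = np.random.default_rng(1)
    X = rng.uniform(-2, 2, size=(8, 1))       # phase 1: space-filling design
    y = merit(X).ravel()

    for _ in range(10):                       # phase 2: refine around the optimum
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
        grid = np.linspace(-2, 2, 401).reshape(-1, 1)
        x_best = grid[np.argmin(gp.predict(grid))]     # predicted optimum
        x_new = x_best + 0.1 * rng.standard_normal(1)  # augment nearby
        X = np.vstack([X, [x_new]])
        y = np.append(y, merit(x_new))

    print("estimated optimum:", X[np.argmin(y)].item(), "merit:", y.min())
    ```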

  9. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  10. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typical slow O(1/√N) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
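
    As a concrete (and deliberately simplified) sketch of the combination, the snippet below applies a scrambled Sobol sequence to τ-leaping for a single decay reaction, with one quasi-random dimension per leap step. The reaction, rate constant and step counts are illustrative assumptions, not the paper's test problem.

    ```python
    # Randomised quasi-Monte Carlo tau-leaping for the decay reaction A -> 0.
    import numpy as np
    from scipy.stats import poisson, qmc

    k, A0, dt, n_steps, n_paths = 0.1, 100, 0.5, 20, 256

    # One Sobol dimension per leap step; scrambling randomises the sequence.
    sobol = qmc.Sobol(d=n_steps, scramble=True, seed=0)
    u = sobol.random(n_paths)  # shape (n_paths, n_steps), uniforms in [0, 1)

    a = np.full(n_paths, A0, dtype=float)
    for step in range(n_steps):
        mean_firings = k * a * dt                        # propensity * tau per path
        firings = poisson.ppf(u[:, step], mean_firings)  # inverse-transform Poisson
        a = np.maximum(a - firings, 0.0)                 # apply stoichiometry

    print("QMC estimate of E[A(T)]:", a.mean())
    ```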

  11. Computational Issues Associated with Temporally Deforming Geometries Such as Thrust Vectoring Nozzles

    NASA Technical Reports Server (NTRS)

    Boyalakuntla, Kishore; Soni, Bharat K.; Thornburg, Hugh J.; Yu, Robert

    1996-01-01

    During the past decade, computational simulation of fluid flow around complex configurations has progressed significantly and many notable successes have been reported; however, unsteady time-dependent solutions are not easily obtainable. The present effort involves unsteady time-dependent simulation of temporally deforming geometries. Grid generation for a complex configuration can be a time-consuming process, and temporally varying geometries necessitate the regeneration of such grids for every time step. Traditional grid generation techniques have been tried and demonstrated to be inadequate for such simulations. Non-Uniform Rational B-spline (NURBS) based techniques provide a compact and accurate representation of the geometry. This definition can be coupled with a distribution mesh for a user-defined spacing. The present method greatly reduces CPU requirements for time-dependent remeshing, facilitating the simulation of more complex unsteady problems. A thrust vectoring nozzle has been chosen to demonstrate the capability, as it is of current interest in the aerospace industry for better maneuverability of fighter aircraft in close combat and in post-stall regimes. This effort is the first step towards multidisciplinary design optimization, which involves coupling aerodynamic, heat transfer and structural analysis techniques. Applications include simulation of temporally deforming bodies and aeroelastic problems.

  12. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3-D and Virtual Reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one-thousandth to one-millionth of that at the Earth's surface (10^-3 to 10^-6 G), thus one must be able to generate simulations that replicate those microgravity effects upon simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3-D graphic simulation to space-related applications. This paper discusses the problem and possible solutions to allow microgravity 3-D/VR simulations to be completed successfully without program code modifications.

  13. Galileo and optical illusion

    NASA Astrophysics Data System (ADS)

    Parker, Gary D.

    1986-03-01

    Galileo's earliest telescopic measurements are of sufficient quality that their detailed analysis yields scientifically interesting and pedagogically useful results. An optical illusion strongly influences Galileo's observations of Jupiter's moons, as published in the Starry Messenger. A simple procedure identifies individual satellites with sufficient reliability to demonstrate that Galileo regularly underestimated satellite brightness and overestimated elongation when a satellite was very close to Jupiter. The probability of underestimation is a monotonically decreasing function of separation angle, both for Galileo and for viewers of a laboratory simulation of the Jupiter "starfield" viewed by Galileo. Analysis of Galileo's records and a simple simulation experiment appropriate to undergraduate courses clarify the scientific problems facing Galileo in interpreting his observations.

  14. Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments

    PubMed Central

    Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria

    2015-01-01

    Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous media, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162

  15. Analysis and design of a capsule landing system and surface vehicle control system for Mars exploration

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. V.; Yerazunis, S. W.

    1973-01-01

    Problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars are reported. Problem areas include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis, terrain modeling and path selection; and chemical analysis of specimens. These tasks are summarized: vehicle model design, mathematical model of vehicle dynamics, experimental vehicle dynamics, obstacle negotiation, electrochemical controls, remote control, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, and chromatograph model evaluation and improvement.

  16. Part 1. The GLOSAS Project.

    ERIC Educational Resources Information Center

    Utsumi, Takeshi; Mogalhaes, Maria Rosa Abreu

    1993-01-01

    Describes accomplishments of the Global Systems Analysis and Simulation (GLOSAS) project from 1973 to the present, including a system for global peace gaming. The capabilities of interactive multimedia to link people across political and geographic boundaries for joint study, debate, research, planetary problem solving, and political action are…

  17. Transient modeling/analysis of hyperbolic heat conduction problems employing mixed implicit-explicit alpha method

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; D'Costa, Joseph F.

    1991-01-01

    This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
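
    For orientation, the non-Fourier model class referred to here is commonly written with a Cattaneo-Vernotte flux law; with relaxation time τ and thermal diffusivity α, the temperature field obeys the hyperbolic (telegraph-type) equation

    ```latex
    \tau \, \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \alpha \, \nabla^2 T ,
    ```

    so that thermal disturbances propagate with the finite speed \sqrt{\alpha/\tau} rather than instantaneously, as in the parabolic Fourier model.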

  18. Convergence and rate analysis of neural networks for sparse approximation.

    PubMed

    Balavoine, Aurèle; Romberg, Justin; Rozell, Christopher J

    2012-09-01

    We present an analysis of the Locally Competitive Algorithm (LCA), which is a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few nonzero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
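
    For readers unfamiliar with the architecture, the LCA dynamics are usually written as follows (this is the standard form from the LCA literature; the paper's notation may differ): with dictionary Φ, input y, and threshold function T_λ,

    ```latex
    \tau \, \dot{u}(t) = -u(t) + \Phi^{\mathsf{T}} y - \left(\Phi^{\mathsf{T}}\Phi - I\right) a(t),
    \qquad a(t) = T_{\lambda}\!\big(u(t)\big),
    ```

    where u(t) are the internal node states and a(t) the output coefficients; fixed points of these dynamics correspond to critical points of the sparse approximation objective.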

  19. The QuakeSim Project: Numerical Simulations for Active Tectonic Processes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry

    2004-01-01

    In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.

  20. Dynamic and fluid-structure interaction simulations of bioprosthetic heart valves using parametric design with T-splines and Fung-type material models

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Chen; Kamensky, David; Xu, Fei; Kiendl, Josef; Wang, Chenglong; Wu, Michael C. H.; Mineroff, Joshua; Reali, Alessandro; Bazilevs, Yuri; Sacks, Michael S.

    2015-06-01

    This paper builds on a recently developed immersogeometric fluid-structure interaction (FSI) methodology for bioprosthetic heart valve (BHV) modeling and simulation. It enhances the proposed framework in the areas of geometry design and constitutive modeling. With these enhancements, BHV FSI simulations may be performed with greater levels of automation, robustness and physical realism. In addition, the paper presents a comparison between FSI analysis and standalone structural dynamics simulation driven by prescribed transvalvular pressure, the latter being a more common modeling choice for this class of problems. The FSI computation achieved better physiological realism in predicting the valve leaflet deformation than its standalone structural dynamics counterpart.

  1. Aerodynamic preliminary analysis system. Part 1: Theory. [linearized potential theory

    NASA Technical Reports Server (NTRS)

    Bonner, E.; Clever, W.; Dunn, K.

    1978-01-01

    A comprehensive aerodynamic analysis program based on linearized potential theory is described. The solution treats thickness and attitude problems at subsonic and supersonic speeds. Three-dimensional configurations, with or without jet flaps, having multiple non-planar surfaces of arbitrary planform, and open or closed slender bodies of non-circular contour, may be analyzed. Longitudinal and lateral-directional static and rotary derivative solutions may be generated. The analysis was implemented on a time-sharing system in conjunction with an input tablet digitizer and an interactive graphics input/output display and editing terminal to maximize its responsiveness to the preliminary analysis problem. A nominal case computation time of 45 CPU seconds on the CDC 175 for a 200-panel simulation indicates that the program provides an efficient analysis for systematically performing various aerodynamic configuration tradeoff and evaluation studies.

  2. Dust Dynamics in Protoplanetary Disks: Parallel Computing with PVM

    NASA Astrophysics Data System (ADS)

    de La Fuente Marcos, Carlos; Barge, Pierre; de La Fuente Marcos, Raúl

    2002-03-01

    We describe a parallel version of our high-order-accuracy particle-mesh code for the simulation of collisionless protoplanetary disks. We use this code to carry out a massively parallel, two-dimensional, time-dependent, numerical simulation, which includes dust particles, to study the potential role of large-scale, gaseous vortices in protoplanetary disks. This noncollisional problem is easy to parallelize on message-passing multicomputer architectures. We performed the simulations on a cache-coherent nonuniform memory access Origin 2000 machine, using both the parallel virtual machine (PVM) and message-passing interface (MPI) message-passing libraries. Our performance analysis suggests that, for our problem, PVM is about 25% faster than MPI. Using PVM and MPI made it possible to reduce CPU time and increase code performance. This allows for simulations with a large number of particles (N ~ 10^5-10^6) in reasonable CPU times. The performances of our implementation of the parallel code on an Origin 2000 supercomputer are presented and discussed. They exhibit very good speedup behavior and low load unbalancing. Our results confirm that giant gaseous vortices can play a dominant role in giant planet formation.

  3. CAPRI: Using a Geometric Foundation for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2002-01-01

    CAPRI (Computational Analysis Programming Interface) is a software development tool intended to make computerized design, simulation and analysis faster and more efficient. The computational steps traditionally taken for most engineering analysis (Computational Fluid Dynamics (CFD), structural analysis, etc.) are: Surface Generation, usually by employing a Computer Aided Design (CAD) system; Grid Generation, preparing the volume for the simulation; Flow Solver, producing the results at the specified operational point; and Post-processing Visualization, interactively attempting to understand the results. It should be noted that the structures problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length-scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could go to IGES files. The output from grid generators and solvers does not really have standards, though there are a couple of file formats that can be used for a subset of the gridding data (i.e., PLOT3D data formats and the upcoming CGNS). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Instead of the serial approach to analysis, CAPRI takes a geometry-centric approach. CAPRI is a software building tool-kit that refers to two ideas: (1) a simplified, object-oriented, hierarchical view of a solid part integrating both geometry and topology definitions, and (2) programming access to this part or assembly and any attached data. The connection to the geometry is made through an Application Programming Interface (API) and not a file system.

  4. A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review

    PubMed Central

    Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha

    2017-01-01

    Objective: To systematically review the current literature on simulation in healthcare, including the structured steps in the emergency healthcare sector, by proposing a framework for simulation in the emergency department. Methods: For the purpose of collecting the data, the PubMed and ACM databases were searched for the years 2003 to 2013. The inclusion criteria were English-written articles available in full text with the closest objectives, from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. Results: The studies focused on the reduction of waiting time and patient stay, optimization of resource allocation, creation of crisis and maximum-demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after an expert evaluation. Conclusion: The 10-step approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems. PMID:28507994

  5. A coherent Ising machine for 2000-node optimization problems

    NASA Astrophysics Data System (ADS)

    Inagaki, Takahiro; Haribara, Yoshitaka; Igarashi, Koji; Sonobe, Tomohiro; Tamate, Shuhei; Honjo, Toshimori; Marandi, Alireza; McMahon, Peter L.; Umeki, Takeshi; Enbutsu, Koji; Tadanaga, Osamu; Takenouchi, Hirokazu; Aihara, Kazuyuki; Kawarabayashi, Ken-ichi; Inoue, Kyo; Utsunomiya, Shoko; Takesue, Hiroki

    2016-11-01

    The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
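
    The mapping used here is the standard one between ground states of the Ising model and maximum cuts: for a graph with edge weights w_ij and spins s_i,

    ```latex
    H(\mathbf{s}) = -\sum_{i<j} J_{ij}\, s_i s_j , \qquad
    \mathrm{cut}(\mathbf{s}) = \tfrac{1}{2}\sum_{i<j} w_{ij}\,\left(1 - s_i s_j\right), \qquad s_i \in \{-1, +1\},
    ```

    so choosing J_ij = -w_ij makes minimizing the Ising energy equivalent to maximizing the cut.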

  6. The 1981 NASA ASEE Summer Faculty Fellowship Program, volume 2

    NASA Technical Reports Server (NTRS)

    Robertson, N. G.; Huang, C. J.

    1981-01-01

    A collection of papers on miscellaneous subjects in aerospace research is presented. Topics discussed are: (1) Langmuir probe theory and the problem of anisotropic collection; (2) anthropometric program analysis of reach and body movement; (3) analysis of IV characteristics of negatively biased panels in a magnetoplasma; (4) analytic solution to classical two body drag problem; (5) fast variable step size integration algorithm for computer simulations of physiological systems; (6) spectroscopic experimental computer assisted empirical model for the production of energetics of excited oxygen molecules formed by atom recombination shuttle tile surfaces; and (7) capillary priming characteristics of dual passage heat pipe in zero-g.

  7. Command Process Modeling & Risk Analysis

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila

    2011-01-01

    Commanding Errors may be caused by a variety of root causes. It's important to understand the relative significance of each of these causes for making institutional investment decisions. One of these causes is the lack of standardized processes and procedures for command and control. We mitigate this problem by building periodic tables and models corresponding to key functions within it. These models include simulation analysis and probabilistic risk assessment models.

  8. Implementation of Tree and Butterfly Barriers with Optimistic Time Management Algorithms for Discrete Event Simulation

    NASA Astrophysics Data System (ADS)

    Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia

    The Time Wrap algorithm [3] offers a run time recovery mechanism that deals with the causality errors. These run time recovery mechanisms consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT which is used in discrete-event simulation to reclaim the memory, commit the output, detect the termination, and handle the errors. However, the computation of GVT requires dealing with transient message problem and the simultaneous reporting problem. These problems can be dealt in an efficient manner by the Samadi's algorithm [8] which works fine in the presence of causality errors. However, the performance of both Time Wrap and Samadi's algorithms depends on the latency involve in GVT computation. Both algorithms give poor latency for large simulation systems especially in the presence of causality errors. To improve the latency and reduce the processor ideal time, we implement tree and butterflies barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.

  9. Estimation of absolute solvent and solvation shell entropies via permutation reduction

    NASA Astrophysics Data System (ADS)

    Reinhard, Friedemann; Grubmüller, Helmut

    2007-01-01

    Despite its prominent contribution to the free energy of solvated macromolecules such as proteins or DNA, and although principally contained within molecular dynamics simulations, the entropy of the solvation shell is inaccessible to straightforward application of established entropy estimation methods. The complication is twofold. First, the configurational space density of such systems is too complex for a sufficiently accurate fit. Second, and in contrast to the internal macromolecular dynamics, the configurational space volume explored by the diffusive motion of the solvent molecules is too large to be exhaustively sampled by current simulation techniques. Here, we develop a method to overcome the second problem and to significantly alleviate the first one. We propose to exploit the permutation symmetry of the solvent by transforming the trajectory in a way that renders established estimation methods applicable, such as the quasiharmonic approximation or principal component analysis. Our permutation-reduced approach involves a combinatorial problem, which is solved through its equivalence with the linear assignment problem, for which O(N3) methods exist. From test simulations of dense Lennard-Jones gases, enhanced convergence and improved entropy estimates are obtained. Moreover, our approach renders diffusive systems accessible to improved fit functions.

  10. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Hooper, Russell W.

    2016-10-04

    In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers. More specifically, the CASL VUQ Strategy [33] prescribes the use of Predictive Capability Maturity Model (PCMM) assessments [37]. PCMM is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. Exercising a computational model with the methodsmore » in Dakota will yield, in part, evidence for a predictive capability maturity model (PCMM) assessment. Table 1.1 summarizes some key predictive maturity related activities (see details in [33]), with examples of how Dakota fits in. This manual offers CASL partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.« less

  11. Solar Heating And Cooling Of Buildings (SHACOB): Requirements definition and impact analysis-2. Volume 2: Domestic hot water systems

    NASA Astrophysics Data System (ADS)

    Cretcher, C. K.

    1980-11-01

    The various types of solar domestic hot water systems are discussed including their advantages and disadvantages. The problems that occur in hydronic solar heating systems are reviewed with emphasis on domestic hot water applicatons. System problems in retrofitting of residential buildings are also discussed including structural and space constraints for various components and subsystems. System design parameters include various collector sizing methods, collector orientation, storage capacity and heat loss from pipes and tanks. The installation costs are broken down by components and subsystems. The approach used for utility economic impact analysis is reviewed. The simulation is described, and the results of the economic impact analysis are given. A summary assessment is included.

  12. Application of dynamic Monte Carlo technique in proton beam radiotherapy using Geant4 simulation toolkit

    NASA Astrophysics Data System (ADS)

    Guan, Fada

    Monte Carlo method has been successfully applied in simulating the particles transport problems. Most of the Monte Carlo simulation tools are static and they can only be used to perform the static simulations for the problems with fixed physics and geometry settings. Proton therapy is a dynamic treatment technique in the clinical application. In this research, we developed a method to perform the dynamic Monte Carlo simulation of proton therapy using Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled in this research. One important application of the Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but only the average dose over the whole organ or tissue can be obtained rather than the accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into the patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. A data analysis tool---root was used to score the simulation results during a Geant4 simulation and to analyze the data and plot results after simulation. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body after treating a patient with prostate cancer using proton therapy.

  13. The Effective-One-Body Approach to the General Relativistic Two Body Problem

    NASA Astrophysics Data System (ADS)

    Damour, Thibault; Nagar, Alessandro

    The two-body problem in General Relativity has been the subject of many analytical investigations. After reviewing some of the methods used to tackle this problem (and, more generally, the N-body problem), we focus on a new, recently introduced approach to the motion and radiation of (comparable mass) binary systems: the Effective One Body (EOB) formalism. We review the basic elements of this formalism, and discuss some of its recent developments. Several recent comparisons between EOB predictions and Numerical Relativity (NR) simulations have shown the aptitude of the EOB formalism to provide accurate descriptions of the dynamics and radiation of various binary systems (comprising black holes or neutron stars) in regimes that are inaccessible to other analytical approaches (such as the last orbits and the merger of comparable mass black holes). In synergy with NR simulations, post-Newtonian (PN) theory and Gravitational Self-Force (GSF) computations, the EOB formalism is likely to provide an efficient way of computing the very many accurate template waveforms that are needed for Gravitational Wave (GW) data analysis purposes.

  14. Characteristics of Reduction Gear in Electric Agricultural Vehicle

    NASA Astrophysics Data System (ADS)

    Choi, W. S.; Pratama, P. S.; Supeno, D.; Jeong, S. W.; Byun, J. Y.; Woo, J. H.; Lee, E. S.; Park, C. S.

    2018-03-01

    In electric agricultural machine a reduction gear is needed to convert the high speed rotation motion generated by DC motor to lower speed rotation motion used by the vehicle. The reduction gear consists of several spur gears. Spur gears are the most easily visualized gears that transmit motion between two parallel shafts and easy to produce. The modelling and simulation of spur gears in DC motor reduction gear is important to predict the actual motion behaviour. A pair of spur gear tooth in action is generally subjected to two types of cyclic stress: contact stress and bending stress. The stress may not attain their maximum values at the same point of contact fatigue. These types of failure can be minimized by analysis of the problem during the design stage and creating proper tooth surface profile with proper manufacturing methods. To improve its life expectation in this study modal and stress analysis of reduction gear is simulated using ANSYS workbench based on finite element method (FEM). The modal analysis was done to understand reduction gear deformation behaviour when vibration occurs. FEM static stress analysis is also simulated on reduction gear to simulate the gear teeth bending stress and contact stress behaviour.

  15. Effects of Simulation With Problem-Based Learning Program on Metacognition, Team Efficacy, and Learning Attitude in Nursing Students: Nursing Care With Increased Intracranial Pressure Patient.

    PubMed

    Lee, Myung-Nam; Nam, Kyung-Dong; Kim, Hyeon-Young

    2017-03-01

    Nursing care for patients with central nervous system problems requires advanced professional knowledge and care skills. Nursing students are more likely to have difficulty in dealing with adult patients who have severe neurological problems in clinical practice. This study investigated the effect on the metacognition, team efficacy, and learning attitude of nursing students after an integrated simulation and problem-based learning program. A real scenario of a patient with increased intracranial pressure was simulated for the students. The results showed that this method was effective in improving the metacognitive ability of the students. Furthermore, we used this comprehensive model of simulation with problem-based learning in order to assess the consequences of student satisfaction with the nursing major, interpersonal relationships, and importance of simulation-based education in relation to the effectiveness of the integrated simulation with problem-based learning. The results can be used to improve the design of clinical practicum and nursing education.

  16. Inchworm movement of two rings switching onto a thread by biased Brownian diffusion represent a three-body problem.

    PubMed

    Benson, Christopher R; Maffeo, Christopher; Fatila, Elisabeth M; Liu, Yun; Sheetz, Edward G; Aksimentiev, Aleksei; Singharoy, Abhishek; Flood, Amar H

    2018-05-07

    The coordinated motion of many individual components underpins the operation of all machines. However, despite generations of experience in engineering, understanding the motion of three or more coupled components remains a challenge, known since the time of Newton as the "three-body problem." Here, we describe, quantify, and simulate a molecular three-body problem of threading two molecular rings onto a linear molecular thread. Specifically, we use voltage-triggered reduction of a tetrazine-based thread to capture two cyanostar macrocycles and form a [3]pseudorotaxane product. As a consequence of the noncovalent coupling between the cyanostar rings, we find the threading occurs by an unexpected and rare inchworm-like motion where one ring follows the other. The mechanism was derived from controls, analysis of cyclic voltammetry (CV) traces, and Brownian dynamics simulations. CVs from two noncovalently interacting rings match that of two covalently linked rings designed to thread via the inchworm pathway, and they deviate considerably from the CV of a macrocycle designed to thread via a stepwise pathway. Time-dependent electrochemistry provides estimates of rate constants for threading. Experimentally derived parameters (energy wells, barriers, diffusion coefficients) helped determine likely pathways of motion with rate-kinetics and Brownian dynamics simulations. Simulations verified intercomponent coupling could be separated into ring-thread interactions for kinetics, and ring-ring interactions for thermodynamics to reduce the three-body problem to a two-body one. Our findings provide a basis for high-throughput design of molecular machinery with multiple components undergoing coupled motion.

  17. Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning

    NASA Astrophysics Data System (ADS)

    Schumacher, André; Haanpää, Harri

    We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with ns2 simulations.

  18. International Symposium on Numerical Methods in Engineering, 5th, Ecole Polytechnique Federale de Lausanne, Switzerland, Sept. 11-15, 1989, Proceedings. Volumes 1 & 2

    NASA Astrophysics Data System (ADS)

    Gruber, Ralph; Periaux, Jaques; Shaw, Richard Paul

    Recent advances in computational mechanics are discussed in reviews and reports. Topics addressed include spectral superpositions on finite elements for shear banding problems, strain-based finite plasticity, numerical simulation of hypersonic viscous continuum flow, constitutive laws in solid mechanics, dynamics problems, fracture mechanics and damage tolerance, composite plates and shells, contact and friction, metal forming and solidification, coupling problems, and adaptive FEMs. Consideration is given to chemical flows, convection problems, free boundaries and artificial boundary conditions, domain-decomposition and multigrid methods, combustion and thermal analysis, wave propagation, mixed and hybrid FEMs, integral-equation methods, optimization, software engineering, and vector and parallel computing.

  19. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  20. Some Computer-Based Developments in Sociology.

    ERIC Educational Resources Information Center

    Heise, David R.; Simmons, Roberta G.

    1985-01-01

    Discusses several ways in which computers are being used in sociology and how they continue to change this discipline. Areas considered include data collection, data analysis, simulations of social processes based on mathematical models, and problem areas (including standardization concerns, training, and the financing of computing facilities).…

  1. Causal Reasoning in Medicine: Analysis of a Protocol.

    ERIC Educational Resources Information Center

    Kuipers, Benjamin; Kassirer, Jerome P.

    1984-01-01

    Describes the construction of a knowledge representation from the identification of the problem (nephrotic syndrome) to a running computer simulation of causal reasoning to provide a vertical slice of the construction of a cognitive model. Interactions between textbook knowledge, observations of human experts, and computational requirements are…

  2. Space construction base control system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Aspects of an attitude control system were studied and developed for a large space base that is structurally flexible and whose mass properties change rather dramatically during its orbital lifetime. Topics of discussion include the following: (1) space base orbital pointing and maneuvering; (2) angular momentum sizing of actuators; (3) momentum desaturation selection and sizing; (4) multilevel control technique applied to configuration one; (5) one-dimensional model simulation; (6) N-body discrete coordinate simulation; (7) structural analysis math model formulation; and (8) discussion of control problems and control methods.

  3. Spacecraft orbit/earth scan derivations, associated APL program, and application to IMP-6

    NASA Technical Reports Server (NTRS)

    Smith, G. A.

    1971-01-01

    The derivation of a time shared, remote site, demand processed computer program is discussed. The computer program analyzes the effects of selected orbit, attitude, and spacecraft parameters on earth sensor detections of earth. For prelaunch analysis, the program may be used to simulate effects in nominal parameters which are used in preparing attitude data processing programs. After launch, comparison of results from a simulation and from satellite data will produce deviations helpful in isolating problems.

  4. Program Manager: The Journal of the Defense Systems Management College. Volume 15, Number 4, July-August 1986.

    DTIC Science & Technology

    1986-08-01

    Architect, troi systems, CAD CAM, and com- functional analysis , synthesis, and National Bureau of Standards, mon engineering data bases will be the trade...Recurrent analysis of a management these s h e m evolutionary chain of data processing problem combining real data and ponents of defense support system...at the Defense first constructed his support simulator Systems Management College, the by assembling appropriate analysis Data Storage and Retrieval

  5. Display analysis with the optimal control model of the human operator. [pilot-vehicle display interface and information processing

    NASA Technical Reports Server (NTRS)

    Baron, S.; Levison, W. H.

    1977-01-01

    Application of the optimal control model of the human operator to problems in display analysis is discussed. Those aspects of the model pertaining to the operator-display interface and to operator information processing are reviewed and discussed. The techniques are then applied to the analysis of advanced display/control systems for a Terminal Configured Vehicle. Model results are compared with those obtained in a large, fixed-base simulation.

  6. Design of FPGA ICA for hyperspectral imaging processing

    NASA Astrophysics Data System (ADS)

    Nordin, Anis; Hsu, Charles C.; Szu, Harold H.

    2001-03-01

    The remote sensing problem which uses hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra which indicate the different material present in the pixel. This can be further used to deduce areas which contain forest, water or biomass, without even knowing the sources which constitute the image. This form of remote sensing allows previously blurred images to show the specific terrain involved in that region. The blind source separation problem can be implemented using an Independent Component Analysis algorithm. The ICA Algorithm has previously been successfully implemented using software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in a form of hardware, or firmware in order to improve its computational speed. Hardware implementation also solves insufficient memory problem encountered by software packages like MATLAB when employing ICA for high resolution images and a large number of channels. Here, a pipelined solution of the firmware, realized using FPGAs are drawn out and simulated using C. Since C code can be translated into HDLs or be used directly on the FPGAs, it can be used to simulate its actual implementation in hardware. The simulated results of the program is presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.

  7. Multi-objective Analysis for a Sequencing Planning of Mixed-model Assembly Line

    NASA Astrophysics Data System (ADS)

    Shimizu, Yoshiaki; Waki, Toshiya; Yoo, Jae Kyu

    Diversified customer demands are raising importance of just-in-time and agile manufacturing much more than before. Accordingly, introduction of mixed-model assembly lines becomes popular to realize the small-lot-multi-kinds production. Since it produces various kinds on the same assembly line, a rational management is of special importance. With this point of view, this study focuses on a sequencing problem of mixed-model assembly line including a paint line as its preceding process. By taking into account the paint line together, reducing work-in-process (WIP) inventory between these heterogeneous lines becomes a major concern of the sequencing problem besides improving production efficiency. Finally, we have formulated the sequencing problem as a bi-objective optimization problem to prevent various line stoppages, and to reduce the volume of WIP inventory simultaneously. Then we have proposed a practical method for the multi-objective analysis. For this purpose, we applied the weighting method to derive the Pareto front. Actually, the resulting problem is solved by a meta-heuristic method like SA (Simulated Annealing). Through numerical experiments, we verified the validity of the proposed approach, and discussed the significance of trade-off analysis between the conflicting objectives.

  8. Use of a Computer Simulation To Develop Mental Simulations for Understanding Relative Motion Concepts.

    ERIC Educational Resources Information Center

    Monaghan, James M.; Clement, John

    1999-01-01

    Presents evidence for students' qualitative and quantitative difficulties with apparently simple one-dimensional relative-motion problems, students' spontaneous visualization of relative-motion problems, the visualizations facilitating solution of these problems, and students' memories of the online computer simulation used as a framework for…

  9. Transient rotordynamic analysis for the space-shuttle main engine high-pressure oxygen turbopump

    NASA Technical Reports Server (NTRS)

    Childs, D. W.

    1974-01-01

    A simulation study was conducted to examine the transient rotordynamics of the space shuttle main engine (SSME) high pressure oxygen turbopump (HPOTP) with the objective of identifying, anticipating, and avoiding rotordynamic problem areas. Simulations were performed for steady state operations at emergency power levels and for critical speed transitions. No problems are indicated in steady state operation of the HPOTP emergency power levels, although the results indicated that a rubbing condition will be experienced during critical speed transition at shutdown, particularly involving rotor deceleration rate and imbalance distribution rubbing at the turbine floating-ring seals. The condition is correctable by either reducing the imbalance at the HPOTP hot gas turbine wheels, or by a more rapid deceleration of the rotor through it critical speed.

  10. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulationmore » capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.« less

  11. Finite-element analysis of dynamic fracture

    NASA Technical Reports Server (NTRS)

    Aberson, J. A.; Anderson, J. M.; King, W. W.

    1976-01-01

    Applications of the finite element method to the two dimensional elastodynamics of cracked structures are presented. Stress intensity factors are computed for two problems involving stationary cracks. The first serves as a vehicle for discussing lumped-mass and consistent-mass characterizations of inertia. In the second problem, the behavior of a photoelastic dynamic tear test specimen is determined for the time prior to crack propagation. Some results of a finite element simulation of rapid crack propagation in an infinite body are discussed.

  12. Automated electric power management and control for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Mellor, Pamela A.; Kish, James A.

    1990-01-01

    A comprehensive automation design is being developed for Space Station Freedom's electric power system. It strives to increase station productivity by applying expert systems and conventional algorithms to automate power system operation. An integrated approach to the power system command and control problem is defined and used to direct technology development in: diagnosis, security monitoring and analysis, battery management, and cooperative problem-solving for resource allocation. The prototype automated power system is developed using simulations and test-beds.

  13. Testing the World with Simulations.

    ERIC Educational Resources Information Center

    Roberts, Nancy

    1983-01-01

    Discusses steps involved in model building and simulation: understanding a problem, building a model, and simulation. Includes a mathematical model (focusing on a problem dealing with influenza) written in the DYNAMO computer language, developed specifically for writing simulation models. (Author/JN)

  14. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  15. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  16. IGA-ADS: Isogeometric analysis FEM using ADS solver

    NASA Astrophysics Data System (ADS)

    Łoś, Marcin M.; Woźniak, Maciej; Paszyński, Maciej; Lenharth, Andrew; Hassaan, Muhamm Amber; Pingali, Keshav

    2017-08-01

    In this paper we present a fast explicit solver for solution of non-stationary problems using L2 projections with isogeometric finite element method. The solver has been implemented within GALOIS framework. It enables parallel multi-core simulations of different time-dependent problems, in 1D, 2D, or 3D. We have prepared the solver framework in a way that enables direct implementation of the selected PDE and corresponding boundary conditions. In this paper we describe the installation, implementation of exemplary three PDEs, and execution of the simulations on multi-core Linux cluster nodes. We consider three case studies, including heat transfer, linear elasticity, as well as non-linear flow in heterogeneous media. The presented package generates output suitable for interfacing with Gnuplot and ParaView visualization software. The exemplary simulations show near perfect scalability on Gilbert shared-memory node with four Intel® Xeon® CPU E7-4860 processors, each possessing 10 physical cores (for a total of 40 cores).

  17. On long-time instabilities in staggered finite difference simulations of the seismic acoustic wave equations on discontinuous grids

    NASA Astrophysics Data System (ADS)

    Gao, Longfei; Ketcheson, David; Keyes, David

    2018-02-01

    We consider the long-time instability issue associated with finite difference simulation of seismic acoustic wave equations on discontinuous grids. This issue is exhibited by a prototype algebraic problem abstracted from practical application settings. Analysis of this algebraic problem leads to better understanding of the cause of the instability and provides guidance for its treatment. Specifically, we use the concept of discrete energy to derive the proper solution transfer operators and design an effective way to damp the unstable solution modes. Our investigation shows that the interpolation operators need to be matched with their companion restriction operators in order to properly couple the coarse and fine grids. Moreover, to provide effective damping, specially designed diffusive terms are introduced to the equations at designated locations and discretized with specially designed schemes. These techniques are applied to simulations in practical settings and are shown to lead to superior results in terms of both stability and accuracy.

  18. An advection-diffusion-reaction size-structured fish population dynamics model combined with a statistical parameter estimation procedure: application to the Indian ocean skipjack tuna fishery.

    PubMed

    Faugeras, Blaise; Maury, Olivier

    2005-10-01

    We develop an advection-diffusion size-structured fish population dynamics model and apply it to simulate the skipjack tuna population in the Indian Ocean. The model is fully spatialized, and movements are parameterized with oceanographical and biological data; thus it naturally reacts to environment changes. We first formulate an initial-boundary value problem and prove existence of a unique positive solution. We then discuss the numerical scheme chosen for the integration of the simulation model. In a second step we address the parameter estimation problem for such a model. With the help of automatic differentiation, we derive the adjoint code which is used to compute the exact gradient of a Bayesian cost function measuring the distance between the outputs of the model and catch and length frequency data. A sensitivity analysis shows that not all parameters can be estimated from the data. Finally twin experiments in which pertubated parameters are recovered from simulated data are successfully conducted.

  19. Classifier utility modeling and analysis of hypersonic inlet start/unstart considering training data costs

    NASA Astrophysics Data System (ADS)

    Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen

    2011-11-01

    Start/unstart detection is one of the most important issues of hypersonic inlets and is also the foundation of protection control of scramjet. The inlet start/unstart detection can be attributed to a standard pattern classification problem, and the training sample costs have to be considered for the classifier modeling as the CFD numerical simulations and wind tunnel experiments of hypersonic inlets both cost time and money. To solve this problem, the CFD simulation of inlet is studied at first step, and the simulation results could provide the training data for pattern classification of hypersonic inlet start/unstart. Then the classifier modeling technology and maximum classifier utility theories are introduced to analyze the effect of training data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of hypersonic inlet start/unstart classifier can be obtained by the maximum classifier utility theories.

  20. Fast simulation of packet loss rates in a shared buffer communications switch

    NASA Technical Reports Server (NTRS)

    Chang, Cheng-Shang; Heidelberger, Philip; Shahabuddin, Perwez

    1993-01-01

    This paper describes an efficient technique for estimating, via simulation, the probability of buffer overflows in a queueing model that arises in the analysis of ATM (Asynchronous Transfer Mode) communication switches. There are multiple streams of (autocorrelated) traffic feeding the switch that has a buffer of finite capacity. Each stream is designated as either being of high or low priority. When the queue length reaches a certain threshold, only high priority packets are admitted to the switch's buffer. The problem is to estimate the loss rate of high priority packets. An asymptotically optimal importance sampling approach is developed for this rare event simulation problem. In this approach, the importance sampling is done in two distinct phases. In the first phase, an importance sampling change of measure is used to bring the queue length up to the threshold at which low priority packets get rejected. In the second phase, a different importance sampling change of measure is used to move the queue length from the threshold to the buffer capacity.

  1. How sleep problems contribute to simulator sickness: Preliminary results from a realistic driving scenario.

    PubMed

    Altena, Ellemarije; Daviaux, Yannick; Sanz-Arigita, Ernesto; Bonhomme, Emilien; de Sevin, Étienne; Micoulaud-Franchi, Jean-Arthur; Bioulac, Stéphanie; Philip, Pierre

    2018-04-17

    Virtual reality and simulation tools enable us to assess daytime functioning in environments that simulate real life as close as possible. Simulator sickness, however, poses a problem in the application of these tools, and has been related to pre-existing health problems. How sleep problems contribute to simulator sickness has not yet been investigated. In the current study, 20 female chronic insomnia patients and 32 female age-matched controls drove in a driving simulator covering realistic city, country and highway scenes. Fifty percent of the insomnia patients as opposed to 12.5% of controls reported excessive simulator sickness leading to experiment withdrawal. In the remaining participants, patients with insomnia showed overall increased levels of oculomotor symptoms even before driving, while nausea symptoms further increased after driving. These results, as well as the realistic simulation paradigm developed, give more insight on how vestibular and oculomotor functions as well as interoceptive functions are affected in insomnia. Importantly, our results have direct implications for both the actual driving experience and the wider context of deploying simulation techniques to mimic real life functioning, in particular in those professions often exposed to sleep problems. © 2018 European Sleep Research Society.

  2. Problem-Solving in the Pre-Clinical Curriculum: The Uses of Computer Simulations.

    ERIC Educational Resources Information Center

    Michael, Joel A.; Rovick, Allen A.

    1986-01-01

    Promotes the use of computer-based simulations in the pre-clinical medical curriculum as a means of providing students with opportunities for problem solving. Describes simple simulations of skeletal muscle loads, complex simulations of major organ systems and comprehensive simulation models of the entire human body. (TW)

  3. Educational Technology in Military Training Applications: A Current Assessment.

    ERIC Educational Resources Information Center

    Platt, William A.; Andrews, Dee H.

    This chapter considers the history of instructional development (ID) in the military, with particular emphasis on the U.S. Navy. The ID process used at the Navy's Instructional Program Development Centers is presented, including the process for simulator development. An in-depth analysis of the problems encountered with educational technology…

  4. An Instructional Systems Technology Model for Institutional Change.

    ERIC Educational Resources Information Center

    Dudgeon, Paul J.

    A program based on instructional systems technology was developed at Canadore College as a means of devising the optimal learning experience for each individual student. The systems approach is used to solve educational problems through a process of analysis, synthesis, modeling, and simulation, based on the LOGOS (Language for Optimizing…

  5. Analysis of equi-intensity curves and NU distribution of EAS

    NASA Technical Reports Server (NTRS)

    Tanahashi, G.

    1985-01-01

    The distribution of the number of muons in extensive air showers (EAS) and the equi-intensity curves of EAS are analyzed on the basis of Monte Carlo simulation of various cosmic ray composition and the interaction models. Problems in the two best combined models are discussed.

  6. Computational Methods for Predictive Simulation of Stochastic Turbulence Systems

    DTIC Science & Technology

    2015-11-05

    Science and Engineering, Venice , Italy, May 18-20, 2015, pp. 1261-1272. [21] Yong Li and P.D. Williams Analysis of the RAW Filter in Composite-Tendency...leapfrog scheme, Proceedings of the VI Conference on Computational Methods for Coupled Problems in Science and Engineering, Venice , Italy, May 18-20

  7. Comparative Analysis of Palm and Wearable Computers for Participatory Simulations

    ERIC Educational Resources Information Center

    Klopfer, Eric; Yoon, Susan; Rivas, Luz

    2004-01-01

    Recent educational computer-based technologies have offered promising lines of research that promote social constructivist learning goals, develop skills required to operate in a knowledge-based economy (Roschelle et al. 2000), and enable more authentic science-like problem-solving. In our research programme, we have been interested in combining…

  8. A comparative study of history-based versus vectorized Monte Carlo methods in the GPU/CUDA environment for a simple neutron eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.

    2014-06-01

    For nuclear reactor analysis such as the neutron eigenvalue calculations, the time consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on a NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.

  9. Preliminary Analysis of Perfusionists’ Strategies for Managing Routine and Failure Mode Scenarios in Cardiopulmonary Bypass

    PubMed Central

    Power, Gerald; Miller, Anne

    2007-01-01

    Abstract: Cardiopulmonary bypass (CPB) is a complex task requiring high levels of practitioner expertise. Although some education standards exist, few are based on an analysis of perfusionists’ problem-solving needs. This study shows the efficacy of work domain analysis (WDA) as a framework for analyzing perfusionists’ conceptualization and problem-solving strategies. A WDA model of a CPB circuit was developed. A high-fidelity CPB simulator (Manbit) was used to present routine and oxygenator failure scenarios to six proficient perfusionists. The video-cued recall technique was used to elicit perfusionists’ conceptualization strategies. The resulting recall transcripts were coded using the WDA model and analyzed for associations between task completion times and patterns of conceptualization. The WDA model developed was successful in being able to account for and describe the thought process followed by each participant. It was also shown that, although there was no correlation between experience with CPB and ability to change an oxygenator, there was a link between the between specific thought patterns and the efficiency in undertaking this task. Simulators are widely used in many fields of human endeavor, and in this research, the attempt was made to use WDA to gain insights into the complexities of the human thought process when engaged in the complex task of conducting CPB. The assumption that experience equates with ability is challenged, and rather, it is shown that thought process is a more significant determinant of success when engaged in complex tasks. WDA analysis in combination with a CPB simulator may be used to elucidate successful strategies for completing complex tasks. PMID:17972450

  10. Aeroelastic-Acoustics Simulation of Flight Systems

    NASA Technical Reports Server (NTRS)

    Gupta, kajal K.; Choi, S.; Ibrahim, A.

    2009-01-01

    This paper describes the details of a numerical finite element (FE) based analysis procedure and a resulting code for the simulation of the acoustics phenomenon arising from aeroelastic interactions. Both CFD and structural simulations are based on FE discretization employing unstructured grids. The sound pressure level (SPL) on structural surfaces is calculated from the root mean square (RMS) of the unsteady pressure and the acoustic wave frequencies are computed from a fast Fourier transform (FFT) of the unsteady pressure distribution as a function of time. The resulting tool proves to be unique as it is designed to analyze complex practical problems, involving large scale computations, in a routine fashion.

  11. Analysis and design of a capsule landing system and surface vehicle control system for Mars exporation

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.

    1972-01-01

    The problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars were investigated. Problem areas receiving attention include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis; navigation, terrain modeling and path selection; and chemical analysis of specimens. The following specific tasks were studied: vehicle model design, mathematical modeling of dynamic vehicle, experimental vehicle dynamics, obstacle negotiation, electromechanical controls, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, chromatograph model evaluation and improvement and transport parameter evaluation.

  12. Analysis and design of a capsule landing system and surface vehicle control system for Mars exploration

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.

    1972-01-01

    Investigation of problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars has been undertaken. Problem areas receiving attention include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis; terrain modeling and path selection; and chemical analysis of specimens. The following specific tasks have been under study: vehicle model design, mathematical modeling of a dynamic vehicle, experimental vehicle dynamics, obstacle negotiation, electromechanical controls, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer sybsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, chromatograph model evaluation and improvement.

  13. Numerical analysis of projectile impact in woven texile structures

    NASA Technical Reports Server (NTRS)

    Roylance, D.

    1977-01-01

    Computer codes were developed for simulating the dynamic fracture and viscoelastic constitutive response due to stress wave interaction and reflections caused by ballistic impact on woven textiles. The method, which was developed for use in the design and analysis of protection devices for personnel armor, has potential for use in studies of rotor blade burst containment at high velocity. Alterations in coding required for burst containment problems are discussed.

  14. A Bayesian Approach to a Multiple-Group Latent Class-Profile Analysis: The Timing of Drinking Onset and Subsequent Drinking Behaviors among U.S. Adolescents

    ERIC Educational Resources Information Center

    Chung, Hwan; Anthony, James C.

    2013-01-01

    This article presents a multiple-group latent class-profile analysis (LCPA) by taking a Bayesian approach in which a Markov chain Monte Carlo simulation is employed to achieve more robust estimates for latent growth patterns. This article describes and addresses a label-switching problem that involves the LCPA likelihood function, which has…

  15. Analysis of polarized-light effects in glass-promoting solutions with applications to cryopreservation and organ banking.

    PubMed

    Solanki, Prem K; Rabin, Yoed

    2018-01-01

    This study presents experimental results and an analysis approach for polarized light effects associated with thermomechanical stress during cooling of glass promoting solutions, with applications to cryopreservation and tissue banking in a process known as vitrification. Polarized light means have been previously integrated into the cryomacroscope-a visualization device to detect physical effects associated with cryopreservation success, such as crystallization, fracture formation, and contamination. The experimental study concerns vitrification in a cuvette, which is a rectangular container. Polarized light modeling in the cuvette is based on subdividing the tridimensional (3D) domain into a series of planar (2D) problems, for which a mathematical solution is available in the literature. The current analysis is based on tracking the accumulated changes in light polarization and magnitude, as it passes through the sequence of planar problems. Results of this study show qualitative agreement in light intensity history and distribution between experimental data and simulated results. The simulated results help explaining differences between 2D and 3D effects in photoelasticity, most notably, the counterintuitive observation that high stress areas may correlate with low light intensity regions based on the particular experimental conditions. Finally, it is suggested that polarized-light analysis must always be accompanied by thermomechanical stress modeling in order to explain 3D effects.

  16. A Multifaceted Mathematical Approach for Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, F.; Anitescu, M.; Bell, J.

    2012-03-07

    Applied mathematics has an important role to play in developing the tools needed for the analysis, simulation, and optimization of complex problems. These efforts require the development of the mathematical foundations for scientific discovery, engineering design, and risk analysis based on a sound integrated approach for the understanding of complex systems. However, maximizing the impact of applied mathematics on these challenges requires a novel perspective on approaching the mathematical enterprise. Previous reports that have surveyed the DOE's research needs in applied mathematics have played a key role in defining research directions with the community. Although these reports have had significantmore » impact, accurately assessing current research needs requires an evaluation of today's challenges against the backdrop of recent advances in applied mathematics and computing. To address these needs, the DOE Applied Mathematics Program sponsored a Workshop for Mathematics for the Analysis, Simulation and Optimization of Complex Systems on September 13-14, 2011. The workshop had approximately 50 participants from both the national labs and academia. The goal of the workshop was to identify new research areas in applied mathematics that will complement and enhance the existing DOE ASCR Applied Mathematics Program efforts that are needed to address problems associated with complex systems. This report describes recommendations from the workshop and subsequent analysis of the workshop findings by the organizing committee.« less

  17. High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.

    1997-01-01

    Applications are described of high-performance computing methods to the numerical simulation of complete jet engines. The methodology focuses on the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field elements. New partitioned analysis procedures to treat this coupled three-component problem were developed. These procedures involve delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S and the IBM SP2. The NASA-sponsored ENG10 program was used for the global steady-state analysis of the whole engine. This program uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed, as well as the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames.

  18. Analysis of polarized-light effects in glass-promoting solutions with applications to cryopreservation and organ banking

    PubMed Central

    2018-01-01

    This study presents experimental results and an analysis approach for polarized light effects associated with thermomechanical stress during cooling of glass promoting solutions, with applications to cryopreservation and tissue banking in a process known as vitrification. Polarized light means have been previously integrated into the cryomacroscope—a visualization device to detect physical effects associated with cryopreservation success, such as crystallization, fracture formation, and contamination. The experimental study concerns vitrification in a cuvette, which is a rectangular container. Polarized light modeling in the cuvette is based on subdividing the tridimensional (3D) domain into a series of planar (2D) problems, for which a mathematical solution is available in the literature. The current analysis is based on tracking the accumulated changes in light polarization and magnitude, as it passes through the sequence of planar problems. Results of this study show qualitative agreement in light intensity history and distribution between experimental data and simulated results. The simulated results help explaining differences between 2D and 3D effects in photoelasticity, most notably, the counterintuitive observation that high stress areas may correlate with low light intensity regions based on the particular experimental conditions. Finally, it is suggested that polarized-light analysis must always be accompanied by thermomechanical stress modeling in order to explain 3D effects. PMID:29912973

  19. Dynamic and fluid–structure interaction simulations of bioprosthetic heart valves using parametric design with T-splines and Fung-type material models

    PubMed Central

    Kamensky, David; Xu, Fei; Kiendl, Josef; Wang, Chenglong; Wu, Michael C. H.; Mineroff, Joshua; Reali, Alessandro; Bazilevs, Yuri; Sacks, Michael S.

    2015-01-01

    This paper builds on a recently developed immersogeometric fluid–structure interaction (FSI) methodology for bioprosthetic heart valve (BHV) modeling and simulation. It enhances the proposed framework in the areas of geometry design and constitutive modeling. With these enhancements, BHV FSI simulations may be performed with greater levels of automation, robustness and physical realism. In addition, the paper presents a comparison between FSI analysis and standalone structural dynamics simulation driven by prescribed transvalvular pressure, the latter being a more common modeling choice for this class of problems. The FSI computation achieved better physiological realism in predicting the valve leaflet deformation than its standalone structural dynamics counterpart. PMID:26392645

  20. Seafloor identification in sonar imagery via simulations of Helmholtz equations and discrete optimization

    NASA Astrophysics Data System (ADS)

    Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin

    2017-06-01

    We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.

  1. Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations

    NASA Astrophysics Data System (ADS)

    Wyszkowska, Patrycja

    2017-12-01

    Determining the accuracy of functions of measured or adjusted values can be a problem in geodetic computations. The general law of covariance propagation, or, in the case of uncorrelated observations, the propagation of variance (the Gaussian formula), is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of that analysis is a comparison of the results obtained by the law of propagation of variance and by the probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances computed from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even if the functions are non-linear, provided that the accuracy of the observations is not too low. With present geodetic instruments, this is generally not a problem.
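
    A minimal sketch of the comparison (hypothetical coordinates and standard deviations, not the paper's data) for one nonlinear function, the planar distance between two measured points:

        import numpy as np

        rng = np.random.default_rng(0)

        def distance(x1, y1, x2, y2):
            # Nonlinear function of the measurements: planar distance.
            return np.hypot(x2 - x1, y2 - y1)

        # Hypothetical measured coordinates [m] and their standard deviations.
        mean = np.array([100.0, 200.0, 340.0, 520.0])
        sigma = np.array([0.01, 0.01, 0.01, 0.01])   # 10 mm per coordinate

        # First-order (Gaussian) propagation: sigma_d^2 = J diag(sigma^2) J^T.
        x1, y1, x2, y2 = mean
        d = distance(*mean)
        J = np.array([(x1 - x2) / d, (y1 - y2) / d,
                      (x2 - x1) / d, (y2 - y1) / d])
        sigma_taylor = np.sqrt(np.sum((J * sigma) ** 2))

        # Monte Carlo propagation: sample, evaluate, take the empirical spread.
        samples = rng.normal(mean, sigma, size=(100_000, 4))
        d_mc = distance(samples[:, 0], samples[:, 1],
                        samples[:, 2], samples[:, 3])
        print(f"first-order: {sigma_taylor:.6f} m, Monte Carlo: {d_mc.std():.6f} m")

    With observation noise this small relative to the curvature of the function, the two estimates agree closely, which is the paper's conclusion in miniature.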

  2. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Peter E; Wang, Weile; Law, Beverly E.

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrating/analyzing Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
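
    The spin-up-versus-equilibrium idea can be illustrated with a toy one-pool carbon balance (a stand-in, not the Biome-BGC equations): setting the mass-balance equation dC/dt = u - kC to zero yields the steady state directly, where spin-up would integrate for many model years:

        # Toy one-pool carbon balance: dC/dt = u - k*C.
        u, k = 50.0, 0.02   # hypothetical input flux [gC/m2/yr] and turnover [1/yr]

        # Spin-up: integrate year by year until the pool stops changing.
        C, dt = 0.0, 1.0
        for year in range(100_000):
            dC = (u - k * C) * dt
            C += dC
            if abs(dC) < 1e-9:
                break

        # Equilibrium analysis: solve u - k*C = 0 directly.
        C_eq = u / k
        print(f"spin-up: {C:.3f} gC/m2 after {year + 1} years; "
              f"equilibrium: {C_eq:.3f} gC/m2")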

  3. Development of an object-oriented finite element program: application to metal-forming and impact simulations

    NASA Astrophysics Data System (ADS)

    Pantale, O.; Caperaa, S.; Rakotomalala, R.

    2004-07-01

    During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. At the same time, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made it possible to solve more physical and complex problems such as coupled problems, nonlinearities, and high-strain and high-strain-rate problems. In this field, an accurate analysis of the large-deformation inelastic problems occurring in metal-forming or impact simulations is extremely important as a consequence of the large amount of plastic flow. This paper presents the object-oriented implementation, in C++, of an explicit finite element code called DynELA. Object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates their development, maintainability and expandability. The most significant advantage of OOP is in the modeling of complex physical systems such as deformation processing, where the overall complex problem is partitioned into individual sub-problems based on physical, mathematical or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, operator overloading and the use of template classes, are detailed. We then present the approach used for the development of our finite element code through the presentation of the kinematics, conservation and constitutive laws and their respective implementations in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests relative to metal forming and impact simulations.
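
    The partitioning the abstract describes can be sketched in a few lines (hypothetical class names, and in Python rather than the paper's C++): inheritance lets the element loop treat every constitutive law through a single interface:

        from abc import ABC, abstractmethod

        class ConstitutiveLaw(ABC):
            # Interface each material model must implement (hypothetical names).
            @abstractmethod
            def stress(self, strain: float) -> float: ...

        class LinearElastic(ConstitutiveLaw):
            def __init__(self, young_modulus: float):
                self.E = young_modulus
            def stress(self, strain: float) -> float:
                return self.E * strain

        class PerfectPlastic(ConstitutiveLaw):
            def __init__(self, young_modulus: float, yield_stress: float):
                self.E, self.sy = young_modulus, yield_stress
            def stress(self, strain: float) -> float:
                return min(self.E * strain, self.sy)   # capped at yield

        def internal_force(law: ConstitutiveLaw, strains):
            # The element loop only sees the base-class interface.
            return [law.stress(e) for e in strains]

        print(internal_force(PerfectPlastic(210e9, 250e6), [1e-4, 1e-2]))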

  4. Simulation and Analyses of Stage Separation Two-Stage Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, Kelly J.; Covell, Peter F.

    2005-01-01

    NASA has initiated the development of methodologies, techniques and tools needed for the analysis and simulation of stage separation of next-generation reusable launch vehicles. As a part of this activity, the ConSep simulation tool is being developed, a MATLAB-based front and back end to the commercially available ADAMS (registered trademark) solver, an industry-standard package for solving multi-body dynamics problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.

  5. Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, K. J.; Covell, Peter F.

    2007-01-01

    NASA has initiated the development of methodologies, techniques and tools needed for the analysis and simulation of stage separation of next-generation reusable launch vehicles. As a part of this activity, the ConSep simulation tool is being developed, a MATLAB-based front and back end to the commercially available ADAMS (registered trademark) solver, an industry-standard package for solving multi-body dynamics problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.

  6. Enhancements to the TOUGH2 Simulator as Implemented in iTOUGH2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan

    iTOUGH2 is a program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis. It is based on the TOUGH2 simulator for non-isothermal multiphase, multicomponent flow and transport in fractured and porous media [Pruess, 1987, 1991, 2005, 2011; Falta et al., 1995; Pruess et al., 1999, 2002, 2012; Doughty, 2013]. The core of iTOUGH2 contains slightly modified versions of TOUGH2 modules. Most code modifications are editorial and do not affect the simulation results. As a result, standard TOUGH2 input files can be used in iTOUGH2, and identical results are obtained if iTOUGH2 is run in forward mode. However, a number of modifications have been made as described in this report. They enhance the functionality, flexibility, and ease-of-use of the forward simulator. This report complements the reports iTOUGH2 User's Guide and iTOUGH2 Command Reference, and the collection of tutorial examples in iTOUGH2 Sample Problems.

  7. Main directions in the simulation of physical characteristics of the World Ocean and seas

    NASA Astrophysics Data System (ADS)

    Sarkisyan, A. S.

    2016-07-01

    A brief analysis of the oceanographic papers printed in this issue is presented. For the convenience of the reader, the paper by K. Bryan, a prominent scientist and expert in modeling the physical characteristics of the ocean, is discussed in detail. The remaining studies are described briefly in several sections: direct prognostic modeling, diagnosis-adaptation, four-dimensional analysis, and operational oceanography. At the end of the study, we separately discuss the problem of reproducing the coastal intensification of temperature, salinity, density, and currents. We believe that the quality of simulation results can best be assessed in terms of the intensity of coastal currents. In conclusion, this opinion is justified in detail.

  8. Colossal Tooling Design: 3D Simulation for Ergonomic Analysis

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Thomas, Robert E.; Babai, Majid

    2003-01-01

    High-level 3D simulation software was applied to the design phase of colossal mandrel tooling for composite aerospace fuel tanks to discover and resolve safety and human engineering problems. The analyses were conducted to determine safety, ergonomic and human engineering aspects of the disassembly process of the fuel tank composite shell mandrel. High-level three-dimensional graphics software, incorporating various ergonomic analysis algorithms, was utilized to determine whether the process was within safety and health boundaries for the workers carrying out these tasks. In addition, the graphical software was extremely helpful in the identification of material handling equipment and devices for the mandrel tooling assembly/disassembly process.

  9. Memory Circuit Fault Simulator

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas J.; McClure, Tucker

    2013-01-01

    Spacecraft are known to experience significant memory part-related failures and problems, both pre- and post-launch. These memory parts include both static and dynamic memories (SRAM and DRAM). These failures manifest themselves in a variety of ways, such as pattern-sensitive failures, timing-sensitive failures, etc. Because of the mission-critical role memory devices play in spacecraft architecture and operation, understanding their failure modes is vital to successful mission operation. To support this need, a generic simulation tool that can model different data patterns in conjunction with variable write and read conditions was developed. This tool is a mathematical and graphical way to embed pattern, electrical, and physical information to perform what-if analysis as part of a root cause failure analysis effort.

  10. Analysis of estimation algorithms for CDTI and CAS applications

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1985-01-01

    Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y) position, range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
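
    As a hedged illustration of the kind of track smoothing such analyses quantify (a generic alpha-beta filter with made-up gains and noise levels, not the Enhanced TCAS II algorithms), with Monte Carlo noise standing in for raw measurement error:

        import numpy as np

        rng = np.random.default_rng(1)
        dt, alpha, beta = 1.0, 0.5, 0.1          # generic alpha-beta gains

        truth = 5000.0 - 30.0 * np.arange(60)    # closing range [m], hypothetical
        meas = truth + rng.normal(0.0, 50.0, truth.size)  # raw sensor noise

        r_est, rdot_est, smoothed = meas[0], 0.0, []
        for z in meas:
            r_pred = r_est + rdot_est * dt       # predict
            resid = z - r_pred                   # innovation
            r_est = r_pred + alpha * resid       # correct position
            rdot_est += beta * resid / dt        # correct rate
            smoothed.append(r_est)

        rms = lambda e: float(np.sqrt(np.mean(np.square(e))))
        print(f"raw RMS error:      {rms(meas - truth):.1f} m")
        print(f"smoothed RMS error: {rms(np.array(smoothed) - truth):.1f} m")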

  11. Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Toniolo, Matthew D.; Tartabini, Paul V.; Pamadi, Bandu N.; Hotchko, Nathaniel

    2008-01-01

    This paper discusses a generalized approach to multi-body separation problems in a launch vehicle staging environment, based on constraint force methodology and its implementation in the Program to Optimize Simulated Trajectories II (POST2), a widely used trajectory design and optimization tool. This development facilitates the inclusion of stage separation analysis in POST2 for seamless end-to-end simulations of launch vehicle trajectories, thus simplifying the overall implementation and providing a range of modeling and optimization capabilities that are standard features in POST2. Analysis and results are presented for two test cases that validate the constraint force equation methodology in a stand-alone mode and its implementation in POST2.
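
    The core of a constraint force computation can be sketched generically (a textbook Lagrange-multiplier calculation, not the POST2 implementation): for bodies flying as one stack under a constraint with Jacobian J, the multiplier lambda obtained from (J M^-1 J^T) lambda = -J M^-1 f is the interface force, and separation events can be keyed to its sign:

        import numpy as np

        # Two bodies as one stack: q = [x1, x2], constraint x1 - x2 = const.
        m1, m2 = 8000.0, 2000.0               # hypothetical stage masses [kg]
        M = np.diag([m1, m2])
        J = np.array([[1.0, -1.0]])           # Jacobian of the constraint

        def constraint_force(f_ext):
            # Solve (J M^-1 J^T) lam = -J M^-1 f for the multiplier lam.
            Minv_f = np.linalg.solve(M, f_ext)
            A = J @ np.linalg.solve(M, J.T)
            return np.linalg.solve(A, -J @ Minv_f)[0]

        # Thrust on body 1 only; the constraint transmits force to body 2.
        lam = constraint_force(np.array([120e3, 0.0]))
        print(f"force carried by the interface: {-lam:.0f} N")  # m2*a = 24 kN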

  12. Self-consistent Simulations and Analysis of the Coupled-Bunch Instability for Arbitrary Multi-Bunch Configurations

    DOE PAGES

    Bassi, Gabriele; Blednykh, Alexei; Smalyuk, Victor

    2016-02-24

    A novel algorithm for self-consistent simulations of long-range wakefield effects has been developed and applied to the study of both longitudinal and transverse coupled-bunch instabilities at NSLS-II. The algorithm is implemented in the new parallel tracking code space (self-consistent parallel algorithm for collective effects) discussed in the paper. The code is applicable for accurate beam dynamics simulations in cases where both bunch-to-bunch and intrabunch motions need to be taken into account, such as chromatic head-tail effects on the coupled-bunch instability of a beam with a nonuniform filling pattern, or multibunch and single-bunch effects of a passive higher-harmonic cavity. The numerical simulations have been compared with analytical studies. For a beam with an arbitrary filling pattern, intensity-dependent complex frequency shifts have been derived starting from a system of coupled Vlasov equations. The analytical formulas and numerical simulations confirm that the analysis reduces to the formulation of an eigenvalue problem based on the known formulas of the complex frequency shifts for the uniform filling pattern case.

  13. WESTPA: An interoperable, highly scalable software package for weighted ensemble simulation and analysis

    PubMed Central

    Zwier, Matthew C.; Adelman, Joshua L.; Kaus, Joseph W.; Pratt, Adam J.; Wong, Kim F.; Rego, Nicholas B.; Suárez, Ernesto; Lettieri, Steven; Wang, David W.; Grabe, Michael; Zuckerman, Daniel M.; Chong, Lillian T.

    2015-01-01

    The weighted ensemble (WE) path sampling approach orchestrates an ensemble of parallel calculations with intermittent communication to enhance the sampling of rare events, such as molecular associations or conformational changes in proteins or peptides. Trajectories are replicated and pruned in a way that focuses computational effort on under-explored regions of configuration space while maintaining rigorous kinetics. To enable the simulation of rare events at any scale (e.g. atomistic, cellular), we have developed an open-source, interoperable, and highly scalable software package for the execution and analysis of WE simulations: WESTPA (The Weighted Ensemble Simulation Toolkit with Parallelization and Analysis). WESTPA scales to thousands of CPU cores and includes a suite of analysis tools that have been implemented in a massively parallel fashion. The software has been designed to interface conveniently with any dynamics engine and has already been used with a variety of molecular dynamics (e.g. GROMACS, NAMD, OpenMM, AMBER) and cell-modeling packages (e.g. BioNetGen, MCell). WESTPA has been in production use for over a year, and its utility has been demonstrated for a broad set of problems, ranging from atomically detailed host-guest associations to non-spatial chemical kinetics of cellular signaling networks. The following describes the design and features of WESTPA, including the facilities it provides for running WE simulations and for storing and analyzing WE simulation data, as well as examples of input and output. PMID:26392815
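
    The split/merge bookkeeping at the heart of WE fits in a few lines (a simplified per-bin resampler, not WESTPA's actual API): walkers in an under-populated bin are replicated with their weight halved, and overpopulated bins are thinned by weighted merging, so the total weight is preserved:

        import random
        random.seed(0)

        TARGET = 4   # desired walkers per bin

        def resample_bin(walkers):
            # walkers: list of (weight, state); returns TARGET walkers
            # with the total weight preserved (simplified WE rule).
            walkers = list(walkers)
            while len(walkers) < TARGET:             # split: replicate heaviest
                w, s = max(walkers, key=lambda ws: ws[0])
                walkers.remove((w, s))
                walkers += [(w / 2, s), (w / 2, s)]
            while len(walkers) > TARGET:             # merge: combine two lightest
                walkers.sort(key=lambda ws: ws[0])
                (w1, s1), (w2, s2) = walkers[0], walkers[1]
                keep = s1 if random.random() < w1 / (w1 + w2) else s2
                walkers = walkers[2:] + [(w1 + w2, keep)]
            return walkers

        out = resample_bin([(0.5, "A"), (0.25, "B"), (0.125, "C"),
                            (0.0625, "D"), (0.0625, "E")])
        print(out, "total weight:", sum(w for w, _ in out))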

  14. Development and validation of the Simulation Learning Effectiveness Inventory.

    PubMed

    Chen, Shiah-Lian; Huang, Tsai-Wei; Liao, I-Chen; Liu, Chienchi

    2015-10-01

    To develop and psychometrically test the Simulation Learning Effectiveness Inventory. High-fidelity simulation helps students develop clinical skills and competencies, yet reliable instruments measuring learning outcomes are scant. A descriptive cross-sectional survey was used to validate the psychometric properties of the instrument, which measures students' perception of simulation learning effectiveness. A purposive sample of 505 nursing students who had taken simulation courses was recruited from the department of nursing of a university in central Taiwan from January 2010-June 2010. The study was conducted in two phases. In Phase I, question items were developed based on the literature review and the preliminary psychometric properties of the inventory were evaluated using exploratory factor analysis. Phase II was conducted to evaluate the reliability and validity of the finalized inventory using confirmatory factor analysis. The results of exploratory and confirmatory factor analyses revealed that the instrument was composed of seven factors, named course arrangement, equipment resource, debriefing, clinical ability, problem-solving, confidence and collaboration. A further second-order analysis showed comparable fits between a three second-order factor model (preparation, process and outcome) and the seven first-order factor model. Internal consistency was supported by adequate Cronbach's alphas and composite reliability. Convergent and discriminant validity were also supported by confirmatory factor analysis. The study provides evidence that the Simulation Learning Effectiveness Inventory is reliable and valid for measuring student perception of learning effectiveness. The instrument is helpful in building evidence-based knowledge of the effect of simulation teaching on students' learning outcomes. © 2015 John Wiley & Sons Ltd.

  15. Simulation of tunneling construction methods of the Cisumdawu toll road

    NASA Astrophysics Data System (ADS)

    Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.

    2017-11-01

    Simulation can be used as a tool for the planning and analysis of a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation, as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation shows the duration of the project from the duration model of each work task, which was based on literature review, machine productivity, and several assumptions. The results of the simulation also show the total cost of the project, which was modeled based on construction and building unit-cost journals and the websites of local and international suppliers. The analysis of the advantages and disadvantages of the method was conducted based on its productivity, waste, and cost. The simulation concluded that the total cost of this operation is about Rp. 900,437,004,599 and the total duration of the tunneling operation is 653 days. The results of the simulation will be used as a recommendation to the contractor before the implementation of the already selected tunneling operation.

  16. High-Fidelity Dynamic Modeling of Spacecraft in the Continuum--Rarefied Transition Regime

    NASA Astrophysics Data System (ADS)

    Turansky, Craig P.

    The state of the art of spacecraft rarefied aerodynamics seldom accounts for detailed rigid-body dynamics. In part because of computational constraints, simpler models based upon the ballistic and drag coefficients are employed. Of particular interest is the continuum-rarefied transition regime of Earth's thermosphere, where gas dynamic simulation is difficult yet wherein many spacecraft operate. The feasibility of increasing the fidelity of modeling spacecraft dynamics is explored by coupling rarefied aerodynamics with rigid-body dynamics modeling similar to that traditionally used for aircraft in atmospheric flight. Presented is a framework of analysis and guiding principles that capitalize on the increasing availability of computational methods and resources. Aerodynamic force inputs for modeling spacecraft in two dimensions in a rarefied flow are provided by analytical equations in the free-molecular regime, and by the direct simulation Monte Carlo method in the transition regime. The application of the direct simulation Monte Carlo method to this class of problems is examined in detail with a new code specifically designed for engineering-level rarefied aerodynamic analysis. Time-accurate simulations of two distinct geometries in low thermospheric flight and atmospheric entry are performed, demonstrating non-linear dynamics that cannot be predicted using simpler approaches. The results of this straightforward approach to the coupled aero-orbital problem highlight the possibilities for future improvements in drag prediction, control system design, and atmospheric science. Furthermore, a number of challenges for future work are identified in the hope of stimulating the development of a new subfield of spacecraft dynamics.

  17. [Review on HSPF model for simulation of hydrology and water quality processes].

    PubMed

    Li, Zhao-fu; Liu, Hong-Yu; Li, Yan

    2012-07-01

    Hydrological Simulation Program-FORTRAN (HSPF), written in FORTRAN, is one of the best semi-distributed hydrology and water quality models; it was first developed based on the Stanford Watershed Model. Many studies of HSPF model applications have been conducted. The model can represent the contributions of sediment, nutrients, pesticides, conservative constituents and fecal coliforms from agricultural areas, continuously simulate water quantity and quality processes, and capture the effects of climate change and land use change on water quantity and quality. HSPF consists of three basic application components: PERLND (Pervious Land Segment), IMPLND (Impervious Land Segment), and RCHRES (free-flowing reach or mixed reservoirs). In general, HSPF has been applied extensively to the modeling of hydrology and water quality processes and the analysis of climate change and land use change, but it has seen limited use in China. The main problems with HSPF include: (1) some algorithms and procedures still need revision; (2) because of the high standard for input data, the accuracy of the model is limited by spatial and attribute data; (3) the model is only applicable to the simulation of well-mixed rivers, reservoirs and one-dimensional water bodies, and it must be integrated with other models to solve more complex problems. At present, studies on HSPF model development are still ongoing, addressing revision of the model platform, extension of model functions, method development for model calibration, and analysis of parameter sensitivity. With the accumulation of basic data and improvement of data sharing, the HSPF model will be applied more extensively in China.

  18. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H.; Gilinsky, Mikhail M.

    2001-01-01

    Three connected sub-projects were conducted under the reported project. In part, these sub-projects address problems pursued by the HU/FM&AL under two other NASA grants. The fundamental idea uniting these projects is to use untraditional 3D corrugated nozzle designs and additional methods for exhaust jet noise reduction without essential thrust loss, and even with thrust augmentation. The additional approaches are: (1) to add solid, fluid, or gas mass at discrete locations to the main supersonic gas stream to minimize the negative influence of the strong shock waves forming in propulsion systems; this mass addition may be accompanied by heat addition to the main stream as a result of fuel combustion, or by cooling of this stream as a result of liquid mass evaporation and boiling; (2) to use porous or permeable nozzles and additional shells at the nozzle exit for preliminary cooling of the hot exhaust jet and pressure compensation at off-design conditions (a so-called continuous ejector with small mass flow rate); and (3) to propose and analyze new, effective methods of fuel injection into the flow stream in air-breathing engines. Note that all these problems were formulated based on detailed descriptions of the main experimental facts observed at NASA Glenn Research Center. The HU/FM&AL Team has been involved in joint research aimed at finding theoretical explanations for experimental facts and creating accurate numerical simulation techniques and prediction theory for current problems in propulsion systems addressed by NASA and Navy agencies. The research is focused on a wide range of problems in the propulsion field as well as on experimental testing and theoretical and numerical simulation analysis for advanced aircraft and rocket engines. The FM&AL Team uses analytical methods, numerical simulations, and possible experimental tests at the Hampton University campus. We present some management activity and theoretical numerical simulation results obtained by the FM&AL Team in the reporting period in accordance with the schedule of the work.

  19. Integrating GIS and ABM to Explore Spatiotemporal Dynamics

    NASA Astrophysics Data System (ADS)

    Sun, M.; Jiang, Y.; Yang, C.

    2013-12-01

    Agent-based modeling (ABM), as a methodology for bottom-up exploration that accounts for the adaptive behavior and heterogeneity of system components, can help discover the development and patterns of complex social and environmental systems. However, ABM is a computationally intensive process, especially when the number of system components becomes large and the agent-agent and agent-environment interactions are modeled in detail. Most traditional CPU-based ABM frameworks do not have a satisfactory computing capacity. To address this problem, and given the emergence of advanced techniques, GPU computing with CUDA can provide a powerful parallel structure to enable complex simulations of spatiotemporal dynamics. In this study, we first develop a GPU-based ABM system. Secondly, in order to visualize the dynamics generated by the movement of agents and the change of agent/environmental attributes during the simulation, we integrate GIS into the ABM system. Advanced geovisualization technologies can be utilized for representing spatiotemporal change events, such as 2D/3D maps with state-of-the-art symbols, space-time cubes, and multiple layers, each of which presents the pattern at one time stamp. Thirdly, visual analytics, including interactive tools (e.g. grouping, filtering, linking), is included in our ABM-GIS system to help users conduct real-time data exploration during the progress of the simulation. Analyses such as flow analysis and spatial cluster analysis can be integrated according to the geographical problem we want to explore.

  20. [Investigation and analysis of status in simulation education of anesthesiology of China].

    PubMed

    Wang, Tian-long; Xue, Ji-xiu; Xiao, Wei; Wu, Xin-min

    2010-03-09

    To investigate the status of simulation education in anesthesiology in China, five hundred questionnaires were mailed to the chairmen of departments of anesthesiology in teaching hospitals in 29 provinces and autonomous regions in China. The returned questionnaires and data were processed and analyzed statistically. Sixty-one questionnaires were returned, a response rate of 12.2%. The results indicated that the theory and knowledge of anesthesiology alone was adopted for the training of medical students and residents in 2% of teaching hospitals; theory and knowledge of anesthesiology combined with problem-based learning discussion in 52% of teaching hospitals; and theory and knowledge of anesthesiology combined with problem-based learning discussion and simulation training in 46% of teaching hospitals. The simulation devices possessed were, in order: Basic Life Support (BLS) (79.6%), training models for clinical anesthesia techniques (53.1%) and Advanced Life Support (ALS) (51.0%). Only six teaching hospitals utilized a Human Patient Simulator for anesthesia training. The evaluation of simulation education showed that 91.2% of anesthesiologists recognized it as applicable, 90.1% recognized it as a medical-ethics requirement and 86.0% recognized it as partly close to the clinical situation. The degree of recognition by anesthesiologists of the benefits of simulation education was, in order: manipulation-correcting ability (92.6%), procedure controllability (87.0%), training adjustability (76.0%) and patient safety (68.5%). Simulation education in anesthesiology in China is still in a preliminary period. The executive departments of education should enhance support for simulation education in both hardware and software.

  1. MEG-SIM: a web portal for testing MEG analysis methods using realistic simulated and empirical data.

    PubMed

    Aine, C J; Sanfratello, L; Ranken, D; Best, E; MacArthur, J A; Wallace, T; Gilliam, K; Donahue, C H; Montaño, R; Bryant, J E; Scott, A; Stephen, J M

    2012-04-01

    MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have its own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes ( http://cobre.mrn.org/megsim/ ). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis.

  2. MEG-SIM: A Web Portal for Testing MEG Analysis Methods using Realistic Simulated and Empirical Data

    PubMed Central

    Aine, C. J.; Sanfratello, L.; Ranken, D.; Best, E.; MacArthur, J. A.; Wallace, T.; Gilliam, K.; Donahue, C. H.; Montaño, R.; Bryant, J. E.; Scott, A.; Stephen, J. M.

    2012-01-01

    MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have its own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes (http://cobre.mrn.org/megsim/). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis. PMID:22068921

  3. Analysis and design of a capsule landing system and surface vehicle control system for Mars exploration

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.

    1971-01-01

    Investigation of problems related to control of a mobile planetary vehicle according to a systematic plan for the exploration of Mars has been undertaken. Problem areas receiving attention include: (1) overall systems analysis; (2) vehicle configuration and dynamics; (3) toroidal wheel design and evaluation; (4) on-board navigation systems; (5) satellite-vehicle navigation systems; (6) obstacle detection systems; (7) terrain sensing, interpretation and modeling; (8) computer simulation of terrain sensor-path selection systems; and (9) chromatographic systems design concept studies. The specific tasks which have been undertaken are defined and the progress which has been achieved during the period July 1, 1971 to December 31, 1971 is summarized.

  4. Automated Loads Analysis System (ATLAS)

    NASA Technical Reports Server (NTRS)

    Gardner, Stephen; Frere, Scot; O’Reilly, Patrick

    2013-01-01

    ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Shuttle Transport System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling of payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.

  5. Practical solutions for reducing container ships' waiting times at ports using simulation model

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, Abdorreza; Ilati, Gholamreza; Yeganeh, Yones Eftekhari

    2013-12-01

    The main challenge for container ports is the planning required for berthing container ships while docked in port. The growth of containerization is creating problems for ports and container terminals as they reach the capacity limits of various resources, which increasingly leads to traffic and port congestion. Good planning and management of container terminal operations reduces waiting time for liner ships; reducing the waiting time improves the terminal's productivity and decreases the port's difficulties. Two important keys to reducing waiting time with berth allocation are determining suitable access channel depths and increasing the number of berths, which are studied and analyzed in this paper as practical solutions. Simulation-based analysis is the only way to understand how various resources interact with each other and how they affect the berthing time of ships. We used the Enterprise Dynamics software to produce simulation models due to the complexity and nature of the problems. We further present a case study of berth allocation simulation for the biggest container terminal in Iran; the optimum access channel depth and number of berths are obtained from the simulation results. The results show a significant reduction in the waiting time for container ships and can be useful for major functions in the operations and development of container ship terminals.
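
    A toy discrete-event sketch of the berth-count question (hypothetical arrival and service rates, not the Enterprise Dynamics model of the case study) estimates the mean waiting time as a function of the number of berths:

        import heapq, random
        random.seed(2)

        def mean_wait(n_berths, n_ships=20_000, mean_gap=6.0, mean_service=20.0):
            # Single queue, multiple berths; all times in hours (hypothetical).
            t, total_wait = 0.0, 0.0
            free_at = [0.0] * n_berths        # earliest time each berth is free
            heapq.heapify(free_at)
            for _ in range(n_ships):
                t += random.expovariate(1.0 / mean_gap)       # next arrival
                start = max(t, heapq.heappop(free_at))        # wait if all busy
                total_wait += start - t
                heapq.heappush(free_at,
                               start + random.expovariate(1.0 / mean_service))
            return total_wait / n_ships

        for n in (4, 5, 6, 7):
            print(f"{n} berths: mean wait {mean_wait(n):5.1f} h")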

  6. Human swallowing simulation based on videofluorography images using Hamiltonian MPS method

    NASA Astrophysics Data System (ADS)

    Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi

    2015-09-01

    In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.

  7. Computational Psychometrics for the Measurement of Collaborative Problem Solving Skills

    PubMed Central

    Polyak, Stephen T.; von Davier, Alina A.; Peterschmidt, Kurt

    2017-01-01

    This paper describes a psychometrically-based approach to the measurement of collaborative problem solving skills, by mining and classifying behavioral data both in real-time and in post-game analyses. The data were collected from a sample of middle school children who interacted with a game-like, online simulation of collaborative problem solving tasks. In this simulation, a user is required to collaborate with a virtual agent to solve a series of tasks within a first-person maze environment. The tasks were developed following the psychometric principles of Evidence Centered Design (ECD) and are aligned with the Holistic Framework developed by ACT. The analyses presented in this paper are an application of an emerging discipline called computational psychometrics which is growing out of traditional psychometrics and incorporates techniques from educational data mining, machine learning and other computer/cognitive science fields. In the real-time analysis, our aim was to start with limited knowledge of skill mastery, and then demonstrate a form of continuous Bayesian evidence tracing that updates sub-skill level probabilities as new conversation flow event evidence is presented. This is performed using Bayes' rule and conversation item conditional probability tables. The items are polytomous and each response option has been tagged with a skill at a performance level. In our post-game analysis, our goal was to discover unique gameplay profiles by performing a cluster analysis of users' sub-skill performance scores based on their patterns of selected dialog responses. PMID:29238314
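
    A compact sketch of this kind of Bayesian evidence tracing (made-up conditional probabilities, not ACT's calibrated tables): the mastery probability for one sub-skill is revised with Bayes' rule after each tagged dialog response:

        # P(option | mastered) and P(option | not mastered) per response option;
        # the numbers are illustrative, not calibrated values.
        CPT = {
            "strong":  (0.60, 0.15),
            "partial": (0.30, 0.35),
            "weak":    (0.10, 0.50),
        }

        def update(p_mastery, option):
            # One Bayes-rule step using the conditional probability table.
            p_obs_m, p_obs_not = CPT[option]
            num = p_obs_m * p_mastery
            return num / (num + p_obs_not * (1.0 - p_mastery))

        p = 0.5   # start with limited knowledge of skill mastery
        for response in ["strong", "partial", "strong", "weak"]:
            p = update(p, response)
            print(f"after '{response}': P(mastered) = {p:.3f}")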

  8. Computational Psychometrics for the Measurement of Collaborative Problem Solving Skills.

    PubMed

    Polyak, Stephen T; von Davier, Alina A; Peterschmidt, Kurt

    2017-01-01

    This paper describes a psychometrically-based approach to the measurement of collaborative problem solving skills, by mining and classifying behavioral data both in real-time and in post-game analyses. The data were collected from a sample of middle school children who interacted with a game-like, online simulation of collaborative problem solving tasks. In this simulation, a user is required to collaborate with a virtual agent to solve a series of tasks within a first-person maze environment. The tasks were developed following the psychometric principles of Evidence Centered Design (ECD) and are aligned with the Holistic Framework developed by ACT. The analyses presented in this paper are an application of an emerging discipline called computational psychometrics which is growing out of traditional psychometrics and incorporates techniques from educational data mining, machine learning and other computer/cognitive science fields. In the real-time analysis, our aim was to start with limited knowledge of skill mastery, and then demonstrate a form of continuous Bayesian evidence tracing that updates sub-skill level probabilities as new conversation flow event evidence is presented. This is performed using Bayes' rule and conversation item conditional probability tables. The items are polytomous and each response option has been tagged with a skill at a performance level. In our post-game analysis, our goal was to discover unique gameplay profiles by performing a cluster analysis of users' sub-skill performance scores based on their patterns of selected dialog responses.

  9. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
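
    A minimal genetic-algorithm loop of the kind described (a generic sketch, not the GeantV tuning code) treats simulation throughput as a black-box function of a parameter vector:

        import random
        random.seed(3)

        def throughput(params):
            # Stand-in black box with a peak at (0.3, 0.7, 0.5); in practice
            # this would launch a simulation and time it.
            target = (0.3, 0.7, 0.5)
            return -sum((p - t) ** 2 for p, t in zip(params, target))

        POP, GENS, DIM, SIGMA = 20, 40, 3, 0.1
        pop = [[random.random() for _ in range(DIM)] for _ in range(POP)]

        for _ in range(GENS):
            pop.sort(key=throughput, reverse=True)
            parents = pop[: POP // 4]                   # truncation selection
            children = []
            while len(children) < POP - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, DIM)          # one-point crossover
                child = [min(1.0, max(0.0, g + random.gauss(0.0, SIGMA)))
                         for g in a[:cut] + b[cut:]]    # Gaussian mutation
                children.append(child)
            pop = parents + children

        best = max(pop, key=throughput)
        print("best parameters:", [round(g, 3) for g in best])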

  10. Switching performance of OBS network model under prefetched real traffic

    NASA Astrophysics Data System (ADS)

    Huang, Zhenhua; Xu, Du; Lei, Wen

    2005-11-01

    Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next-generation optical Internet, so it is very important to precisely evaluate the performance of the OBS network model. The performance of the OBS network model varies under different conditions, but the most important question is how it performs under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, and through this the performance of the OBS network is evaluated. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are questionable. To deal with this problem, we present a new simulation model for the analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model has the same clock rate as a real OBS system, so this model is closer to the real OBS system than the traditional ones. The simulation results also indicate that this model is more accurate in evaluating the performance of the OBS network system and that its results are closer to the actual situation.

  11. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  12. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  13. Acoustic Analysis of a Sandwich Non Metallic Panel for Roofs by FEM and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Nieto, P. J. García; del Coz Díaz, J. J.; Vilán, J. A. Vilán; Rabanal, F. P. Alvarez

    2007-12-01

    In this paper we have studied the acoustic behavior of a sandwich non-metallic panel for roofs by the finite element method (FEM). This new field of analysis is the fully coupled solution of fluid flows with structural interactions, commonly referred to as fluid-structure interaction (FSI); it is the natural next step to take in the simulation of mechanical systems. The finite element analysis of acoustic-fluid/structure interactions using potential-based or displacement-based Lagrangian formulations is now well established. The non-linearity is due to the fluid-structure interaction that governs the problem. In a considerable range of problems the fluid displacement remains small while the interaction is substantial; our problem falls in this category, in which the structural motion influences and reacts with the generation of pressures in two reverberation rooms. The acoustic insulation characteristic of the panel is calculated based on the pressures at different frequencies and points in the transmission rooms. Finally, the conclusions reached are presented.

  14. A unique problem of muscle adaptation from weightlessness: The deceleration deficiency

    NASA Technical Reports Server (NTRS)

    Stauber, William T.

    1989-01-01

    Decelerator problems of the knee are emphasized since the lower leg musculature is known to atrophy in response to weightlessness. However, other important decelerator functions are served by the shoulder muscles, in particular the rotator cuff muscles. Problems in these muscles often result in tears and dislocations as seen in baseball pitchers. It is noteworthy that at least one device currently exists that can measure concentric and eccentric muscle loading including a submaximal simulated free weight exercise (i.e., force-controlled) and simultaneously record integrated EMG analysis appropriate for assessment of all muscle functional activities. Studies should be undertaken to provide information as to the performance of maximal and submaximal exercise in space travelers to define potential problems and provide rationale for prevention.

  15. Design and Application of Interactive Simulations in Problem-Solving in University-Level Physics Education

    ERIC Educational Resources Information Center

    Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel

    2016-01-01

    In recent years, interactive computer simulations have been progressively integrated in the teaching of the sciences and have contributed significant improvements in the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving…

  16. A sonification algorithm for developing the off-roads models for driving simulators

    NASA Astrophysics Data System (ADS)

    Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia

    2018-01-01

    In this paper, a sonification algorithm for developing the off-road models for driving simulators, is proposed. The aim of this algorithm is to overcome difficulties of heuristics identification which are best suited to a particular off-road profile built by measurements. The sonification algorithm is based on the stochastic polynomial chaos analysis suitable in solving equations with random input data. The fluctuations are generated by incomplete measurements leading to inhomogeneities of the cross-sectional curves of off-roads before and after deformation, the unstable contact between the tire and the road and the unreal distribution of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems and results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual reality environment with increased degrees of realism for driving simulators and higher user flexibility.

  17. Design and simulation analysis of a novel pressure sensor based on graphene film

    NASA Astrophysics Data System (ADS)

    Nie, M.; Xia, Y. H.; Guo, A. Q.

    2018-02-01

    A novel pressure sensor structure based on graphene film as the sensitive membrane was proposed in this paper, which solved the problem to measure low and minor pressure with high sensitivity. Moreover, the fabrication process was designed which can be compatible with CMOS IC fabrication technology. Finite element analysis has been used to simulate the displacement distribution of the thin movable graphene film of the designed pressure sensor under the different pressures with different dimensions. From the simulation results, the optimized structure has been obtained which can be applied in the low measurement range from 10hPa to 60hPa. The length and thickness of the graphene film could be designed as 100μm and 0.2μm, respectively. The maximum mechanical stress on the edge of the sensitive membrane was 1.84kPa, which was far below the breaking strength of the silicon nitride and graphene film.

  18. Analysis of Tire Tractive Performance on Deformable Terrain by Finite Element-Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Nakashima, Hiroshi; Takatsu, Yuzuru

    The goal of this study is to develop a practical and fast simulation tool for soil-tire interaction analysis, where finite element method (FEM) and discrete element method (DEM) are coupled together, and which can be realized on a desktop PC. We have extended our formerly proposed dynamic FE-DE method (FE-DEM) to include practical soil-tire system interaction, where not only the vertical sinkage of a tire, but also the travel of a driven tire was considered. Numerical simulation by FE-DEM is stable, and the relationships between variables, such as load-sinkage and sinkage-travel distance, and the gross tractive effort and running resistance characteristics, are obtained. Moreover, the simulation result is accurate enough to predict the maximum drawbar pull for a given tire, once the appropriate parameter values are provided. Therefore, the developed FE-DEM program can be applied with sufficient accuracy to interaction problems in soil-tire systems.

  19. Order-crossing removal in Gabor order tracking by independent component analysis

    NASA Astrophysics Data System (ADS)

    Guo, Yu; Tan, Kok Kiong

    2009-08-01

    Order-crossing problems in Gabor order tracking (GOT) of rotating machinery often occur when noise due to power-frequency interference, local structure resonance, etc., is prominent in applications. They can render the analysis results and the waveform-reconstruction tasks in GOT inaccurate or even meaningless. An approach is proposed in this paper to address the order-crossing problem by independent component analysis (ICA). With the approach, accurate order analysis results can be obtained and the waveforms of the order components of interest can be reconstructed or extracted from the recorded noisy data series. In addition, the ambiguities (permutation and scaling) of ICA results are also solved with the approach. The approach is amenable to applications in condition monitoring and fault diagnosis of rotating machinery. The evaluation of the approach is presented in detail based on simulations and an experiment on a rotor test rig. The results obtained using the proposed approach are compared with those obtained using the standard GOT. The comparison shows that the presented approach is more effective to solve order-crossing problems in GOT.

  20. Development of U-Mart System with Plural Brands and Plural Markets

    NASA Astrophysics Data System (ADS)

    Akimoto, Yoshihito; Mori, Naoki; Ono, Isao; Nakajima, Yoshihiro; Kita, Hajime; Matsumoto, Keinosuke

    In this paper, we first discuss the notion that artificial market systems should meet the requirements of fidelity, transparency, reproducibility, and traceability. Next, we introduce history of development of the artificial market system named U-Mart system that meet the requirements well, which have been developed by the U-Mart project. We have already developed the U-Mart system called “U-Mart system version 3.0” to solve problems of old U-Mart systems. In version 3.0 system, trading process is modularized and universal market system can be easily introduced.
    However, U-Mart system version 3.0 only simulates the single brand futures market. The simulation of the plural brands and plural markets has been required by lot of users. In this paper, we proposed a novel U-Mart system called “U-Mart system version 4.0” to solve this problem of U-Mart system version 3.0. We improve the server system, machine agents and GUI in order to simulate plural brands and plural markets in U-Mart system version 4.0. The effectiveness of the proposed system is confirmed by statistical analysis of results of spot market simulation with random agents.

  1. Space shuttle main engine controller assembly, phase C-D. [with lagging system design and analysis

    NASA Technical Reports Server (NTRS)

    1973-01-01

    System design and system analysis and simulation are slightly behind schedule, while design verification testing has improved. Input/output circuit design has improved, but digital computer unit (DCU) and mechanical design continue to lag. Part procurement was impacted by delays in printed circuit board, assembly drawing releases. These are the result of problems in generating suitable printed circuit artwork for the very complex and high density multilayer boards.

  2. Comprehensive helicopter analysis: A state of the art review

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1978-01-01

    An assessment of the status of helicopter theory and analysis is presented. The technology level embodied in available design tools (computer programs) is examined, considering the problem areas of performance, loads and vibration, handling qualities and simulation, and aeroelastic stability. The effectiveness of the present analyses is discussed. The characteristics of the technology in the analyses are reviewed, including the aerodynamics technology, induced velocity and wake geometry, dynamics technology, and machine limitations.

  3. A modified Wright-Fisher model that incorporates Ne: A variant of the standard model with increased biological realism and reduced computational complexity.

    PubMed

    Zhao, Lei; Gossmann, Toni I; Waxman, David

    2016-03-21

    The Wright-Fisher model is an important model in evolutionary biology and population genetics. It has been applied in numerous analyses of finite populations with discrete generations. It is recognised that real populations can behave, in some key aspects, as though their size that is not the census size, N, but rather a smaller size, namely the effective population size, Ne. However, in the Wright-Fisher model, there is no distinction between the effective and census population sizes. Equivalently, we can say that in this model, Ne coincides with N. The Wright-Fisher model therefore lacks an important aspect of biological realism. Here, we present a method that allows Ne to be directly incorporated into the Wright-Fisher model. The modified model involves matrices whose size is determined by Ne. Thus apart from increased biological realism, the modified model also has reduced computational complexity, particularly so when Ne⪡N. For complex problems, it may be hard or impossible to numerically analyse the most commonly-used approximation of the Wright-Fisher model that incorporates Ne, namely the diffusion approximation. An alternative approach is simulation. However, the simulations need to be sufficiently detailed that they yield an effective size that is different to the census size. Simulations may also be time consuming and have attendant statistical errors. The method presented in this work may then be the only alternative to simulations, when Ne differs from N. We illustrate the straightforward application of the method to some problems involving allele fixation and the determination of the equilibrium site frequency spectrum. We then apply the method to the problem of fixation when three alleles are segregating in a population. This latter problem is significantly more complex than a two allele problem and since the diffusion equation cannot be numerically solved, the only other way Ne can be incorporated into the analysis is by simulation. We have achieved good accuracy in all cases considered. In summary, the present work extends the realism and tractability of an important model of evolutionary biology and population genetics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Numerical simulation of damage evolution for ductile materials and mechanical properties study

    NASA Astrophysics Data System (ADS)

    El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.

    2015-12-01

    This paper presents results of a numerical modelling of ductile fracture and failure of elements made of 5182H111 aluminium alloys subjected to dynamic traction. The analysis was performed using Johnson-Cook model based on ABAQUS software. The modelling difficulty related to prediction of ductile fracture mainly arises because there is a tremendous span of length scales from the structural problem to the micro-mechanics problem governing the material separation process. This study has been used the experimental results to calibrate a simple crack propagation criteria for shell elements of which one has often been used in practical analyses. The performance of the proposed model is in general good and it is believed that the presented results and experimental-numerical calibration procedure can be of use in practical finite-element simulations.

  5. An Improved Wake Vortex Tracking Algorithm for Multiple Aircraft

    NASA Technical Reports Server (NTRS)

    Switzer, George F.; Proctor, Fred H.; Ahmad, Nashat N.; LimonDuparcmeur, Fanny M.

    2010-01-01

    The accurate tracking of vortex evolution from Large Eddy Simulation (LES) data is a complex and computationally intensive problem. The vortex tracking requires the analysis of very large three-dimensional and time-varying datasets. The complexity of the problem is further compounded by the fact that these vortices are embedded in a background turbulence field, and they may interact with the ground surface. Another level of complication can arise, if vortices from multiple aircrafts are simulated. This paper presents a new technique for post-processing LES data to obtain wake vortex tracks and wake intensities. The new approach isolates vortices by defining "regions of interest" (ROI) around each vortex and has the ability to identify vortex pairs from multiple aircraft. The paper describes the new methodology for tracking wake vortices and presents application of the technique for single and multiple aircraft.

  6. Efficient Analysis of Complex Structures

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.

    2000-01-01

    Last various accomplishments achieved during this project are : (1) A Survey of Neural Network (NN) applications using MATLAB NN Toolbox on structural engineering especially on equivalent continuum models (Appendix A). (2) Application of NN and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs. Calculation of all sorts of test cases and comparison with measurements or FEA results. (Appendix C). (4) Basic work on using second order sensitivities on simulating wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishing a general methodology of simulating the modal responses by direct application of NN and by sensitivity techniques, in a design space composed of a number of design points. Comparison is made through examples using these two methods (Appendix E). (6) Establishing a general methodology of efficient analysis of complex wing structures by indirect application of NN: the NN-aided Equivalent Plate Analysis. Training of the Neural Networks for this purpose in several cases of design spaces, which can be applicable for actual design of complex wings (Appendix F).

  7. Using HPLC-Mass Spectrometry to Teach Proteomics Concepts with Problem-Based Techniques

    ERIC Educational Resources Information Center

    Short, Michael; Short, Anne; Vankempen, Rachel; Seymour, Michael; Burnatowska-Hledin, Maria

    2010-01-01

    Practical instruction of proteomics concepts was provided using high-performance liquid chromatography coupled with a mass selective detection system (HPLC-MS) for the analysis of simulated protein digests. The samples were prepared from selected dipeptides in order to facilitate the mass spectral identification. As part of the prelaboratory…

  8. Secret Snowflake: Analysis of a Holiday Gift Exchange

    ERIC Educational Resources Information Center

    Turton, Roger W.

    2007-01-01

    This article describes several methods from discrete mathematics used to simulate and solve an interesting problem occurring at a holiday gift exchange. What is the probability that two people will select each other's names in a random drawing, and how does this result vary with the total number of participants? (Contains 5 figures.)

  9. Crashdynamics with DYNA3D: Capabilities and research directions

    NASA Technical Reports Server (NTRS)

    Whirley, Robert G.; Engelmann, Bruce E.

    1993-01-01

    The application of the explicit nonlinear finite element analysis code DYNA3D to crashworthiness problems is discussed. Emphasized in the first part of this work are the most important capabilities of an explicit code for crashworthiness analyses. The areas with significant research promise for the computational simulation of crash events are then addressed.

  10. Using Computer-Based "Experiments" in the Analysis of Chemical Reaction Equilibria

    ERIC Educational Resources Information Center

    Li, Zhao; Corti, David S.

    2018-01-01

    The application of the Reaction Monte Carlo (RxMC) algorithm to standard textbook problems in chemical reaction equilibria is discussed. The RxMC method is a molecular simulation algorithm for studying the equilibrium properties of reactive systems, and therefore provides the opportunity to develop computer-based "experiments" for the…

  11. Uncertainty analysis in ecological studies: an overview

    Treesearch

    Harbin Li; Jianguo Wu

    2006-01-01

    Large-scale simulation models are essential tools for scientific research and environmental decision-making because they can be used to synthesize knowledge, predict consequences of potential scenarios, and develop optimal solutions (Clark et al. 2001, Berk et al. 2002, Katz 2002). Modeling is often the only means of addressing complex environmental problems that occur...

  12. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2012-01-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for…

  13. TEMPEST: A three-dimensional time-dependence computer program for hydrothermal analysis: Volume 1, Numerical methods and input instructions: Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trent, D.S.; Eyler, L.L.

    TEMPEST offers simulation capabilities over a wide range of hydrothermal problems that are definable by input instructions. These capabilities are summarized by categories as follows: modeling capabilities; program control; and I/O control. 10 refs., 22 figs., 2 tabs. (LSP)

  14. Asteroseismic Constraints on the Models of Hot B Subdwarfs: Convective Helium-Burning Cores

    NASA Astrophysics Data System (ADS)

    Schindler, Jan-Torge; Green, Elizabeth M.; Arnett, W. David

    2017-10-01

    Asteroseismology of non-radial pulsations in Hot B Subdwarfs (sdB stars) offers a unique view into the interior of core-helium-burning stars. Ground-based and space-borne high precision light curves allow for the analysis of pressure and gravity mode pulsations to probe the structure of sdB stars deep into the convective core. As such asteroseismological analysis provides an excellent opportunity to test our understanding of stellar evolution. In light of the newest constraints from asteroseismology of sdB and red clump stars, standard approaches of convective mixing in 1D stellar evolution models are called into question. The problem lies in the current treatment of overshooting and the entrainment at the convective boundary. Unfortunately no consistent algorithm of convective mixing exists to solve the problem, introducing uncertainties to the estimates of stellar ages. Three dimensional simulations of stellar convection show the natural development of an overshooting region and a boundary layer. In search for a consistent prescription of convection in one dimensional stellar evolution models, guidance from three dimensional simulations and asteroseismological results is indispensable.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D.; McInnes, L. C.; Woodward, C.

    This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not alwaysmore » practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.« less

  16. A suite of benchmark and challenge problems for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark; Fu, Pengcheng; McClure, Mark

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilitiesmore » to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.« less

  17. A Framework for Performing Multiscale Stochastic Progressive Failure Analysis of Composite Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2006-01-01

    A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis - Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.

  18. A Framework for Performing Multiscale Stochastic Progressive Failure Analysis of Composite Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis-Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.

  19. An Analysis of Chronic Personnel Shortages in the B-52 Radar Navigator Career Field

    DTIC Science & Technology

    1987-03-01

    Weapon System Trainer - The new simulators for the B-52 located on some of the B-52 bases. Due to the complexity of the simulators, they have a small ...navigators crosstraining to these are lost to the B-52 career field. 21 ASTRA Every year a small number of radar navigators are chosen to attend one yerc at...this case, though, it turned up a small problem initially. The separation rates were obtained from Headquarters SAC (10), but did not include the number

  20. Non-Newtonian Hele-Shaw Flow and the Saffman-Taylor Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kondic, L.; Shelley, M.J.; Palffy-Muhoray, P.

    We explore the Saffman-Taylor instability of a gas bubble expanding into a shear thinning liquid in a radial Hele-Shaw cell. Using Darcy{close_quote}s law generalized for non-Newtonian fluids, we perform simulations of the full dynamical problem. The simulations show that shear thinning significantly influences the developing interfacial patterns. Shear thinning can suppress tip splitting, and produce fingers which oscillate during growth and shed side branches. Emergent length scales show reasonable agreement with a general linear stability analysis. {copyright} {ital 1998} {ital The American Physical Society}

  1. Kinetics of the electric double layer formation modelled by the finite difference method

    NASA Astrophysics Data System (ADS)

    Valent, Ivan

    2017-11-01

    Dynamics of the elctric double layer formation in 100 mM NaCl solution for sudden potentail steps of 10 and 20 mV was simulated using the Poisson-Nernst-Planck theory and VLUGR2 solver for partial differential equations. The used approach was verified by comparing the obtained steady-state solution with the available exact solution. The simulations allowed for detailed analysis of the relaxation processes of the individual ions and the electric potential. Some computational aspects of the problem were discussed.

  2. Functionality limit of classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2015-09-01

    By analyzing the system dynamics in the landscape paradigm, optimization function of classical simulated annealing is reviewed on the random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement that the algorithm maintains its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to the optimization algorithm research.

  3. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of a hydro-geomechanical coupling in the geologic sub-surface environment from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during wells construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking a flow simulation code TOUGH2 and a geomechanics modeling code PyLith. Numerical formulation of each code is discussed to demonstrate their modeling capabilities. The computational framework involves sequential coupling, and solution of two sub-problems- fluid flow through fractured and porous media and reservoir geomechanics. For each time step of flow calculation, pressure field is passed to the geomechanics code to compute effective stress field and fault slips. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far field tectonic loading and fluid injection by using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and not accounting for pressure on the fault. However, this issue is resolved in the final chapter in simulating a single event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by running a sensitivity analysis that shows an increase in injection well distance results in delayed slip nucleation and rupture propagation on the fault.

  4. Flow-induced vibration and fretting-wear damage in a moisture separator reheater

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pettigrew, M.J.; Taylor, C.E.; Fisher, N.J.

    1996-12-01

    Tube failures due to excessive flow-induced vibration were experienced in the tube bundles of moisture separator reheaters in a BWR nuclear station. This paper presents the results of a root cause analysis and covers recommendations for continued operation and for replacement tube bundles. The following tasks are discussed: tube failure analysis; flow velocity distribution calculations; flow-induced vibration analysis with particular emphasis on finned-tubes; fretting-wear testing of a tube and tube-support material combination under simulated operating conditions; field measurements of flow-induced vibration; and development of vibration specifications for replacement tube bundles. The effect of transient operating conditions and of other operationalmore » changes such as tube fouling were considered in the analysis. This paper outlines a typical field problem and illustrates the application of flow-induced vibration technology for the solution of a practical problem.« less

  5. Consistency and convergence for numerical radiation conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1990-01-01

    The problem of imposing radiation conditions at artificial boundaries for the numerical simulation of wave propagation is considered. Emphasis is on the behavior and analysis of the error which results from the restriction of the domain. The theory of error estimation is briefly outlined for boundary conditions. Use is made of the asymptotic analysis of propagating wave groups to derive and analyze boundary operators. For dissipative problems this leads to local, accurate conditions, but falls short in the hyperbolic case. A numerical experiment on the solution of the wave equation with cylindrical symmetry is described. A unified presentation of a number of conditions which have been proposed in the literature is given and the time dependence of the error which results from their use is displayed. The results are in qualitative agreement with theoretical considerations. It was found, however, that for this model problem it is particularly difficult to force the error to decay rapidly in time.

  6. Research on Optimization of Pooling System and Its Application in Drug Supply Chain Based on Big Data Analysis

    PubMed Central

    2017-01-01

    Reform of drug procurement is being extensively implemented and expanded in China, especially in today's big data environment. However, the pattern of supply mode innovation lags behind procurement improvement. Problems in financial strain and supply break frequently occur, which affect the stability of drug supply. Drug Pooling System is proposed and applied in a few pilot cities to resolve these problems. From the perspective of supply chain, this study analyzes the process of setting important parameters and sets out the tasks of involved parties in a pooling system according to the issues identified in the pilot run. The approach is based on big data analysis and simulation using system dynamic theory and modeling of Vensim software to optimize system performance. This study proposes a theoretical framework to resolve problems and attempts to provide a valuable reference for future application of pooling systems. PMID:28293258

  7. The subtle business of model reduction for stochastic chemical kinetics

    NASA Astrophysics Data System (ADS)

    Gillespie, Dan T.; Cao, Yang; Sanft, Kevin R.; Petzold, Linda R.

    2009-02-01

    This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S1⇌S2→S3, whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S3-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.

  8. 2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation

    DOE PAGES

    Warren, Michael S.

    2014-01-01

    We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2 18 ) processors. We present error analysis and scientific application results from a series of more than ten 69 billion (4096 3 ) particle cosmological simulations, accounting for 4×10 20 floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy andmore » scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.« less

  9. Integration of scheduling and discrete event simulation systems to improve production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.

    2016-08-01

    The increased availability of data and computer-aided technologies such as MRPI/II, ERP and MES system, allowing producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling and computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. A integration methodology based on semi-automatic model generation method for eliminating problems associated with complexity of the model and labour-intensive and time-consuming process of simulation model creation is proposed. Data mapping and data transformation techniques for the proposed method have been applied. This approach has been illustrated through examples of practical implementation of the proposed method using KbRS scheduling system and Enterprise Dynamics simulation system.

  10. Dynamical simulation of E-ELT segmented primary mirror

    NASA Astrophysics Data System (ADS)

    Sedghi, B.; Muller, M.; Bauvir, B.

    2011-09-01

    The dynamical behavior of the primary mirror (M1) has an important impact on the control of the segments and the performance of the telescope. Control of large segmented mirrors with a large number of actuators and sensors and multiple control loops in real life is a challenging problem. In virtual life, modeling, simulation and analysis of the M1 bears similar difficulties and challenges. In order to capture the dynamics of the segment subunits (high frequency modes) and the telescope back structure (low frequency modes), high order dynamical models with a very large number of inputs and outputs need to be simulated. In this paper, different approaches for dynamical modeling and simulation of the M1 segmented mirror subject to various perturbations, e.g. sensor noise, wind load, vibrations, earthquake are presented.

  11. The subtle business of model reduction for stochastic chemical kinetics.

    PubMed

    Gillespie, Dan T; Cao, Yang; Sanft, Kevin R; Petzold, Linda R

    2009-02-14

    This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S(1)<=>S(2)-->S(3), whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S(3)-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.

  12. Finite element analysis of steady and transiently moving/rolling nonlinear viscoelastic structure. III - Impact/contact simulations

    NASA Technical Reports Server (NTRS)

    Nakajima, Yukio; Padovan, Joe

    1987-01-01

    In a three-part series of papers, a generalized finite element methodology is formulated to handle traveling load problems involving large deformation fields in structure composed of viscoelastic media. The main thrust of this paper is to develop an overall finite element methodology and associated solution algorithms to handle the transient aspects of moving problems involving contact impact type loading fields. Based on the methodology and algorithms formulated, several numerical experiments are considered. These include the rolling/sliding impact of tires with road obstructions.

  13. Modeling and analysis of pinhole occulter experiment: Initial study phase

    NASA Technical Reports Server (NTRS)

    Vandervoort, R. J.

    1985-01-01

    The feasibility of using a generic simulation, TREETOPS, to simulate the Pinhole/Occulter Facility (P/OF) to be tested on the space shuttle was demonstrated. The baseline control system was used to determine the pointing performance of the P/OF. The task included modeling the structure as a three body problem (shuttle-instrument pointing system- P/OP) including the flexibility of the 32 meter P/OF boom. Modeling of sensors, actuators, and control algorithms was also required. Detailed mathematical models for the structure, sensors, and actuators are presented, as well as the control algorithm and corresponding design procedure. Closed loop performance using this controller and computer listings for the simulator are also given.

  14. Numerical Simulation on Smoke Spread and Temperature Distribution in a Corn Starch Explosion

    NASA Astrophysics Data System (ADS)

    Lin, CherngShing; Hsu, JuiPei

    2018-01-01

    It is discovered from dust explosion accidents in recent years that deep causes of the accidents lies in insufficient cognition of dust explosion danger, and no understanding on danger and information of the dust explosion. In the study, Fire Dynamics Simulator (FDS) evaluation tool is used aiming at Taiwan Formosa Fun Coast explosion accidents. The calculator is used for rebuilding the explosion situation. The factors affecting casualties under explosion are studied. The injured personnel participating in the party are evaluated according to smoke diffusion and temperature distribution for numerical simulation results. Some problems noted in the fire disaster after actual explosion are proposed, rational site analysis is given, thereby reducing dust explosion risk grade.

  15. Simulation and Analyses of Multi-Body Separation in Launch Vehicle Staging Environment

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Hotchko, Nathaniel J.; Samareh, Jamshid; Covell, Peter F.; Tartabini, Paul V.

    2006-01-01

    The development of methodologies, techniques, and tools for analysis and simulation of multi-body separation is critically needed for successful design and operation of next generation launch vehicles. As a part of this activity, ConSep simulation tool is being developed. ConSep is a generic MATLAB-based front-and-back-end to the commercially available ADAMS. solver, an industry standard package for solving multi-body dynamic problems. This paper discusses the 3-body separation capability in ConSep and its application to the separation of the Shuttle Solid Rocket Boosters (SRBs) from the External Tank (ET) and the Orbiter. The results are compared with STS-1 flight data.

  16. Users guide: The LaRC human-operator-simulator-based pilot model

    NASA Technical Reports Server (NTRS)

    Bogart, E. H.; Waller, M. C.

    1985-01-01

    A Human Operator Simulator (HOS) based pilot model has been developed for use at NASA LaRC for analysis of flight management problems. The model is currently configured to simulate piloted flight of an advanced transport airplane. The generic HOS operator and machine model was originally developed under U.S. Navy sponsorship by Analytics, Inc. and through a contract with LaRC was configured to represent a pilot flying a transport airplane. A version of the HOS program runs in batch mode on LaRC's (60-bit-word) central computer system. This document provides a guide for using the program and describes in some detail the assortment of files used during its operation.

  17. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) for film cooling process, and to evaluate and improve advanced forms of the two equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and applications of the codes developed to the film cooling problems. Five different codes were developed and utilized to perform this research. This report presented a summary of the development of the codes and their applications to analyze the turbulence properties at locations near coolant injection holes.

  18. A theoretical comparison of evolutionary algorithms and simulated annealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1995-08-28

    This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolutionary strategie and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications formore » the performance of a variety of other optimization algorithm.« less

  19. Isogeometric analysis and harmonic stator-rotor coupling for simulating electric machines

    NASA Astrophysics Data System (ADS)

    Bontinck, Zeger; Corno, Jacopo; Schöps, Sebastian; De Gersem, Herbert

    2018-06-01

    This work proposes Isogeometric Analysis as an alternative to classical finite elements for simulating electric machines. Through the spline-based Isogeometric discretization it is possible to parametrize the circular arcs exactly, thereby avoiding any geometrical error in the representation of the air gap where a high accuracy is mandatory. To increase the generality of the method, and to allow rotation, the rotor and the stator computational domains are constructed independently as multipatch entities. The two subdomains are then coupled using harmonic basis functions at the interface which gives rise to a saddle-point problem. The properties of Isogeometric Analysis combined with harmonic stator-rotor coupling are presented. The results and performance of the new approach are compared to the ones for a classical finite element method using a permanent magnet synchronous machine as an example.

  20. Simulation and statistics: Like rhythm and song

    NASA Astrophysics Data System (ADS)

    Othman, Abdul Rahman

    2013-04-01

    Simulation has been introduced to solve problems in the form of systems. By using this technique the following two problems can be overcome. First, a problem that has an analytical solution but the cost of running an experiment to solve is high in terms of money and lives. Second, a problem exists but has no analytical solution. In the field of statistical inference the second problem is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutations to form pseudo sampling distribution that will lead to the solution of the problem that cannot be solved analytically. This paper discusses how a Monte Carlo simulation was and still being used to verify the analytical solution in inference. This paper also discusses the resampling techniques as simulation techniques. The misunderstandings about these two techniques are examined. The successful usages of both techniques are also explained.

  1. A noisy chaotic neural network for solving combinatorial optimization problems: stochastic chaotic simulated annealing.

    PubMed

    Wang, Lipo; Li, Sa; Tian, Fuyu; Fu, Xiuju

    2004-10-01

    Recently Chen and Aihara have demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems compared to both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because the chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network. We show the effectiveness of this new approach with two difficult combinatorial optimization problems, i.e., a traveling salesman problem and a channel assignment problem for cellular mobile communications.

  2. Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube

    NASA Astrophysics Data System (ADS)

    Zhou, Guangzhao; Xu, Kun; Liu, Feng

    2018-01-01

    The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.

  3. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    PubMed Central

    2018-01-01

    Mathematical models simulating different and representative engineering problem, atomic dry friction, the moving front problems and elastic and solid mechanics are presented in the form of a set of non-linear, coupled or not coupled differential equations. For different parameters values that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to the above parameters, no assumptions are considered as regards the linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model. PMID:29518121

  4. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    PubMed

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different and representative engineering problem, atomic dry friction, the moving front problems and elastic and solid mechanics are presented in the form of a set of non-linear, coupled or not coupled differential equations. For different parameters values that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to the above parameters, no assumptions are considered as regards the linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model.

  5. Consistency analysis on laser signal in laser guided weapon simulation

    NASA Astrophysics Data System (ADS)

    Yin, Ruiguang; Zhang, Wenpan; Guo, Hao; Gan, Lin

    2015-10-01

    The hardware-in-the-loop simulation is widely used in laser semi-active guidance weapon experiments, the authenticity of the laser guidance signal is the key problem of reliability. In order to evaluate the consistency of the laser guidance signal, this paper analyzes the angle of sight, laser energy density, laser spot size, atmospheric back scattering, sun radiation and SNR by comparing the different working state between actual condition and hardware-in-the-loop simulation. Based on measured data, mathematical simulation and optical simulation result, laser guidance signal effects on laser seeker are determined. By using Monte Carlo method, the laser guided weapon trajectory and impact point distribution are obtained, the influence of the systematic error are analyzed. In conclusion it is pointed out that the difference between simulation system and actual system has little influence in normal guidance, has great effect on laser jamming. The research is helpful to design and evaluation of laser guided weapon simulation.

  6. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics in phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables increase in three-dimensional simulation system size up to a 5123 grid cube. Through the parallelized code, practical runtime can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high resolution parallel simulations are greatly improved over those obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows a good agreement with actual run time from numerical tests.

  7. Stray light analysis for the Thomson scattering diagnostic of the ETE Tokamak.

    PubMed

    Berni, L A; Albuquerque, B F C

    2010-12-01

    Thomson scattering is a well-established diagnostic for measuring local electron temperature and density in fusion plasmas, but the technique is particularly difficult to implement because stray light can easily mask the signal scattered from the plasma. To mitigate this problem in the multipoint Thomson scattering system implemented at ETE (Experimento Tokamak Esférico), a detailed stray light analysis was performed. The diagnostic system was simulated in ZEMAX software, and the scattering profiles of the mechanical parts were measured in the laboratory to obtain near-realistic results. From the simulation it was possible to identify the main points contributing to the stray signals; changes to the dump were then implemented, reducing the stray light signals by up to a factor of 60.

  8. Statistical analysis of dimer formation in supersaturated metal vapor based on molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Korenchenko, Anna E.; Vorontsov, Alexander G.; Gelchinski, Boris R.; Sannikov, Grigorii P.

    2018-04-01

    We discuss the problem of dimer formation during the homogeneous nucleation of an atomic metal vapor in an inert gas environment. We simulated nucleation with molecular dynamics and carried out a statistical analysis of two- and three-atom collisions as the two routes to formation of a long-lived diatomic complex, defined as a close pair of atoms whose lifetime exceeds the mean time interval between atom-atom collisions. We found that two- and three-atom collisions gave approximately the same probability of long-lived diatomic complex formation, but the internal energy of the resulting state was essentially lower in the latter case. Some diatomic complexes formed in three-particle collisions are stable enough to serve as critical nuclei.

  9. High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.

    1996-01-01

    This research program dealt with the application of high-performance computing methods to the numerical simulation of complete jet engines. The program was initiated in January 1993 by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition, and solution capabilities were successfully tested. Attention was then focused on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by these structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field fluid elements. New partitioned analysis procedures to treat this coupled three-component problem were developed during 1994 and 1995. These procedures involved delayed corrections and subcycling, and were successfully tested on several massively parallel computers, including the iPSC-860, the Paragon XP/S, and the IBM SP2. For the global steady-state axisymmetric analysis of a complete engine we decided to use the NASA-sponsored ENG10 program, which uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds, and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed. During 1995 and 1996 we developed the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames. Benchmark results were presented at the 1996 Computational Aeroscience meeting.

  10. Free wake analysis of hover performance using a new influence coefficient method

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.; Ong, Ching Cho

    1990-01-01

    A new approach to the prediction of helicopter rotor performance using a free wake analysis was developed. This new method uses a relaxation process that does not suffer from the convergence problems associated with previous time-marching simulations. This wake relaxation procedure was coupled to a vortex-lattice, lifting surface loads analysis to produce a novel, self-contained performance prediction code: EHPIC (Evaluation of Helicopter Performance using Influence Coefficients). The major technical features of the EHPIC code are described, and a substantial amount of background information on the capabilities and proper operation of the code is supplied. Sample problems were undertaken to demonstrate the robustness and flexibility of the basic approach. A performance correlation study was also carried out to establish the breadth of applicability of the code, with very favorable results.

  11. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  12. Using a Web-based simulation as a problem-based learning experience: perceived and actual performance of undergraduate public health students.

    PubMed

    Spinello, Elio F; Fischbach, Ronald

    2008-01-01

    This study investigated the use of a Web-based community health simulation as a problem-based learning (PBL) experience for undergraduate students majoring in public health. The study sought to determine whether students who participated in the online simulation achieved differences in academic and attitudinal outcomes compared with students who participated in a traditional PBL exercise. Using a nonexperimental comparative design, 21 undergraduate students enrolled in a health-behavior course were each randomly assigned to one of four workgroups. Each workgroup was randomly assigned the semester-long simulation project or the traditional PBL exercise. Survey instruments were used to measure students' attitudes toward the course, their perceptions of the learning community, and perceptions of their own cognitive learning. Content analysis of final essay exams and group reports was used to identify differences in academic outcomes and students' level of conceptual understanding of health-behavior theory. Findings indicated that students participating in the simulation produced higher mean final exam scores compared with students participating in the traditional PBL (p=0.03). Students in the simulation group also outperformed students in the traditional group with respect to their understanding of health-behavior theory (p=0.04). Students in the simulation group, however, rated their own level of cognitive learning lower than did students in the traditional group (p=0.03). By bridging time and distance constraints of the traditional classroom setting, an online simulation may be an effective PBL approach for public health students. Recommendations include further research using a larger sample to explore students' perceptions of learning when participating in simulated real-world activities. Additional research focusing on possible differences between actual and perceived learning relative to PBL methods and student workgroup dynamics is also recommended.

  13. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    NASA Astrophysics Data System (ADS)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for determining tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and visualization of destruction scenarios. New approaches and strategies, as well as mathematical tools and software, are presented. Long joint investigations by researchers of the Institute of Computational Mathematics and Mathematical Geophysics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms have been developed for the operational determination of the origin and form of a tsunami source. The TSS system numerically simulates tsunami and/or earthquake sources and can solve both the direct and the inverse problem. It makes it possible to bring advanced mathematical results to bear on improving models and increasing the resolution of inverse problems. With TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimal computing speed. Our approach to the inverse problem of tsunami and earthquake source determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to create a «no frost» technology, realizing a steady stream of direct and inverse problems: solving the direct problem, visualizing and comparing with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is the creation of a workstation-based operational emergency tool that could be used by emergency duty personnel in real time.
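
    The SVD analysis of ill-posedness mentioned above can be illustrated on any discretized forward operator. In the hedged sketch below, a Gaussian smoothing kernel stands in for the actual tsunami-source operator (which is not given in the abstract): the decay of singular values measures the degree of ill-posedness, and a truncated SVD yields a regularized quasi-solution.

    ```python
    # Truncated-SVD quasi-solution for an ill-posed linear inverse problem.
    # The operator, source, and noise level are illustrative assumptions.
    import numpy as np

    n = 100
    x = np.linspace(0, 1, n)
    A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01) / n  # forward operator

    u_true = np.exp(-((x - 0.5) ** 2) / 0.005)                # hypothetical source
    rng = np.random.default_rng(0)
    d = A @ u_true + 1e-4 * rng.standard_normal(n)            # noisy data

    U, s, Vt = np.linalg.svd(A)
    print("condition number:", s[0] / s[-1])   # huge value => severely ill-posed

    k = np.searchsorted(-s, -1e-3)             # keep singular values above the noise floor
    u_tsvd = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])
    rel_err = np.linalg.norm(u_tsvd - u_true) / np.linalg.norm(u_true)
    print("relative error of quasi-solution:", rel_err)
    ```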

  14. Simulation Model for Scenario Optimization of the Ready-Mix Concrete Delivery Problem

    NASA Astrophysics Data System (ADS)

    Galić, Mario; Kraus, Ivan

    2016-12-01

    This paper introduces a discrete simulation model for solving routing and network material flow problems in construction projects. A detailed literature review precedes the description of the model. The model is verified in a case study solving the ready-mix concrete network flow and routing problem in a metropolitan area of Croatia, with real-time input parameters taken into account. The simulation model is built in the Enterprise Dynamics simulation software and in Microsoft Excel linked with Google Maps. The model is dynamic, easily managed, and adjustable, and it provides good estimates for minimizing costs and realization time in discrete routing and material network flow problems.

  15. Neural network for solving convex quadratic bilevel programming problems.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network for solving convex quadratic bilevel programming problems (CQBPPs), modeled by a nonautonomous differential inclusion. Unlike existing neural networks for CQBPPs, the model has the smallest number of state variables and a simple structure. Based on the theory of nonsmooth analysis, differential inclusions, and a Lyapunov-like method, the sequence of limit equilibrium points of the proposed neural network converges approximately to an optimal solution of the CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network.
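
    The authors' CQBPP network is not reproduced here; as a flavour of solving optimization problems by integrating neural-network dynamics, the sketch below integrates a classical projection neural network dx/dt = P(x - (Qx + c)) - x for a simpler bound-constrained convex QP, where P is the projection onto x >= 0 and the equilibrium is the optimum. Problem data are arbitrary assumptions.

    ```python
    # Projection neural network for min 0.5 x'Qx + c'x subject to x >= 0,
    # integrated by forward Euler; the equilibrium solves the QP.
    import numpy as np

    Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite => convex QP
    c = np.array([-1.0, -2.0])

    def dynamics(x):
        # P(x - (Qx + c)) - x, with P the projection onto the nonnegative orthant
        return np.maximum(x - (Q @ x + c), 0.0) - x

    x = np.zeros(2)
    dt = 0.01
    for _ in range(20000):                   # forward-Euler integration
        x = x + dt * dynamics(x)

    print("equilibrium (approximate QP solution):", x)  # ~ [1/11, 7/11]
    ```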

  16. Astrobiology and the Chemistry of the Early Solar System

    NASA Technical Reports Server (NTRS)

    Cook, Jamie Elsila

    2011-01-01

    The field of astrochemistry investigates the origin of the chemicals necessary for the formation of life. Astrochemists use remote observations, laboratory simulations, and analysis of extraterrestrial samples to understand the inventory of prebiotic chemicals present on the early Earth. Among the problems investigated by astrochemists is the origin of homochirality in terrestrial life. Analysis of meteorites shows that they may have delivered an excess of L-amino acids to the Earth's surface, perhaps leading to homochirality.

  17. Analysis of two dimensional signals via curvelet transform

    NASA Astrophysics Data System (ADS)

    Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.

    2007-04-01

    This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments executed on a random interferometric image. In nonlinear approximation, the curvelet transform yields a matrix with a smaller number of coefficients than the wavelet transform guarantees. Additionally, denoising simulations show that curvelets can be a very good tool for removing noise from images.
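
    Curvelet toolboxes are less standardized in Python, so the hedged sketch below illustrates the same transform-domain denoising idea with its wavelet counterpart (PyWavelets): transform, shrink small coefficients, invert. The toy image and threshold value are assumptions.

    ```python
    # Transform-domain denoising by coefficient thresholding (wavelet stand-in
    # for the curvelet case discussed in the paper).
    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(1)
    img = np.zeros((128, 128))
    img[32:96, 32:96] = 1.0                        # toy "interferogram"
    noisy = img + 0.2 * rng.standard_normal(img.shape)

    coeffs = pywt.wavedec2(noisy, "db4", level=3)
    thr = 0.1                                      # assumed threshold value
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in level)
        for level in coeffs[1:]
    ]
    denoised = pywt.waverec2(coeffs, "db4")
    print("error std before/after:", np.std(noisy - img), np.std(denoised - img))
    ```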

  18. Probabilistic structural analysis methods and applications

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.

    1988-01-01

    An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
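
    Fast probability integration itself is not reproduced here; the sketch below shows the brute-force Monte Carlo baseline that such methods accelerate: propagate random load, geometry, and material scatter through a response function and read off a probability level. The response function is a toy stand-in for the finite-element stress analysis, and all distributions are invented.

    ```python
    # Monte Carlo propagation of statistical uncertainties to a failure probability.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 1_000_000
    load = rng.normal(1.0, 0.10, n)       # random variables: load,
    thickness = rng.normal(1.0, 0.05, n)  # geometry,
    strength = rng.normal(3.0, 0.20, n)   # and material property

    stress = 2.5 * load / thickness       # toy response (stand-in for FE analysis)
    p_fail = np.mean(stress > strength)
    print("probability of failure ~", p_fail)
    ```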

  19. Analysis of the Dielectric constant of saline-alkali soils and the effect on radar backscattering coefficient: a case study of soda alkaline saline soils in Western Jilin Province using RADARSAT-2 data.

    PubMed

    Li, Yang-yang; Zhao, Kai; Ren, Jian-hua; Ding, Yan-ling; Wu, Li-li

    2014-01-01

    Soil salinity is a global problem, especially in developing countries, which affects the environment and productivity of agriculture areas. Salt has a significant effect on the complex dielectric constant of wet soil. However, there is no suitable model to describe the variation in the backscattering coefficient due to changes in soil salinity content. The purpose of this paper is to use backscattering models to understand behaviors of the backscattering coefficient in saline soils based on the analysis of its dielectric constant. The effects of moisture and salinity on the dielectric constant by combined Dobson mixing model and seawater dielectric constant model are analyzed, and the backscattering coefficient is then simulated using the AIEM. Simultaneously, laboratory measurements were performed on ground samples. The frequency effect of the laboratory results was not the same as the simulated results. The frequency dependence of the ionic conductivity of an electrolyte solution is influenced by the ion's components. Finally, the simulated backscattering coefficients measured from the dielectric constant with the AIEM were analyzed using the extracted backscattering coefficient from the RADARSAT-2 image. The results show that RADARSAT-2 is potentially able to measure soil salinity; however, the mixed pixel problem needs to be more thoroughly considered.

  20. A Network Thermodynamic Approach to Compartmental Analysis

    PubMed Central

    Mikulecky, D. C.; Huf, E. G.; Thomas, S. R.

    1979-01-01

    We introduce a general network thermodynamic method for compartmental analysis which uses a compartmental model of sodium flows through frog skin as an illustrative example (Huf and Howell, 1974a). We use network thermodynamics (Mikulecky et al., 1977b) to formulate the problem, and a circuit simulation program (ASTEC 2, SPICE2, or PCAP) for computation. In this way, the compartment concentrations and net fluxes between compartments are readily obtained for a set of experimental conditions involving a square-wave pulse of labeled sodium at the outer surface of the skin. Qualitative features of the influx at the outer surface correlate very well with those observed for the short circuit current under another similar set of conditions by Morel and LeBlanc (1975). In related work, the compartmental model is used as a basis for simulation of the short circuit current and sodium flows simultaneously using a two-port network (Mikulecky et al., 1977a, and Mikulecky et al., A network thermodynamic model for short circuit current transients in frog skin. Manuscript in preparation; Gary-Bobo et al., 1978). The network approach lends itself to computation of classic compartmental problems in a simple manner using circuit simulation programs (Chua and Lin, 1975), and it further extends the compartmental models to more complicated situations involving coupled flows and non-linearities such as concentration dependencies, chemical reaction kinetics, etc. PMID:262387
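
    The circuit-analogy idea can be sketched compactly: a linear compartmental model is exactly the set of node equations a circuit simulator such as SPICE would integrate. The minimal sketch below uses a three-compartment tracer model with illustrative rate constants (not the frog-skin values) and a square-wave pulse of label at the first compartment, mirroring the experimental protocol described above.

    ```python
    # Linear compartmental model c' = K c + input, the ODE analogue of an RC network.
    import numpy as np
    from scipy.integrate import solve_ivp

    # K[i][j] = transfer rate from compartment j to compartment i (assumed values);
    # diagonal entries are total outflow rates, so each column sums to zero.
    K = np.array([[-0.5,  0.1,  0.0],
                  [ 0.5, -0.3,  0.2],
                  [ 0.0,  0.2, -0.2]])

    def square_pulse_input(t):
        return 1.0 if t < 5.0 else 0.0     # labelled-Na pulse at the outer surface

    def rhs(t, c):
        dc = K @ c
        dc[0] += square_pulse_input(t)
        return dc

    sol = solve_ivp(rhs, (0.0, 30.0), np.zeros(3), max_step=0.1)
    print("final compartment contents:", sol.y[:, -1])
    ```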

  1. Network thermodynamic approach to compartmental analysis. Na+ transients in frog skin.

    PubMed

    Mikulecky, D C; Huf, E G; Thomas, S R

    1979-01-01

    We introduce a general network thermodynamic method for compartmental analysis which uses a compartmental model of sodium flows through frog skin as an illustrative example (Huf and Howell, 1974a). We use network thermodynamics (Mikulecky et al., 1977b) to formulate the problem, and a circuit simulation program (ASTEC 2, SPICE2, or PCAP) for computation. In this way, the compartment concentrations and net fluxes between compartments are readily obtained for a set of experimental conditions involving a square-wave pulse of labeled sodium at the outer surface of the skin. Qualitative features of the influx at the outer surface correlate very well with those observed for the short circuit current under another similar set of conditions by Morel and LeBlanc (1975). In related work, the compartmental model is used as a basis for simulation of the short circuit current and sodium flows simultaneously using a two-port network (Mikulecky et al., 1977a, and Mikulecky et al., A network thermodynamic model for short circuit current transients in frog skin. Manuscript in preparation; Gary-Bobo et al., 1978). The network approach lends itself to computation of classic compartmental problems in a simple manner using circuit simulation programs (Chua and Lin, 1975), and it further extends the compartmental models to more complicated situations involving coupled flows and non-linearities such as concentration dependencies, chemical reaction kinetics, etc.

  2. Simulation and scaling analysis of a spherical particle-laden blast wave

    NASA Astrophysics Data System (ADS)

    Ling, Y.; Balachandar, S.

    2018-02-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.

  3. Filtering analysis of a direct numerical simulation of the turbulent Rayleigh-Benard problem

    NASA Technical Reports Server (NTRS)

    Eidson, T. M.; Hussaini, M. Y.; Zang, T. A.

    1990-01-01

    A filtering analysis of a turbulent flow was developed which provides details of the path of the kinetic energy of the flow from its creation via thermal production to its dissipation. A low-pass spatial filter is used to split the velocity and the temperature field into a filtered component (composed mainly of scales larger than a specific size, nominally the filter width) and a fluctuation component (scales smaller than a specific size). Variables derived from these fields can fall into one of the above two ranges or be composed of a mixture of scales dominated by scales near the specific size. The filter is used to split the kinetic energy equation into three equations corresponding to the three scale ranges described above. The data from a direct simulation of the Rayleigh-Benard problem for conditions where the flow is turbulent are used to calculate the individual terms in the three kinetic energy equations. This is done for a range of filter widths. These results are used to study the spatial location and the scale range of the thermal energy production, the cascading of kinetic energy, the diffusion of kinetic energy, and the energy dissipation. These results are used to evaluate two subgrid models typically used in large-eddy simulations of turbulence. Subgrid models attempt to model the energy below the filter width that is removed by a low-pass filter.
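
    The decomposition described above is easy to demonstrate: a low-pass spatial filter splits a field into a filtered (large-scale) component and a fluctuation (small-scale) component, whose kinetic energies can then be tracked as the filter width varies. The sketch below uses a Gaussian filter and a synthetic field as illustrative assumptions.

    ```python
    # Low-pass filtering decomposition u = u_bar + u_prime and the split of
    # kinetic energy between resolved and subfilter scales.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(2)
    u = gaussian_filter(rng.standard_normal((256, 256)), 1.0)  # synthetic velocity

    for width in (2.0, 4.0, 8.0):          # filter width in grid cells
        u_bar = gaussian_filter(u, width)  # filtered (resolved) component
        u_prime = u - u_bar                # fluctuation (subfilter) component
        ke_res = 0.5 * np.mean(u_bar ** 2)
        ke_sub = 0.5 * np.mean(u_prime ** 2)
        print(f"width {width:4.1f}: resolved KE {ke_res:.4f}, subfilter KE {ke_sub:.4f}")
    ```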

  4. Simulation and scaling analysis of a spherical particle-laden blast wave

    NASA Astrophysics Data System (ADS)

    Ling, Y.; Balachandar, S.

    2018-05-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.

  5. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance; the index includes the variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.
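
    The paper's exact estimator is not given in the abstract; the schematic sketch below conveys the idea under toy assumptions: each process is represented by a discrete model choice plus that model's random parameters, and a crude first-order, variance-based index for the whole process is estimated by the law of total variance with a double Monte Carlo loop.

    ```python
    # Crude double-loop estimate of a process sensitivity index,
    # Var(E[y | recharge process]) / Var(y), with toy models and equal weights.
    import numpy as np

    rng = np.random.default_rng(3)

    def sample_recharge(n):
        # Process 1: two alternative recharge models, equal prior weight.
        choice = rng.integers(0, 2, n)
        p = rng.normal(1.0, 0.3, n)
        return np.where(choice == 0, 2.0 * p, p ** 2)

    def sample_geology(n):
        # Process 2: two alternative conductivity parameterizations.
        choice = rng.integers(0, 2, n)
        k = rng.lognormal(0.0, 0.5, n)
        return np.where(choice == 0, k, 0.5 * k + 1.0)

    def model_output(r, g):
        return r * g          # toy stand-in for the reactive-transport model

    n_outer, n_inner = 500, 500
    cond_means = np.array([
        model_output(r, sample_geology(n_inner)).mean()   # freeze recharge, average geology
        for r in sample_recharge(n_outer)
    ])
    n_tot = n_outer * n_inner
    total = model_output(sample_recharge(n_tot), sample_geology(n_tot))
    print("process sensitivity of recharge ~", cond_means.var() / total.var())
    ```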

  6. Developing Digital Dashboard Management for Learning System Dynamic Cooperative Simulation Behavior of Indonesia. (Study on Cooperative Information Organization in the Ministry of Cooperatives and SME)

    NASA Astrophysics Data System (ADS)

    Eni, Yuli; Aryanto, Rudy

    2014-03-01

    The Ministry of Cooperatives and SME (Small and Medium Enterprise) faces several problems, including the long time the government needs to reach decisions establishing policy for local cooperatives across the provinces of Indonesia. The decision-making process is still analyzed manually, so the decisions taken are sometimes less appropriate, effective, and efficient than they should be. A second problem is the difficulty of monitoring provincial cooperative data, which is too voluminous for dynamic information to be analyzed usefully. The authors therefore propose to improve the current system with a digital dashboard management system supported by system dynamics modeling, and they present the design of a system to support it. The design aims to make it easier for experts, department heads, and the government to decide accurately, effectively, and efficiently (DSS, Decision Support System), because the system presents alternative simulations describing each candidate decision and its consequences. The system, once designed and simulated, is expected to ease and speed up decision making. The design of the dynamic digital dashboard management was carried out with the OOAD (Object-Oriented Analysis and Design) method, complete with UML notation.

  7. Algorithms for radiative transfer simulations for aerosol retrieval

    NASA Astrophysics Data System (ADS)

    Mukai, Sonoyo; Sano, Itaru; Nakata, Makiko

    2012-11-01

    Aerosol retrieval from satellite data, i.e., aerosol remote sensing, divides into three parts: satellite data analysis, aerosol modeling, and calculation of multiple light scattering in an atmosphere model, known as radiative transfer simulation. The aerosol model is compiled from more than ten years of accumulated measurements provided by the worldwide aerosol monitoring network (AERONET). The radiative transfer simulations take into account Rayleigh scattering by molecules, Mie scattering by aerosols in the atmosphere, and reflection by the Earth's surface. Aerosol properties are thus estimated by comparing satellite measurements with the numerical values of radiation simulations in the Earth-atmosphere-surface model. Precise simulation of multiple light-scattering processes is necessary but requires long computational times, especially for optically thick atmosphere models; efficient algorithms for radiative transfer problems are therefore indispensable for retrieving aerosols from space.

  8. Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.

    PubMed

    Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young

    2017-03-14

    Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction diffusion master equation (RDME). Previous studies have discovered that for the RDME, as the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in a 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size falls below a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduces to a linear function of discretization size when the discretization size is small enough. The three proposed methods can correctly (to a certain precision) simulate Hill function dynamics in the microscopic RDME system.
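
    The degeneration to a linear function of discretization size can be sketched directly. In a compartment of width h, the local concentration is estimated as X/h with X approximately Poisson(c*h); for small h, a single molecule already saturates the Hill function, so the average propensity tends to P(X >= 1) ~ c*h, linear in h. Parameters below are illustrative assumptions, not values from the paper.

    ```python
    # Average Hill propensity in one compartment as the discretization shrinks:
    # switch-like for coarse compartments, linear in h for fine ones.
    import numpy as np

    rng = np.random.default_rng(4)

    def hill(conc, K=1.0, n=4):
        return conc ** n / (K ** n + conc ** n)

    c = 0.5                                    # true concentration (< K: should be "off")
    for h in (10.0, 1.0, 0.1, 0.01):
        X = rng.poisson(c * h, size=200_000)   # molecule copies in one compartment
        mean_propensity = hill(X / h).mean()
        print(f"h={h:6.2f}: mean Hill propensity {mean_propensity:.4f}  (c*h={c * h:.3f})")
    ```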

  9. Simulation of hypersonic scramjet exhaust. [pressure distribution on afterbody/nozzle sections of vehicle

    NASA Technical Reports Server (NTRS)

    Oman, R. A.; Foreman, K. M.; Leng, J.; Hopkins, H. B.

    1975-01-01

    A plan and some preliminary analysis for the accurate simulation of pressure distributions on the afterbody/nozzle portions of a hypersonic scramjet vehicle are described. The objectives fulfilled were to establish the standards of similitude for a hydrogen/air scramjet exhaust interacting with a vehicle afterbody, determine an experimental technique for validation of the procedures that will be used in conventional wind tunnel facilities, suggest a program of experiments for proof of the concept, and explore any unresolved problems in the proposed simulation procedures. It is shown that true enthalpy, Reynolds number, and nearly exact chemistry can be provided in the exhaust flow for the flight regime from Mach 4 to 10 by a detonation tube simulation. A detailed discussion of the required similarity parameters leads to the conclusion that substitute gases can be used as the simulated exhaust gas in a wind tunnel to achieve the correct interaction forces and moments.

  10. SNM-DAT: Simulation of a heterogeneous network for nuclear border security

    NASA Astrophysics Data System (ADS)

    Nemzek, R.; Kenyon, G.; Koehler, A.; Lee, D. M.; Priedhorsky, W.; Raby, E. Y.

    2007-08-01

    We approach the problem of detecting the smuggling of Special Nuclear Material (SNM) across open borders by modeling a heterogeneous sensor network with an agent-based simulation. Our simulation, the SNM Data Analysis Tool (SNM-DAT), combines fixed seismic, metal, and radiation detectors with a mobile gamma spectrometer. Decision making within the simulation determines threat levels from combined signatures. The spectrometer is a limited-availability asset and is deployed only for substantial threats. "Crossers" can be benign or carrying shielded SNM. Signatures and sensors are physics based, allowing us to model realistic sensor networks. The heterogeneous network provides large gains in detection efficiency compared to a radiation-only system. We can improve the simulation through better sensor and terrain models, additional signatures, and crossers that mimic actual trans-border traffic. We expect further gains in our ability to design sensor networks as we learn the emergent properties of heterogeneous detection and potential adversary responses.

  11. Discrete time modeling and stability analysis of TCP Vegas

    NASA Astrophysics Data System (ADS)

    You, Byungyong; Koo, Kyungmo; Lee, Jin S.

    2007-12-01

    This paper presents an analysis method for a TCP Vegas network model with a single link and a single source. Some papers have shown global stability of several network models, but those models are not dual problems in which dynamics exist in both sources and links, as in TCP Vegas. Other papers studied TCP Vegas as a dual problem but did not fully derive an asymptotic stability region. We therefore analyze TCP Vegas with Jury's criterion, which provides a necessary and sufficient condition. Using a state-space model in discrete time and applying Jury's criterion, we find an asymptotic stability region of the TCP Vegas network model. This result is verified by ns-2 simulation, and comparison with other results shows that our method performs well.
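
    The paper's characteristic polynomial for TCP Vegas is not given in the abstract; as a self-contained illustration, the sketch below implements the equivalent Schur-Cohn/Jury-type recursion that decides whether all roots of a real polynomial lie strictly inside the unit circle, i.e., asymptotic stability of a discrete-time model. The example polynomials are arbitrary assumptions.

    ```python
    # Schur-Cohn/Jury-type stability test for discrete-time systems.
    import numpy as np

    def schur_stable(coeffs):
        """coeffs: ascending powers [a0, a1, ..., an] of a real polynomial."""
        c = np.trim_zeros(np.asarray(coeffs, dtype=float), "b")
        while len(c) > 1:
            if not abs(c[0]) < abs(c[-1]):
                return False                       # a root on or outside the unit circle
            c = (c[-1] * c - c[0] * c[::-1])[1:]   # Schur transform: degree drops by one
        return True

    # z^2 - 0.5 z + 0.06 has roots 0.2 and 0.3 -> stable
    print(schur_stable([0.06, -0.5, 1.0]))   # True
    # z^2 - 2.5 z + 1 has a root at 2 -> unstable
    print(schur_stable([1.0, -2.5, 1.0]))    # False
    ```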

  12. A 3-D turbulent flow analysis using finite elements with k-ɛ model

    NASA Astrophysics Data System (ADS)

    Okuda, H.; Yagawa, G.; Eguchi, Y.

    1989-03-01

    This paper describes a finite element turbulent flow analysis suitable for three-dimensional large-scale problems. The k-ɛ turbulence model, together with the conservation equations of mass and momentum, is discretized in space using relatively low-order elements. The resulting coefficient matrices are evaluated by one-point quadrature to reduce computational storage and CPU cost. A time integration scheme based on the velocity correction method is employed to obtain steady-state solutions. For verification of the FEM program, a two-dimensional plenum flow is simulated and compared with experiment. As an application to three-dimensional practical problems, the turbulent flows in the upper plenum of a fast breeder reactor are calculated for various boundary conditions.

  13. A VLBI variance-covariance analysis interactive computer program. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bock, Y.

    1980-01-01

    An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies, and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.

  14. The influence of air traffic control message length and timing on pilot communication

    NASA Technical Reports Server (NTRS)

    Morrow, Daniel; Rodvold, Michelle

    1993-01-01

    The present paper outlines an approach to air traffic control (ATC) communication that is based on theories of dialogue organization and describes several steps, or phases, in routine controller-pilot communication. The introduction also describes several kinds of communication problems that often disrupt these steps, as well as how these problems may be caused by factors related to ATC messages, the communication medium (radio vs. data link), and task workload. Next, a part-task simulation study is described. This study focused on how problems in radio communication are related to message factors; more specifically, we examined whether pilots are more likely to misunderstand longer ATC messages. A more general goal of the study is to show that communication analysis can help trace where problems occur and why.

  15. Inverse transport problems in quantitative PAT for molecular imaging

    NASA Astrophysics Data System (ADS)

    Ren, Kui; Zhang, Rongting; Zhong, Yimin

    2015-12-01

    Fluorescence photoacoustic tomography (fPAT) is a molecular imaging modality that combines photoacoustic tomography with fluorescence imaging to obtain high-resolution imaging of fluorescence distributions inside heterogeneous media. The objective of this work is to study inverse problems in the quantitative step of fPAT, where we intend to reconstruct physical coefficients in a coupled system of radiative transport equations using internal data recovered from ultrasound measurements. We derive uniqueness and stability results on the inverse problems and develop some efficient algorithms for image reconstruction. Numerical simulations based on synthetic data are presented to validate the theoretical analysis. The results we present here complement those in Ren K and Zhao H (2013 SIAM J. Imaging Sci. 6 2024-49) on the same problem but in the diffusive regime.

  16. Open shop scheduling problem to minimize total weighted completion time

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian

    2017-01-01

    A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
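
    The WSPTB block heuristic itself is not reproduced here; the sketch below shows the underlying weighted-shortest-processing-time idea on a single machine, where ordering jobs by p/w (Smith's rule) minimizes total weighted completion time. The job data are illustrative.

    ```python
    # Smith's rule: sequence jobs in nondecreasing order of p/w to minimize
    # the total weighted completion time on one machine.
    def total_weighted_completion(jobs):
        t = total = 0
        for p, w in jobs:
            t += p
            total += w * t
        return total

    jobs = [(4, 1), (1, 3), (3, 2), (2, 2)]             # (processing time, weight)
    wspt = sorted(jobs, key=lambda pw: pw[0] / pw[1])   # Smith's rule
    print("WSPT order:", wspt)
    print("objective:", total_weighted_completion(wspt))
    ```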

  17. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
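
    A hedged, discrete-time toy version of the Girsanov idea: to estimate the small probability that a random walk (a stand-in for a randomly excited response) crosses a high threshold, the increments are sampled with an added drift (the "control") and each sample is reweighted by the likelihood ratio back to the original measure. The drift value and threshold are illustrative choices, not the paper's controls.

    ```python
    # Importance sampling by exponential change of measure for a level-crossing
    # probability, compared against plain Monte Carlo.
    import numpy as np

    rng = np.random.default_rng(5)
    n_steps, n_samples, b, mu = 100, 50_000, 35.0, 0.35

    def estimate(drift):
        z = rng.standard_normal((n_samples, n_steps)) + drift
        crossed = np.cumsum(z, axis=1).max(axis=1) > b
        # Radon-Nikodym weight of each path back to the zero-drift measure
        weight = np.exp(-drift * z.sum(axis=1) + n_steps * drift ** 2 / 2.0)
        est = np.mean(crossed * weight)
        err = np.std(crossed * weight) / np.sqrt(n_samples)
        return est, err

    print("plain MC          :", estimate(0.0))
    print("importance sampled:", estimate(mu))   # same mean, much smaller error
    ```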

  18. FluoroSim: A Visual Problem-Solving Environment for Fluorescence Microscopy

    PubMed Central

    Quammen, Cory W.; Richardson, Alvin C.; Haase, Julian; Harrison, Benjamin D.; Taylor, Russell M.; Bloom, Kerry S.

    2010-01-01

    Fluorescence microscopy provides a powerful method for localization of structures in biological specimens. However, aspects of the image formation process such as noise and blur from the microscope's point-spread function combine to produce an unintuitive image transformation on the true structure of the fluorescing molecules in the specimen, hindering qualitative and quantitative analysis of even simple structures in unprocessed images. We introduce FluoroSim, an interactive fluorescence microscope simulator that can be used to train scientists who use fluorescence microscopy to understand the artifacts that arise from the image formation process, to determine the appropriateness of fluorescence microscopy as an imaging modality in an experiment, and to test and refine hypotheses of model specimens by comparing the output of the simulator to experimental data. FluoroSim renders synthetic fluorescence images from arbitrary geometric models represented as triangle meshes. We describe three rendering algorithms on graphics processing units for computing the convolution of the specimen model with a microscope's point-spread function and report on their performance. We also discuss several cases where the microscope simulator has been used to solve real problems in biology. PMID:20431698
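
    The image-formation model FluoroSim renders on GPUs can be sketched on the CPU in a few lines: the fluorophore distribution is convolved with the microscope point-spread function and corrupted with shot noise. In the sketch below, a Gaussian PSF and a toy specimen stand in for a real, measured or computed PSF and a triangle-mesh model.

    ```python
    # Minimal fluorescence image-formation model: specimen (*) PSF + shot noise.
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(6)

    specimen = np.zeros((128, 128))
    specimen[64, 20:108] = 100.0                 # a thin fluorescing filament

    yy, xx = np.mgrid[-15:16, -15:16]
    sigma = 3.0                                  # assumed PSF width in pixels
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    psf /= psf.sum()

    blurred = fftconvolve(specimen, psf, mode="same")
    image = rng.poisson(np.clip(blurred + 5.0, 0, None))  # shot noise over background
    print("peak intensity: specimen", specimen.max(), "-> image", image.max())
    ```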

  19. Free-Surface and Contact Line Motion of Liquid in Microgravity

    NASA Technical Reports Server (NTRS)

    Schwartz, Leonard W.

    1996-01-01

    This project involves fundamental studies of the role of nonlinearity in determining the motion of liquid masses under the principal influences of surface tension, viscosity, and inertia. The issues explored are relevant to aspects of terrestrial processes as well as immediately applicable to fluid management in a low-gravity environment. Specific issues include: (1) the mechanics of liquid masses in large-amplitude motions, (2) the influence of bounding surfaces on the motion, and (3) the ability of such surfaces to control liquid motion by wetting forces, especially when they are augmented by various surface treatments. Mathematical techniques include asymptotic analysis of the governing equations, for problem simplification, and numerical simulation, using both boundary-element and finite-difference methods. The flow problem is divided into an 'outer', or inviscid potential-flow, region and one or more 'inner', viscosity-dominated, regions. Relevant to one inner region, the vicinity of the contact line, we discuss time-dependent simulation of slow droplet motion on a surface of variable wettability using the lubrication approximation. The simulation uses a disjoining pressure model and reproduces realistic wetting-dewetting behavior.

  20. Analysis of dispatching rules in a stochastic dynamic job shop manufacturing system with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Sharma, Pankaj; Jain, Ajai

    2014-12-01

    Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop from the viewpoint of eight performance measures: makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups, and mean setup time. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for the investigation, and nine dispatching rules identified from the literature are incorporated in it. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization of 90%, and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures, while the job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups, and mean setup time measures.
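
    A minimal sketch of the SIMSET dispatching idea on a single machine: whenever the machine frees up, pick from the queue the job with the shortest sequence-dependent setup time relative to the job just completed. The setup and processing data are invented, and the full job-shop simulation of the paper is not reproduced.

    ```python
    # SIMSET-style greedy dispatching with sequence-dependent setup times.
    import random

    random.seed(7)
    jobs = list(range(6))
    setup = {(i, j): random.randint(1, 9) for i in jobs for j in jobs}  # after i, before j
    proc = {j: random.randint(5, 15) for j in jobs}

    queue, last, t = set(jobs), None, 0
    while queue:
        # SIMSET: minimize setup time from the previously processed job
        nxt = min(queue, key=lambda j: 0 if last is None else setup[(last, j)])
        t += (0 if last is None else setup[(last, nxt)]) + proc[nxt]
        queue.remove(nxt)
        last = nxt
    print("SIMSET sequence makespan:", t)
    ```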
